Category: Digital sobriety

Optimizing smartphone energy consumption to reduce the impact of digital technology and avoid depleting natural resources

Reading Time: 6 minutes


The lifespan of a smartphone averages 33 months. Given that a smartphone contains more than 60 materials, including rare earth elements, and that its carbon footprint is between 27 and 38 kg CO2eq, the current rate of smartphone replacement is too fast.

Several reasons explain this renewal rate. Loss of autonomy and battery problems are the main ones: one smartphone in three is changed because of the battery. Increasing battery capacity seems an attractive solution, but it would not solve the problem. Indeed, the volume of data exchanged keeps growing, which weighs on the power demands of smartphones, and websites keep getting heavier and heavier… So is this an unsolvable problem? What is the link between the autonomy we experience individually and this observation on the impact of digital technology?


We started our analysis with web browsing. Indeed, mobile users spend an average of 4.2 hours per day browsing the web.

In a previous study on the impact of Android web browsers, we measured the consumption of 7 different websites on several web browsing applications from a mid-range smartphone, a Samsung Galaxy S7. This allows us to project this consumption onto global consumption and to apply optimization assumptions to identify room for maneuver.

Even if the uncertainties are high (diversity of mobile, diversity of use, etc.), this action allows us to identify the room for maneuver to improve the life cycle of smartphones. The choice of the Galaxy S7 makes it possible to have a smartphone close (within 1 year) to the average age of global smartphones (18 months).

What is the annual consumption of web browsing on mobile?

Here are our initial assumptions:

The estimated annual consumption of smartphones for web browsing is 2,774 billion ampere-hours. Not very tangible? Considering that an average 3000 mAh battery can go through 500 full charge/discharge cycles before becoming unusable, this corresponds to 1,850 million batteries worn out each year just to browse the web. Does this figure seem exaggerated? With 5.66 billion smartphones in the world, it would mean a problem affecting 36% of the global fleet each year. If we consider that 39% of users change their smartphone for battery reasons and that only 26% of users replace a worn battery, we get a figure of 1,200 million batteries, which corroborates our estimate. Not inconsistent in the end, when you look at phone and battery renewal cycles.
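
These orders of magnitude can be checked with simple arithmetic. A back-of-envelope sketch (combining the 39% and 26% rates is our reading of the text, not a formula the article states explicitly):

```python
# Back-of-envelope check of the battery figures above.
TOTAL_AH = 2_774e9          # annual web-browsing consumption, ampere-hours
BATTERY_AH = 3.0            # average 3000 mAh battery
CYCLES = 500                # full charge/discharge cycles before end of life

# One battery delivers BATTERY_AH * CYCLES ampere-hours over its life.
batteries_per_year = TOTAL_AH / (BATTERY_AH * CYCLES)
print(f"{batteries_per_year / 1e6:.0f} million batteries")  # ~1850 million

# Cross-check against renewal behaviour: 39% change phone over the battery,
# 26% replace the battery -> 65% of worn batteries show up in the statistics.
visible = batteries_per_year * (0.39 + 0.26)
print(f"{visible / 1e6:.0f} million")  # ~1200 million, as in the text
```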

Would reducing the consumption of browsers have an impact?

Web browsers are important drivers of web consumption. Our measurements show significant differences in energy consumption between browsers, explained by heterogeneous implementations and performance. The following graph shows the consumption of browsing 7 sites, including launching the browser, using features such as typing URLs, and the navigation itself.

We start with the hypothesis that publishers optimize their browsers. Assuming all browsers consumed as little as the soberest one (Firefox Focus), the reduction in total annual consumption would, with the same lifespan assumptions, save 400 million batteries per year. Knowing that 1,500 million smartphones are sold per year, and taking the same assumptions as before on replacement and repair rates, this would save 7% of the phones sold each year.
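
As a hedged reconstruction (the article does not spell out its formula), applying the 26% battery-replacement rate to the batteries saved reproduces the fleet percentages quoted throughout this article:

```python
# Hypothetical reconstruction of the fleet-share calculation: phones saved
# = batteries saved * 26% (the battery-replacement rate quoted earlier),
# compared with the 1,500 million smartphones sold each year.
SOLD_PER_YEAR = 1_500e6
REPLACE_RATE = 0.26

def fleet_share(batteries_saved):
    return batteries_saved * REPLACE_RATE / SOLD_PER_YEAR

print(f"{fleet_share(400e6):.0%}")  # browsers aligned on Firefox Focus -> 7%
print(f"{fleet_share(294e6):.0%}")  # sober websites -> 5%
print(f"{fleet_share(350e6):.0%}")  # 5% leaner hardware/OS share -> 6%
```

This reconstruction matches the 7%, 5%, and 6% figures given in the following sections, which suggests it is close to the calculation actually used.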

Would reducing the consumption of sites have an impact?

Websites could also be much more sober. We have assumed a consumption close to that of Wikipedia. From our point of view, having audited and measured many sites, this is achievable, but it requires significant actions: optimization of functionalities, reduction of advertising and tracking, technical optimization…

Here is an example of the representation of the energy consumption of the Team website. We see that the page load consumes up to 3 times the reference consumption. The optimization margin is enormous in this specific case, knowing that many sites stay below a factor of 2.

In the case of sober websites, by taking the same assumptions and calculation methods as for the sobriety of browsers, we could save 294 million batteries per year, or reduce the renewal of the fleet annually by 5%.

Is reducing the consumption of the OS possible, and would it have an impact?

The question of the impact of the hardware and the OS often arises. To take it into account, we have several data points at our disposal. An important one is the benchmark consumption of the smartphone: the consumption of the hardware and the OS alone. For the Galaxy S7, this consumption is 50 µAh/s.

Taking the same assumptions as for the total consumption (2,774 billion Ah), the annual consumption attributable to the hardware and OS share would be 1,268 billion ampere-hours, or 45% of the total.

So is this the glass ceiling of optimization? Not really, because there is still plenty of room: Android itself, for example. We carried out an experiment showing that it is possible to significantly reduce the consumption of Android functionalities. Manufacturers' overlays are another lever for reducing consumption.

Based on our experience, we estimate that a 5% reduction in consumption is entirely achievable. This would save 350 million batteries, or 6% of the fleet.

What environmental gains can we hope for?

Applying digital sobriety at these different levels would cut the number of batteries used worldwide each year by more than half.

Even assuming that users do not systematically renew their smartphones for reasons of lost autonomy, or that some only replace the worn battery, annual smartphone renewal could be reduced by 17%.

In the best-case scenario, assuming that most users replace their batteries, the potential savings would be 2 million tonnes CO2eq. But the gains could be much greater if we consider that replacement practices are not changing fast enough and that users change smartphones rather than batteries: 47 million tonnes CO2eq.

If we are optimistic about an increase in battery capacity, and assume no increase in the impact of software and no extra impact from the larger batteries themselves, the number of batteries used could be halved, and the environmental impact with it. But is that enough? Better to combine increased battery capacity with reduced energy consumption: doubling the capacity while halving consumption yields a fourfold gain!
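
The arithmetic behind the fourfold gain can be sketched as follows, modeling batteries worn per year as consumption divided by capacity times cycle count:

```python
# Batteries worn per year scale with the energy drawn and inversely
# with battery capacity:  batteries = consumption / (capacity * cycles)
def batteries(consumption_ah, capacity_ah, cycles=500):
    return consumption_ah / (capacity_ah * cycles)

base = batteries(consumption_ah=2_774e9, capacity_ah=3.0)
bigger_battery = batteries(2_774e9, 6.0)        # capacity x2 only
bigger_and_sober = batteries(2_774e9 / 2, 6.0)  # capacity x2 AND consumption /2

print(base / bigger_battery)    # 2.0 -> halving the impact
print(base / bigger_and_sober)  # 4.0 -> the fourfold gain mentioned above
```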

Energy on a smartphone: small drops, but a huge impact in the end

We are under the impression that energy is unlimited: we just need to charge our smartphone. However, even if energy were unlimited and impact-free, batteries are consumables. The more we use them, the more we wear them out, and the more we consume non-renewable resources such as rare earth elements, not to mention other environmental, social, and geopolitical costs. We can expect technological developments to improve capacity and battery replaceability, but the potential savings from sobriety are already huge. Replacing batteries is not a miracle solution: even if it extends the life of the smartphone, the old battery must be thrown away or recycled, and lithium recycling is not yet assured (p. 57). The stakes are gigantic because we use our smartphones for many hours, and because we are billions of users.

The exercise we have carried out is entirely forward-looking: all browser vendors would have to integrate sobriety, and all sites would have to be eco-designed. It does show, however, that optimizing the energy of apps and websites matters for the digital environmental footprint. Some people, seeing only the energy of recharging, neglect this aspect. Yet, as this projection shows, the environmental gains are much greater.

This figure is both significant and modest: 47 million tonnes CO2eq for the world is 6% of the French footprint. However, CO2 is not the only metric to watch. Other significant problems loom, for example a possible lithium shortage by 2025, as well as water.

To all this, we should add issues associated with new practices and new materials:

… the sector is constantly evolving to respond to challenges that are sometimes commercial, sometimes economic, sometimes regulatory. The battery example illustrates this trend well. While we had become familiar with "classic" lithium-ion batteries, which mainly contain lithium, carbon, fluorine, phosphorus, cobalt, manganese, and aluminum, new models have appeared: first lithium-ion-polymer batteries, then lithium-metal-polymer batteries. The list of metals potentially involved, already substantial, has thus grown considerably, with iron, vanadium, manganese, and nickel, but also rare earth elements (cerium, lanthanum, neodymium, and praseodymium).

SystExt Association (Extractive Systems and Environments) 

Taking into account the environmental, social, and geopolitical issues around batteries, dividing the number of batteries used by 2 is really not enough! All the optimization levers should now be activated. And if we want to achieve ambitious goals, all players, manufacturers, OS and browser vendors, digital companies… have their share of the work. Continuing to invoke magical reductions from future technologies, to claim that energy need not be optimized, to shift the blame onto other actors or sectors, to explain that focusing on usage is a mistake… all of this just moves the problem around. We all need to roll up our sleeves and solve it now!


What resources should be reduced in the context of good software eco-design practices: Processing on the server-side or on the user side?

Reading Time: 2 minutes

One of the first answers to the question "which resources?" is: all of them! But a more specific answer is needed, because certain practices favor savings on the server-side, others on memory rather than CPU. Some optimizations win in all areas, but unfortunately the behavior of computer systems is more capricious than that!

The guiding principle is to extend the life of the hardware, whether for the terminal or for the servers. We will see that for environmental gains, reducing energy will also be an improvement axis.

In a previous article, we discussed the need for energy optimization in the case of mobile devices. Today we are trying to answer the question: what architecture to put in place, and in particular to put processing on the user side or on the server-side? 

The answer: server-side processing is to be preferred…

The answer is quite simple: load the servers! Indeed, when we look at LCAs and impact analyses, we observe a much stronger impact on the user side (see for example our study on the impact of playing a video). Servers are shared and optimized to absorb load. Operators can also manage load fluctuations with power capping (absorbing peak load while keeping energy consumption under control). The lifespan of servers can also be managed (hardware can last up to 10 years). Compliance with a Green IT policy can also be better monitored and shared.

Terminals, on the other hand, despite having powerful processors, have none of these advantages: very little control over lifespan, no management of system health, fragmentation of capabilities and therefore of behavior…

… but watch out for resources and scalability

While it is better to put computations on the server-side, this is no excuse for not limiting the impact there. Scalability is possible but must be monitored, because adding a virtual instance contributes to the future need to add a physical machine, and therefore increases the environmental impact.

In addition, limiting power consumption will be necessary, because high power demand translates into higher consumption in the server rack and greater cooling needs.

And what about the cost of network round trips in this case?

The question of network exchanges arises if we move calculations to the server-side. Today this is a false problem, because there are already too many exchanges. With network and server resources seen as "free" and architectures moving more and more toward services/microservices, processing on the user side already calls the data centers far too often. It will rather be necessary to control the number of network exchanges, whatever the choice of architecture.

Is this currently the case in architectural practices?

This has not been the trend in recent years. Indeed, the arrival of powerful user platforms, with multicore processors and high-performance network connections, has pushed a lot of processing to the user side. Development frameworks, especially JavaScript frameworks, made this possible.

However, the trend is starting to reverse. We can notably mention Server-Side Rendering (SSR), with for example Next.js, or the generation of static sites with Hugo. We can also see techniques that maximize the use of elements already present on the user's terminal, such as the web browser engine, by using CSS rather than JS.

In the next articles, we will try to answer: which resources (CPU, memory…) should we optimize as a priority?

Users' smartphones: all about environmental impact and battery wear

Reading Time: 4 minutes

User terminals: the high environmental impact of the manufacturing phase

User terminals are now the biggest contributors to the environmental impact of digital technology, and this phenomenon is set to increase. This trend is mainly explained by households' ever-greater equipment in smartphones, by the reduced lifespan of this equipment, and by its significant environmental impact, an impact mainly due to the manufacturing phase. Ericsson, for instance, reports a use-phase impact (i.e. linked to recharging the smartphone battery) of 7 kg CO2eq out of a total impact of 57 kg CO2eq, or only 12% of the total. The total impact covers the different phases of the smartphone life cycle: manufacture, distribution, use, and end-of-life treatment.

Hence the interest for manufacturers to work on this embodied energy through eco-design, but also by making it possible to extend the life of equipment through repairability and durability.

Given all these observations, it might seem unproductive from an environmental point of view to reduce the energy consumption of smartphones; the simplistic approach would be to put that impact aside. But the reality is quite different, and the electrical flows involved in the use of mobile devices are more complex than one might think.

Explanation of battery operation

Current smartphones are powered by lithium-ion batteries. On average, batteries on the market have a capacity of 3000 mAh, and the trend is upward. The battery can be thought of as a consumable, just like a printer cartridge: it wears out over time, and the original capacity you had when you bought the smartphone is no longer fully available. That is, the 100% indicated by the phone no longer corresponds to 3000 mAh but to a lower capacity, and this initial capacity cannot be recovered.

Battery wear is primarily caused by full charge/discharge cycles. A cycle corresponds to an empty battery being recharged to 100%: I leave home in the morning with a phone charged to 100%, the battery drains, and I recharge it to 100% in the evening. One complete cycle per day!

If you charge your phone more often, you still accumulate cycles: several partial cycles are ultimately equivalent to one complete cycle.

The more the number of cycles increases, the more the remaining capacity decreases. This wear leads to the end of battery life. Current technologies allow up to 500 cycles.

After these cycles, the battery capacity is only 70% of the initial capacity. Beyond this annoying loss of autonomy, the battery suffers from certain anomalies, such as a rapid drop from 10% to 0%.

Note that this effect is reinforced by the intensity of the discharge: if the phone consumes a lot (for example during video playback), battery wear will be greater.
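
As an illustration, a deliberately simplified linear wear model (real lithium-ion chemistry is more complex and nonlinear) captures the cycle-counting logic described above:

```python
# Simplified battery wear model: capacity falls linearly by 30% of the
# initial 3000 mAh over 500 full charge/discharge cycles, and partial
# recharges accumulate into equivalent full cycles.
INITIAL_MAH = 3000
CAPACITY_LOSS = 0.30   # capacity lost at end of life (down to 70%)
MAX_CYCLES = 500

def remaining_capacity(equivalent_full_cycles):
    cycles = min(equivalent_full_cycles, MAX_CYCLES)
    wear = CAPACITY_LOSS * cycles / MAX_CYCLES
    return INITIAL_MAH * (1 - wear)

# Two half recharges count as one full cycle.
daily_partials = [0.5, 0.5]
cycles_after_a_year = 365 * sum(daily_partials)       # 365 equivalent cycles
print(round(remaining_capacity(cycles_after_a_year)))  # 2343 mAh left
print(round(remaining_capacity(MAX_CYCLES)))           # 2100 mAh, i.e. 70%
```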

Impact on obsolescence

Loss of autonomy is a cause of renewal for users: 39% in 2018. The phenomenon is reinforced by the fact that batteries are increasingly non-removable, which leads users to replace the whole smartphone. In addition, even when the drop in autonomy is not the only replacement criterion, it adds to the other causes to form a set of signals telling the user it is time to change smartphone (marketing effect, power, new features…).

We can therefore easily make the link between the mAh consumed by applications and the kg of CO2 due to manufacturing. By reducing these mAh, we would greatly reduce battery wear, the life of smartphones would be extended on average, and the initial CO2 cost would therefore be better amortized. A mAh consumed weighs far more through the smartphone's embodied energy (manufacturing) than through the energy needed to recharge it.

For example, for a typical smartphone, the recharge energy costs 0.22 mgCO2/mAh, compared to 14 mgCO2/mAh for the embodied share.
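
Putting the two per-mAh factors side by side, with the 3000 mAh / 500-cycle lifetime assumed throughout this article:

```python
# Carbon cost per mAh drawn from the battery (figures from the text):
RECHARGE_MG_PER_MAH = 0.22   # electricity to recharge
EMBODIED_MG_PER_MAH = 14     # amortized manufacturing impact

print(EMBODIED_MG_PER_MAH / RECHARGE_MG_PER_MAH)  # ~64x heavier

# Over a battery's whole life (3000 mAh * 500 cycles):
lifetime_mah = 3000 * 500
embodied_kg = EMBODIED_MG_PER_MAH * lifetime_mah / 1e6
recharge_kg = RECHARGE_MG_PER_MAH * lifetime_mah / 1e6
print(embodied_kg, recharge_kg)  # ~21 kg vs ~0.33 kg CO2eq
```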

Technological solution

This problem can always be approached through the technological axis: increased capacity, fast charging… But take fast charging: it will not change the problem; on the contrary, it may worsen it by potentially increasing the number of cycles. It is not by enlarging the fuel tanks of cars that we will reduce the impact of the automobile. Improving battery technology is beneficial, but reducing the consumption of smartphones would be even more beneficial for the environment and the user.

Note that CO2 is not the only impact to consider: battery manufacturing is on the whole very costly in environmental and social terms, not to mention strategic resources with geopolitical stakes such as cobalt or lithium. Extending battery life is critical.

Digital sobriety everywhere, digital sobriety nowhere? 7 mistakes to avoid!

Reading Time: 4 minutes

Everyone is talking about digital sobriety. From web agencies to politicians, including ESNs, all communicate on the subject, on the explanation of the impact, on good practices, on the willingness to go there. But what is it really?

We have been working on the subject within Greenspector for 10 years and we can in all modesty give our opinion on the real situation of the actors and especially on the barriers that will have to be overcome to really do eco-design and sobriety.

We have educated developers, students, and leaders. We have supported teams and applied good practices. We have measured apps and websites. It took motivation to stay the course. The context is different now, and we are happy to see so much communication and so many actors involved. However, we believe that nothing is won yet! Here are some tips and analyses from veterans in the field, grouped into 7 mistakes to avoid.

Associate digital sobriety only with a department

In many of the actions we have carried out, one component proved essential: taking the problem into account at every level. Developer, designer, Product Owner, decision-maker. And customer… Without that, the project will not get far: an unfunded project, optimizations the developers do not want to investigate, technical improvements the Product Owners do not accept… At best, improvements will be made, but with only small gains.

The solution is to engage in a shared approach. It takes a little longer (and a little more effort!) but allows the project to be understood and accepted by all.

Focus only on coding practices

The miracle solution, when you think of digital sobriety, is to tell yourself that if developers follow good practices, everything will be fine. We know something about it: we started an R&D project (Green Code) on this axis more than 8 years ago. It was necessary but not sufficient. It is also necessary to work on functionalities, design, content, infrastructure…

Establishing a reference framework of best practices is an important axis, but mainly, at first, to initiate awareness. It is important not to assume that you will have to apply 115 best practices to almost all of a site: the effort would be enormous and the results would not necessarily follow.

Do not use professional tools

Many tools have emerged to evaluate websites. Indeed, it is quite simple on the web to monitor a few technical metrics, such as the size of the data exchanged on the network or the size of the DOM, and to model an environmental impact from them. This is great for raising awareness and for identifying sites that are far too heavy. However, the system on which the software runs is not so simple, and the impact can come from many more elements: a CPU-hungry JS script, an animation…

Taking action with this type of tool makes it possible to start the process, but claiming that software is sober because you have reduced the data size and the size of the DOM borders on greenwashing.

We are not saying this because we are publishers but because we are convinced that it is necessary to professionalize actions.

Fighting over definitions and principles

We have lived it! We have been criticized for our approach to energy. The birth of a domain leads to new principles, new sub-domains, new definitions… This is normal and often requires long discussions. But do we really have time to debate? Are these debates necessary when everyone agrees that we need to reduce the impact of our activities? The complexity and obesity of digital are real and can be felt at all levels. It is time to improve our practices across the board: all goodwill is welcome, all areas need to be explored.

Look for heavy consumers

The findings on the impact of digital technology are increasingly shared. However, teams may be tempted to look for excuses or culprits rather than make corrections that seem more minor. Why optimize your solution when Bitcoin is an abyss of consumption? Why reduce the impact of the front end when the publishers of libraries do nothing? Prioritization is important, but it is often a bad excuse for not seeking gains in your own field.

ALL the solutions are far too heavy, so everyone is stuck with slowness. Everything is uniformly slow. We settle for that and all is well. Being efficient today means achieving a user experience that matches this uniform slowness. We prune only what would be too visible: a page that takes more than 20 seconds to load is too slow, but 3 seconds… is fine. 3 seconds? With the multicore processors in our phones and PCs, and data centers all over the world connected by great communication technologies (4G, fiber…), it's a bit odd, isn't it? Considering the extravagance of resources for the result, 3 seconds is huge, especially since bits move through our processors on nanosecond timescales. So yes, everything is uniformly slow. And that suits everyone (at least on the surface: The software world is destroying itself, manifesto for more sustainable development).

Now let’s start optimizations by not looking for culprits!

Think only about technological evolution 

We are technicians; we look for technical solutions to our problems. And so, in the digital field, we look for new practices, new frameworks. And since the new frameworks are full of performance promises, we believe them! But it is an arms race that costs us resources. This development is surely necessary in certain cases, but we must not focus only on it. We must also invest in cross-cutting areas: accessibility, testing, sobriety, quality… And in people, because it is the teams who will find the solutions for sober digital services.

Do not invest 

Goodwill and awareness are necessary, but change must be financed, because digital sobriety is a change. Our organizations and our tools are not natively made for sobriety; otherwise, we would not be making these observations about the impact of digital today. It is therefore necessary to invest at least enough to train people, to acquire tools, and to give the teams time in the field. A webinar and a training session alone are not enough!

Let us make commitments commensurate with the stakes and the impacts of digital technology on the environment!

The main figures of the carbon impact of e-commerce in France

Reading Time: 6 minutes

E-commerce sites are high-traffic sites (11 million visits per month) with significant time spent (visits of more than 5 minutes), which amplifies their impact. Driven by strong growth in e-commerce, longer journeys, growing mobile use, and infrastructures oversized to guarantee good response times, the e-commerce site is an ideal candidate for a carbon assessment of a digital service, with the environmental responsibility that comes with mass services.


The assessment scope is based on the impact of the 100 most visited sites in France over the second half of 2019. It is therefore not exhaustive since the calculations do not take into account all the e-commerce sites with lower traffic.

How to assess the Carbon Impact of an e-commerce site?

To know the Carbon impact of an e-commerce web service, we worked on a simplified method based on real measurements.

On the datacenter and network side, we project the carbon impact from the volume of data exchanged with the device (OneByte method of The Shift Project).

On the user device side, a real measurement on a mid-range Android smartphone with the Chrome browser is run 3 times and averaged, then projected with a carbon impact factor based on the following assumptions: Wi-Fi / mobile network mix, 50% brightness, battery wear at 500 full charge/discharge cycles.
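
The structure of this method can be sketched as follows. The impact factors used here are placeholders for illustration only, not Greenspector's or The Shift Project's actual values:

```python
# Sketch of the assessment structure described above. The two factors
# below are HYPOTHETICAL placeholders, not published coefficients.
NETWORK_DC_GCO2_PER_MB = 0.02   # assumed network + datacenter factor
DEVICE_GCO2_PER_MAH = 0.014     # assumed device factor (incl. battery wear)

def visit_impact(data_mb, device_mah_runs):
    """Average the measurement runs, then sum the projected shares."""
    device_mah = sum(device_mah_runs) / len(device_mah_runs)
    return (data_mb * NETWORK_DC_GCO2_PER_MB
            + device_mah * DEVICE_GCO2_PER_MAH)

# Example: 12 MB exchanged, three device-measurement runs in mAh.
print(visit_impact(data_mb=12, device_mah_runs=[30, 32, 31]))
```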

The average impact of a user journey (= 1 visit)

An average visit of 5 minutes 28 seconds to an e-commerce website in France on a smartphone has a carbon impact of 2 gCO2eq, equivalent to 18 meters traveled by a light vehicle. In other words, 56 visits to an average e-commerce site have the impact of 1 km driven in an average light vehicle.
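
The two car equivalences are mutually consistent; together they imply an emission factor of about 112 gCO2eq per km for the reference vehicle:

```python
# Consistency check of the car equivalences above.
VISIT_G = 2.0                 # gCO2eq per average visit
CAR_G_PER_KM = 56 * VISIT_G   # 56 visits = 1 km  ->  112 g/km implied

meters_per_visit = VISIT_G / CAR_G_PER_KM * 1000
print(meters_per_visit)  # ~17.9 m, the "18 meters" quoted above
```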

The distribution of sites is fairly homogeneous between 0.5 and 3 grams, with some extreme values. Nevertheless, the differences are large: from 0.5 g to 34 g CO2eq, i.e. a ratio of 68× across the Top 100 e-commerce sites in France. These differences are partly explained by the fact that visit time itself varies by a factor of 5 (from 3 to 15 minutes).

When we project the impacts on monthly visits, an e-commerce website has an average carbon impact of 23.8 Tons CO2eq/month.

The sum of the impacts of the top 100 e-commerce sites is 2,380 tonnes CO2eq per month: the equivalent of 21 million km driven by an average car in France, 531 trips around the Earth by car, or 19,636 average vehicles circulating in France, i.e. the fleet of a town of 40,000 inhabitants.

Projected over one year, that is 28,560 tonnes CO2eq, or about 28.6 kilotonnes!
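
Chaining the averages gives the fleet totals:

```python
# From the average site to the Top 100 annual total.
per_site_monthly_t = 23.8   # tonnes CO2eq / month for an average site

top100_monthly_t = 100 * per_site_monthly_t
top100_yearly_t = 12 * top100_monthly_t

print(round(top100_monthly_t))  # 2380 tonnes/month
print(round(top100_yearly_t))   # 28560 tonnes/year, i.e. ~28.6 kilotonnes
```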

Average impact of a page

To compare e-commerce sites with each other, the visit time, or the number of steps or pages viewed during the visit, must be factored out. To do this, we go back to the basic measurement of one page loaded for 1 minute.

A page from an e-commerce website in France has an average impact of 0.36 g CO2 eq.

As a simple projection: 1,000 pages viewed for 1 minute on an average smartphone have a carbon impact equivalent to 3.2 km driven by a light vehicle. The detailed ranking of the Top 100 e-commerce websites is available here. It is likely to vary with future updates or on requests for re-measurement.

Breakdown of sites visited

Large differences: from 0.5 g to 34 g CO2eq, i.e. a ratio of 68× across the 100 top e-commerce sites in France.

To explain this significant variation in impact, we can note that:

  • Data consumption ranges from 0.6 MB to 55 MB, a ratio of 92×; this is the most discriminating factor explaining the differences in impact.
  • Energy consumption on the mobile device varies by a factor of 4.7.

It is the network share that weighs the most, accounting for 69% of the average impact of an e-commerce site on mobile.

If the mobile were replaced by a PC with a wired connection, the “User workstation” part would be much more important. This distribution of impact of course varies with the e-commerce site.

Low impact website case

Compliance with best practices on the network and low consumption on the device

Eco score Greenspector: 81/100, best Eco score of the Top 100 E-Commerce

Equivalent of 161,675 km by light vehicle for 24.5 million visits/month (source: SimilarWeb, H2 2019)

Heavy impact website case

No respect for best practices on the network and high consumption on the device

Eco score Greenspector: 21/100, the lowest Eco score of the Top 100 E-Commerce

Equivalent to 44,582 km of a light vehicle for 2.1 million visits/month.

More impactful categories of sites than others?

A ratio of 1 to 3 on the impacts per page between categories

Browsing a fashion e-commerce site has about 3 times the carbon impact of browsing an Automotive or Leisure site.

Note: only one site is classified in the Good Deals category.

Projected earnings:

If all the sites were aligned with the most virtuous site in our measurements, we could save over a full year:

  • 15,177 tonnes of CO2eq, more than half of the impacts
  • Or 53% reduction in carbon impact
  • The equivalent of 4,050 trips around the Earth by car
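
The 53% figure is consistent with the annual total for the Top 100 (12 months × 100 sites × 23.8 tonnes per site per month):

```python
# Savings if every site matched the most virtuous one.
ANNUAL_T = 12 * 100 * 23.8   # ~28,560 tonnes CO2eq/year for the Top 100
SAVED_T = 15_177             # tonnes CO2eq saved over a full year

print(SAVED_T / ANNUAL_T)  # ~0.53 -> the 53% reduction quoted above
```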

The major levers for improvement:

E-commerce websites can reduce network volume

  • Adapt content to device / type of connection & connection quality
  • Compression of rich content
  • User cache to avoid content already loaded on a previous visit
  • Limit the number of requests (internal, advertising, external services, etc.)
  • Beware of unsuitable pre-loads

E-commerce sites can also reduce their energy and battery consumption

  • Allow rapid interaction
  • Reduce the consumption of scripts in pages (3D animation, graphic animation, etc.)
  • Reduce trackers / monitoring
  • Evaluate and optimize external services
  • Reduce journey duration
  • Optimize design / graphics / colors

Correlation analysis of carbon data

Correlation analysis between carbon impact and display performance

Taking 20 values from our sample for which we collected performance data, we find no correlation between carbon impact and display performance.
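
For readers who want to reproduce this kind of check, a plain Pearson coefficient is enough. The data below is invented for illustration and is not our measurement sample:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented (carbon gCO2eq, display time in s) pairs for 6 sites:
carbon = [0.5, 1.2, 2.0, 2.4, 3.1, 34.0]
display_s = [1.8, 4.0, 2.1, 6.5, 1.2, 2.9]
print(round(pearson(carbon, display_s), 2))  # close to 0: no clear correlation
```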

The 2 best performing sites are nevertheless also the least impactful sites

The Carbon indicator is an indicator in its own right for the management of an e-commerce website

Correlation analysis between Carbon impact and Greenspector Eco score

The estimated Carbon indicator does not take into account other parameters, such as memory consumption, CPU, number of requests, or compliance with good practices.
The Eco score includes both resource/energy consumption and a score for compliance with good practices.

There is a “satisfactory” correlation between the estimated Carbon impact and the measured Ecoscore Greenspector.

Please note, the Carbon indicator does not cover all environmental indicators.

The carbon impact of the Top 100 E-Commerce websites

Reading Time: < 1 minute

Sales made on e-commerce sites are increasing each year, more and more on the go or from a smartphone at home. We have never consumed so much through these web platforms as we do today. A debate persists between the ecological impact of e-commerce logistics and that of a purchase made in-store; it all depends on certain parameters (delivery time, geographic location of the store, logistics used, etc.). On top of this logistical question comes the environmental impact of the digital side of e-commerce purchases. How can we estimate the share that these online purchases represent in the environmental impact of our lives as consumers?

To answer this question for e-commerce in France, we took as a basis the ranking of the largest e-commerce sites in France (Top 100 E-Commerce: E-Commerce Nation & SimilarWeb study) and added other sites on request. We measured energy and resource consumption on a mid-range smartphone, which allows us to assess the carbon impact across the entire chain: device (Greenspector methodology), network, and datacenter (OneByte method of The Shift Project). This evaluation is based on the home page of each e-commerce site and a 1-minute protocol. The Greenspector Eco score completes the assessment, covering both compliance with good practices and other resource metrics not included in the carbon impact.
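The shape of such a chain-wide estimate can be sketched as below. All the coefficients are invented placeholders for illustration, not the actual Greenspector or Shift Project values:

```python
def page_carbon_geqco2(device_mah: float, data_mb: float) -> float:
    """Toy device + network + datacenter estimate for one 1-minute page visit.

    All coefficients are illustrative placeholders, NOT the real
    Greenspector or Shift Project figures.
    """
    VOLTAGE = 3.85                 # typical Li-ion battery voltage (V)
    GRID_G_PER_KWH = 442.0         # placeholder grid intensity (gEqCO2/kWh)
    NET_KWH_PER_MB = 0.0002        # placeholder network energy per MB
    DC_KWH_PER_MB = 0.0001         # placeholder datacenter energy per MB

    device_kwh = device_mah / 1000 * VOLTAGE / 1000   # mAh -> kWh
    net_dc_kwh = data_mb * (NET_KWH_PER_MB + DC_KWH_PER_MB)
    return (device_kwh + net_dc_kwh) * GRID_G_PER_KWH

# Hypothetical visit: 8 mAh drawn on the device, 2.5 MB transferred.
print(round(page_carbon_geqco2(device_mah=8.0, data_mb=2.5), 3))
```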

Ranking of the carbon impact of the Top 100 E-commerce websites

Position | Website | Total gEqCO2 per page/minute | Ecoscore | Measurement date

Your e-commerce website is not in this ranking? Contact us to have it measured and appear in this ranking!

The Top 10 myths of frugal ICT

Reading Time: 5 minutes

I have been working in GreenIT for more than 8 years, and I have seen several studies and initiatives launched lately. This is a very positive sign and shows a real dynamic to change the impact of ICT. All actions, whether small-scale, like simple awareness-raising, or larger-scale, like optimizing a website with millions of visitors, are worth taking given the climate emergency.

However, it’s important to avoid any greenwashing and to understand the real impact of the good practices mentioned (are they really all green?).

Myth 1 – High-performing software is frugal software.


High-performing software is software that displays quickly. This says nothing about its frugality. On the contrary, some practices put in place to speed up display actually work against sobriety: for example, deferring script loading until after the page is displayed. The page appears quickly, but many processes then run in the background and drive up resource consumption.

Myth 2 – Optimizing request sizes and page weight makes the software more frugal.

True and false

True, because fewer resources will indeed be used on the network and on servers, which means less environmental impact. It goes in the right direction.

False, because evaluating software frugality cannot rest on these technical metrics alone. Some elements can have an equally important impact: a carousel on a home page, for example, may be quite light in weight and requests (if optimized) yet still have a strong impact on user-side resource consumption (CPU, graphics processing, etc.).
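The "true" half is easy to see with a tiny sketch: compressing a repetitive HTML payload cuts the bytes sent over the network (the payload here is invented for illustration):

```python
import gzip

# Hypothetical repetitive HTML payload (illustrative only).
html = ("<li class='product'>Item</li>" * 2000).encode("utf-8")

compressed = gzip.compress(html)
print(f"raw: {len(html):,} bytes, gzipped: {len(compressed):,} bytes")
# Fewer bytes on the wire means less network and server work,
# but it says nothing about user-side CPU cost once the page runs.
```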

Myth 3 – Automated checking via tools allows me to be green

True and false

True, because it is important to measure things. This allows us to know objectively where we stand, and to improve.

False, because the evaluation is done on technical elements. There is a bias: we only measure what we can automate. This is the criticism that can be made, for example, of Lighthouse (a tool available in Chrome) regarding accessibility: you can build a totally inaccessible site and still score 100. The same criticism applies to the tools used in ecodesign. For example, one such website is an interesting tool to initiate the process; however, its calculation is based on three technical elements: page size, number of requests, and DOM size. These are important elements in a page’s impact, but several others matter too: CPU processing from scripts, graphics processing, how well or badly the radio cell is used… all elements that can create false positives.
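To make the limitation concrete, here is a minimal sketch of a score built only from those three metrics. The weights and reference values are invented for illustration; this is not the formula of any real tool:

```python
def naive_eco_score(page_kb: float, requests: int, dom_size: int) -> float:
    """Toy score from the three metrics such tools rely on.

    Weights and reference values are invented for illustration;
    they are NOT the formula of any real tool.
    """
    # Normalise each metric against an arbitrary "heavy page" reference.
    size_part = min(page_kb / 2048, 1.0)    # 2 MB reference
    req_part = min(requests / 100, 1.0)     # 100 requests reference
    dom_part = min(dom_size / 1500, 1.0)    # 1500 DOM nodes reference
    # Higher score = lighter page. CPU, graphics, and radio usage are
    # invisible to this formula: a script-heavy page can still score well.
    return round(100 * (1 - (size_part + req_part + dom_part) / 3), 1)

print(naive_eco_score(page_kb=500, requests=30, dom_size=600))
```

A page full of heavy animations could score exactly the same as a static one, which is precisely the false positive described above.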

Measurement software will be a useful complement 😉

Myth 4 – My software uses open-source and free code, so I’m green


Free software is software like any other. It suffers from the same obesity and will therefore potentially be just as resource-hungry. On the other hand, free software has a stronger capacity to integrate good efficiency practices. It still remains to implement them, or at least to begin evaluating the impact of the solution…

Myth 5 – The impact is more on the datacenter, on the features, on that …

True and false

Every piece of software is different in its architecture, use, implementation, features… No serious study can certify, in general, that one domain has more impact than another. In some cases the impact will be mostly in the datacenter (for example, computation software), in others on the user side (for example, mobile applications). Likewise, some software will be obese because of its multiple features, other software because of bad coding or an overweight external library.

Myth 6 – Ecodesign requires a structured and holistic approach

True and false

True, because it is indeed necessary to involve all the actors of the company (developers, but also Product Owners and business departments) and to have a coherent strategy.

However, starting process and product improvement through small, isolated actions is also very positive. Software heaviness has reached a state where any isolated positive action is worth taking.

Both approaches are complementary. Avoiding the application of certain practices while waiting for a structured approach (which can be cumbersome) would be dangerous for the optimization and competitiveness of your software.

Myth 7 – Green coding does not exist, optimization is premature…


This argument has existed since the dawn of (software) time. Implemented code, legacy code, libraries… avenues for optimization are numerous. My various audits and team coaching engagements have shown me that optimization is possible and that the gains are significant. To believe otherwise would be a mistake. And beyond optimization, learning to code more green is a learning approach useful to all developers.

Myth 8 – My organization is certified green (ISO, responsible ICT, Lucie…), so my product is green.


All these certifications do help ensure you are on the right track to produce more respectful software. Far be it from me to say they aren’t useful. However, it must not be forgotten that these are organization-oriented certifications. In a structured industry (agriculture, a factory…), the company’s deliverables are closely aligned with its process: certifying an organic (AB) farm ensures the product really is organic.

However, in the world of software it is not so simple: the quality of deliverables fluctuates greatly, even with a control process in place. In addition, an organization potentially consists of a multitude of teams that do not share the same practices.

It is therefore necessary to check the quality of software products, and to do so continuously. This approach is complementary to certification but mandatory; otherwise we risk discrediting the label (or even sliding into greenwashing).

Myth 9 – Optimizing energy is useless; it’s the CO2 equivalent that matters


Ecodesign work is mainly based on reducing CO2 equivalent (along with other indicators such as eutrophication…) over the entire life cycle of the ICT service. It is therefore important to take this metric into account; without it, we risk missing some impacts of IT. However, as with points 5 to 7, no optimization should be discarded, and it is necessary to understand where the software’s impacts are located. The integration of the energy question into teams is nonetheless urgent. In some cases, energy consumption in the use phase is only part of the impact (compared with embodied energy, for example), but in many cases high energy consumption is a symptom of obesity. Moreover, for software running on mobile devices (mobile applications, IoT), energy consumption has a direct impact on device renewal (through battery wear).

Myth 10 – I offset, so I’m green


It is possible to offset one’s impact through different programs (financing an alternative energy source, reforestation…). That is a very good action, but it is complementary to an ecodesign process. It is important to sequence the actions: I optimize what I can, and I offset what remains.


Frugal ICT is simple because it is common sense. Given the diversity of the software world, however, the findings and good practices are not so simple. The good news is that, given how generally bloated software is and how far behind optimization lags, any action taken will be positive. So don’t worry: start the process; it is just necessary to be aware of a few pitfalls. Be critical, evaluate yourself, measure your software!

The software world is destroying itself … Manifesto for a more sustainable development

Reading Time: 21 minutes

The world of software is in a bad way, and if we do not act, we may regret it. Environment, quality, exclusion… Software Eats The World? Yes, a little too much.

The software world is in a bad way. Well, superficially, everything is fine. How could a domain with so many economic promises for the well-being of humanity go wrong? Asking the question would mean challenging all of that. So everything is fine. We move forward, and we don’t ask ourselves too many questions.

The software world is getting worse. Why? Twenty years of experience in the software world as a developer, researcher, and CTO have given me the chance to rub shoulders with different fields, and the feeling that the problem grows year by year. I have spent the last 6 years in particular trying to push practices and software quality tools to educate developers about the impact of software on the environment. You have to be seriously motivated to want to improve the software world. Good practices do not spread as easily as the latest JavaScript framework. The software world is not permeable to improvements, or at least only to superficial ones, not deep ones.

The software world is getting worse. Everything is slow, and it is not heading in the right direction. Some voices are rising; I invite you to read “Software disenchantment”. Everything is unbearably slow, everything is BIG and BLOATED, everything ends up becoming obsolete. The size of websites is exploding: a web page can be as big as the game Doom. The phenomenon affects not only the web but also the IoT and mobile… Did you know? A text editor was found to need 13% of a CPU while idle, just to render a blinking cursor.

This is not the message of an old developer tired of constant change and nostalgic for the good old days of the floppy disk… It is rather a call for a deep questioning of the way we see and develop software. We are all responsible for this inefficiency (developers, project managers, salespeople…). Saying that everything is fine would not be reasonable, but saying that everything is going wrong without proposing improvements would be even less so.

Disclaimer: you will probably jump, cry FUD, troll, or contradict while reading this article. That’s fine, but please read all the way to the end!

We’re getting fat (too much)

Everything grows: the size of applications, the amount of data stored, the size of web pages, the memory of smartphones… Phones now have gigabytes of memory; exchanging a 10 MB photo by email is now common. This might not be an issue if all software were used, effective, and efficient… but that is not the case; I’ll let you browse “Software disenchantment” for the details. It is hard to say how many people share this feeling of heaviness and slowness, and at the same time everyone has gotten used to it. It’s computing. Like bugs: “your salary wasn’t paid? Arghhh… it must be a computer bug.” IT is slow and we cannot help it. If we could do anything about it, we would have already solved the problem.

So everyone gets used to slowness. Everything is Uniformly Slow Code. We sit on it and everything is fine. Being effective today means reaching a user experience that matches this uniform slowness. We get rid of only what would be too visible: a page that takes more than 20 seconds to load is too slow, but 3 seconds is fine. 3 seconds? With the multicore processors in our phones and PCs, and datacenters all over the world connected by great communication technologies (4G, fiber…), isn’t that a bit strange? If we look at the riot of resources mobilized for the result, 3 seconds is huge, especially since bits move through our processors on the scale of nanoseconds. So yes, everything is uniformly slow, and that suits everyone (at least in appearance). Web performance (follow the hashtag #perfmatters) is necessary, but it is unfortunately an area that does not go far enough. Or maybe thinking in this area cannot go further because the software world is not permeable or sensitive enough to these topics.
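The order-of-magnitude argument can be made concrete with back-of-the-envelope arithmetic (the 2 GHz, 4-core figures are illustrative assumptions for a mid-range mobile CPU, not measurements):

```python
# Back-of-the-envelope: how much compute fits in a "fast" 3-second page load.
# Clock speed and core count are illustrative assumptions.
clock_hz = 2_000_000_000   # 2 GHz
cores = 4
seconds = 3

cycles = clock_hz * cores * seconds
print(f"{cycles:,} cycles available during a 'good' 3-second load")
# Tens of billions of cycles: a riot of resources to display one page.
```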

There are now even practices that consist not in solving the problem but in working around it, and this has become a field in its own right: working on “perceived performance”, or how to use the user’s perception of time to put mechanisms in place so that we don’t need to optimize. The field is fascinating from a scientific and human point of view; from the point of view of performance and software efficiency, a little less. “Let’s find plenty of mechanisms so as not to optimize too much!”

All of this would be acceptable in a world with mediocre demands on the performance of our applications. The problem is that, in order to absorb this non-performance, we scale: vertically by adding ultra-powerful processors and more memory, horizontally by adding servers. Virtualization has allowed us to accelerate this arms race! Except that under the bits there is metal, and metal is expensive and polluting.

Yes, it pollutes: it takes a lot of water to make electronic chips, and chemicals to extract rare earths, not to mention the round trips around the world… Uniform slowness still has a certain cost. But we will come back to that later.

It is necessary to return to more efficiency, to challenge hardware requirements, to redefine what performance is. As long as we are satisfied with this uniform slowness, maintained by solutions that merely keep it from getting worse (like adding hardware), we will not move forward. Technical debt, a notion now largely assimilated by development teams, is unfortunately not suited to this problem (we will come back to this). Here we have a debt of material resources and of mismatch between the user need and the technical solution. We are talking about efficiency, not just performance. Efficiency is a matter of measuring waste. The ISO definition of efficiency covers time behaviour, resource utilization, and capacity. Why not push these concepts further?

We are (too) virtual

One of the problems is that software is considered “virtual”. And that is the problem: “virtual” describes what has no effect (“that which is only potential, in a state of mere possibility, as opposed to what is actual”, according to the Larousse dictionary). Maybe it comes from the early 80s, when the term “virtual” was used to talk about the digital world (as opposed to hardware). “Numérique” refers to the use of numbers (the famous 0s and 1s), but apparently that is not enough and it includes a little too much of the material. So let’s use the term “digital”! The numérique/digital debate in France may seem silly but is important for the issue at hand: “digital” hides the material part even more.

But it should not be hidden: digital services are indeed made of code and hardware, of 0s and 1s that travel over real equipment. We cannot program while forgetting that. A bit that stays on the processor and a bit that crosses the earth will not take the same time, nor use the same resources.
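As a rough illustration, the figures below are commonly cited orders of magnitude (illustrative, not measurements):

```python
# Commonly cited latency orders of magnitude (illustrative, not measured).
latencies_ns = {
    "L1 cache reference":             1,
    "Main memory reference":          100,
    "Round trip within a datacenter": 500_000,
    "Packet Europe -> US -> Europe":  150_000_000,
}

base = latencies_ns["L1 cache reference"]
for step, ns in latencies_ns.items():
    print(f"{step:32s} {ns:>12,} ns  (x{ns // base:,})")
# A bit crossing the Atlantic costs about 8 orders of magnitude more
# time than one staying in the processor's cache.
```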

Developing Java code for a J2EE server and for an Android phone is definitely not the same thing. Specific structures exist to process data on Android, yet common patterns are still used. Developers have lost the link with the hardware, which is unfortunate, because knowing how a processor works is exciting (and useful). Why has this happened? Abstraction and specialization (we’ll come to this later). By losing this insight, we lose one of the strengths of development. This link is still important among hackers and embedded developers, but unfortunately less and less present among other developers.

DevOps practices could restore this link. But here again we often do not go all the way: usually DevOps focuses on managing the deployment of a software solution on a mixed infrastructure (hardware and some software). We should go further, for instance by surfacing consumption metrics and discussing execution constraints… rather than “scaling” just because it is easier.

We can always justify this distance from the hardware: productivity, specialization… but we must not confuse separation with forgetting. Separating trades and specializing, yes. Forgetting that there is hardware under the code, no! A first step would be to teach courses on hardware in schools. Teaching programming does not exempt a school from building serious awareness of the equipment and how it works.

We are (too) abstract

We are too virtual and too far from the hardware because we wanted to abstract ourselves from it. The multiple layers of abstraction have made it possible not to worry about hardware problems and to save time… but at what price? That of heaviness and of forgetting the hardware, as we have seen, but there is much more: how can we understand the behavior of a system with call stacks more than 200 frames deep?

Some technologies are useful but are now used reflexively. This is the case, for example, of ORMs, which have become systematic. No thought is given to their relevance at the start of a project. The result: we add an extra layer that consumes resources and must be maintained, and developers are no longer used to writing native queries. That would not be a problem if every developer knew exactly how the abstraction layers work: how does Hibernate work, for example? Unfortunately, we rely on these frameworks blindly.
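The classic cost of blind ORM use is the N+1 query pattern. A minimal sketch with a plain SQLite database (the tables and data are invented for illustration) contrasts the per-row queries a lazily configured ORM can silently generate with a single native join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ada'), (2, 'Alan');
    INSERT INTO book VALUES (1, 1, 'Notes'), (2, 2, 'Machines');
""")

# N+1 pattern: one query for the list, then one per row (what a lazily
# configured ORM often does behind the developer's back).
queries = 0
authors = conn.execute("SELECT id, name FROM author").fetchall()
queries += 1
for author_id, _name in authors:
    conn.execute("SELECT title FROM book WHERE author_id = ?",
                 (author_id,)).fetchall()
    queries += 1
print("N+1 pattern:", queries, "queries")

# Native join: the same data in a single round trip.
rows = conn.execute("""
    SELECT author.name, book.title
    FROM author JOIN book ON book.author_id = author.id
""").fetchall()
print("native join: 1 query,", len(rows), "rows")
```

With thousands of rows, the gap between N+1 round trips and one join is exactly the kind of consumption the extra layer hides.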

This is very well explained by Joel Spolsky in “The Law of Leaky Abstractions”:

And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder. (…) Ten years ago, we might have imagined that new programming paradigms would have made programming easier by now. Indeed, the abstractions we’ve created over the years do allow us to deal with new orders of complexity in software development that we didn’t have to deal with ten or fifteen years ago (…) The Law of Leaky Abstractions is dragging us down.

We believe (too much) in a miracle solution

The need for abstraction is linked to another flaw: we are still waiting for miracle tools, the silver bullet that will improve our practices once and for all. The ideal language, the framework that goes even faster, the miracle dependency-management tool… Each new framework promises to save development time and make us more efficient. And we believe it; we rush in. We give up the frameworks we had invested in and spent time on… and move to the newest one. This is currently the case with JS frameworks. The history of development is paved with frameworks forgotten, unmaintained, abandoned… We are the “champions” of reinventing what already exists. If we kept a framework long enough, we would have time to master it, optimize it, understand it. But this is not the case. And don’t tell me that without repeatedly reinventing the wheel we would still have stone wheels: innovating would be improving the existing frameworks.

The same goes for package managers: Maven, NPM… In the end, it is dependency hell. The link with abstraction? Rather than handling dependencies by hand, we add an abstraction layer, the package manager. And the side effect is that we integrate (too) easily external code that we do not control. Again, we will come back to this.

With languages, it is the same story. To be clear, I am not advocating staying on assembler and C… Take the Android world: for over 10 years developers have been building tools and Java frameworks. Then, as if by magic, the community’s new language is Kotlin. Imagine the impact on existing applications (if they have to migrate): tools must be recreated, good practices rediscovered… For what gain?

Today the Android team is excited to announce that we are officially adding support for the Kotlin programming language. Kotlin is a brilliantly designed, mature language that we believe will make Android development faster and more fun Source

We will come back later to the “fun” …

Honestly, we see no slowdown in technology renewal cycles; the pace is still frenetic. We will find the Grail one day. The problem, then, is the stacking of these technologies. Since none of them ever really dies and pieces of each are kept, we develop yet more layers to adapt to and maintain those pieces of code and libraries. The problem is not the legacy code itself; it is the glue we develop around it. Indeed, as the article on “software disenchantment” puts it:

@sahrizv :
2014 – #microservices must be adopted to solve all problems related to monoliths.
2016 – We must adopt #docker to solve all problems related to microservices.
2018 – We must adopt #kubernetes to solve all the problems with Docker.

In the end, we spend our time solving internal technical problems and looking for tools to solve the problems we ourselves add; we spend our time adapting to these new tools and adding overlays (see the previous chapter)… and we have improved neither the intrinsic quality of the software nor its fit to the needs that must be met.

We do not learn (enough)

In the end, the frantic pace of change does not allow us to stabilize on a technology. I admit that, old developer that I am, I was discouraged by the switch from Java to Kotlin for Android. It may answer some real challenges, but when I think back to the time I spent learning, setting up tooling… We must go far enough, but not start from scratch. It is common in a field to continually learn and be curious, but within an iterative framework of experimenting and improving. That is not the case in programming, at least in some areas of it, since for some technologies developers do keep iterating (.NET, J2EE…). But that is actually not as “fun”…

So we learn: we spend our time on tutorials, getting-started guides, conferences, meetups… only to apply perhaps 10% of it in a side project or a POC, which will surely end up as a production project.

Since no solution ever really dies while new ones keep arriving, we end up with projects juggling a multitude of technologies and the associated skills. Then we are surprised that the developer recruitment market is clogged. No wonder: there are plenty of developers, but it is difficult to find a React developer with 5 years of experience who also knows Go. The market is fragmented, like the technologies. That may be good for developers, because it creates scarcity and raises prices, but it is not good for the project!

To return to the previous chapter (believing in miracle tools…), we see it in the JavaScript world with “JS fatigue”: the developer must carve a path through the world of JavaScript and its related tools. This is the price of the multitude of tools, and it is an understandable struggle (see, for example, a very good explanation of how to manage it). However, this continuous learning of technologies crowds out learning transverse domains: accessibility, agility, performance… Indeed, what guarantees that the tools and languages we choose today will not have changed in 4 years? That Rust or Go will not have in 2? Nothing suggests a trend.

We have (too much) fun and do not question ourselves (enough)

Unless it is to question one technology in order to replace it with another. Trolling is common in our world (and, I confess, I do it too), but it only ever questions one technology in favor of another, continuing the infernal cycle of renewing tools and languages. A real challenge would be to ask ourselves sincerely: are we going in the right direction? Is what I do sustainable? Is it quality? But questioning is not easy, because it gets associated either with trolling (rightly or wrongly) or with a backward-looking image. How do you criticize a trend associated with technological progress?

Few voices rise against this state of affairs (“Software disenchantment”, “Against software development”…), and it is a shame, because questioning is a healthy practice for a professional domain. It allows us to perform even better.

We do not question ourselves because we want to have fun. Fun is important: if you get bored in your job, you get depressed. On the other hand, we cannot, on the pretext of wanting fun all the time, change our tools continuously. There is an imbalance between the developer experience and the user experience. We want fun, but what does it really bring to the user? A “happier” product? No, we are not entertainers. One can also question the effort put into reducing build times and other developer conveniences. These are important, but we must always balance our efforts: speeding up my build is only worthwhile if I use the time gained to improve the user experience. Otherwise it is just tuning for my own pleasure.

We must accept criticism, practice self-criticism, and stop hiding behind barriers. Technical debt is an important concept, but if it serves as an excuse for bad refactoring, and especially for switching to the latest fashionable technology, we might as well keep the debt. We must also stop the wars of chapels: what is the point of defending one’s language against another? And let’s stop repeating that “premature optimization is the root of all evil”; that comes from the computing of the 1970s, when everything was optimized. Today there is hardly any premature optimization left; the phrase is just an excuse to do nothing and carry on as before.

We are (badly) governed

We do not question ourselves about the ethics of our field or its sustainability. This may be because our field has no real code of ethics (unlike doctors or lawyers). But are we truly free as developers if we cannot be self-critical? Perhaps we are enslaved to a cause driven by other people. The problem is not simple, but we have a responsibility in any case. Without a code of ethics, the strongest and most dishonest wins. The buzz, and the practices designed to manipulate users, are more and more widespread: without dark patterns, your product is nothing. The biggest players (GAFA…) did not get there by chance.

Is the solution political? Should we legislate to better govern the software world? We can see it in the latest legislative responses to concrete problems: GDPR, cookie and privacy notifications… the source of the problem is not addressed. Maybe because politicians do not really understand the software world.

It would be better if the software world structured itself: put a code of ethics in place, self-regulated… But in the meantime, the rule of the strongest continues, at the expense of better structuring, better quality, and real professionalization.

If this structuring is not done, developers will lose control over what they do. And the profession’s lack of ethics is already criticized from outside. Rachel Coldicutt (@rachelcoldicutt), director of DotEveryone, a UK think tank that promotes more responsible technology, encourages non-IT graduates to take an interest in these issues. To continue with that last article, this may be in the very nature of computer science, a field born of the military world, where engineers and developers are trained to follow decisions and orders.

A statement that echoes, in particular, the one made by David Banks (@da_banks) in the insolent The Baffler. Banks emphasized how the world of engineering is linked to authoritarianism. The reason certainly lies in history: the first engineers were of military origin and designed siege weapons, he recalls. They are still trained to “connect with the decision-making structures of the chain of command”. Unlike doctors or lawyers, engineers have no ethical authorities overseeing them and swear no oath. “That’s why engineers excel at outsourcing blame”: if it does not work, the fault lies with everyone else (users, managers…). “We teach them from the beginning that the most moral thing they can do is build what they are told to build to the best of their abilities, so that the will of the user is carried out accurately and faithfully.”

With this vision, we can ask ourselves the question: can we integrate good practices and ethics in a structural and internal way in our field?

Development, like any organization, follows (too many) absurd decisions

The world of software is embedded in a traditional organizational system: large groups, outsourcing via IT and digital services companies, web agencies… All follow the same IT project management techniques, and all are heading into the wall. No serious analysis is made of the total cost of ownership of a piece of software (TCO), of its impact on the company, its profit, its quality… What matters is speed of release (time to market), feature overload, immediate productivity. First because people outside this world know too little about the technicality of software: it is virtual, therefore simple. Except it isn’t. Business schools and other management factories have no development courses.

We continue to estimate IT projects as if they were simple projects, while movements like #NoEstimates propose innovative approaches. Projects continue to fail: the CHAOS Report announces that only about 30% of projects go well. And in the face of this bad governance, technical teams keep fighting over technologies. Collateral damage: quality, ethics, the environment… and ultimately the user. It would not be so critical if software did not have such a strong impact on the world. Software eats the world… and yes, we eat it too…

One can question the benevolence of these companies: are they only interested in profit, whatever the price, leaving the software world in these doldrums? The answer may come from sociology. In his book “Les Décisions Absurdes”, Christian Morel explains that individuals can collectively make decisions that run totally counter to their purpose, in particular through the self-legitimization of the solution.

Morel illustrates this phenomenon with the “Bridge on the River Kwai”, in which a hero zealously builds a structure for his enemy before destroying it.

This phenomenon of the “Bridge on the River Kwai”, where action legitimizes itself and becomes its own ultimate goal, exists in reality more than one might think. Morel explains that such decisions are meaningless because they have no purpose other than the action itself: “‘It was fun’: this is how business executives express themselves, with humor and relevance, when one of them has built a ‘bridge on the River Kwai’ (…) Action as a goal in itself supposes the existence of abundant resources (…) But when resources are abundant, the organization can bear the cost of human and financial means that run with the sole objective of functioning.” And the software world globally provides the means to operate: gigantic fundraising rounds, libraries that allow shipping very quickly, seemingly infinite resources… With this abundance, we build many bridges on the River Kwai.

In this context, the developer is responsible for the direction he takes amid this abundance.

Development is (too) poorly managed

If these absurd decisions happen, it is not only the developer's fault but the organization's. And organization means management, in its various forms. Going back to Morel's book, he describes a cognitive trap into which managers and technicians often fall. This was the case with the Challenger shuttle, which was launched despite known problems with a faulty seal: the managers underestimated the risks and the engineers failed to prove them, each side blaming the other for not providing enough scientific evidence. This is often what happens in companies: warnings are raised by some developers, but management does not take them seriously enough.

This has also happened in many organizations that wanted to quickly develop universal mobile applications. In this case, the miracle solution (here we go again) adopted by decision-makers was the Cordova framework: no need to recruit specialized iOS and Android developers, the ability to reuse web code… A simple calculation (or the lack of one) showed only benefits. On the technical side, however, it was clear that native applications were much simpler and more efficient. Five years later, conferences are full of feedback on failed projects of this type being restarted from scratch in native code. The link with Challenger and cognitive traps? The management teams had underestimated the risks and the actual cost, and did not take the technical teams' comments into account. The technical teams had not sufficiently substantiated and proven the ins and outs of such a framework.

Here we come back to the previous causes (the silver bullet, having fun…): we need real engineering and real analysis of technologies. Without this, technical teams will remain unheard by management. Tools and benchmarks exist, but they are still too little known. The ThoughtWorks Technology Radar, for example, classifies technologies in terms of adoption.

At the same time, company management must stop believing that miracle solutions exist (back to the "virtual" cause). Costs, the TCO (Total Cost of Ownership), and the risks of technology choices really have to be calculated. We keep choosing BPM and low-code solutions that generate code, but their hidden risks and costs are significant. According to ThoughtWorks:

“Low-code platforms use graphical user interfaces and configuration in order to create applications. Unfortunately, low-code environments are promoted with the idea that this means you no longer need skilled development teams. Such suggestions ignore the fact that writing code is just a small part of what needs to happen to create high-quality software—practices such as source control, testing and careful design of solutions are just as important. Although these platforms have their uses, we suggest approaching them with caution, especially when they come with extravagant claims for lower cost and higher productivity.”

We divide (too much) … to rule

This phenomenon of absurd decisions is reinforced by the complex fabric of software development: historically non-digital companies outsource to digital companies, IT service companies outsource to freelancers… The sharing of technical and managerial responsibility becomes even more complex, and absurd decisions more numerous.
But it does not end there. The use of open source can also be seen as a form of outsourcing, as can the use of frameworks. We become passive consumers, offloading many problems (which nevertheless have an impact on resources, quality…).

This is all the easier because the field is exciting, and side projects and time spent on open-source work outside office hours are common. The pursuit of "fun" and the time spent then benefit organizations more than developers. In this situation, it is difficult to quantify the real cost of a project. That would not be a problem if it produced top-quality software, but it does not improve quality; on the contrary, the extended organization made up of large groups, IT companies, freelancers, and communities has no limit when it comes to building those famous Bridges on the River Kwai.

The developer is no longer a craftsman of code but a pawn in a system that is questionable from a human point of view. None of this is visible; everything seems fine and everyone is having fun. In appearance only, because some areas of software development go further and make this exploitation much more visible, such as the video game industry with its exploding working hours.

Better professionalization, a code of ethics, or something similar would be useful here. It would make it possible to put safeguards on excesses and on practices that are (directly or indirectly) open to criticism. But I have never heard of a developers' professional body, or any other rallying organization, that could defend such a code.

We (too) often lose sight of the final goal: the user

And so all this clumsiness (bloated software, lack of quality…) ends up in the hands of users. Since we have to release software as quickly as possible, since we do not try to solve internal inefficiencies, and since we do not commit more resources to quality, we make mediocre software. But we have so many tools for monitoring and tracking users, to detect what happens directly on their devices, that we think it does not matter. That would be fine if these tools were put to good use, but the multitude of information collected (on top of the bugs reported by users) is only weakly exploited. Too much information, difficulty pinpointing the real source of a problem… we get lost, and in the end it is the user who suffers. All software is now in perpetual beta testing. Why bother with extra quality, as long as the user keeps coming back for more? And we come back to the first chapter: software that is uniformly slow… and poor.

Taking a step back, everyone can feel this every day, at the office or at home. Fortunately, we are saved by users' lack of awareness of the software world: to them it is a truly virtual, magical world they have grown used to. We put tools in their hands but without any instructions. How can one evaluate a piece of software's quality, its environmental risks, or its security problems without even rudimentary notions of computer science?

Computing in the 21st century is what agribusiness was to consumers in the 20th. For productivity reasons, we have pushed mediocre solutions based on short-term calculations: ever-faster time to market, constantly rising profits… intensive agriculture, junk food, pesticides… with significant impacts on health and the environment. Consumers now increasingly know the disastrous consequences of these excesses, and the agri-food industry must reinvent itself technically, commercially, and ethically. For software, once users understand the ins and outs of technical choices, the software industry will have to deal with the same issues. And indeed, the return to common sense and good practices is not a simple thing for agribusiness. In IT, we are starting to see it through the consequences for users' privacy (but we are only at the beginning).

It is important to reintroduce the user into software design thinking (and not just through UX and marketing workshops…). We need to rethink every aspect of software: project management, the software's impacts, quality… This is the goal of several movements (software craftsmanship, software eco-design, accessibility…), but these practices are still far too confidential. Whose fault is it? We come back to the causes of the problem: on one side (development) we are having fun, and on the other (management) we are chasing nothing but profit. Convenient for building Bridges on the River Kwai… but where are the users (us, actually)?

We are killing our industry (and more)

We are heading in the wrong direction. The computer industry already made mistakes in the 1970s, with non-negligible impacts. The exclusion of women from IT is one of them: not only has it been fatal for parts of the industry, but we must also ask how we can now provide answers for everyone when IT draws on only 50% of the population, with very low representativeness at that. The path back is now difficult to find.

But the impact of the IT world does not stop there. The source and the model for much of computing come from Silicon Valley. Setting aside Silicon Valley's winners, local people face rising prices, downgrading, poverty… Mary Beth Meehan's book captures this in images:

“The flight to a virtual world, whose net utility is still difficult to gauge, coincides with the break-up of local communities and the difficulty of talking to each other. Nobody can say whether Silicon Valley prefigures, in miniature, the world that is coming; not even Mary, who nevertheless ends her work around the word ‘dystopia’.”

In its drive toward technical progress, the software world is also creating its… environmental debt

There are many examples, but the voices are still too weak. Maybe we will find the silver bullet, and the benefits of software will erase its wrongs… but nothing shows that for now; quite the contrary, for it is difficult to criticize the software world. As Mary Beth Meehan says:

“My work could just as easily be swept away or seen as leftist propaganda. I would like to think that by showing what we have decided to hide, we have served some purpose, but I am not very confident. I do not think the people who disagree with us in the first place could change their minds.”

On the other hand, if there are more and more voices, and they come from people who know software (developers, architects, testers…), the system can change. The developer is neither a craftsman nor a hero: he is just a cog in a world without meaning. So, it's time to act…

Apple planned obsolescence explained (for dummies and others)

Reading Time: 2 minutes

At the end of 2017, Apple suffered bad buzz and was accused of intentionally slowing down older iPhones. This feeds the whole discussion on planned obsolescence, a debate that is very much black or white: mean manufacturer versus innocent consumer. Or even the opposite (which surprises me): the idea that the concept of obsolescence was invented by NGOs.

Let’s start from the beginning:

Our phones' batteries are now mainly based on lithium-ion technology. The battery's chemistry degrades with the number of charge/discharge cycles: after 500 cycles, the battery retains only 80% of its original capacity (though the phone's OS recalculates the level so that it still displays a "charge" of 100%). This means that if you have a 3000 mAh battery, after 500 cycles you really only have 2400 mAh.
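As a rough illustration of this degradation, here is a minimal sketch assuming a simple linear capacity fade calibrated to the figure above (80% remaining after 500 cycles). Real lithium-ion ageing is nonlinear and depends on temperature, charge rate, and depth of discharge; the function name and the linear model are our assumptions, not a measured curve:

```python
# Simplified sketch: linear capacity fade, calibrated so that
# 20% of capacity is lost after 500 full charge/discharge cycles.
# Real lithium-ion ageing is nonlinear; this is illustrative only.

def effective_capacity_mah(nominal_mah: float, cycles: int,
                           fade_per_cycle: float = 0.20 / 500) -> float:
    """Remaining usable capacity (mAh) after a number of full cycles."""
    remaining_fraction = max(0.0, 1.0 - cycles * fade_per_cycle)
    return nominal_mah * remaining_fraction

print(effective_capacity_mah(3000, 0))           # 3000.0 mAh when new
print(round(effective_capacity_mah(3000, 500)))  # 2400 mAh, i.e. 80%
```

With this model, the OS rescaling mentioned above simply means the displayed 100% maps onto whatever `effective_capacity_mah` returns at the battery's current cycle count.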

Battery ageing usually goes hand in hand with a loss of battery power, especially in handling peak loads. You may have encountered this situation: your phone or PC shows 10% battery and all of a sudden the level drops, and, as usually happens at a low battery level, the device shuts down without any warning. This is what Apple describes on its blog.

To limit this phenomenon, Apple caps peak loads by limiting the CPU frequency, which generally reduces them. However, there are other big consumers on a phone (GPS, cellular radio…), so we can even wonder whether Apple isn't slowing down other components as well.

Before going further, we need to come back to this whole cycle story. Is it inevitable? A cycle is directly connected to the phone's consumption level, which itself depends on a few things:

  • Hardware consumption
  • OS consumption
  • Your use (amount of calls, video, etc.)
  • Applications consumption

For the first two points, manufacturers usually make an effort. When it comes to usage, you are the one managing it, though there is very little communication about it. As for application consumption, it is not inevitable (reducing it is actually GREENSPECTOR's goal).
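To see how these consumption sources translate into battery wear, here is a rough sketch of how daily consumption in mAh maps to full charge/discharge cycles per year. The per-source breakdown figures are hypothetical, chosen only to illustrate the arithmetic; they are not measurements from this article:

```python
# Rough sketch: converting daily consumption into battery cycles per year.
# The per-source breakdown below is hypothetical, for illustration only.

BATTERY_MAH = 3000  # nominal capacity of the example battery

# Hypothetical daily draw per consumption source, in mAh
daily_draw_mah = {
    "hardware (screen, radios)": 900,
    "operating system": 300,
    "user activity (calls, video)": 600,
    "applications": 600,
}

daily_total = sum(daily_draw_mah.values())          # 2400 mAh per day
cycles_per_year = daily_total * 365 / BATTERY_MAH   # full-cycle equivalents

print(f"Total daily draw: {daily_total} mAh")
print(f"Equivalent full cycles per year: {cycles_per_year:.0f}")
```

With these illustrative numbers (about 292 cycles per year), the battery would pass the 500-cycle mark in well under two years, which is consistent with the renewal rates discussed earlier.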

With that in mind, is Apple responsible for obsolescence? If so, is it planned obsolescence? First off, the actual cause of obsolescence is distributed: is Apple responsible for the overconsumption of specific applications? Some manufacturers try to address this by flagging power-hungry applications; Apple does not show much zeal on this point. Application designers: 0, Apple: 0.
When it comes to usage, Apple provides the strict minimum in terms of communication. It is far more hip to communicate about the launch of an animated emoji than about this, after all. It kind of makes sense, though: users love it. It is also more interesting for the media: stories about the endless queues at every new release never get old. Apple: 0, Media: 0, Users: 0.

Overall, 0 for everyone, so a shared obsolescence! However, the most debatable thing about Apple's handling of battery ageing (which is real) is not the slowdown mechanism itself but the lack of communication. Users are smart enough to understand a message like "Your battery is getting old, we recommend slowing down your phone: Yes / No (I'd rather change the battery)". But again, this does not meet the "hype" requirements, and the product would look "too technical" (which it is). On this last point, since Apple does not alert users about slowdowns, users cannot take responsibility and act either way: they do not know all the facts needed to assess the situation. Most likely, this missing setting is what will push them toward the renewal option. And in that case, yes, Apple is practicing planned obsolescence.