Author: Olivier PHILIPPOT

Digital Sobriety Expert. Author of «Green Patterns» and «Green IT - Gérer la consommation d'énergie de vos systèmes informatiques». Speaker (VOXXED Luxembourg, EGG Berlin, ICT4S Stockholm, ...). Founder of the Green Code Lab, the national association for software ecodesign.

Deleting emails is useless, working on sober email solutions is mandatory  

Reading Time: 2 minutes

The discovery that digital technology was not so virtual after all, and that it could have an impact on the environment, brought a multitude of injunctions, followed by a multitude of criticisms and counter-injunctions. "You have to delete your e-mails", "No, it's like peeing in the shower, it's useless"... Criticism of these actions by digital actors has been quite strong, even though a large part of the "non-technical" population took them to heart (and increased its eco-anxiety!).

These discussions have also turned into a verdict on which is more polluting, use or manufacture: the use of email versus the manufacture of the terminal on which the email is read. Since the latter was announced as the more impactful, the conclusion drawn was that optimizing the email part is useless!

Yes, the impact is concentrated in the manufacturing of the terminals. Yes, the unit impact of an email is low, especially compared to a raclette (a private joke that circulates among the detractors of digital sobriety). These are quite reassuring messages in a binary world. Reassuring to limit eco-anxiety. But mostly reassuring for digital actors who prefer not to deal with the problem and to continue business as usual.

Because yes, there is a potential problem. Because of the scale effect, a low unit impact can lead to a high global impact when there are a large number of users and more and more uses. The 4% share of digital technology in global impacts does not come out of nowhere, especially when you list and observe what happens on the internet every minute (https://www.allaccess.com/merge/archive/32972/infographic-what-happens-in-an-internet-minute). A diversity and a frequency much higher than raclette (for information, we should eat raclette 12 times a year: https://journal-des-etudes.com/selon-la-science-il-faudrait-manger-de-la-raclette-12-fois-par-an/).

The plastic packaging of our food, taken individually, does not have a huge impact: a few milligrams of plastic. But plastic is indeed a global environmental problem. As Gerry McGovern puts it, plastic is an environmental plague, but if you have a plastic bag, use it! https://gerrymcgovern.com/books/world-wide-waste/exploding-plastic-inevitable/

“Avoid plastic packaging. Bring your own bag and avoid the barcodes. Whenever you can replace plastic with another material, do, but don’t replace it simply for the sake of it. If you have a plastic bag, use the hell out of it.” 

As digital players, we need to work on these impacts, because the scale effect means that our solutions have a significant global impact. Using the "order of magnitude" argument based only on the unit impact is not valid.

Behind an email, there is a solution provider. Behind a social network, too. Each digital actor contributes a brick that is ultimately used by a user.

It is therefore necessary to optimize our solutions and to offer better management of them. What about smart email-deletion options built into email solutions? What about features that help write sober emails (attachments, signatures...)? It is possible: vendors have done it for spam management, so why not go further?

As for user awareness, it is necessary, but it must be less anxiety-provoking, without tipping into whataboutism (https://fr.wikipedia.org/wiki/Whataboutism).

Which image format should you choose to reduce energy consumption and environmental impact?

Reading Time: 4 minutes

It is not easy to choose the right type of image to reduce the environmental impact of a website. We can focus on image size or on display performance (are the two related?). This may be the right approach. However, an energy measurement will be more precise if you really want to assess actual consumption and move towards reducing environmental impact.

In 2017, we carried out a benchmark of the then-new WebP format compared to JPEG and PNG. Browser support for WebP was just beginning. Since then, AVIF, a promising image format, has arrived.

Here is an updated study.

Methodology

As a test image, we used the one proposed by Addy Osmani in an article on Smashing Magazine.

The images were generated to have the same perceived quality. Compression qualities are therefore different between formats.

Image 1 : https://res.cloudinary.com/ddxwdqwkr/image/upload/v1632192015/smashing-articles/206-webp-ayousef-espanioly-DA_tplYgTow-unsplash.webp

  • Test 1, default quality given by Squoosh (https://github.com/GoogleChromeLabs/squoosh/): JPEG original (560KB), JPEG@q75 (289KB), WebP@q75 (206KB), AVIF@q30 (101KB)
  • Test 2, target quality JPEG@q70: JPEG (323KB), WebP@q75 (214KB), AVIF@q60 (117KB)
  • Test 3, low quality: JPEG@q10 (35KB), WebP@q1 (35KB), AVIF@q17 (36KB)

Image 2 : https://res.cloudinary.com/ddxwdqwkr/image/upload/v1632080886/smashing-articles/q50jpg.jpg

  • Test 1, target size of 45kB: Original (442kB), JPEG@q50, WebP@q54, AVIF@q36

Image 3 : https://res.cloudinary.com/ddxwdqwkr/image/upload/v1632082138/smashing-articles/q10-25.jpg

  • Test 1, target size of 25kB: Original (716KB), JPEG@q10, WebP@q5, AVIF@q19
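
For readers who want to reproduce such variants, here is a minimal Python sketch of the idea (we used Squoosh; this sketch assumes Pillow plus an AVIF plugin such as pillow-avif-plugin is available, and the quality values are only examples):

```python
# Sketch: export one source image as JPEG, WebP and AVIF at chosen qualities,
# then print the resulting file sizes. Assumes Pillow is installed, plus an AVIF
# plugin such as pillow-avif-plugin (illustrative environment, adjust as needed).
import os
from PIL import Image
import pillow_avif  # noqa: F401  (registers the AVIF encoder with Pillow)

def export_variants(src_path: str, stem: str, q_jpeg: int = 70, q_webp: int = 75, q_avif: int = 60) -> None:
    img = Image.open(src_path).convert("RGB")
    img.save(f"{stem}.jpg", "JPEG", quality=q_jpeg, optimize=True)
    img.save(f"{stem}.webp", "WEBP", quality=q_webp, method=6)
    img.save(f"{stem}.avif", "AVIF", quality=q_avif)
    for ext in ("jpg", "webp", "avif"):
        size_kb = os.path.getsize(f"{stem}.{ext}") / 1024
        print(f"{stem}.{ext}: {size_kb:.0f} KB")

export_variants("original.jpg", "image1")
```

The quality settings then have to be tuned per format until the perceived quality matches, as described above.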

Measurement protocol

  • We display the images one by one in a Chrome browser on a real device (Samsung Galaxy S10).
  • We use Greenspector tools to measure energy consumption and other metrics (CPU, performance…).
  • We perform a measurement throughout the display of each image, repeated 3 times to obtain stable measurements.
  • The measurements are made under different network conditions: WiFi, 3G and 2G.

Results

On image 1 for test 1, we obtain the following energy consumption measurements:

Energy consumption for image 1

If we observe the behavior in WiFi more closely, we can better appreciate the differences in consumption, with the confidence interval.

Energy consumption over WiFi, with confidence interval

In WiFi, consumption is relatively similar between the image types. JPEG consumes a little more than the other formats, followed by WebP and then AVIF.

In 3G and 2G, the difference in consumption is noticeable and AVIF consumes less energy than Webp.

These behaviors are explained by the smaller file sizes in AVIF and Webp.

On image 2, the behavior is the same:

Comparison of energy consumption for image 2 over WiFi, 3G and 2G

For the low-quality test (test 3), there is almost no difference between the formats: the image is small, and the behavior is close to what we see over WiFi (fast transfer):

Comparison of energy consumption over WiFi, 3G and 2G

The behavior is the same with image 2, which targets 45kB:

Energy consumption for image 2 over WiFi, 3G and 2G

This is the same behavior for image 3:

Energy consumption for image 3 over WiFi, 3G and 2G

It is necessary to keep an eye on the new formats (JPEG XL, WebP 2…) as well as on the optimization of the decoding algorithms, because even if the gains in size are significant, decoding could still gain in efficiency. Here, for example, is the CPU processing for image 1 in test 1, where AVIF consumes more CPU than the other formats.

CPU processing for image 1

Recommendations:

In any case, it is necessary to compress images to a lower quality, regardless of the format. A quality above 85% is useless. Energy consumption is divided by 2 for qualities that remain high, and by 6 for low qualities.

Even if energy consumption is relatively similar between formats over WiFi, the gain provided by AVIF and WebP is much greater on less efficient connections. AVIF and WebP are preferable because not all users have a broadband connection! In addition, the reduction in data exchanged helps limit the overall weight of the site.

The choice between WebP and AVIF is not easy and will depend on the type of images and visitors. Google is working on WebP version 2, and formats like JPEG XL are arriving to compete with AVIF. However, considering WebP's other benefits beyond the environmental impact, we currently lean more towards WebP.

For information, here is browser support for the AVIF format:


As well as the Webp format:


In any case, optimize your images, compress, reduce the size and lazy-load!
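
Since browser support differs between formats (see the support tables above), modern formats need a fallback. As a hedged illustration of one server-side approach (a plain Python function, independent of any framework; the HTML `<picture>` element achieves the same thing client-side), the lightest format advertised in the Accept header can be selected:

```python
# Sketch: pick the lightest image format the browser declares it supports.
# Assumes the variants image.avif / image.webp / image.jpg all exist on disk.
def pick_image(accept_header: str, stem: str = "image") -> str:
    """Return the file name of the lightest variant the client accepts."""
    accepted = {part.split(";")[0].strip() for part in accept_header.split(",")}
    if "image/avif" in accepted:
        return f"{stem}.avif"
    if "image/webp" in accepted:
        return f"{stem}.webp"
    return f"{stem}.jpg"  # universally supported fallback

# A browser advertising AVIF support gets the smallest file:
print(pick_image("image/avif,image/webp,image/*,*/*;q=0.8"))  # -> image.avif
# An older browser falls back to JPEG:
print(pick_image("image/*,*/*;q=0.8"))                        # -> image.jpg
```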

CMS, No Code or no CMS at all: which solution to choose for a sober website?

Reading Time: 6 minutes

Today, we are studying the impact of solutions allowing the implementation of websites without knowledge of coding. Among these solutions, we can include CMS (Content Management System) but also No Code solutions.

This article is the first in a series in which we analyze the measurements of 1,500 sites made with our tools.
In these articles, we will deal with the impact of technologies, parameters, and more.

Methodology disclaimer:

We have measured more than 1,500 sites on real devices via our benchmark suites, which run automated tests (launching the site, waiting, scrolling, idling in the background). We then retrieve technology information for these sites via the WepAnalyzer solution.

We have chosen to focus our analysis on energy consumption. Consuming energy affects battery life on user devices, which ultimately impacts the environment.

How to read the graphs?

We visualize the data by “box plot” graphs:

  • The centre bar indicates the median. The rankings are based on this value.
  • The top and bottom of the box are bounded by the 25th and 75th percentiles.
  • The height of the box is called the interquartile range (IQR).
  • The bars at the top and bottom are the whiskers and delimit the expected values.
  • Whiskers extend to 1.5 times the IQR.
  • Values outside the whiskers are shown as dots. They represent either errors or outliers.

We deliberately discarded technologies that did not have enough samples (for example, fewer than 10 sites using a given technology).
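
For reference, the box-plot statistics described above (median, quartiles, whiskers at 1.5 IQR, outliers) can be reproduced in a few lines of Python; the values below are placeholders, not our dataset:

```python
# Sketch: compute the box-plot statistics used in the ranking (median, quartiles,
# whiskers at 1.5 IQR, outliers). The sample values are placeholders only.
import numpy as np

energy_mah = np.array([5.2, 5.8, 6.1, 6.3, 6.4, 6.9, 7.2, 7.5, 9.8, 14.0])  # mAh per scenario

q1, median, q3 = np.percentile(energy_mah, [25, 50, 75])
iqr = q3 - q1
low_whisker = energy_mah[energy_mah >= q1 - 1.5 * iqr].min()
high_whisker = energy_mah[energy_mah <= q3 + 1.5 * iqr].max()
outliers = energy_mah[(energy_mah < q1 - 1.5 * iqr) | (energy_mah > q3 + 1.5 * iqr)]

print(f"median={median}, IQR={iqr:.2f}, whiskers=[{low_whisker}, {high_whisker}], outliers={outliers}")
```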

How are CMS and No Code solutions positioned?

Ranking of CMS according to their median energy consumption.

We find the most widespread technologies (according to Core Web Vitals data), apart from Shopify (those sites were probably classified in the "No CMS" category).

We observe a 20% difference between the most efficient solution (Ametys) and the least efficient (Webflow).

Three CMS are positioned ahead of sites without CMS. Popular CMS like Drupal and WordPress are lagging behind. The last four solutions are No Code solutions.

There are many outliers in some categories (WordPress, sites without CMS). This is explained by the large data set (several hundred sites). An exploratory analysis of these sites generally shows that they involve fairly heavy streaming processing (such as video). Here is an example of a site positioned as an outlier: the loading and idle stages (inactive site) consume a lot because of an animation that runs continuously.

Some possible explanations from the CMS analysis:

Ametys: a domain-specific CMS

Ametys is a specialized CMS used mainly for institutional sites. Our ranking of school websites, many of which use this technology, explains its presence here. Its good position would need to be analyzed from a technical point of view. However, we can assume that a solution targeting a specific type of need is easier to optimize than a generic solution; the integration of many features into a CMS leads to overconsumption. We also observe that these institutional sites include fewer modules than the other sites. It is, ultimately, a matter of functional sobriety.

Squarespace: an all-in-one solution

Squarespace is a CMS hosted by its publisher. On the sites analyzed, we can see that there are few requests (<30), so optimization mechanisms are potentially built in. In addition, all resources are hosted by Squarespace, with assets served from dedicated servers. Hosting of the CMS by the publisher is a good thing, because it allows systematic, shared optimizations. However, these are not necessarily native: the publisher must apply them.

TYPO3: native optimization options

TYPO3, an open-source solution, is in 3rd position. An HTTP Archive ranking confirms this positioning. Fine-grained cache management and native optimization options explain this performance.

Sites without CMS

Sites without a CMS integrate a heterogeneous mix of technical solutions, so it is difficult to draw conclusions. However, their median is positioned very well compared to other solutions (No Code, WordPress, Drupal, etc.), and their lower whisker is the lowest of all the solutions. As a result, significant efficiency can be achieved more easily.

Drupal: a professional CMS

Drupal is positioned just after sites without a CMS. The good positioning of this CMS can be explained by its setup and start-up process, which is less accessible than WordPress's and therefore attracts more professional usage.

Contentful: a headless CMS

Contentful is a headless, "no interface" CMS: it lets you publish content through other tools. There is an efficiency gain on the publishing side (because we do not use our usual tools). However, we observe that sites built with this CMS are only about as efficient as those built with a classic CMS.

WordPress: a simple and very widespread CMS

The WordPress platform is very popular and offers many plugins and themes. But genericity and modularity come at a price. Non-technical users can use this CMS; the counterparts are a potential explosion of plugins and a lack of configuration of the CMS for performance and efficiency. The lower whisker shows that the CMS can be efficient. However, this requires a lot of work.

Wix, Webflow, SiteCore, Adobe: No Code or equivalent solutions

These solutions offer users the possibility of creating a website without coding knowledge. The median is high, and the lower whiskers are also higher than those of other solutions. This shows that they are heavier solutions.

Conclusion 

From a statistical point of view, CMS solutions do not all have the same efficiency. An initial design that takes optimizations into account is essential to achieve good performance (the case of TYPO3). We observe that end-to-end control, combined with good-practice implementation (Squarespace), also makes it possible to achieve a good level of efficiency. In the same way, specializing a CMS (Ametys), and therefore the options that go with it, allows good results to be obtained.

On the other hand, a very generic and modular CMS (WordPress), even if potentially efficient initially, will bring bloat. In the same way, No Code adds heaviness. The causes of this heaviness remain to be identified: it can come from levels of abstraction, but also from rendering possibilities (interactivity, animations, etc.) that are easy to add and lead the user to add more than is necessary. In addition, the use of a "generalist" CMS is also potentially a sign of a poorly specified need.

For a CMS solution (and more generally any solution), sobriety will not be innate. It will be necessary to apply a set of good practices:

  • Efficient architecture and technology, although with current technologies the difference between solutions is very small, and the impact comes more from the misuse of technologies.
  • Native integration of optimizations, or optimizations easily activated by the user.
  • Mechanisms to limit features, or at least to make the user aware of bloat.
  • More generally, think end to end, taking hosting and CDN (Content Delivery Network) into account; without going all the way to fully managed end-to-end solutions, we see that the distribution of systems is not necessarily a good thing.
  • To keep offering more flexibility to users, and among other things to allow non-technical people to create sites, optimization solutions must be integrated natively, which is not at all the case today.

Do you want a CMS included in this ranking? Contact us and send us at least 20 links to sites using the technology; we will integrate them into the measurements and into our ranking!

In our next article, we will analyze the WordPress data in more detail to see which parameters and configurations influence environmental performance.

Digital sobriety for more resilience

Reading Time: 2 minutes

A fragile industry

The Covid-19 crisis made visible the weaknesses of the digital and electronics world: the interdependence of economic and technical systems. The 2020 lockdowns led to a drastic reduction, or even a halt, in the production of electronic circuits in China, impacting production worldwide (see the iPhone 13 and its stock shortages).

But the Covid-19 pandemic is not the only cause of strain on the supply system. At the beginning of 2021, Taiwan, another important production location for electronic circuits, was affected by a drought, which contributed to reinforcing the shortage already under way.

Health and environmental crises can also be accompanied by geopolitical crises and wars. The war in Ukraine, for example, has exposed one more weakness in these complex supply chains: a risk to the production of neon gas, necessary for the manufacture of chips. Most of this neon is produced in Ukraine.

 

Sobriety is one of the resilience solutions

We can expect the electronics industry to build resilience through relocation; however, some solutions (relocating the extraction of materials) are hard to envision. In the same way, "digital sovereignty" would not solve this problem, or at best it would "only" address the dependency on server hardware.

Sobriety is primarily seen as a way to reduce one's environmental footprint. That is true, but it also has the advantage, beyond reducing environmental impact, of extending the lifespan of equipment, reducing the consumption of resources (CPU, for example), and optimizing the use of equipment capacity...

Thanks to these benefits, digital services would become less dependent on electronics. Making digital soberer would therefore limit the impact of these crises.

Make no mistake

Although much discussed in the digital world, digital sobriety is still not implemented enough. Its implementation costs are still debated, as is the argument that the impact lies more in hardware than in usage. Endless debates continue on the network's impact (focusing on energy rather than CO2, disregarding global problems, etc.), as well as counterarguments on whether it is even necessary to optimize the CO2 impact of our solutions, since we have low-carbon energy in France.

Dismissing the digital sobriety approach on the pretext of its drawbacks means not fully taking into account the place of digital technology in our world. Above all, it means continuing to develop tools that will potentially not work given their lack of resilience.

Allowing the operation of digital services on “low-end” equipment and limited networks is, for example, an approach that goes in the direction of digital sobriety. But this is only the beginning of a real process of sobriety. The road is long, and unfortunately, the crises are already here.

There can be no doubt that sobriety is essential in our young digital world.

How does Greenspector assess the environmental footprint of digital service use?

Reading Time: 6 minutes

Foreword: Assessing the impact of the use

This note briefly describes the methodology we use at the date of its publication.

As part of our continuous improvement process, we are constantly working to improve the consistency of our measurements as well as our methodology for projecting environmental impact data.

We assess the environmental impacts caused by the use of a digital service.

This analysis is based on a Life Cycle Analysis (LCA) method, but it is not about performing the LCA of a digital service.​

Such an analysis would be an exercise on a much broader scope, which would include elements specific to the organization that created the software.

In the LCA of a digital service, it would be appropriate, for example, to include in its manufacturing phase the commuting of the project team (internal staff and service providers), the heating of their premises, the PCs and servers needed for development, integration and acceptance testing, on-site or remote meetings, etc.

Environmental footprint assessment methodology

Our approach

The chosen model is based on the principles of Life Cycle Assessment (LCA), mainly as defined by ISO 14040.

It consists of a complete Life Cycle Inventory (LCI) and a simplified Life Cycle Assessment. The LCI is predominant in our model: it ensures that we have reliable and representative data. In addition, the LCI thus obtained can, if necessary, be integrated into more advanced LCAs.

We assess the environmental impact of digital services on a limited set of criteria:

This methodology has been reviewed by the EVEA firm – a specialist in ecodesign and life cycle analyses.
Note on water resource: Greywater and blue water are taken into account at all stages of the life cycle. Green water is added to the manufacturing cycle of terminals and servers. See the definition of the water footprint.

Quality management of results

The quality of LCA results can be modelled as follows1:

Quality of input data x Quality of methodology = Quality of results

To improve the quality of the input data, we measure the behaviour of your solution on real devices. This limits the use of models, which are potential sources of uncertainty.

To manage the quality of the results, we apply an approach that identifies the sources of uncertainty and calculates the uncertainty of the model. Our method of managing uncertainties uses fuzzy logic and fuzzy sets2.

Ultimately, unlike other tools and methodologies, we can provide margins of error with the results we give you. This allows a more serene communication of environmental impact to stakeholders (users, internal teams, partners, etc.).

1Quality of results: SETAC (Society of Environmental Toxicology and Chemistry 1992)

2Although often mentioned in the literature dealing with uncertainties in LCA, this approach is little used; stochastic models such as Monte Carlo simulations are often preferred (Huijbregts MAJ, 1998). In our case, the use of fuzzy logic seems more relevant because it allows us to deal with epistemic inaccuracies, notably those due to expert estimates.
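
To give an intuition of this fuzzy-number idea, here is a deliberately simplified sketch (triangular numbers combined term by term). It is an illustration under assumed values, not Greenspector's production model:

```python
# Simplified illustration of fuzzy (triangular) uncertainty propagation.
# Each quantity is (min, most likely, max); this is NOT the production model.
from dataclasses import dataclass

@dataclass
class Tri:
    lo: float
    mode: float
    hi: float

    def __add__(self, other: "Tri") -> "Tri":
        return Tri(self.lo + other.lo, self.mode + other.mode, self.hi + other.hi)

    def scale(self, k_lo: float, k_mode: float, k_hi: float) -> "Tri":
        # Multiply by an uncertain positive factor (e.g. an emission factor).
        return Tri(self.lo * k_lo, self.mode * k_mode, self.hi * k_hi)

# Hypothetical example: energy measured on the device (Wh) times an uncertain
# grid emission factor (gCO2e/Wh), plus an uncertain network share (gCO2e).
device = Tri(0.010, 0.012, 0.014).scale(0.05, 0.06, 0.08)
network = Tri(0.03, 0.08, 0.15)
total = device + network
print(f"impact ≈ {total.mode:.2f} gCO2e (range {total.lo:.2f} to {total.hi:.2f})")
```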

Calculation steps

Phases taken into account for the equipment used

Note on the impact model of the terminal part

Classical impact analysis methodologies assume a uniform impact from the software (average consumption regardless of the software or its state). Our innovative approach makes it possible to refine this impact. In addition, we improve the modelling of the software's impact on the hardware manufacturing phase by accounting for battery wear.

The battery of a smartphone or laptop is a consumable. We model the impact of the software on it.
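
As a simplified sketch of this allocation principle (all figures below are illustrative assumptions, not our exact inventory values), the mAh drawn by a usage scenario can be converted into a share of the battery's lifetime charge, which in turn carries a share of the battery's manufacturing impact:

```python
# Sketch: attribute a share of battery manufacturing impact to one usage scenario.
# All numeric assumptions below are illustrative, not Greenspector's exact values.
BATTERY_CAPACITY_MAH = 3000         # average smartphone battery
MAX_FULL_CYCLES = 500               # cycles before the battery is considered worn
BATTERY_MANUFACTURING_GCO2E = 1000  # hypothetical embodied impact of one battery (g CO2e)

def battery_wear_impact(scenario_mah: float) -> float:
    """g CO2e of battery manufacturing attributed to a scenario consuming scenario_mah."""
    lifetime_mah = BATTERY_CAPACITY_MAH * MAX_FULL_CYCLES  # total charge the battery can deliver
    return scenario_mah * BATTERY_MANUFACTURING_GCO2E / lifetime_mah

# A 20-second page view consuming 2 mAh (illustrative value):
print(f"{battery_wear_impact(2):.4f} g CO2e from battery wear")
```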

Input data for the Life-Cycle Inventory

Measured data:
  • Energy consumed on the smartphone
  • Data exchanged on the network
  • Requests processed by the server

Modelled data:
  • Energy consumed on tablet and PC
  • Energy and resources consumed on the server
  • Energy and resources consumed on the network

Terminal assumptions:
  • Impact of smartphone manufacturing
  • Impact of smartphone battery manufacturing
  • Impact of tablet battery manufacturing
  • Impact of PC battery manufacturing
  • Max number of cycles before smartphone battery wear
  • Max number of cycles before tablet battery wear
  • Max number of cycles before PC battery wear
  • Average smartphone battery capacity
  • Average tablet battery capacity
  • Average PC battery capacity
  • Battery voltage
  • Smartphone lifespan
  • Tablet lifespan
  • PC lifespan
  • Battery replacement vs smartphone replacement ratio
  • Battery replacement vs tablet replacement ratio
  • Battery replacement vs PC replacement ratio
  • Reference discharge rate on the terminal (measured)

Server assumptions:
  • Server power
  • Number of cores
  • Data centre PUE
  • Power per core
  • Server time (TTFB)
  • Max number of requests per second
  • Power per request
  • Number of cores per VM
  • Number of VMs per simple app
  • Number of VMs per complex app
  • Server manufacturing impact
  • Server lifespan
  • CDN throughput

Energy assumptions:
  • World average electricity emission factor
  • France electricity emission factor

Example of work on hypotheses:

Our uncertainty propagation methodology requires us to identify precisely the quality of these assumptions. Here are a few examples, in particular for the impact of hardware manufacturing.

A bibliographic analysis allows us to identify the impacts of different smartphones and to associate a DQI confidence index with them. These figures mainly come from the manufacturers.

The average impact calculated from these confidence indices is 52 kg CO2 eq, with a standard deviation of 16 kg.

Example of restitution

  • In this example, the median impact of 0.14 g CO2 eq comes mainly from the "Network" part.

  • This impact corresponds to viewing a web page for 20 s.

  • Uncertainty is calculated by the Greenspector model by applying the principle of propagation of uncertainties from the perimeter and assumptions described above.

Necessary elements

To determine the impact of your solution, we need the following information:

  • Smartphone / tablet / PC viewing ratio
  • France / World viewing ratio
  • Server location (France / World)
  • Simple or complex servers (or number of servers in the solution)

From this estimate, we can carry out a simplified LCA based on this model, adapting certain elements to reflect particular circumstances. For example:

  • Measurement of the energy consumption of the server part (via a partner)
  • Refinement of server assumptions (PUE, server type)
  • Measurement of the PC part (via laboratory measurement)
  • Refinement of the electricity emission factors of a particular country...
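
As a toy illustration of how the ratios requested above could weight per-device results (the per-device figures are placeholders, not outputs of the Greenspector model):

```python
# Toy sketch: weight per-device impact results by the viewing ratios requested above.
# The per-device figures are placeholders, not outputs of the Greenspector model.
impact_per_view_gco2e = {"smartphone": 0.12, "tablet": 0.15, "pc": 0.20}  # placeholders

def weighted_impact(device_mix: dict[str, float]) -> float:
    """device_mix maps device type to its share of views (shares sum to 1)."""
    assert abs(sum(device_mix.values()) - 1.0) < 1e-6
    return sum(impact_per_view_gco2e[d] * share for d, share in device_mix.items())

mix = {"smartphone": 0.55, "tablet": 0.10, "pc": 0.35}
print(f"average impact per view ≈ {weighted_impact(mix):.3f} g CO2e")
```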


Greenspector calculations are integrated into a web service currently used by our customers. Very soon, find our calculations of the environmental footprint of your mobile applications and websites in a SaaS interface.

Comparison of estimation models

Calculation methods in digital sobriety are often not very accurate and, at the same time, sometimes not very faithful. This potentially leads you to use tools that poorly assess the impact of your solutions. The risk is to make your teams work on areas that have no real impact on the environment.

Some approaches, more common in full LCAs (and not in market tools), improve fidelity but risk giving an inaccurate result (R. Heijungs 2019).

Our approach is based on an innovative computational method, fuzzy arithmetic, first proposed by Weckenmann et al. (2001).

This approach is very efficient for modelling vague (epistemic), non-probabilistic data, which is often the case for data dealing with digital sobriety. In this way, we aim for results that are both accurate and faithful.

Rival solutions make choices that generally make them inaccurate and unreliable:

  • Fidelity: poor control of the measurement environment, no methodology for managing measurement deviations.
  • Accuracy: models based on non-representative metrics such as data consumption or DOM size, no energy measurement...

Optimizing smartphone energy consumption to reduce the impact of digital technology and avoid the depletion of natural resources

Reading Time: 6 minutes

Introduction

The lifespan of a smartphone averages 33 months. Knowing that a smartphone contains more than 60 materials, including rare earth elements, and that its carbon footprint is between 27 and 38 kg CO2 eq, the current rate of smartphone replacement is too fast.

Different reasons can explain this rate of renewal. Loss of autonomy and battery problems are the main ones (for smartphones, one change in three is due to the battery). Increasing battery capacity seems an interesting solution, but it would not solve the problem: the data exchanged keeps increasing, and this has an impact on the power drawn by smartphones. Websites are just as heavy as before, and even becoming heavier and heavier... So is this an unsolvable problem? What is the link between the autonomy we experience personally and this observation on the impact of digital technology?

Methodology

We started our analysis with web browsing. Indeed, mobile users spend an average of 4.2 hours per day browsing the web.

In a previous study on the impact of Android web browsers, we measured the consumption of 7 different websites in several web browsing applications on a mid-range smartphone, a Samsung Galaxy S7. This allows us to project this consumption onto global usage and to apply optimization assumptions to identify the room for maneuver.

Even if the uncertainties are high (diversity of devices, diversity of uses, etc.), this exercise allows us to identify the room for maneuver for improving the life cycle of smartphones. The choice of the Galaxy S7 gives us a smartphone close (within 1 year) to the average age of smartphones worldwide (18 months).

What is the annual consumption of web browsing on mobile?

Here are our initial assumptions:

The estimated annual consumption of smartphones for web browsing is 2,774 billion ampere-hours. Not very tangible? Considering that an average 3,000 mAh battery can go through 500 full charge/discharge cycles before it starts to become unusable, this is the equivalent of 1,850 million batteries used up each year just to browse the web. Does this figure seem exaggerated? There are 5.66 billion smartphones in the world, so this would correspond to a problem affecting 36% of the global fleet each year. If we consider that 39% of users change their smartphone for battery reasons and that only 26% of users replace the battery when it wears out, we get a figure of 1,200 million batteries, which corroborates our estimate. Not inconsistent in the end, when you look at phone and battery renewal cycles.
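
The conversion from ampere-hours to battery equivalents is straightforward; the sketch below reproduces the order of magnitude quoted above:

```python
# Reproduce the order of magnitude quoted above: global web-browsing consumption
# expressed in "battery lifetimes".
TOTAL_CONSUMPTION_AH = 2_774e9      # 2,774 billion ampere-hours per year (estimate above)
BATTERY_CAPACITY_AH = 3.0           # average 3,000 mAh battery
MAX_FULL_CYCLES = 500               # full charge/discharge cycles before wear

lifetime_ah_per_battery = BATTERY_CAPACITY_AH * MAX_FULL_CYCLES   # 1,500 Ah per battery
batteries_per_year = TOTAL_CONSUMPTION_AH / lifetime_ah_per_battery
print(f"{batteries_per_year / 1e6:,.0f} million battery lifetimes per year")  # ≈ 1,850 million
```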

Would reducing the consumption of browsers have an impact?

Web browsers are important drivers of web consumption. Our measurements show significant differences in energy consumption between browsers, explained by heterogeneous implementations and performance. The following graph shows the consumption of browsing 7 sites, including launching the browser, using features such as typing URLs, and the browsing itself.

We start with a hypothesis of browser publishers optimizing their browsers. Assuming that all browsers consumed as little as the soberest one (Firefox Focus), the reduction in total annual consumption would make it possible, with the same assumptions on lifespan, to save 400 million batteries per year. Knowing that 1,500 million smartphones are sold per year, and taking the same assumptions as before on replacement and repair rates, this would save the equivalent of 7% of the phones sold each year.

Would reducing the consumption of sites have an impact?

It is also possible to make websites much soberer. We have assumed a consumption close to that of Wikipedia. From our point of view, having audited and measured many sites, this is possible, but it requires significant actions: optimization of features, reduction of advertising and tracking, technical optimization...

Here is an example of the energy consumption of the L'Équipe website. We see that loading consumes up to 3 times the reference consumption. The optimization margin is enormous in this precise case, knowing that many sites come in at a factor of less than 2.

In the case of sober websites, using the same assumptions and calculation methods as for browser sobriety, we could save 294 million batteries per year, i.e. reduce the annual renewal of the fleet by 5%.

Is reducing the consumption of the OS possible, and would it have an impact?

The question of the impact of the hardware and the OS often arises. To take this impact into account, we have several data points at our disposal. An important one is the baseline consumption of the smartphone: the consumption of the hardware and the OS alone. For the Galaxy S7, this consumption is 50 µAh/s.

Taking the same assumptions as those used to calculate the total consumption (2,774 billion Ah), the annual consumption attributable to the hardware and OS share would be 1,268 billion ampere-hours, or 45% of the total.

So is this an incompressible floor for optimization? Not really, because there is still a lot of room for optimization: Android itself, for example. We have carried out an experiment showing that it is possible to significantly reduce the consumption of Android features. The manufacturers' overlays are also a way to reduce consumption.

Based on our experience, we estimate that a 5% reduction in consumption is entirely possible. This would save 350 million batteries, or 6% of the fleet.

What environmental gains can we hope for?

Applying digital sobriety at these different levels would reduce the number of batteries used worldwide each year by more than half.

Even on the assumption that users do not systematically renew their smartphones for reasons of loss of autonomy or only replace their used battery, the annual smartphone renewal could be reduced by 17%.

In the best-case scenario, assuming that most users replace their batteries, the potential savings would be 2 million t CO2 eq. But the gains could be much greater if we consider that replacement practices are not changing fast enough and that users change smartphones rather than batteries: 47 million t CO2 eq.

Being optimistic about an increase in battery capacity, with no increase in the impact of software and no additional impact from manufacturing larger batteries, the number of batteries used could be halved, and the environmental impact along with it. But is that enough? Better to aim for both an increase in battery capacity and a decrease in energy consumption, and thereby obtain a factor of 4 on the impact when the capacity is doubled!

Energy on a smartphone: small drops, but a huge impact in the end

We are under the impression that energy is unlimited; we just need to charge our smartphone. However, even if the energy were unlimited and impact-free, batteries are consumables. The more we use them, the more we wear them out, and the more we consume non-renewable resources such as rare earth elements, not to mention other environmental, social, and geopolitical costs. We can expect technological developments to improve capacity and battery replaceability, but the stakes are gigantic. Replacing the batteries is not a miracle solution because, even if we extend the life of the smartphone, the old battery must be thrown away or recycled, and lithium recycling is not yet assured (p. 57). Gigantic, because we use our smartphones for many hours. Gigantic, because we are billions of users.

The exercise we have carried out is entirely forward-looking: all browser publishers would need to integrate sobriety, and all sites would need to be eco-designed. It does show, however, that optimizing the energy consumption of apps and websites matters for the digital environmental footprint. Some people, seeing only the energy of recharging, neglect this aspect. However, as this projection shows, the environmental gains are much greater.

This figure is significant and at the same time low: 47 million t CO2 eq for the world is 6% of the French footprint. However, CO2 is not the only metric to look at. Other significant problems loom, for example a possible lithium shortage in 2025, but also water.

To all this, we should add issues associated with new practices and new materials:

… the sector is constantly evolving to respond to challenges that are sometimes commercial, sometimes economic, sometimes regulatory. The battery example illustrates this trend well. While we had become familiar with the "classic" lithium-ion batteries, which mainly contain lithium, carbon, fluorine, phosphorus, cobalt, manganese, and aluminum, new models have appeared: first lithium-ion-polymer batteries, then lithium-metal-polymer batteries. The already substantial list of possible metals has therefore grown considerably, with iron, vanadium, manganese and nickel, but also rare earth elements (cerium, lanthanum, neodymium, and praseodymium).

SystExt Association (Extractive Systems and Environments)  https://www.systext.org/node/968 

Taking into account the environmental, social, and geopolitical issues involved with batteries, dividing the number of batteries used by 2 is really not enough! This means that the optimization levers should be activated now. And if we want to achieve ambitious goals, all players (manufacturers, OS and browser publishers, digital players...) have their share of the work. Continuing to invoke magical reductions from future technologies, to say that energy should not be optimized, to shift the blame to other actors or other sectors, to explain that focusing on usage is a mistake... all of that just displaces the problem. We all need to roll up our sleeves and solve the problem now!

 

1 hour of Netflix viewing is equivalent to 100 gEqCO2. So what?

Reading Time: 7 minutes

Netflix, along with others like the BBC, has researched the impact of its service with support from the University of Bristol. The precise figures and methodology will be published soon, but it appears that one hour of viewing Netflix is equivalent to 100 gCO2e.

When the communication was released, several digital players took up this figure, but, in my opinion, not for good reasons. The Shift Project's communication on the impact of video comes up as a systematic point of debate: in March 2020, the Shift's post was widely disseminated in the media with a significant evaluation error. This error was corrected in June 2020, but the damage was already done.

In this context, the IEA carried out a contradictory analysis on the subject. In the end, many studies on the impact of video came out (from the IEA, the German Ministry of the Environment, and ourselves with our study on the impact of playing a Canal+ video). It is always difficult, but not impossible, to compare the figures (for example, whether or not the manufacturing stage is taken into account, the representativeness of the terminals, the different infrastructures and optimizations between players, etc.). However, if we compare like with like, all the studies give similar orders of magnitude. Once the Shift Project error is corrected (a factor of 8 resulting from a confusion between bytes and bits), its numbers are also close.

What do the studies say?

But beyond the discussions on the numbers, if we examine the studies in detail, the conclusions point in the same direction:

  • Regardless of the unit cost, there is a significant growth in usage and overall impact.


Set against all this is the fact that consumption of streaming media is growing rapidly. Netflix subscriptions grew 20% last year to 167m, while electricity consumption rose 84%.

  • The impact of digital services is relatively small compared to the impact of other activities. However, it is necessary to continue to study and monitor this impact.

What is indisputable is the need to keep a close eye on the explosive growth of Netflix and other digital technologies and services to ensure society is receiving maximum benefits, while minimising the negative consequences – including on electricity use and carbon emissions.”

  • The aim of the concerned companies is to better measure their impact and identify the real areas for optimization.

“Netflix isn’t the only company using DIMPACT right now, either. The BBC, ITV, and Sky are also involved. A spokesperson from ITV says that, like Netflix, the tool will help it to find and target hot spots and reduce emissions. Making such decisions based on accurate data is crucial if digital media companies are to get a grip on their carbon footprints.”

“This work allows us first of all to identify the technical projects to prioritize to minimize the carbon footprint of myCANAL video consumption as much as possible. At the same time, the lessons guide us on the awareness messages to relay to our users, throughout our future developments. This commitment to cooperation between our technical developments and our users is the key to consumption that has less impact on the environment. “ (Testimony of the CDO of Canal +, Greenspector study of the impact of playing a video)

  • The impact of the video can be small but it is necessary to measure it well (previous point)...

“The most recent findings now show us that it is possible to stream data without negatively impacting the climate if you do it right and choose the right method for data transmission”.

Are the discussions going in the right direction?

The errors in some studies did not help calm the discussions, and neither did the media coverage of these figures. However, we should not be fooled: saying that digital technology has an impact is not necessarily well accepted by all players. It can be inconvenient for a field that for 30 years has been accustomed to a development paradigm with very few constraints and, above all, very little interest in its own environmental issues. Let us remember that Moore's Law, which governs this digital world a great deal, is a self-fulfilling prophecy and not a scientific law: the industry puts financial and technical means in place so that the power of processors increases regularly. We must not be fooled, because focusing on certain errors allows the problems to be ignored. In the coverage of Netflix's DIMPACT announcement, I have seen plenty of quotes about the Shift Project error but none about Netflix's desire to measure and reduce its impact. We must accept the mistakes of the past if we are to move forward on this subject: the Shift's study has the merit of bringing to the fore an issue that was hard to see. And we must also accept our own field's mistakes: how many digital promises have not (yet) been proven? Have the positive externalities of digital been scientifically quantified by a sufficient number of studies? This latest analysis shows that the few existing studies (mainly two Carbon Trust studies and the GSMA's) deserve much more work to confirm the huge announced benefits of digital technology.

The study of claims of positive impacts of digital on the climate leads to the conclusion that these cannot be used to inform policy decisions or research. They are based on extremely patchy data and assumptions that are too optimistic to extrapolate global estimates. In addition, the two reports studied do not see the avoidances in the same sectors, or even contradict each other.

It is even a shame to focus on one aspect of the impact while dismissing the overall issue. This is the case in the discussion of the network's impact on the energy side. The calculation method based on the kWh/GB metric, even if shared by almost all studies and by operators' internal teams, is criticized by some. This method can indeed be improved, but let's put the church back in the middle of the village: the impact of the network is in all cases lower than that of the terminals, and the hardware manufacturing part is never discussed in these debates even though it is the main issue in the impact of digital technology. Especially since the energy improvement of networks and data centres relies on a principle contrary to reducing hardware impact: the regular renewal of hardware to roll out new, more efficient technologies.

Google has been criticized for the waste policy of its servers. Practices have improved, but one can still question this management: even if the servers are resold and the environmental cost is amortized for the buyer, this does not change the excessive renewal cycle.

“We’re also working to design out waste, embedding circular economy principles into our server management by reusing materials multiple times. In 2018, 19% of components used for machine upgrades were refurbished inventory. When we can’t find a new use for our equipment, we completely erase any components that stored data and then resell them. In 2018, we resold nearly 3.5 million units into the secondary market for reuse by other organizations.” (Google Environmental Report 2019).

One of the first explanations for these clear-cut positions is often a lack of awareness of digital environmental issues. But behind that there is also a more sociological explanation: certain organizations are reproached for their "ecological" beliefs. However, we can also speak of belief among certain digital players when the benefits of digital technology are idolized uncritically. In this case, it is not certain that these discussions are going in the right direction. Between the "technophobic" and the "techno-blissful", reasoned voices find it difficult to take their place in the middle. Several avenues are nevertheless useful for making serene progress on the impact of digital!

Let us limit the comparisons between domains

Comparisons of the environmental impact of digital technology with other fields are a trap. They help make an abstract CO2 impact understandable, and we use them ourselves for awareness purposes. However, they sometimes lead to biased conclusions.

Here is the summary used by Les Echos: "Netflix claims that one hour of streaming on its platform generates less than 100gCO2e. This is the equivalent of using a 75W fan for 6 hours in Europe, or a 1,000W air conditioner running for 40 minutes."

So an hour of streaming is low? Yes and no, because it has to be seen at a "macro" level: worldwide viewing hours are exploding, and Netflix isn't the only digital service we use. Is it even possible to compare it to fan time? A household may watch 4 streams at the same time for several hours; that is not the same scale of use as a fan (well, maybe with global warming...).

What matters is that this metric will allow service designers to track their improvements. With a detailed breakdown of this impact, they will be able to identify the hotspots. It also allows a service to compare itself to a competitor and position itself.

Using these numbers to say that the impact of digital is huge, or zero, doesn't help the debate much. All areas must reduce their impact; the challenges ahead are enormous, and this type of comparison does not necessarily help the dynamics of improvement. On the other hand, the more studies like this come out, the more precise a map of the impact of digital technology we will have.

Let’s collaborate!

LCA models are criticized for their unreliability. OK, but is that a reason to abandon digital impact analysis? That would suit some players well!

Above all, it is necessary to improve them. And this will come through more transparency: public LCAs from equipment manufacturers, energy consumption metrics reported by hosting providers, and more information on the renewal of equipment fleets... Some players are playing the game; this is what we were able to do, for example, with Canal+, and it made it possible to have reliable data on the data centre, CDN and terminal parts. However, the lack of transparency in this sector remains significant when it comes to environmental impact.

It is also necessary to stop systematically blaming other sectors. In these discussions about the impact of video, and more broadly of digital, I continually see "it's not me, it's him" arguments. For example: it is the hardware that must be acted upon, implying that software is not responsible for the impact. Once again, the environmental context is critical; there are no quick fixes and everyone must act. Excusing oneself from action by pointing fingers at other actors is not serious. The idea of measuring the impact of digital is not to do "digital bashing" but to improve it. So there is no reason not to take these issues into account, unless one is engaging in lobbying and pushing for total digital laissez-faire.

Having watched this field evolve over the past 10 years, I can say that there is real awareness among certain players. The impact of digital can still be denied, but that is a dangerous risk: environmental objectives will become more and more restrictive, like it or not. Not taking this issue in hand means leaving it to others. This is what we are seeing today: some are complaining about digital laws. But what have they done over the past 10 years, when this issue was already known? For fear of slowing down the development of digital technology compared to other countries? Instead, why not see digital sobriety as a competitive advantage for our industry? Sobriety is being taken into account by many countries (the DIMPACT project is an example). France has a head start, with many players dealing with sobriety. It is time to act, to collaborate on these subjects, to criticize the methods in order to improve them, to measure ourselves, and for everyone to act in their area of expertise.

This is what guides our R&D strategy: providing a precise tool for measuring energy consumption and the impact of terminals. We are working to improve the reliability of measurements in this area and to provide food for thought and metrics, hoping that the debates will become less Manichean and more constructive, and that the digital sector will fully take environmental issues into account.

Users' smartphones: everything about environmental impact and battery wear

Reading Time: 4 minutes

User terminals: the high environmental impact of the manufacturing phase

User terminals are now the biggest contributors to the environmental impact of digital technology, and this phenomenon is set to increase. This trend is mainly explained by households being increasingly equipped with smartphones, by the reduced lifespan of this equipment, and by its significant environmental impact, an impact mainly due to the manufacturing phase. Ericsson, for instance, announces an impact in use (i.e. linked to recharging the smartphone battery) of 7 kg CO2 eq out of a total impact of 57 kg CO2 eq, or only 12% of the total. The total impact takes into account the different phases of the smartphone life cycle: manufacture, distribution, use, and end-of-life treatment.

Hence the interest for manufacturers to work on this embodied impact by eco-designing, but also by making it possible to extend the life of the equipment through repairability and durability.

Given all these observations, it could seem unproductive, from an environmental point of view, to reduce the energy consumption of smartphones. In any case, the simplistic approach would be to set that impact aside. But the reality is quite different, and the electrical flows involved in the use of mobile devices are much more complex than one might think.

Explanation of battery operation

Current smartphones are powered by lithium-ion batteries. On average, batteries on the market have a capacity of 3,000 mAh, and the trend is towards increasing this capacity. The battery can be thought of as a consumable, just like a printer cartridge: it wears out over time, and the original capacity you had when you bought the smartphone is no longer fully available. That is, the 100% indicated by the phone no longer corresponds to 3,000 mAh but to a lower capacity. And this initial capacity cannot be recovered.

Battery wear is primarily created by full charge and discharge cycles. A charge/discharge cycle corresponds to an empty battery being recharged to 100%: I leave home in the morning with a phone charged to 100%, the battery drains during the day, and I recharge my phone to 100% in the evening. One complete cycle in one day, therefore!

If you charge your phone more often, the cycles still add up (several incomplete cycles are ultimately equivalent to one complete cycle).

The more the number of cycles increases, the more the remaining capacity decreases. This wear leads to the end of the battery's life. Current technologies allow up to 500 cycles.

At the end of these cycles, the battery capacity is only 70% of the initial capacity. Beyond this annoying loss of autonomy, the battery suffers from certain anomalies, such as a rapid drop from a 10% battery level to 0%.

Note that this effect is reinforced by the intensity of the battery discharge: if the phone consumes a lot (for example during video playback), battery wear will be greater.
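
A minimal sketch of this cycle accounting, assuming a simple linear wear model from 100% to 70% of capacity over 500 full cycles (real battery chemistry is less linear):

```python
# Sketch: equivalent full cycles and remaining capacity, assuming linear wear
# from 100% to 70% of initial capacity over 500 full cycles (a simplification).
INITIAL_CAPACITY_MAH = 3000
MAX_FULL_CYCLES = 500
END_OF_LIFE_FRACTION = 0.70

def remaining_capacity(total_mah_drawn: float) -> float:
    """Capacity left (mAh) after drawing total_mah_drawn from the battery over its life."""
    equivalent_cycles = total_mah_drawn / INITIAL_CAPACITY_MAH  # partial cycles add up
    wear = min(equivalent_cycles / MAX_FULL_CYCLES, 1.0) * (1 - END_OF_LIFE_FRACTION)
    return INITIAL_CAPACITY_MAH * (1 - wear)

# After a year of one full cycle per day:
print(f"{remaining_capacity(365 * INITIAL_CAPACITY_MAH):.0f} mAh left")  # ≈ 2343 mAh
```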

Impact on obsolescence

The loss of autonomy is a cause of renewal by users: 39% in 2018. This phenomenon is reinforced by the fact that batteries are increasingly non-removable, which leads users to replace the whole smartphone. In addition, even if the decrease in autonomy is not the only replacement criterion, it adds to the other causes to create a set of signals telling the user to change smartphones (marketing effect, power, new features...).

We can therefore draw a direct link between the mAh consumed by applications and the kg of CO2 due to manufacturing. By reducing these mAh, we would greatly reduce battery wear, the average lifespan of smartphones would be extended, and the initial CO2 cost would therefore be better amortized. A mAh consumed on a smartphone has a much greater cost in terms of the embodied impact of the smartphone (manufacturing) than in terms of the energy needed to recharge it.

For example, for a typical smartphone, the recharged energy costs about 0.22 mg CO2/mAh, compared to 14 mg CO2/mAh for the embodied (manufacturing) share.
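
These two figures can be rebuilt from stated assumptions; the voltage, grid factor and embodied-impact values below are our illustrative inputs, not official parameters:

```python
# Sketch: compare the CO2 cost of 1 mAh of recharge energy with 1 mAh of battery wear.
# The inputs below are illustrative assumptions, not Greenspector's exact parameters.
VOLTAGE_V = 3.85                # typical Li-ion nominal voltage
GRID_GCO2_PER_KWH = 57          # low-carbon grid, roughly the French mix
EMBODIED_KGCO2 = 21             # hypothetical embodied impact amortised through the battery
CAPACITY_MAH, CYCLES = 3000, 500

kwh_per_mah = VOLTAGE_V / 1_000_000               # 1 mAh x 3.85 V = 3.85 mWh = 3.85e-6 kWh
recharge_mg_per_mah = kwh_per_mah * GRID_GCO2_PER_KWH * 1000          # grams -> milligrams
wear_mg_per_mah = EMBODIED_KGCO2 * 1e6 / (CAPACITY_MAH * CYCLES)      # kg -> mg, over lifetime mAh

print(f"recharge ≈ {recharge_mg_per_mah:.2f} mg CO2e / mAh")  # ≈ 0.22
print(f"wear     ≈ {wear_mg_per_mah:.0f} mg CO2e / mAh")      # ≈ 14
```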

Technological solution

Solving this problem is often seen through the technological lens: increased capacities, fast charging... If we take the case of fast charging, it will not change the problem; on the contrary, it may worsen it by potentially increasing the number of cycles. It is not by enlarging the fuel tank of cars that we will reduce the impact of the automobile. Improving battery technology is beneficial; however, reducing the consumption of smartphones would be even more beneficial for the environment and the user.

Note that CO2 is not the only impact to consider: the manufacture of batteries is, overall, very costly in environmental and social terms, not to mention strategic resources with geopolitical stakes such as cobalt or lithium. Extending battery life is critical.

Digital sobriety everywhere, digital sobriety nowhere? 7 mistakes to avoid!

Reading Time: 4 minutes

Everyone is talking about digital sobriety. From web agencies to politicians, including IT services companies (ESNs), everyone communicates on the subject: on explaining the impact, on good practices, on the willingness to get started. But what is really going on?

We have been working on the subject within Greenspector for 10 years and we can in all modesty give our opinion on the real situation of the actors and especially on the barriers that will have to be overcome to really do eco-design and sobriety.

We have trained developers, students, and leaders. We have supported teams and applied good practices. We have measured apps and websites. It took motivation to stay in the race, because the context was different then; today we are happy to see so much communication and so many actors involved. However, we believe that nothing is won yet! Here are some tips and analyses from veterans in the field, grouped into 7 mistakes to avoid.

Associate digital sobriety only with a department

In many of the actions we have carried out, one ingredient proved essential: consideration of the problem at every level — developer, designer, product owner, decision-maker. And customer... Without it, the project will not get far. An unfunded project, optimization work not wanted by the devs, technical improvements not accepted by the product owners... At best, the improvements will be made, but with only small gains.

The solution is to engage in a shared approach. It takes a little longer (and costs a little more!) but allows the project to be understood and accepted by all.

Focus only on coding practices

The miracle solution, when you think of digital sobriety, is to tell yourself that if the developers respect good practices, everything will be fine. We know something about it: we started an R&D project (Green Code) more than 8 years ago on this very axis. It was necessary, but not sufficient. It is also necessary to work on the features, the design, the content, the infrastructure...

Establishing a reference framework of best practices is an important axis, but mainly at the start, to initiate awareness. It is important not to tell yourself that 115 best practices will have to be applied to almost all of a site, because the effort will be enormous and the results will not necessarily follow.

Do not use professional tools

Many tools have emerged to evaluate websites. Indeed, it is quite simple on the web to monitor a few technical metrics, such as the size of the data exchanged on the network or the size of the DOM, and to model an environmental impact from them. This is great for raising awareness and for identifying sites that are far too heavy. However, the system on which the software runs is not that simple, and the impact can come from many other elements: a power-hungry JS script, an animation…

Taking action with this type of tool makes it possible to start the process, but claiming that the software is sober because the data volume and the size of the DOM have been reduced borders on greenwashing.

We are not saying this because we are a tool publisher, but because we are convinced that these actions need to be professionalized.

Fighting over definitions and principles

We have lived through it! We have been criticized for our approach focused on energy. The birth of a domain leads to the establishment of new principles, new fields, new definitions… This is normal and often requires long discussions. But do we really have time to debate? Are these debates necessary when there is agreement that we all need to reduce the impact of our activities? The complexity and bloat of digital are there and can be felt at all levels. It is time to improve our practices across the board; all goodwill is welcome, and all areas need to be explored.

Look for heavy consumers

The findings on the impact of digital technology are increasingly shared. However, teams may be tempted to look for excuses or culprits rather than make corrections that seem more minor. Why optimize your solution when bitcoin is an abyss of consumption? Why reduce the impact of the front end when library publishers do nothing? Prioritization is important, but it is often a bad excuse not to seek gains in your own field.

ALL solutions are far too heavy. So everyone puts up with slowness. Everything is uniformly slow. We settle for that and all is well. Being efficient today means achieving a user experience that matches this uniform slowness. We trim the things that might be too visible. A page that takes more than 20 seconds to load is too slow. On the other hand, 3 seconds is fine. 3 seconds? With the multicore processors of our phones and PCs, and data centers all over the world connected by great communication technologies (4G, fiber…), it is a bit strange, isn't it? If you look at the extravagance of resources used for the result, 3 seconds is huge. Especially since bits travel through our processors on nanosecond timescales. So yes, everything is uniformly slow. And it suits everyone (at least on the surface: The software world is destroying itself, a manifesto for more sustainable development).

So let's start optimizing, without looking for culprits!

Think only about technological evolution 

We are technicians, so we look for technical solutions to our problems. In the digital field, that means new practices, new frameworks. And since new frameworks are full of performance promises, we believe them! But this is an arms race that costs us resources. Such developments are surely necessary in certain cases, but we must not focus only on them. We must also invest in cross-cutting areas: accessibility, testing, sobriety, quality… and in people, because it is the teams who will find the solutions for sober digital services.

Do not invest 

Goodwill and awareness are necessary; however, change must also be financed. Because digital sobriety is a change: our organizations and our tools are not natively made for sobriety. Otherwise, we would not currently be making this observation about the impact of digital. It is therefore necessary to invest at least enough to train people, to acquire tools, and to give teams time in the field. A single webinar or training session is not enough!

Let us make commitments commensurate with the stakes and the impacts of digital technology on the environment!

What are the best Android web browsers to use in 2021?

Reading Time: 8 minutes

The internet browser is the most important tool on a mobile device. It is the engine for browsing the internet: no longer just websites, but now also new types of applications based on web technologies (progressive web apps, games, etc.).

For this new edition of our ranking (previous editions were carried out in 2018 and 2020), we have chosen to compare 16 mobile applications: Brave, DuckDuckGo, Chrome, Ecosia, Edge, Firefox, Firefox Focus, Firefox Nightly (formerly Firefox Preview), Kiwi, Mint, Opera, Opera Mini, Qwant, Samsung, Vivaldi and Yandex.

The objective of these measurements is to see how the solutions compare with one another in terms of environmental impact (carbon) on common user scenarios, but also to provide benchmarks for our browser usage.

For each of the 16 applications, measured on a Samsung Galaxy S7 smartphone (Android 8), scenarios including the launch of the browser, browsing on 7 different websites, periods of inactivity, etc. were run through our Greenspector Test Runner, which allows automated tests to be performed.

Learn more about our methodology

Total energy consumption (in mAh)

The average energy consumption is 49 mAh (as a reminder, the 2020 ranking average was 47 mAh, i.e. about 4% lower).

Here is the evolution from last year.

Browser         | 2021 Ranking | 2020 Ranking | Evolution
----------------|--------------|--------------|----------
Firefox Focus   | 1            | 10           | 9
Vivaldi         | 2            | 4            | 2
DuckDuckGo      | 3            | 5            | 2
Firefox Nightly | 4            | 10           | 6
Yandex          | 5            | 3            | -2
Kiwi            | 6            | 8            | 2
Opera           | 7            | 2            | -5
Brave           | 8            | 7            | -1
Ecosia          | 9            | 1            | -8
Chrome          | 10           | 6            | -4
Samsung         | 11           | 9            | -2
Firefox         | 12           | 13           | 1
Edge            | 13           | 11           | -2
Qwant           | 14           | 13           | -1
Opera Mini      | 15           | 14           | -1
Mint            | 16           | 12           | -4

Firefox Focus is the best solution in terms of energy consumption in our comparison. The version evaluated in 2020 was one of the first releases, and it seems that the Firefox teams have since worked on optimizing its energy consumption. Ecosia loses its leading position on this indicator and finds itself in the middle of the ranking. Among the most energy-hungry browsers, we find Mint and Opera Mini. Note that the most popular browsers (Edge, Firefox, Chrome, and Samsung) are rather poorly ranked.

This total energy consumption can be broken down and analyzed in two parts: the energy consumption of pure navigation and the energy consumption related to the browser's features.

Energy consumption of navigation (in mAh)

Navigation is the consumption only associated with viewing the page (no consideration of launching the browser, features, etc.).

Most browsers have fairly similar energy consumption for "pure" navigation. This is mainly explained by the rendering engines they use: most browsers rely on the Chromium rendering engine.

Compared to the 2020 ranking, it seems that the Firefox engine has improved. The same goes for Qwant, which uses this engine too.

Energy consumption of features (in mAh)

The features include browser states such as idle periods, launching the browser, and typing URLs in the address bar.

Keeping the same ordering as for total energy, we can see that the non-navigation features (typing URLs, browser idle time, etc.) have a significant impact on total consumption.

Autonomy (hours)

Battery life is the number of hours the user can surf before the battery is completely discharged. The ranking does not change with respect to that of energy, as autonomy is directly related to energy.

We observe that battery life can double, from 5 hours with the most consuming browser (Mint) to 10 hours with the least consuming one (Firefox Focus).
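
As an illustration of how such a figure can be derived, here is a minimal sketch; the 3000 mAh battery capacity and the 10-minute scenario length are assumptions for the example, not the exact benchmark parameters:

```python
# Hedged illustration of how battery life can be estimated from measured
# consumption. Battery capacity and scenario length are assumed values.
BATTERY_CAPACITY_MAH = 3000          # approximate Samsung Galaxy S7 battery

def autonomy_hours(scenario_mah: float, scenario_minutes: float) -> float:
    """Hours of continuous browsing before the battery is empty."""
    hourly_draw = scenario_mah * (60 / scenario_minutes)   # mAh per hour
    return BATTERY_CAPACITY_MAH / hourly_draw

# Hypothetical figures: a 10-minute scenario consuming 50 mAh
print(f"{autonomy_hours(50, 10):.1f} h of autonomy")   # -> 10.0 h
```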

Data (Volume of data exchanged) (MB)

Some applications do not manage the cache at all, for data protection and privacy reasons; others use proxies that optimize data, or implement cache management differently. In addition, when a browser performs well, the downside is that potentially much more data is loaded in the background. In our methodology, we see this on the New York Times site, which is heavier in terms of data.

Here is an example of the measurement iterations on the Amazon site (Amazon.com) that shows the difference in data processing between different browsers.

Memory consumption (RAM) of the browser process (MB)

Memory consumption is important to take into account for a digital service. Even though variations in memory consumption do not directly influence the energy impact, they remain very important to consider because of the effects of overconsumption on devices whose memory is already saturated, or on older, less powerful devices: this can create instabilities, or applications that cannot run simultaneously because they compete for memory. In ecological terms, this can of course lead users to replace their device prematurely with a more powerful model in order to maintain good comfort of use.

The variation goes from 400MB to 1.8GB (approximately half the RAM of the Samsung Galaxy S7).

Let us observe more precisely the behavior of the memory following the sequence:

  • Launch browser
  • Browser inactivity
  • Navigation (Average memory consumption)
  • Inactivity following navigation
  • System after closing browser

At the launch of browsers, we have a median memory usage of 413MB. Edge consumes a lot more with 834MB.

If we leave the browser idle, the memory consumption of most browsers remains fairly stable, which is good and expected. On the other hand, we see that Edge and Ecosia show a sharp increase in memory.

Then, with navigation, the memory consumed increases significantly. This is due to the rendering engines parsing and storing page elements. Tab management also plays a role: if the browser unloads the memory of inactive tabs, consumption will be lower.

We can note that Firefox Focus, Mint, DuckDuckGo, Opera Mini and Qwant consume little memory overall.

When the browser is closed, almost all browsers disappear from memory. Firefox, however, remains with about 1 GB, as do Chrome and Mint with around 100 MB. This is probably a bug, but it is annoying because elements still occupy memory and processing may still be going on: such processing is confirmed for Firefox and Mint, where the CPU rate consumed by the browser process remains high.
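
This kind of lingering process can be checked quite simply on a connected Android device. The sketch below assumes adb is installed and that the package name matches the build being tested:

```python
# Rough check of whether a browser process is still resident after the
# user has closed it. Relies on `adb shell dumpsys meminfo <package>`;
# the package name below is an example and may differ per build.
import subprocess

def still_resident(package: str) -> bool:
    """True if dumpsys still reports a live process for the package."""
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "meminfo", package],
        capture_output=True, text=True,
    ).stdout
    return "No process found" not in out

print("org.mozilla.firefox resident:", still_resident("org.mozilla.firefox"))
```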

We can also look at the memory impact of consulting Wikipedia (the basic consumption of the browser is subtracted here).

This shows the difference in memory management between browsers and the potential variability on heavier sites.

Performance

We measured the time it took to write the URL in the address bar.

This difference in performance can be explained by several factors: network exchanges during typing (auto-completion), processing during typing, searching through known addresses, etc. In the end, for the user, the time to reach the site will be longer or shorter. For example, when typing the Wikipedia URL in DuckDuckGo, there is a lot of network traffic and CPU processing (peaking at 22% CPU).

Unlike Edge, which is faster and performs less CPU processing.

Incidentally, all browsers could be optimized by limiting this processing (for example by grouping and spacing out the work, as sketched below).
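
To illustrate "grouping and spacing out" the work, here is a minimal debounce sketch, written in Python like the other examples purely to show the principle; a browser would of course implement this in its own codebase:

```python
# Minimal debounce sketch: run the suggestion lookup only once the user
# has stopped typing for `delay` seconds, instead of once per keystroke.
import threading

class Debouncer:
    def __init__(self, delay: float, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None

    def call(self, *args):
        if self._timer is not None:
            self._timer.cancel()          # drop the pending lookup
        self._timer = threading.Timer(self.delay, self.callback, args)
        self._timer.start()

# Hypothetical usage: only the last prefix triggers a suggestion request.
debounce = Debouncer(0.3, lambda text: print(f"fetch suggestions for {text!r}"))
for prefix in ["w", "wi", "wik", "wiki"]:
    debounce.call(prefix)
```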

Environmental impact

The environmental impact is calculated using the Greenspector emission factors, taking into account the energy consumed and the wear of the battery (a share of the manufacturing impact). The impact of the network and the data center is taken into account via internet intensity factors.

This impact is normalized to the viewing of a single page.

Firefox Focus, thanks to its low consumption, comes first. Samsung, which has average energy consumption, is in second place thanks to good data management.

The most impactful browsers (Ecosia, Edge, Mint and Opera Mini) combine high energy consumption and poor data management.

Rated browsers

Measured versions: Brave (1.18.75), Chrome (87.0.4280.101), DuckDuckGo (5.72.1), Ecosia (4.1.3), Edge (45.12.4.5121), Firefox (84.1.2), Firefox Focus (8.11.2), Firefox Nightly (201228), Kiwi (Git201216Gen426127039), Opera (61.2.3076.56749), Opera Mini (52.2.2254.54723), Qwant (3.5.0), Vivaldi (3.5.2115.80), Yandex (20.11.3.88), Mint (3.7.2), Samsung (13.0.2.9).

Scenario

For each of these applications, measured on a Samsung Galaxy S7 smartphone (Android 8), the user scenarios were run through our Greenspector Test Runner, which allows automated tests to be performed.

Once the application is downloaded and installed, we run our measurements with the application's default, out-of-the-box settings. No changes are made (even if some options would reduce energy or resource consumption: data-saving mode, dark theme, etc.).

However, we encourage you to check the settings of your favorite application to optimize the impact. Here is the evaluated scenario:

  • Features evaluation
    • Browser launch
    • Adding a tab
    • Typing a URL in the search bar
    • Removing tabs and clearing the cache

  • Navigation
    • Launch of 7 sites, each followed by a 30-second wait, to be representative of a user journey

  • Browser benchmark
    • The Mozilla Kraken benchmark, which tests JavaScript performance

  • Evaluation of the browser's idle periods
    • On launch (this evaluates the browser's home page)
    • After navigation
    • After closing the browser (to identify closing problems)

For each iteration, the following steps are carried out:

  • Removal of cache and tabs (without measurement)
  • First measure
  • Second measure, to measure behavior with the cache
  • Removal of cache and tabs (with measurement)
  • System shutdown of the browser (and not just a closure by the user, to ensure that the browser is actually closed)

The measurement average therefore takes into account navigation with and without cache.
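
Schematically, the iteration structure described above looks like the following sketch; the helper is a hypothetical stand-in for the automated steps and is not the Greenspector test description language itself:

```python
# Schematic Python sketch of the per-browser iteration structure.
# `run_step` is a placeholder for the real automated, measured steps.
import random
from statistics import mean

def run_step(browser: str, step: str) -> float:
    """Stand-in for one measured step, returning its energy in mAh."""
    return random.uniform(1, 55)  # placeholder value

def measure_browser(browser: str, iterations: int = 5) -> float:
    samples = []
    for _ in range(iterations):
        run_step(browser, "clear cache and tabs")                   # not kept: no measurement
        samples.append(run_step(browser, "scenario, empty cache"))  # first measure
        samples.append(run_step(browser, "scenario, warm cache"))   # second measure
        samples.append(run_step(browser, "clear cache and tabs"))   # measured this time
        run_step(browser, "force-stop the browser")                 # system shutdown
    return mean(samples)  # the average mixes cached and uncached runs

print(f"Illustrative result: {measure_browser('Firefox Focus'):.1f} mAh")
```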

The main metrics analyzed are display performance, energy consumption, and data exchanged. Other metrics, such as CPU consumption, memory consumption, system data, etc., are measured but will not be displayed in this report. Contact Greenspector to find out more.

In order to improve the stability of the measurements, the protocol is fully automated. We use Greenspector's abstract test-description language, which gives us a high degree of automation of this protocol. Browser settings are left at their defaults; we have not changed any settings in the browser or its search engine.

Each measurement is the average of 5 homogeneous measurements (with a low standard deviation).

Impact assessment

To assess the impacts of the infrastructure (data center, network) in the carbon projection calculations, we relied on our emission factor base (resulting from our R&D, such as the impact study of playing a Canal+ video – Greenspector), using the actual measured volume of data exchanged as input. As this is a very macroscopic approach, it is subject to uncertainty and could be refined to fit a given context or tool. For the carbon projection, we assumed 50% of traffic via a Wi-Fi network and 50% via a mobile network.
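
As a sketch of that network and data center share, assuming purely illustrative intensity factors (the real emission factor base is not reproduced here):

```python
# Sketch of the network + data center share, derived from the measured
# data volume with the 50% Wi-Fi / 50% mobile split mentioned above.
# Both intensity factors are placeholders, not Greenspector's own values.
WIFI_G_PER_MB = 0.008     # hypothetical gCO2e per MB over Wi-Fi
MOBILE_G_PER_MB = 0.020   # hypothetical gCO2e per MB over a mobile network

def network_share_g(data_mb: float, wifi_share: float = 0.5) -> float:
    """gCO2e attributed to transporting and serving the measured data."""
    return data_mb * (wifi_share * WIFI_G_PER_MB
                      + (1 - wifi_share) * MOBILE_G_PER_MB)

print(f"{network_share_g(2.5):.3f} gCO2e for 2.5 MB exchanged")
```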

To assess the impact of the mobile device in the carbon projection calculations, we measure the energy consumption of the user scenario on a real device, and, in order to include the hardware share of the impact, we rely on the wear that the user scenario inflicts on the battery, the first component of a smartphone to wear out. In our model, 500 full charge and discharge cycles therefore trigger a smartphone replacement. This methodology and calculation method have been validated by Evea, a consulting firm specializing in eco-design.
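
The device share can be sketched in the same way; the battery capacity and the manufacturing footprint below are placeholder values used only to show the structure of the calculation:

```python
# Sketch of the device (manufacturing) share allocated to a scenario,
# following the battery-wear logic described above. Capacity and
# manufacturing footprint are placeholder values, not measured data.
BATTERY_CAPACITY_MAH = 3000        # e.g. a Galaxy S7-class battery
CYCLES_BEFORE_REPLACEMENT = 500    # full charge/discharge cycles in the model
DEVICE_MANUFACTURING_KG = 55.0     # hypothetical kgCO2e to manufacture the phone

def device_share_g(scenario_mah: float) -> float:
    """gCO2e of manufacturing impact allocated to one scenario run."""
    cycle_fraction = scenario_mah / BATTERY_CAPACITY_MAH          # share of one cycle
    lifetime_fraction = cycle_fraction / CYCLES_BEFORE_REPLACEMENT
    return lifetime_fraction * DEVICE_MANUFACTURING_KG * 1000     # kg -> g

print(f"{device_share_g(49):.2f} gCO2e allocated to a 49 mAh scenario")
```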

As part of a continuous improvement process, we are constantly working on the consistency of our measurements and on our methodology for projecting CO2 impact data. As a result, it is difficult to compare a study published a year earlier with a recent one.