Digital Sobriety Expert
Author of the books «Green Patterns», «Green IT - Gérer la consommation d’énergie de vos systèmes informatiques», ...
Speaker (VOXXED Luxembourg, EGG Berlin, ICT4S Stockholm, ...)
Founder of Green Code Lab, the French national association for software ecodesign
Third-party service integration makes it easy to quickly add functionality to a site, such as a video or a social network integration (see the case of Twitter integration). The providers of these tools have worked to make the integration quick and easy, and technically it works well. But at what cost?
Energy consumption of the YouTube third-party service
In our measurements, we observe an increase in this type of third-party service, along with abnormal overconsumption. This is the case on many sites, including government websites.
The YouTube integration is a good case study to explain this effect. In just a few lines, it is possible to display a video on any site:
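As an illustration, the standard integration copied from YouTube's "share" dialog boils down to a single iframe (the video ID below is a placeholder):

```html
<!-- Standard YouTube embed; VIDEO_ID is a placeholder -->
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"
        title="YouTube video player"
        allowfullscreen></iframe>
```

Behind this single tag, the player scripts are loaded on every page view, whether or not the video is actually played.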
Reference: phone discharge rate in µAh/s (OS, browser, …)
Loading: discharge rate during the first 20 seconds of loading
Idle foreground: discharge rate while the site is inactive in the foreground
Scroll: discharge rate while the user scrolls down the page
Idle background: discharge rate while the browser (and therefore the site) is in the background
This is a government website. Discharge rates exceed our thresholds in many steps. During loading, the rate is more than 2 times the reference. In the idle foreground phase (inactivity in the foreground), consumption should be identical to the reference; such consumption is abnormal for a site that seems quite light.
Note that this processing also impacts scrolling and loading. Is this expected behavior, a bug, or a bad implementation? We did not push the analysis that far.
Significant point: no video appears on this page. The plugin is presumably integrated for the benefit of another page. This makes the waste even more critical, and all the more annoying given that the tested French website is public and widely used: Impots.gouv!
Best practices for integrating a video
1 – Directly embed the video without a third-party service
It is possible to use free, plugin-free solutions: HTML5 video integration is native.
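A minimal sketch of such a native integration, assuming the video files are self-hosted (the file paths are placeholders):

```html
<!-- Native HTML5 video: no third-party scripts, no trackers -->
<video controls preload="none" poster="/media/video-poster.jpg" width="560">
  <source src="/media/video.webm" type="video/webm">
  <source src="/media/video.mp4" type="video/mp4">
  Your browser does not support HTML5 video.
</video>
```

Note that `preload="none"` avoids downloading any video data before the user presses play.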
2 – Embed an image
Displaying an image with the same rendering as the video reduces the page to a single request. Only if the user clicks on the image are the scripts loaded and the video launched: this is, ultimately, lazy loading.
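A minimal sketch of this "image facade" pattern: the function names below are illustrative, and the real player is only created when the user clicks.

```javascript
// Click-to-load "facade": the heavy player is only created on user request.
// buildEmbedUrl is a pure helper; the video ID and sizes are illustrative.
function buildEmbedUrl(videoId) {
  // youtube-nocookie.com is YouTube's reduced-tracking embed domain
  return "https://www.youtube-nocookie.com/embed/" + videoId + "?autoplay=1";
}

function attachFacade(placeholder, videoId) {
  // placeholder is the static image element shown instead of the player
  placeholder.addEventListener("click", () => {
    const iframe = document.createElement("iframe");
    iframe.src = buildEmbedUrl(videoId);
    iframe.width = "560";
    iframe.height = "315";
    iframe.allow = "autoplay";
    placeholder.replaceWith(iframe); // swap the static image for the real player
  }, { once: true });
}
```

Until the click, the page has served one image request instead of the full player payload.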
We also did the exercise on a page of our Greenspector website:
On one of our “Case Study” pages, a YouTube video was embedded. We replaced this integration with an image (opposite) representing the previously embedded video. This change took the page from a Greenspector ecoscore of 59/100 to 75/100, with an energy gain of 12% during loading, 10% in idle and 15% in scroll.
3 – Integrate the plugin only on the desired page
A solution that is not ideal, but preferable to the existing one, is to load the scripts only on pages that actually require a video.
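One way to sketch this: a small helper decides whether the current page contains a video placeholder before the heavy player script is injected. The attribute name and script URL are assumptions for illustration, not a real API.

```javascript
// Only inject the player script when the page actually contains a video.
// The data-video-id attribute and the script URL are illustrative.
function pageNeedsPlayer(doc) {
  return doc.querySelector("[data-video-id]") !== null;
}

// Usage (in the browser):
// if (pageNeedsPlayer(document)) {
//   const s = document.createElement("script");
//   s.src = "/js/video-player.js"; // hypothetical player bundle
//   s.defer = true;
//   document.head.appendChild(s);
// }
```

Pages without a video then carry none of the player's weight.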
What will it save?
First of all, performance. A large portion of the processing behind site wait times is dedicated to third-party services, and this is even more true for the YouTube plugin. On the audited site, the page size can be halved and the loading time reduced by at least 30%.
Power consumption will also be reduced, and by even more than data size or performance. In addition to the energy saved during loading, consumption during the idle (inactivity) phase will be reduced.
Bonus: user privacy
The other problem with this type of plugin is the use of trackers and the collection of user data. Not integrating a third-party service resolves potential issues of data leakage and GDPR non-compliance. Incidentally, the YouTube plugin seems to offer a cookie-free version via a call to the URL: https://www.youtube-nocookie.com.
As with any third-party service, it is not that simple. Even with this no-cookie integration, user data is stored:
The audited site is therefore not GDPR-compliant! To handle this, you must explicitly ask the user for consent:
The solution of a hosted video or a static image also handles this.
If embedding a video is necessary, think it through calmly and consider the impacts on resource consumption and the GDPR. Technical solutions more respectful of the user exist; they may initially be a little more complex to set up, but these solutions will naturally become simpler and more widespread.
The web browser is a key tool on a mobile device, not only for websites but also for new applications based on web technologies (progressive web apps, games, …).
In our ranking of the 30 most popular mobile apps, across the mail, direct-messaging, social-network and browser categories, web browsing and social networks are on average more energy-consuming than games or multimedia applications. The ratio is about 1 to 4 between the least and the most energy-consuming applications.
Reducing the environmental impact of digital life and increasing phone autonomy depend in part on choosing a good browser, just as reducing the impact of your transport means choosing the most efficient vehicle.
Last year we published the 2018 ranking of the least energy-consuming browsers; we have now produced a new, more complete edition for 2020, made with our GREENSPECTOR App Mark.
The average score is 36/100, which is fairly mediocre. This can be explained by low scores on each of the metrics. The three least energy-consuming browsers are Vivaldi, Firefox Preview and Duck Duck Go.
Overall energy consumption (in mAh)
The median is 47 mAh and a large share of the browsers sit around this level (8 out of 18 are in the 2nd quartile). Note that the last 3 browsers in the ranking show consumption 75% higher than the median: Firefox, Qwant and Opera Mini are indeed very energy-intensive.
Energy consumption of navigation (in mAh)
The last 3 browsers of the overall ranking (Opera Mini, Firefox and Qwant), as well as Mint, consume much more than the average (between 20 and 35 mAh against 16 mAh).
This suggests that, for most browsers (apart from the exceptions above), pure navigation is not the cause of the differences in overall consumption. The differences are mainly due to the rendering engines used: most browsers rely on the Chromium engine. Opera Mini's specificity is that it routes traffic through a proxy which can compress sites. This proxy seems to be bad for energy: decompression on the user's phone consumes a lot.
For the Firefox app, energy overconsumption is a known and acknowledged issue; it is one of the reasons why Mozilla is developing a new browser, internally code-named Fenix and publicly called Preview. Its measurements in this ranking are rather encouraging (around the average). For Qwant, the overconsumption comes from its use of the Firefox engine: the measurements for Qwant and Firefox are indeed very close.
Energy consumption of features (in mAh)
The main feature, browsing the web, relies on other important features: opening a new tab, entering an address in the address bar, … Indeed, when a new tab is opened, each browser offers different features: most-visited websites, latest news, …
While browsers differ little on pure navigation, there are significant differences in energy consumption on these other features, with a ratio of more than 3 (between 4 mAh and 12 mAh).
Note that the top 3 (Firefox Focus, Firefox Preview and Duck Duck Go) have a simple homepage, so the browser's consumption in idle (inactivity) is very low. Functional sobriety pays off!
When launching browsers, energy consumption is quite similar across the board. Note, however, that opening a tab and entering a URL are actions performed many times a day. With a daily projection of 30 new tabs and 10 URL entries, we can clearly see the difference between browsers and the large lead of Firefox Preview and Focus!
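The daily projection above can be sketched as a small calculation (the per-action costs are illustrative values, not the measured ones):

```javascript
// Daily energy cost of recurring browser actions, using the article's
// assumption of 30 new tabs and 10 URL entries per day.
// tabCost_mAh and urlCost_mAh are illustrative per-action costs.
function dailyFeatureCost(tabCost_mAh, urlCost_mAh, tabs = 30, urlEntries = 10) {
  return tabs * tabCost_mAh + urlEntries * urlCost_mAh;
}

// A browser costing ~0.2 mAh per tab and ~0.3 mAh per URL entry would
// spend about 9 mAh per day on these two actions alone.
```

Small per-action differences thus add up to visible daily gaps between browsers.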
The basic features are not insignificant in the overall consumption.
Projection of autonomy (in number of hours)
If we take this energy data and project it onto browsing several websites, we can identify the maximum time a user can browse before completely discharging the battery:
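The projection can be sketched as follows (battery capacity and discharge values are illustrative):

```javascript
// Autonomy projection: hours of browsing before the battery is empty,
// assuming a constant discharge rate while browsing.
function autonomyHours(batteryCapacity_mAh, discharge_mAh_perHour) {
  return batteryCapacity_mAh / discharge_mAh_perHour;
}

// A 3000 mAh battery drained at 150 mAh per hour of browsing:
// autonomyHours(3000, 150) -> 20 hours
```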
Data consumption (in MB)
The difference in data consumption between browsers (an 8 MB spread) is explained both by pure navigation and by the various features.
On navigation, several factors explain this difference:
some applications do not manage the cache at all, for data-protection and confidentiality reasons (Firefox Focus)
the use of a proxy that optimizes data (Opera Mini)
differences in cache-management implementations: some browsers may invalidate the cache, so data is reloaded even though it is cached
additional data consumption continuing in the background (idle tabs, unblocked background data, …)
download-performance differences that lengthen the measurement window: if a browser is fast, the flip side is that potentially much more data is loaded in the background
The difference in overall consumption can also be explained by the data consumption of the basic functionalities:
Many browsers consume a lot of data here. Note Qwant's 3 MB, which seems abnormal! For a browser, this consumption should be close to 0: since a browser's main feature is to display a website, any other feature (and its associated consumption) can be considered "over-consumption". In this context, many browsers consume data while the URL is being typed. This is mainly explained by URL-suggestion features: there are exchanges between the phone and servers, performed either directly by the browser or by the associated search engine.
For example, for the Yandex browser below, the detail of data exchanges while typing a URL shows more than 400 KB exchanged.
By contrast, below, the exchanges for Brave are frugal, at less than 2 KB.
Browser performance (in seconds)
The measures allow us to evaluate the performance of the key features:
Launching the browser
Adding a tab
Writing a URL
Removing the cache
Mozilla Kraken Bench
NB: This study does not evaluate website display performance. However, the Mozilla Kraken benchmark partially covers this by exercising the browsers' engines.
Efficiency of browsers (in mAh/s)
We can evaluate browser efficiency by combining the Mozilla Kraken benchmark's performance with the associated energy. Efficiency here is the energy consumed per unit of time:
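With this definition, the calculation is a simple ratio (the example values are illustrative, not measured):

```javascript
// Efficiency as defined here: energy consumed per unit of time during the
// benchmark run (mAh per second). Lower is better for the same work done.
function efficiency_mAh_perSecond(energy_mAh, durationSeconds) {
  return energy_mAh / durationSeconds;
}

// A browser using 2 mAh over a 40-second Kraken run:
// efficiency_mAh_perSecond(2, 40) -> 0.05 mAh/s
```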
Samsung Internet, Opera Mini and Opera are the most efficient browsers. This ranking differs from the overall energy-consumption ranking. For Samsung Internet, first place in efficiency on Samsung hardware can be explained by the optimized integration a manufacturer can achieve with pre-installed software. The Opera browser is well positioned (2nd for overall consumption and 3rd for efficiency).
Avenues for improvement
It is possible to reduce the energy consumption of browsing.
For the user:
Choose an efficient browser
Use bookmarks or favorites to avoid typing in the address bar
Configure the browsers' energy-saving options (dark mode or theme, data-saver, …)
For website developers:
Eco-design the site
Test and measure on different browsers to identify behavioral differences and take them into account
For browser vendors:
Measure energy consumption and efficiency
Reduce the resource consumption of recurring features (URL entry, new tab, …)
Make the homepage as simple as possible.
The measurements were carried out by the GREENSPECTOR App Mark laboratory on the basis of a standardized protocol: Samsung S7 smartphone, Android 8, Wi-Fi, 50% brightness. Between 4 and 8 iterations were carried out, and the value used is the average of these measurements. Measurement campaigns follow a scenario that evaluates browsers in different situations.
Evaluation of features
Launching the browser
Adding a tab
Writing a URL in the search bar
Remove tabs and clean the cache
Launch of 6 sites, each with a 20-second wait, to be representative of a user journey
At launch (this makes it possible to evaluate the browser's homepage)
After closing the browser (to identify closing problems)
For each iteration, the following tests are performed:
Deleting the cache and tabs (without measurement)
A second measurement, to capture behavior with the cache populated
Removing the cache and tabs (with measurement)
System shutdown of the browser (not just a user close, to ensure the browser really terminates)
The average measurement therefore takes into account a navigation with and without cache.
The main metrics analyzed are display performance, energy consumption and data exchanged. Other metrics (CPU consumption, memory consumption, system data, …) are measured but will not be shown in this report. Contact GREENSPECTOR for more information.
To improve the stability of the measurements, the protocol is completely automated, using an abstract GREENSPECTOR test-description language that allows us to fully automate the protocol. Browser configurations are the defaults: we changed no settings of the browser or its search engine.
A score out of 100 makes it possible to rank the browsers. It is based on the scoring of 3 main metrics:
Duration required for a test step
Battery discharge rate observed on the device during the test step, compared with the device's discharge rate before the application was launched
Measurements in µAh/s, then classification as multiples of the reference discharge rate
Total data volume (transmitted + received) during the test step
A weighting ratio is applied to the 5 step levels (from 5 for dark green to -1 for dark red), as described in the following example table:
The application's score then comes to 61/100 for the energy metric. Once each of the three metric scores out of 100 is obtained, the application's total score is calculated with equal weighting of the three metrics: Total Score = (Performance Score + Energy Score + Data Score) / 3
Duck Duck Go
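The scoring rule described above can be sketched as follows:

```javascript
// Total score: equal weighting of the three per-metric scores (each out of 100),
// as described in the protocol. Example values are illustrative.
function totalScore(performanceScore, energyScore, dataScore) {
  return (performanceScore + energyScore + dataScore) / 3;
}

// totalScore(70, 61, 40) -> 57
```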
Some browsers were discarded because they did not allow test automation: for instance, UC Browser and Dolphin could not be measured. Beyond automation, this is a symptom of an accessibility issue in the application. To make applications accessible to people with visual impairments (among others), button labels must be set; our automation relies on this information. In the end, these browsers do not appear in the ranking, but accessibility problems can in any case be considered a crippling flaw.
Note: the 2020 ranking is hardly comparable to that of 2018, since our protocol has completely evolved; the tests are now more advanced and automated.
The application should not require a recent OS version to be usable. Some users do not install updates, either by choice or because their platform does not allow it. According to our “PlayStore Efficiency Report 2019“, only 70% of apps on the store are compatible with all versions of Android.
The application must comply with the accessibility rules and must not exclude users with disabilities.
The app should work well on older phones too, not only on the most recent models. This criterion degrades if the sobriety criterion is not respected. A quarter of Google PlayStore applications exclude the 10% oldest mobiles. (Source: PlayStore Efficiency Report 2019)
The application must limit its resource consumption (CPU load, memory used, data exchanged) to avoid any slowdown of, or pollution of, other applications (for instance because of a memory leak). 50% of Google PlayStore apps continue processing after the app is closed. (Source: PlayStore Efficiency Report 2019)
The application must limit its network consumption in order not to place load on the data centers, thus avoiding the extra costs of unnecessary server congestion.
The first launch of the application must be fast: otherwise your users may not go further, and the inclusion criterion will not be respected either.
The loading times of the application must be acceptable in all network situations.
The application should require few or no permissions. Do you really need to read your user's contact list? Optimizing this matters all the more because the more permissions there are, the more resources the application consumes, which negatively affects the performance criterion.
The application should have few or no trackers. Integrating a large number of trackers implies greater resource consumption and can also cause bugs, all the more so when the connection is degraded. On average, adding a tracker causes resource over-consumption of 8.5%. (Source: PlayStore Efficiency Report 2019)
If the application respects the sobriety criterion, the CO2 impact of its use is lower, as is the pressure on the components of the user's equipment (battery ageing, loss of performance). As a result, the user is less likely to renew their equipment, which reduces the risk of hardware obsolescence. Our latest study shows that mobile apps account for at least 6% of digital CO2 emissions.
Some avenues for improving your GREENSPECTOR App Mark score
Minimum SDK version: support older Android versions to avoid excluding users on older-generation platforms.
Number of trackers: the fewer trackers the application has, the more it respects the user's data and privacy. In addition, through their processing and data exchanges, trackers increase the application's consumption.
APK size: the bigger the application binary, the more the network is solicited and the less efficient the application. A large application also uses up the limited storage space of some users.
Data loaded: the amount of data loaded throughout the test run. Limiting it reduces resource consumption on both the smartphone and the network.
Data loaded in the background: when the application is not in use, it must limit its impact and send or receive as little data as possible.
More global metrics
Some metrics relate directly to the application's impact and efficiency. They can be influenced through the previous metrics, or by other means (functional optimization, source-code improvements, …).
CO2: the more energy the application consumes, the more the battery is solicited and ages. This may lead to premature replacement of the battery, or even of the smartphone, and therefore to a higher environmental impact. Let's not forget that most of a smartphone's environmental impact comes from its manufacturing phase rather than its use phase: keeping it longer reduces its overall impact.
Energy overconsumption: if the application overconsumes, it increases the environmental impact and also inconveniences the user, notably through loss of autonomy, an additional stress factor.
Performance after first installation: applications sometimes perform additional processing during their first launch, which can lengthen it. This processing should be limited, as the performance loss can be inconvenient for the user.
Performance: the application's launch time matters to the user. It must be reduced as much as possible while consuming as few resources as possible.
3G performance: under poor network conditions, performance must be kept under control to maintain a good user experience. Some users may not even have access to the application if performance is too degraded. Offering a frugal service that takes mobility constraints into account is therefore a key to success.
What about now?
You are certainly wondering how your application fares on these 5 indicators. Is it rather virtuous? Is there any risk? How does it rank against its competitors? Are there quick wins? Ask us and we will tell you! Contact us, and we will soon present you with your own evaluation: inclusive, sober, fast, ecological and discreet, just like your application.
During the Mobile One event, GREENSPECTOR announced a study of the major mobile consumption trends on the Google Play Store. More than 1,000 applications were put through the Performance, Sobriety and Inclusion sieve by measurement tools developed by GREENSPECTOR.
This is a great first for GREENSPECTOR: one of our main solutions is now available in the Apple ecosystem.
Long awaited by our customers, this iOS compatibility makes it possible to complete GREENSPECTOR App Scan analyses.
The performance and efficiency measurements on iPhone complement those on Android. Your applications and websites benefit from maximum coverage, representative of your users' habits, whatever the underlying technology.
So you can now ensure the quality of the user experience across all your mobile devices by keeping their performance and efficiency under control. Note that, because of restrictions imposed by Apple, not all the usual GREENSPECTOR metrics are available yet. But our teams keep working to offer you, release after release, ever sharper analyses of your applications.
I have been working in Green IT for more than 8 years, and I have lately seen several studies and initiatives get started. This is a very positive sign and shows that there is real momentum to change the impact of ICT. All actions, whether small-scale, such as simple awareness-raising, or larger-scale, such as optimizing a website with millions of visitors, are worth taking given the climate emergency.
However, it is important to avoid any greenwashing and to understand the impact of the good practices mentioned (are they really all green?).
Myth 1 – Fast software is sober software.
Fast software is software that displays quickly. This says nothing about its sobriety. On the contrary, practices may be put in place to achieve a quick display that go against sobriety, such as loading scripts after the page is displayed: the page displays quickly, but many processes run in the background and impact resource consumption.
Myth 2 – Optimizing request size and page weight makes the software more frugal.
True and false
True, because fewer resources will indeed be used on the network and servers, which means less environmental impact. It goes in the right direction.
False, because the evaluation of sober software cannot rely only on this type of technical metric. Some elements can have an equally important impact: a carousel on a home page, for example, may be quite light in weight and requests (if well optimized) yet still have a strong impact on user-side resource consumption (CPU, graphics, …).
Myth 3 – Automated checks via tools make me green
True and false
True, because measuring is important: it lets us know objectively where we stand, and improve.
False, because the evaluation only covers technical elements. There is a bias: we only measure what we can automate. This is the criticism that can be made, for example, of Lighthouse (the tool built into Chrome) regarding accessibility: a totally inaccessible site can still score 100. The same criticism applies to the tools used in ecodesign. For example, the website http://www.ecoindex.fr/ is an interesting tool to start the process; however, its calculation is based on 3 technical elements: page size, number of requests and DOM size. These are important elements of a page's impact, but several others can matter too: CPU processing from scripts, graphics processing, more or less efficient use of the radio cell … all elements that can create false positives.
Measurement software will be a useful complement 😉
Myth 4 – My software uses open-source and free code, so I’m green
Free software is software in its own right: it suffers from the same obesity as other software and will therefore potentially be a big consumer. On the other hand, free software has a stronger capacity to integrate good efficiency practices. It still needs to implement them, or at least begin evaluating the impact of its solution …
Myth 5 – The impact is more on the datacenter, on the features, on that …
True and false
Every piece of software is different in its architecture, use, implementation, functions … no serious study can certify a generality about one domain having more impact than another. In some cases the impact will be mostly in the datacenter (for example, computation software); in others it will be on the user side (for example, mobile applications). Likewise, some software is obese because of its many features, while other software is obese because of poor coding or an overly heavy external library.
Myth 6 – Ecodesign requires a structured and holistic approach
True and false
True, because it is indeed necessary to involve all the company's actors (developers, but also Product Owners and business departments) and to have a coherent strategy.
However, starting process and product improvement through unit, isolated actions is very positive. Software heaviness has reached a state where any isolated positive action is worth taking.
Both approaches are complementary. Postponing certain practices while waiting for a structured approach (which can be cumbersome) would endanger the optimization and competitiveness of your software.
Myth 7 – Green coding does not exist; optimization is premature …
This argument has existed since the dawn of (software) time. Implemented code, legacy code, libraries … optimization avenues are numerous. My various audits and team coachings have shown me that optimization is possible and that the gains are significant. To believe otherwise would be a mistake. And beyond optimization, learning to code greener is a learning approach useful to all developers.
Myth 8 – My organization is certified green (ISO, ICT responsible, Lucie …), so my product is green.
All these certifications will indeed ensure that you are on the right track to produce more respectful software. Far be it from me to say they aren't useful. However, it must not be forgotten that these are organization-oriented certifications. In a structured industry (agriculture, a factory …), the company's deliverables are closely aligned with the process: certifying an organic farm ensures that the product is organic.
In the world of software, however, it is not so simple: the quality of deliverables fluctuates greatly, even with a control process in place. In addition, an organization potentially consists of a multitude of teams that do not share the same practices.
It is therefore necessary to control the quality of software products, and to do so continuously. This approach is complementary to certification, but mandatory; otherwise we risk discrediting the label (or even sliding into greenwashing).
Myth 9 – Optimizing energy is useless; it's the CO2 equivalent that matters
Ecodesign work is mainly based on reducing CO2 equivalent (along with other indicators such as eutrophication …) over the entire life cycle of the ICT service. It is therefore important to take this metric into account; without it, we risk missing IT's impacts. However, as with points 5 to 7, no optimization should be discarded, and it is necessary to understand where the software's impacts lie. Integrating the energy question into teams is nonetheless urgent. In some cases, energy consumption in the use phase is only part of the impact (compared with embodied energy, for example), but in many cases high energy consumption is a symptom of obesity. Moreover, for software running in mobility (mobile applications, IoT), energy consumption directly affects device renewal (through battery wear).
Myth 10 – I compensate so I’m green
It is possible to offset one's impact through various programs (financing an alternative energy source, reforestation …). That is a very good action. However, it is complementary to an ecodesign process. It is important to sequence the actions: I optimize what I can, and I offset what remains.
Frugal ICT is simple because it is common sense. However, given the diversity of the software world, the findings and good practices aren't so simple. The good news is that, given how generally cumbersome software is and how far behind optimization lags, any action taken will be positive. So don't worry: start the process; you just need to be aware of a few pitfalls. Be critical, evaluate yourself, measure your software!
The GREENSPECTOR team is proud to announce its new release: version 2.5.0 Pear! With this new version, you can choose the type of network connectivity (Wi-Fi, 4G, 3G, 2G) when launching your tests on Power Test Bench phones. You can now verify that your application remains efficient under poor network conditions.
The GREENSPECTOR team is glad to announce that its newest release is ready: version 2.4.0 Olive! With this release, you get Android system metrics for all your test steps (in addition to the resource and energy metrics). This allows you to analyze the application's behavior more finely and identify design issues. Likewise, with this version you can measure several packages and distinguish transmitted data from received data. Details on the improvements below.