Category: Technical Zone

How does Greenspector assess the environmental footprint of using a digital service?

Reading Time: 5 minutes

Foreword: assessing the impact of use

  • This note briefly describes the methodology we use at the date of its publication.
  • Because of our continuous improvement process, we are constantly working to improve the consistency of our measurements as well as our methodology for projecting environmental impact data.
  • We assess the environmental impacts caused by the use of a digital service.
  • This analysis is based on a Life Cycle Analysis (LCA) method, but it is not about performing the LCA of a digital service.​
    • Such an analysis would be an exercise on a much broader scope, which would include elements specific to the organization that created the software.
    • In the LCA of a digital service, it would be appropriate, for example, to include in the manufacturing phase: the commuting of the project team (internal staff and service providers), the heating of their premises, the PCs and servers needed for development, integration and acceptance testing, on-site or remote meetings, etc.

Environmental footprint assessment methodology

Our approach

The chosen modelling is based on the principles of Life Cycle Analysis (LCA), and mainly on the definition given by ISO 14040.

It consists of a complete Life Cycle Inventory (LCI) part and a simplified Life Cycle Assessment (LCA). The LCI is predominant in our model: it ensures that we have reliable and representative data. In addition, the LCI thus obtained can, if necessary, be integrated into more advanced LCAs.

We assess the environmental impact of digital services on a limited set of criteria:

This methodology has been reviewed by the EVEA firm – a specialist in ecodesign and life cycle analyses.
Note on the water resource: grey water and blue water are taken into account at all stages of the life cycle. Green water is added for the manufacturing cycle of terminals and servers. See the definition of the water footprint.

Quality management of results

The quality of LCA results can be modelled as follows (1):

Quality of input data × Quality of methodology = Quality of results

To improve the quality of the input data, we measure the behaviour of your solution on real devices. This limits reliance on models, which are potential sources of uncertainty.

To manage the quality of the results, we apply an approach that identifies the sources of uncertainty and calculates the uncertainty of the model. Our method of managing uncertainties uses fuzzy logic and fuzzy sets (2).

Ultimately, unlike other tools and methodologies, we can provide margins of error with the results we give you. This allows more confident communication of the environmental impact to stakeholders (users, internal teams, partners, etc.).

(1) Quality of results: SETAC (Society of Environmental Toxicology and Chemistry), 1992

(2) Although often mentioned in the literature dealing with uncertainties in LCA, this approach is little used: stochastic models such as Monte Carlo simulations are often preferred (Huijbregts MAJ, 1998). In our case, the use of fuzzy logic seems more relevant because it allows us to deal with epistemic inaccuracies, especially those due to expert estimates.
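To make the idea of uncertainty propagation more concrete, here is a minimal illustrative sketch (in Python) of how triangular fuzzy numbers can be propagated through a simple additive impact model. The representation and every figure are assumptions chosen for the example; this is not Greenspector's actual model or its values.

from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    # A fuzzy quantity described by its support and most likely value: (low, mode, high).
    low: float
    mode: float
    high: float

    def __add__(self, other: "TriangularFuzzy") -> "TriangularFuzzy":
        # The sum of two triangular fuzzy numbers is obtained component-wise.
        return TriangularFuzzy(self.low + other.low, self.mode + other.mode, self.high + other.high)

    def as_margin(self) -> str:
        # Report the most likely value with a symmetric margin derived from the support.
        return f"{self.mode:.2f} ± {(self.high - self.low) / 2:.2f}"

# Hypothetical per-view contributions in g CO2 eq (low, most likely, high).
terminal = TriangularFuzzy(0.02, 0.04, 0.07)
network = TriangularFuzzy(0.05, 0.08, 0.12)
server = TriangularFuzzy(0.01, 0.02, 0.04)

print("Impact per view (g CO2 eq):", (terminal + network + server).as_margin())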

Calculation steps

Phases taken into account for the equipment used

Note on the impact model of the terminal part

Classical impact analysis methodologies assume a uniform impact for the software (an average consumption regardless of the software or its state). Our innovative approach makes it possible to refine this impact. In addition, we improve the modelling of the software's impact on the hardware manufacturing phase by accounting for battery wear.

The battery of smartphones and laptops is a consumable; we model the impact of the software on its wear.
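As an illustration of this type of battery-wear allocation, here is a deliberately simplified sketch, not Greenspector's exact formula; all figures are assumptions chosen in the spirit of the assumptions listed below.

# Simplified sketch: allocating a share of battery manufacturing impact to one usage session.
battery_capacity_mah = 3000          # assumed average smartphone battery capacity
battery_voltage_v = 3.85             # assumed battery voltage
max_charge_cycles = 500              # assumed max number of cycles before battery wear
battery_manufacturing_kg_co2 = 5.0   # assumed manufacturing impact of one battery, kg CO2 eq

full_charge_energy_mwh = battery_capacity_mah * battery_voltage_v  # energy of one full charge, mWh

def battery_wear_impact_g(session_energy_mwh: float) -> float:
    # Fraction of a charge cycle consumed by the session, converted into a share
    # of the battery manufacturing impact, returned in g CO2 eq.
    cycles_consumed = session_energy_mwh / full_charge_energy_mwh
    return (cycles_consumed / max_charge_cycles) * battery_manufacturing_kg_co2 * 1000

print(f"Battery wear for a 50 mWh session: {battery_wear_impact_g(50):.4f} g CO2 eq")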

Input data for the Life-Cycle Inventory

Measured data
  • Energy consumed on a smartphone
  • Data exchanged on the network
  • Requests processed by the server

Modelled data
  • Energy consumed on tablet and PC
  • Energy and resources consumed on the server
  • Energy and resources consumed on the network

Terminal assumptions
  • Smartphone manufacturing impact
  • Smartphone battery manufacturing impact
  • Tablet battery manufacturing impact
  • PC battery manufacturing impact
  • Max number of cycles before smartphone battery wear
  • Max number of cycles before tablet battery wear
  • Max number of cycles before PC battery wear
  • Average smartphone battery capacity
  • Average tablet battery capacity
  • Average PC battery capacity
  • Battery voltage
  • Smartphone lifespan
  • Tablet lifespan
  • PC lifespan
  • Battery replacement vs smartphone replacement ratio
  • Battery replacement vs tablet replacement ratio
  • Battery replacement vs PC replacement ratio
  • Reference discharge speed on the terminal (measured)

Server assumptions
  • Server power
  • Number of cores
  • Data centre PUE
  • Power per core
  • Server processing time (TTFB)
  • Max number of requests per second
  • Power per request
  • Number of cores per VM
  • Number of VMs for a simple app
  • Number of VMs for a complex app
  • Server manufacturing impact
  • Server lifespan
  • CDN throughput

Energy assumptions
  • World average electricity emission factor
  • Electricity emission factor for France
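To show how this kind of assumption comes into play, here is a simplified, purely illustrative sketch of a usage-phase calculation for one page view. The formulas are deliberately naive and every numeric value is an assumption for the example, not a Greenspector figure.

# Purely illustrative usage-phase sketch for one page view; all values are assumptions.
terminal_energy_mwh = 15.0            # energy measured on the smartphone for the page view
data_exchanged_mb = 2.0               # data exchanged on the network
requests = 12                         # requests processed by the server

network_kwh_per_mb = 0.0002           # assumed network energy intensity
server_power_w = 200.0                # assumed server power
server_max_requests_per_s = 100.0     # assumed max number of requests per second
pue = 1.5                             # assumed data centre PUE
emission_factor_kg_per_kwh = 0.52     # assumed world average electricity emission factor

terminal_kwh = terminal_energy_mwh / 1e6
network_kwh = data_exchanged_mb * network_kwh_per_mb
server_kwh = (requests / server_max_requests_per_s) * server_power_w * pue / 3_600_000

total_kwh = terminal_kwh + network_kwh + server_kwh
print(f"Usage-phase impact: {total_kwh * emission_factor_kg_per_kwh * 1000:.3f} g CO2 eq")

In the full model, the manufacturing share of terminals, network and servers (allocated over their lifespans, including battery wear) and the uncertainty of each assumption come on top of this usage-phase term.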

Example of work on the assumptions:

The uncertainty propagation methodology requires us to identify precisely the quality of these assumptions. Here are a few examples, in particular for the impact of hardware manufacturing.

The bibliographic analysis allows us to identify the impacts of different smartphones and to associate a DQI (Data Quality Indicator) confidence index with each figure. These figures mainly come from the manufacturers.

The average impact calculated from these confidence indices is 52 kg CO2 eq, with a standard deviation of 16 kg.
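For illustration only, here is a sketch of this kind of aggregation with hypothetical figures and DQI-based weights; the real bibliographic values and weighting scheme are not reproduced here.

import statistics

# Hypothetical (impact in kg CO2 eq, DQI-based weight) pairs from a bibliographic analysis.
sources = [(35.0, 0.8), (45.0, 0.6), (57.0, 0.9), (70.0, 0.5)]

values = [value for value, _ in sources]
weights = [weight for _, weight in sources]

weighted_mean = sum(value * weight for value, weight in sources) / sum(weights)
spread = statistics.stdev(values)  # plain standard deviation, used as a simple proxy for the spread

print(f"Smartphone manufacturing impact: {weighted_mean:.0f} ± {spread:.0f} kg CO2 eq")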

Example of restitution

  • In this example, the median impact of 0.14 g CO2 eq comes mainly from the ‘Network’ part.

  • This impact corresponds to viewing a web page for 20 s.

  • The uncertainty is calculated by the Greenspector model by applying the principle of propagation of uncertainties from the scope and assumptions described above.

Necessary elements

To determine the impact of your solution, we need the following information:

  • Smartphone / Tablet / PC viewing ratio
  • France / World viewing ratio
  • Server location (France / World)
  • Simple or complex servers (or number of servers in the solution)

Building on this estimate, we can carry out a simplified LCA based on this model while adapting other elements to particular circumstances, for example (see the sketch after this list):

  • Measurement of the energy consumption of the server part (via a partner)
  • Refinement of the server assumptions (PUE, server type)
  • Measurement of the PC part (via laboratory measurement)
  • Refinement of the electricity emission factor for a particular country, etc.
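As an indication, this information can be thought of as a small configuration set. The sketch below uses hypothetical field names and values purely to make the required inputs and optional refinements concrete; it is not Greenspector's actual interface.

# Hypothetical input set for an impact estimate; field names and values are illustrative only.
estimate_inputs = {
    "device_mix": {"smartphone": 0.6, "tablet": 0.1, "pc": 0.3},   # viewing ratio
    "audience_mix": {"france": 0.7, "world": 0.3},                 # viewing location ratio
    "server_location": "france",                                   # or "world"
    "server_profile": "simple",        # "simple", "complex", or an explicit number of servers
    # Optional refinements for a simplified LCA:
    "measured_server_energy_kwh": None,   # e.g. supplied by a partner measurement
    "pue_override": None,                 # refined data centre PUE
    "country_emission_factor": None,      # electricity emission factor of a specific country
}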

Comparison of estimation models

Greenspector calculations are integrated into a web service currently used by our customers. Soon, you will find our environmental footprint calculations for your mobile applications and websites in a SaaS interface.

Why is it crucial to monitor the environmental impact of a URL?

Reading Time: 4 minutes

The more frequently a URL is viewed, the more essential it is to reduce its digital impact. A simple measurement makes it possible to check and react to changes made on the page: changes to the visual identity, events (e-commerce sites during sales periods) or technical modifications. All of these changes can have a big impact on the sobriety level of a web page.

When to integrate the digital sobriety measurement of a URL?

These measurements can be integrated as part of daily monitoring on any page. “Dynamic” pages whose content changes regularly, such as e-commerce home pages or press sites, are crucial to monitor. Even less “dynamic” pages can be targeted: updating a CDN library can, for example, impact this type of site. In this case, the measurement makes it possible to ensure that the new look of the page does not harm the sobriety level of the website: an image that is too heavy for a banner can easily be spotted.

Measurement can also be used during the development phase, to test choices before going into production or to correct excessively impactful changes very early. It is often difficult to change a technology or an implementation once a site is in production. Measuring a URL during the development phase allows you to test different options early on and see which one fits best, taking digital sobriety into account as one of the criteria.

Example of daily monitoring of a web page

How to measure the digital sobriety of a URL?

There are several options available to you at Greenspector to measure a URL.

A first tool allows us to perform a simple first measurement of a URL and obtain rapid observations: the Benchmark tool based on standardized tests.

To go further, we can measure a complete user journey on a website using App Scan. This kind of measurement covers a full path through a website or mobile application, such as a purchase journey or the completion of a bank transfer. It helps identify the areas to focus on to achieve significant improvement. As part of an App Scan, the measurement of a URL is also possible via an automated journey, which provides specific metrics beyond the benchmark.

URL measurement vs Benchmark

Here are the different steps measured during a URL measurement compared with the Benchmark tool:

Measured steps

  • Loading without cache
  • Pause after launching without cache
  • Pause on the web without cache
  • Scroll on the web
  • Pause on the web after scroll
  • Loading with cache
  • Pause on the web with cache
  • Background app metering

The URL measurement contains more steps than the benchmark; we will come back to that. Unlike the benchmark, the URL measurement is also more precise on loading: the measured duration is the actual loading time, whereas the benchmark tool measures over a fixed period of 20 seconds. Another difference is that the URL measurement handles the banners present on the page, in particular cookie consent banners, which the benchmark tool does not.

Finally, the URL measurement by Greenspector makes it possible to carry out measurements on browsers other than Google Chrome. The benchmark tool is limited to that browser, but our GDSL expertise allows us to offer other browsers, such as Firefox, to go even further.

The steps of a URL measurement

  • Loading without cache: This is the loading of the URL after the cache has been cleared and all cookies deleted. This step measures the loading of the web page when a user arrives without a cache. It is essential for URLs with many unique visits.
  • Pause after loading without cache: Measuring a short pause after loading captures the data exchanges and other operations that are still taking place once the page is displayed. Ideally, there are none; otherwise, it allows us to make observations and suggest ways to eliminate or reduce this processing.
  • Pause on the page without cache: This represents a user reading the content, with no movement on the screen. The idea of this step is to measure the impact of the continuous display of the page.
  • Scroll on the page: Scrolling to the bottom of the page to observe the processing that occurs during the scroll. Here we can observe possible data exchanges (pagination, image downloads, advertising) as well as the associated smoothness.
  • Pause on the page after scrolling: A pause measured after scrolling, to observe processing that continues after user interactions end.
  • Loading with cache: Measurement of the loading of the URL with the cache from previous interactions (loading, scrolling). This step shows the impact of caching on the site. It is important for pages that will be visited many times by known visitors, such as a website home page.
  • Pause on the page with cache: A pause on the page to check whether, despite the cache, processing continues after loading.
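For illustration, here is a minimal sketch of the shape of such a journey driven over adb on an Android device. It performs no measurement and skips the cache-clearing and browser-cleanup details (covered further down); the URL, coordinates and durations are placeholders.

import subprocess
import time

URL = "https://example.org/"   # placeholder page under test

def adb(*args: str) -> None:
    # Run a command on the connected Android device through adb.
    subprocess.run(["adb", "shell", *args], check=True)

# Loading without cache (first visit after a cleanup), then a short pause.
adb("am", "start", "-a", "android.intent.action.VIEW", "-d", URL)
time.sleep(10)

# Pause on the page: the user reads the content, no interaction.
time.sleep(20)

# Scroll towards the bottom of the page, then pause again.
adb("input", "swipe", "500", "1600", "500", "400", "800")
time.sleep(10)

# Loading with cache: reopen the same URL without clearing anything, then pause.
adb("am", "start", "-a", "android.intent.action.VIEW", "-d", URL)
time.sleep(20)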

Thanks to our tools and our expertise, we can offer reliable and relevant measurements of a URL, whether it is a simple measurement giving initial findings with our benchmark tool or a more in-depth measurement with our GDSL language. This regular URL monitoring gradually improves the sobriety level of your website. Compared with other approaches commonly used on the web (analysing only the loading of the page with Lighthouse or similar tools), it gives a much finer view of the page's consumption.

How to clean up the Chrome app for reliable energy and performance measurements?

Reading Time: 5 minutes

Context

Welcome to this new “GDSL focus” section, in which we explain some methods of the Greenspector GDSL automation language. If you have not yet read the GDSL introductory article, do not hesitate to do so before going further with this one.

Today we will focus on the browserReset method. It cleans up a browser so that reliable performance and energy measurements can be taken.

To perform correct browser measurements, you need to be sure you are measuring only your web page, without any interference from the browser itself, such as open tabs. Without this, the measurement of the consumption of a web page would be biased by background tabs carrying out processing and network exchanges. Moreover, once the cleaning has been carried out, it becomes possible to measure precisely the consumption of the empty browser and to compare it with the consumption of the site.

When it comes to automation, we cannot afford not to know the initial conditions of our test. The unknown could disrupt its proper functioning or even produce a test from which nothing can be learned, because in the end we would not know what was actually measured.

On an automated test bench, it is hard to know the state of the browser at the start of your test: you do not know whether a previous test left tabs open, changed the browser language, or anything else. We could take a look in the smartphone room, but that becomes complicated when it is on the other side of the world, not to mention the current health situation (this article was written during the Covid-19 crisis). You could also use tools to monitor the phone remotely, but that only works if you are present when your test runs. For continuous integration campaigns that can run for hours or even overnight, you are not going to be able to monitor them constantly.

So what should be done? Clean the browser efficiently before each test.

Quick approach

In our case, we will use the Chrome browser; the method works the same way with other browsers. We will also assume that the browser is updated regularly on the phones.

A quick method, which will work in many cases to clean up a browser, is to close open tabs and clean the cache at the start of each of our tests. This way, the next time the browser opens during measurements it will be on an empty tab.

This method will work on the majority of smartphones but is difficult on tablets because of the way tabs are managed. On tablets, tabs are generally displayed in a bar at the top (or bottom) of the browser, as on a computer. The peculiarity of this tab bar is that it is invisible to classic automation tools, which makes it very difficult to click on the cross to close a tab. In addition, the size of a tab depends on the number of tabs open, making clicking by coordinates even more unreliable.

To top it off, the button to close all tabs at once only appears with a long press on the close cross of the first tab, making it unusable for us.

The last difficulty with this method is maintenance: when the application is updated, tab management can change, as can the application's architecture, which requires the automation scripts to be modified regularly.

Complete solution

The solution used at Greenspector to clean the browser before our measurements and ensure the relevance of our results is as follows:

  • Clean up application data. This is usually done using the adb shell pm clear PACKAGE_NAME command but can also be done in the phone’s settings menu.
  • Skip the browser's first-launch popups with automation.

Once this is done, there is one last point that can pose a problem. Some manufacturers or mobile operators display a personalized browser home page. To be able to compare measurements between several smartphones, you must get rid of this home page. We have chosen to disable the home page in the browser settings.

There is one last point regarding this home page. Indeed, it was loaded the first time the browser was launched and is therefore open, which is not practical for taking measurements. Our solution was to navigate to Chrome’s “new tab” page at the following URL:

  • chrome://newtab

Once all these operations are done, your browser is ready for measurements without the risk that pre-existing conditions will disturb them.

Ideally, you also clean up at the end of your test; that way you leave the phone ready for the next person.
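As a rough illustration of this approach (not the GDSL browserReset method itself), the sketch below performs the cleanup with standard adb commands. The Chrome package and activity names are the usual ones, but skipping the first-launch popups still requires device-specific automation and is therefore only noted in a comment.

import subprocess
import time

CHROME = "com.android.chrome"   # assumed package name of the browser under test

def adb(*args: str) -> None:
    # Run a command on the connected Android device through adb.
    subprocess.run(["adb", "shell", *args], check=True)

# 1. Clean up the application data (the equivalent of "Clear data" in the settings menu).
adb("pm", "clear", CHROME)

# 2. Launch the browser; the first-launch popups then have to be skipped with an
#    automation step (selectors differ between devices and versions, so not shown here).
adb("monkey", "-p", CHROME, "-c", "android.intent.category.LAUNCHER", "1")
time.sleep(5)

# 3. Navigate to the empty "new tab" page so that no vendor home page stays loaded.
#    (Depending on the Chrome version, this navigation may also need to go through automation.)
adb("am", "start", "-n", f"{CHROME}/com.google.android.apps.chrome.Main", "-d", "chrome://newtab")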

UPDATE: For our measurement needs, we are interested in performance, energy and mobile data. This method meets the performance and energy requirements well but is not suitable for data measurements on the Chrome browser. Indeed, after the browser is reset, Chrome automatically resynchronizes the Google account data, and for at least the first two minutes of use there are data exchanges related to the Google account. Signing out of the Google account in Chrome or on the phone does not seem to solve the problem entirely. Therefore, at Greenspector we no longer use this method to clean up a browser. We have not taken measurements on browsers other than Chrome, so we cannot say whether the method remains valid for them.

Here you know everything about the browserReset method. See you soon for a new GDSL focus where I will introduce you to another feature of the Greenspector automation language.

Introduction to GDSL: The Automation Language by Greenspector

Reading Time: 3 minutes

What is GDSL?

The term GDSL stands for Greenspector Domain-Specific Language. It is a language created by Greenspector to simplify test automation on Android and iOS. To put it simply, it is an overlay based on the automation frameworks from Google and Apple, embellished with functions to ease test automation.

This language is the result of the Greenspector expertise accumulated over several years. It combines ease of writing with the ability to measure the energy performance of an application or website.

The GDSL principle is to be a language for describing actions that will be performed on the smartphone. In that sense, it is close to Gherkin, with which it shares the quality of being readable without any developer training.

The GDSL is a series of actions that will be performed in order on the smartphone. It has the basic actions of WAIT, CLICK or PAUSE as well as more complex actions such as launching an application or managing the GPS. 

With GDSL it is possible to quickly automate most of the critical user journeys of your applications or mobile website. 

GDSL syntax

Here is an example line from GDSL:

waitUntilText,username,10000

The first element (waitUntilText) is the name of the method. It is usually in English and self-explanatory: here, we wait for a text. The main actions WAIT and CLICK are available with variations for id, text or content description.

The second element (username) is the main parameter of the method. This is usually the graphical element on which the action should be taken. Depending on the method called, it is an id, a text or a content description; in the example, it is a text.

The last element (10000) is a second parameter of the method. Such parameters are most often optional and give additional conditions for execution; here it is a time in milliseconds.

To separate each element we use a comma.

The method presented as an example is therefore used to wait for the element with the text “username” for a maximum of 10 seconds.

In the current state of the language, there are no methods requiring more than two parameters. If the method fails then the test will stop and the report will show the test as failed.
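To make this structure concrete, here is a toy sketch (in Python, not part of Greenspector's tooling) that splits a GDSL line into its method name, main parameter and optional second parameter.

from typing import NamedTuple, Optional

class GdslAction(NamedTuple):
    method: str                  # e.g. "waitUntilText"
    main_param: Optional[str]    # the targeted id, text or content description
    extra_param: Optional[str]   # an optional extra condition, e.g. a timeout in milliseconds

def parse_line(line: str) -> GdslAction:
    # A GDSL line is a comma-separated list: the method name, then at most two parameters.
    parts = [part.strip() for part in line.split(",")]
    method, main_param, extra_param = (parts + [None, None])[:3]
    return GdslAction(method, main_param, extra_param)

print(parse_line("waitUntilText,username,10000"))
# GdslAction(method='waitUntilText', main_param='username', extra_param='10000')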

The advantages of GDSL

  • GDSL does not require any development skills or knowledge of programming languages to be used or read. The names of the methods are self-explanatory and allow anyone new to the project to read and understand your tests.
  • No IDE or specific development environment is required to write GDSL, a basic text editor is sufficient.
  • One test = one file. With GDSL no need for complicated file architecture, only one file contains your test.
  • Its ease of use allows you to write a test very quickly without relying on the rest of the test project, as other automated test languages would require.
  • In addition, its ease of execution with the associated tools at Greenspector allows each new test to be implemented very quickly.
  • Updated and maintained regularly, the language already has advanced features for website automation such as tab opening or URL navigation.
  • In direct combination with the tools and expertise of Greenspector, GDSL is the only automation language that allows you to measure the performance and environmental impact of your application while performing your daily tests.

The current limits of GDSL

  • The GDSL does not yet allow us to perform complex logic tests (for example: if I am connected then I see element 1, otherwise I see element 2). You have to write a different test for each case.
  • The GDSL is based on graphic elements present in descriptive form. It is unable to interpret the content of an image or analyze the layout of your application. It cannot do a test verifying that the button is located at the bottom right of the screen.

The Greenspector team works daily to improve the language and add functionalities. The current state automates most of the scenarios required for a complete measurement campaign for an application or website as well as the critical journeys of most applications. In a future article, we will tell you about the Greenspector tools for running GDSL tests.

What’s new? Greenspector release note v.2.9.0

Reading Time: 2 minutes

The Greenspector team is proud to announce its latest release v.2.9.0.

To measure application and website consumption, you can run user journeys on our smartphone farms. In this context, we have improved our simplified test language (GDSL) with, for example, features for preparing browsers, as well as support for Firefox. Unlike many tools that provide an environmental impact only for the main page and on simulated environments, these capabilities allow you to assess and monitor the impact of your digital solution!

  • Swagger API for the existing application routes
  • You can now easily switch browsers

Why automate the testing of your mobile applications?

Reading Time: 3 minutes

Test automation is often considered an additional cost within development teams, for various reasons:

  • The team has to ramp up on a particular tool
  • Writing tests takes longer than running them manually
  • Tests have to be maintained over time

Mobile development, with its lower project costs and shorter development times, does not encourage the move to automated testing: the benefits are not always weighed properly against the cost of automation. As a result, mobile application automation projects often fall by the wayside or are postponed until too late in the project. This is a common mistake, because the benefits of test automation for mobile applications are numerous.

Mobile applications are applications like any others: complex and technical

Mobile applications are often considered to require little development and low costs. This is not always the case. We are no longer in the situation of a few years ago, when mobile application projects were Proofs of Concept and other early-stage experiments. Mobile applications have now undergone the natural entropy of any software project: stronger security constraints, integrated libraries and SDKs, modular architectures, multiple interactions with backend servers, and so on.

This maturity (combined with software entropy) no longer allows testing to be left aside. Industrializing tests, and in particular automating them, ensures the level of quality that mobile projects need. Without it, failure is all but assured.

Failure is no longer an option

On top of this growing complexity, mobile applications have become critical business projects: they are the new showcases of brands and organizations. Given the rapid development cycles, a project failure (delays, late detection of user-facing bugs, etc.) can be fatal to the company's reputation, especially since a bad user experience can simply lead to uninstallation, abandonment of the application or a negative review on the stores.

The level of quality has to be there, and automated tests are a must to control the performance of your application.

To test is to doubt, and doubting is healthy

A good development team, a rigorous process and manual tests can help ensure this quality. Would testing call the team's skills into question? No: just as the tightrope walker's tension is what gets him across the ravine, doubt is good for quality. An SDK with unexpected behaviour, an unwanted regression… you might as well take out insurance in the form of tests.

Automation makes Test Driven Development (TDD) possible

Anticipating automation makes it easier to move towards Test Driven Development (TDD) practices. Writing tests before development is entirely possible in mobile projects. With or without tools, it is worthwhile to automate a specified scenario and run it during development.

Even without going as far as Test Driven Development, having tests that closely follow development will detect other problems as early as possible.

Platform fragmentation cannot be managed with manual tests

Testing manually on a single device is no longer enough to ensure that an application works properly. The diversity of hardware and software configurations is a source of bugs: different screen sizes, manufacturer overlays, and so on. Automation makes it possible to run tests in parallel on different devices and detect potential bugs. This way, we avoid turning end users into beta testers of the application!

Controlling regressions during maintenance

The first release of the application is only the beginning of its life cycle: 80% of the development effort goes into maintenance and evolution. It is therefore necessary to plan for the long term. By automating, we avoid introducing regressions into the application, since the tests are run systematically with each evolution.

Automation allows performance metrics

Finally, automation makes it possible to track requirements other than functional ones: non-functional requirements. Combined with measurement tools, automated tests can report new metrics: performance, resource consumption, and so on.

This is the strategy that GREENSPECTOR recommends to its users: by integrating the GREENSPECTOR API into their automated tests, they can track the efficiency and performance of their developments with each test campaign. The cost of automation is then largely covered by the benefits.