Category: Digital sobriety

Analysis of overconsumption on a lightweight website

Reading Time: 7 minutes

In May 2024, on the Designers Ethiques Slack, Julien-Antoine Boyaval of the web agency Konfiture shared a site created for Leroy Merlin. This single-page site is presented as ecodesigned: https://lesdesignersdedemain.com/

At first glance (via the browser's developer tools), the site does indeed appear rather light. However, certain elements caught my eye. More on that later.

As usual, I launched a benchmark with Greenspector Studio to take things a step further.

Analyzing the site's overconsumption

The measurements were carried out on a Samsung Galaxy S9 phone, over WiFi (3 iterations).

After measurements, the results confirm the initial suspicions:

  • EcoScore: 59/100 (Network: 82, Client: 35)
  • Data transferred: 292 kB
  • Total battery discharge: 5.28 mAh
  • CPU usage: 1.11%

Data transfers are indeed low and, as a result, the score on the Network side is very good.

Original site results via Greenspector Studio: Ecoscore 61/100

On the other hand, the Client-side score is low, which correlates with high battery discharge and high CPU impact (especially for such a light, static page). Generally speaking, this can be due to third-party services, animations or even calculations (mainly JS) performed in a loop.

Let’s start by looking at what happens when the user is inactive, via Greenspector Studio:

Observation via Greenspector Studio of CPU and data transferred over a pause stage: 3 peaks of data transferred, several CPU-related peaks

We noticed 3 data peaks that are probably directly linked to Chrome (which collects usage metrics and regularly checks the functionalities offered by the browser version).

This hypothesis was then investigated using a web proxy (as the requests in question did not appear in the browser). This confirmed that these requests were indeed linked to Chrome.

On a heavier site, these requests may go unnoticed, but not here.

The methodology used is based on that described here: https://greenspector.com/en/how-to-audit-android-mobile-application-requests/

But above all, we need to question the strong CPU fluctuations. There are a few animations on the site, but most of them are only triggered by scrolling. They therefore shouldn’t directly impact the CPU while the user is inactive and the animations are not triggered.

We therefore turn to the Performance tool in Chrome’s developer tools: https://developer.chrome.com/docs/devtools/performance

Observation of a pause stage via Chrome's Performance tool: several solicitations due to animations.

If we look at what happens during 10 seconds of inactivity, we can see that the processor is very busy, with a large number of events to be processed continuously. This quickly gives rise to a large number of JS processes (listening or observing) waiting for certain user interactions to trigger animations.

All this is managed by a widely-used library: GSAP.

Having reached this point, and before going any further, I contacted Julien-Antoine directly to schedule a time to present my findings to his team.

After a few exchanges, it appeared interesting to work together on this subject. The aim is to see how we can reduce the impact of the page through analysis and action. To do this, we decided to proceed in an iterative way: proposing an initial list of recommendations and applying them one by one, so as to be able to estimate the impact of each one through measurement.

Experimentation around the site

First of all, we need to make sure that the badge displayed on the site, taken from Website Carbon Calculator, is not involved (which would be the last straw). To check this, the badge was integrated into an empty HTML page and measured via a benchmark.

The EcoScore is 95, the data transferred is very low (a simple JS script of less than 2 kB retrieves everything needed for display in a single operation) and the impact on the processor is negligible (around 0.25% CPU load).

The badge is therefore found not guilty.

At the same time, the Konfiture team is deploying the site we want to study on a separate server, which will host the different versions produced. An initial measurement is carried out to set the benchmark for the rest of the project, as certain metrics may vary depending on the site’s hosting conditions.

The first version measured removes the Lenis library, which partly manages animations.

Version 1.0.2 further optimizes the SVGs (vector graphics), resulting in a slight reduction in transferred data.

Version 1.0.3 adds native progressive loading for SVGs, as well as a CDN and brotli compression of text files (including SVGs). The result is a significant reduction in data transfer.

Version 1.0.5 removes all animations. For the end customer, this is not an option, as animations are considered essential to make the site more attractive. But once the other elements have been optimized, this measure gives us a target to aim for. Here, we can see a reduction in data transfer (less JS required), but above all in CPU usage (which remains one of the metrics most affected by animations, due to the calculations required).

To go further on this subject, I refer you to two other articles on this blog.

Version 1.0.6 does away with the JS code used to manage animations. The problem is that the animations then run continuously. Even if, technically, this approach has less impact on the processor (which can easily be verified using Chrome’s Performance tool), it degrades the user experience and poses an accessibility problem.

After discussing the subject, this point appears prohibitive: while CSS-only animation management is a good compromise for environmental impact, degrading accessibility must be avoided.

The initial results did not correspond exactly to expectations. After analysis, it appeared that the continuous animations hindered the detection of inactivity during measurements and artificially prolonged the scroll time.

As a result, version 1.0.7 already offers a first option: use the browser’s prefers-reduced-motion setting to, at the very least, disable animations for users who request it. Failing the ability to disable automatic playback of animations, it would be necessary (to be compliant) to reduce their duration to less than 5 seconds (or even 4 seconds, to comply with criterion 4.1 of the RGESN [FR]: https://www.arcep.fr/mes-demarches-et-services/entreprises/fiches-pratiques/referentiel-general-ecoconception-services-numeriques.html#c36264) and/or to provide a control to pause them. This point is still under discussion.
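As a sketch of this approach (the `.animated` selector and the duration values are hypothetical, not taken from the site's code), the user preference can be read from JavaScript and the duration constraint expressed as a small helper:

```javascript
// Sketch: honor prefers-reduced-motion and cap animation duration.
// The 4000 ms cap reflects RGESN criterion 4.1; selector and durations
// are illustrative assumptions.
function effectiveDurationMs(prefersReducedMotion, requestedMs, capMs = 4000) {
  if (prefersReducedMotion) return 0;   // disable animations entirely
  return Math.min(requestedMs, capMs);  // otherwise cap the duration
}

// Browser wiring, guarded so the sketch also runs outside a browser:
if (typeof window !== 'undefined' && window.matchMedia) {
  const reduced = window.matchMedia('(prefers-reduced-motion: reduce)').matches;
  document.querySelectorAll('.animated').forEach((el) => {
    el.style.animationDuration = `${effectiveDurationMs(reduced, 6000)}ms`;
  });
}
```

The same preference can also be handled purely in CSS via the `(prefers-reduced-motion: reduce)` media query, without any JS at all.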

To take things a step further, version 1.0.8 seeks to reconcile ecodesign and accessibility. To this end, it was decided to limit the duration of the animations and to trigger them only on scroll, to ensure they remain visible.
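A common pattern for triggering animations only on scroll (sketched here with hypothetical class names; the site's actual implementation relies on GSAP) is an IntersectionObserver that fires once per element and then unobserves it, so no JS keeps running while the user is idle:

```javascript
// Sketch: trigger animations only when elements scroll into view,
// then stop observing so nothing runs while the user is idle.
// Class names (.reveal, .animate) are illustrative assumptions.

// Pure helper: pick the targets that just became visible.
function targetsToAnimate(entries) {
  return entries.filter((e) => e.isIntersecting).map((e) => e.target);
}

// Browser wiring, guarded so the sketch also loads outside a browser:
if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver((entries) => {
    for (const el of targetsToAnimate(entries)) {
      el.classList.add('animate'); // CSS runs the (time-limited) animation
      observer.unobserve(el);      // one-shot: no further work for this element
    }
  });
  document.querySelectorAll('.reveal').forEach((el) => observer.observe(el));
}
```

Unlike a `scroll` event listener, the observer does not run on every scrolled frame, which keeps the CPU quiet during pauses.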

Results

The following results have been obtained from measurements taken over time:

Measurement results: the page without animations is the least impactful, followed by the one where animations are triggered on scroll.

Environmental projection for the different versions: the ranking remains more or less the same as for measurements, with the most advantageous option being to avoid using animations.

First of all, it’s worth remembering that the impact on CPU, memory and battery discharge is highly dependent on the model of device used for the measurement, and can even vary between two devices of the same model. For this reason, each measurement also includes a reference step, not shown here. For web pages, this reference step consists of measuring what happens when the user is inactive on a Chrome tab displaying an entirely black page (minimal energy impact, especially compared with the empty Chrome tab, which is very bright and therefore more impactful on a device with an OLED screen).

Results for the final version of the site (EcoScore 70/100)

Measurements on such light sites are often trickier, as deviations and overconsumption may be slight, or even difficult to distinguish from measurement artifacts. Sometimes it’s possible to get around this by adapting the methodology: to measure a very light component, for example, we integrate it 100 or 1,000 times on the page and proceed in the same way with the other components we want to compare.
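The amplification trick can be sketched as follows (the badge markup and the repeat count are placeholders, not the real measured component):

```javascript
// Sketch: amplify a very light component by repeating it N times on a page,
// so its overconsumption stands out from measurement noise.
// The badge markup below is a placeholder, not the real component.
function buildTestPage(componentHtml, n) {
  return `<!doctype html><html><body>${componentHtml.repeat(n)}</body></html>`;
}

const page = buildTestPage('<span class="badge">demo</span>', 1000);
// The same benchmark is then run on this page and on a 1000-copy page of
// each component to compare, and the per-copy deltas are compared.
```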

Applying the animations continuously considerably lengthened the scroll time (17 seconds instead of 6), which directly increases the energy and environmental impact.

For such lightweight sites, Chrome’s “parasitic” requests (telemetry, variant checking) appear all the more impactful, even if only a few kB or tens of kB of data are transferred.

In our case, the best solution for limiting the impact of animation integration is version 1.0.8. This benefits from the implementation of the following best practices:

  • Extensive SVG optimization (including compression and lazy-loading)
  • Limiting the duration of animations, stopping them for users who choose to do so, and triggering them only on scroll.

Overall, in terms of the number of requests and transferred data, the gains are undeniable (even if the site was originally very light).

In terms of battery discharge speed, the gains are not negligible. Even if environmental impact and energy consumption appear to be the same overall, or even slightly higher (due to the increase in scroll time), the results are encouraging.

Conclusion

As already emphasized in the article on sober sites, estimating a site’s sobriety is a complex task, since it takes into account many factors as well as a specific methodology. Even on a site announced as sober, there are often improvements to be made (even if not all of them are worthwhile).

Once again, the subject of animations comes up. Sometimes used to compensate for a reduced number of images, they very often have an impact, even if free tools hide it (by focusing on the data transfers carried out during page loading). When we try to go further and integrate them as efficiently as possible, the results to date are not necessarily conclusive. The priority should be frugality (doing without animations), then sobriety (reducing their number) and finally efficiency (optimizing their integration). For accessibility reasons in particular, their use should in any case be strictly regulated (based on criterion 4.1 of the RGESN).

As for the efficient integration of animations, everything remains to be done. This is a very complex area to tackle, as the metrics to be taken into account are numerous and complex to measure and compare (CPU, GPU, battery discharge, etc.). Add to this the risks of impact transfers (opting for CSS rather than JS or vice versa) and you end up with a technical subject that is thorny, to say the least. However, here we note that limiting their duration, coupled with simple logic for triggering them, brings the best results.

Today’s standards and knowledge allow us to set out how to make an animation compliant from the point of view of accessibility. For ecodesign, however, this is not yet the case (even if the RGESN suggests a few insights). To my knowledge, there is no universal solution for proposing animations that do not lead to over-consumption.

So, from a very pragmatic point of view, it’s best to return to a simple but important approach: avoid integrating animations whenever possible, for reasons of accessibility as well as ecodesign (and more generally, user experience).  

RGESN / REEN law: what are we talking about?

Reading Time: 9 minutes

The subject of the environmental impact of digital technology has been gaining momentum in recent years. Particularly in France, where it is benefiting from the rapid establishment of a structuring legal context. This topic was discussed in another article on the Greenspector blog: https://greenspector.com/fr/le-cadre-legislatif-de-lecoconception-de-services-numeriques/

As a company seeking to reduce the environmental and societal impacts of digital technology, Greenspector is keen to explore this subject in detail. Here, we’d like to take a brief look at the REEN law (Reducing the Environmental Footprint of Digital Services), before moving on to the RGESN (Référentiel général d’écoconception de services numériques).

REEN law framework

The REEN law requires towns and cities with more than 50,000 inhabitants to define their Responsible Digital strategy by 2025. This necessarily includes elements linked to the eco-design of digital services. However, local authorities are often confronted with a first obstacle: the subject of eco-design of digital services is still relatively recent. As a result, it can be difficult to find one’s way around, whether it’s a question of choosing a measurement tool or a guide or repository that will enable effective progress to be made on the subject.

This is why another aspect of the REEN law is eagerly awaited by many: the definition of legal obligations for the eco-design of digital services. This should take the form of 2 items:

  • The RGESN, which we’ll look at in more detail in this article.
  •  An implementing decree that defines who is subject to these obligations, and with what constraints (what types of digital services, what deadlines for implementation, what deliverables are expected, etc.).

The reference to bind them all together: the RGESN

Its origins

In 2020, the INR (Institut du Numérique Responsable) brought together a hundred (!) experts to work on a reference framework for the eco-design of digital services. The aim: to offer recommendations covering all types of digital services, at all stages of the lifecycle and for everyone involved. In short, a holistic approach. A colossal project, it neared completion in the summer of 2021 and gave rise to GR491, which currently comprises 61 recommendations and 516 criteria. It is due to be updated again in the near future and, to date, represents a unique reference worldwide.

Just before the repository went online, DINUM (Direction interministérielle du numérique) stepped in. Its objective was simple, and entirely relevant: to build on the work already done and create its own repository. This is how, in autumn 2021, two repositories came into being: GR491 and the RGESN.

There have already been two versions of the RGESN: the first proposed by DINUM, then a new version put out to public consultation by ARCEP (Autorité de régulation des communications électroniques, des postes et de la distribution de la presse) at the end of 2023.

The final version is scheduled for release in early 2024, and may already have been released by the time you read this.

Its role

The existing versions of the RGESN already highlight its specific features. For accessibility, by comparison, the RGAA (Référentiel général d’amélioration de l’accessibilité) makes it possible to check the accessibility of a digital service, based on criteria derived from the WCAG (Web Content Accessibility Guidelines) issued by the W3C (World Wide Web Consortium). The French legal framework also requires compliance to be demonstrated by means of an accessibility declaration, as well as the publication of the entity’s multi-year digital accessibility plan. All these elements can be consulted here: https://accessibilite.numerique.gouv.fr/

In the case of the RGESN, the notion of an ecodesign declaration is included directly in the standard, and its content is detailed throughout the criteria. However, this standard is not based on an international reference framework. Indeed, the WSGs (Web Sustainability Guidelines (WSG) 1.0 [EN]) were published by the W3C after the RGESN. As a result, the WSGs are partly based on the RGESN and not vice versa.

In the case of the RGESN, the ambition is not so much to “verify” that a digital service is eco-designed, as to check that an eco-design approach has indeed been implemented. This makes it possible to involve all stakeholders in the process (including the host and third-party service providers, as well as questioning the strategy and even the business model), and to adopt a continuous improvement approach. This approach is ambitious, but it is also linked to the fact that it is complicated, if not impossible, to establish factually (via purely technical criteria) whether a digital service is eco-designed or not. Rather, it’s a matter of ensuring that it is part of an eco-design approach.

Contents

V1 (the DINUM version)

In its first version, the RGESN proposes 79 recommendations divided into 8 families.

Each recommendation takes the following form:

  • Objective
  • Implementation 
  • Means of testing or checking

So, for example, the first recommendation of the standard is entitled “1.1 Has the digital service been favorably evaluated in terms of utility, taking into account its environmental impacts?”

  • Its “Objective” is to ensure that the digital service we are seeking to eco-design does indeed contribute to the Sustainable Development Goals (SDGs).
  • To this end, the “Implementation” section suggests a few ways of checking this, as well as the elements to be specified in the ecodesign declaration.
  • The “Means of testing or checking” section summarizes what to look for to ensure that this criterion is met.

Here we come to one of the limits of this version of the standard: the objective is laudable, but it lacks concrete means of verification and implementation.

Other points have been raised by experts in the field, but the tool remains important, and many are taking it up to test it in the field.

The standard defines a number of elements for structuring the eco-design approach, in particular:

  • Appointment of a referent
  • Drawing up an ecodesign declaration (with full details of its content)
  • Implementation of a measurement strategy. In particular, the definition of an environmental budget, aiming among other things at wider service compatibility in terms of browsers, operating systems, terminal types and connectivity.

The tools that accompany the repository (a browser extension, Excel spreadsheet templates as audit grids) are welcome, but sometimes insufficient in the field. This is particularly true when it comes to carrying out multiple audits on different digital services, or building a comprehensive action plan.

To take all this into account, here is the version of the RGESN proposed by ARCEP [PDF, 1.6 Mo].

V2 (ARCEP’s version)

This version was put out to public consultation two years after the first version.

It introduces a number of significant changes:

  • The number of criteria has risen from 79 to 91, notably thanks to the addition of a “Learning” section (relating to machine learning), which introduces 5 new criteria.
  • In addition to “Objective”, “Implementation” and “Means of testing or checking”, 3 new attributes appear:
    • Difficulty level
    • Priority level
    • Non-applicability criteria

As a result of the addition of the priority level, the recommendations are first grouped by priority. 20 of them have been identified as priorities, in particular all those related to the new Learning section.

Beyond these contributions, the new version differs from the previous one in being more operational: it aims to provide concrete elements to facilitate the implementation of recommendations.

For example, we find the same 1.1 criterion presented in a more complete way:

  • Action identified as a priority and easy to implement, no cases of non-applicability
  • Objective more or less identical
  • More contextual information to go further in the process of verifying the contributions of the digital service in terms of environmental (and societal) impacts.
  • Concrete control tools: the Designers Éthiques questionnaire and the consequence tree as formalized by ADEME (Agence de l’Environnement et de la Maîtrise de l’Energie). This consequence tree is used again later, in Criteria 2.1, as part of design reviews.

The criterion relating to the ecodesign declaration has disappeared. The ecodesign declaration is nonetheless essential, and its content has been defined in various recommendations.

Another element emerging from this new version of the standard is the implementation of a measurement strategy via the definition of environmental indicators (at least primary energy, greenhouse gas emissions, blue water consumption and depletion of abiotic resources) as well as a strategy for their reduction and an environmental budget via thresholds. This measurement strategy should also include elements for verifying that the digital service functions correctly on older terminals and operating systems (or even older browsers), and in degraded connections. Through the changes made to recommendation 4.4, this measurement strategy should be extended to include user paths.

This is where Greenspector can help, both in strategy development and implementation. This includes not only the measurement itself, but also the definition of environmental indicators and their calculation, as well as the definition of routes, terminals and connection conditions. Today, this approach can be applied to websites, mobile applications and connected objects alike.

Some of the new criteria make the link with the RGPD (Réglement général sur la protection des données), the RGS (Référentiel général de sécurité), the IoT (Internet of Things) and open source. Recommendation 2.6 also requires that the environmental impact of software bricks such as AI and blockchain be taken into account. That said, this recommendation could have been placed directly in the Strategy section.

The Content section provides a wealth of information on content compression formats and methods, enabling us to go even further into the technical aspects of a sober editorial approach.

New criteria also provide information on blockchain, as well as on the asynchronous launch of complex processes.

This is clearly a step in the right direction. There’s no doubt that the public consultation will have yielded an enormous amount of input for an excellent repository, as well as the tools that must accompany it (by improving the browser extension, but above all the Excel template for conducting compliance audits and monitoring them over time via an action plan).

It is already clear from these additions and clarifications that carrying out an RGESN audit will take longer than with V1; this extra time matters, as it allows the criteria to be considered as a whole and ambiguities to be removed as far as possible. While the intentions of RGESN V1 were already good, V2 provides the elements needed to facilitate its adoption and implementation. This version also reflects a high degree of maturity on the subject, making it a resource that can already be read to build up skills.

What to expect next?

Already, the final version of the RGESN is expected (which is in itself a very positive sign).

It will undoubtedly be an essential tool for structuring eco-design initiatives for digital services. This will enable everyone’s practices to evolve in this area.

The accompanying tools are also eagerly awaited, as they should facilitate audits as well as compliance monitoring over time, notably through the definition of an action plan.

Among other things, the standard requires the publication of a complete ecodesign declaration, which not only raises awareness more widely, but also enables practices to be compared. In other words, to help this field of expertise evolve.

The big unknown remains the forthcoming application decree, which will set out the framework for the application of the REEN law, based on the RGESN. There are still several unknowns in this respect. Based on what is being done for accessibility (and in particular following the decree of October 2023), questions indeed remain unanswered:

  • Will the use of RGESN be limited to the web or extended to other types of digital services (mobile applications, street furniture, etc.)? At the very least, it would be important to include mobile applications in addition to web sites and applications.
  • What will the penalties be?
  • How long will it take to implement?
  • Which structures will be concerned? Public structures will be the first to be affected, but as with accessibility, it would be interesting to target businesses too. In fact, some of them have already begun to take up the subject, recognizing the value of this reference framework in guiding their eco-design initiatives for digital services.
  • What means will be officially put in place to facilitate the adoption of the RGESN (training, guides, tools, etc.)?

Other, more general questions arise. In particular, how will certain companies and professionals evolve their practices and offers, perhaps, for some of them, by moving towards auditor roles (or even by training future auditors)? It is also to be hoped that a more complete definition of the eco-design of digital services will lead to the emergence of certifying training courses (i.e., skills frameworks validated by France Compétences).

One point of concern remains the declarative nature of the recommendations. The advantage of the RGAA is that it offers a technical and even factual approach (even if certain criteria are sometimes open to interpretation). In the case of the RGESN, the criteria are less factual and less easy to verify, which can sometimes make them rely on the auditor’s objectivity. The question of defining methods for validating certain criteria through measurement also remains open.

It will also be interesting to see how all these elements will find an echo beyond France, and how the RGESN will fit in with the possible introduction of new standards and other reference frameworks.

Where does Greenspector fit into all this?

The RGESN is an unprecedented, but above all indispensable, basis for improving our own practices and providing our customers with the best possible support. All the more so as they will soon be obliged to use these standards.

To this end, a number of actions have been carried out:

  • Integrate V1 of the RGESN into our own internal repository of best practices. As the time between V2 and the final version has been announced as being rather short, we have decided to wait for the final version before implementing the modifications. However, this does not prevent us from incorporating these changes into our day-to-day practices, and from taking V2’s contributions further.
  • Incorporate the RGESN into the training courses we offer: present the standard and its context, and propose activities based on it, notably via the rapid and supervised implementation of an RGESN audit. Other standards are also presented for comparison purposes, as well as their use cases.
  • We regularly carry out RGESN audits on behalf of our customers, and centralize information that enables us to track compliance rates and their evolution over time. What’s more, these audits enable us to develop our use of RGESN.
  • We systematically rely on the RGESN during audits and design reviews. Our Ecobuild offer is also evolving. The original aim of this offer was to support a project team from the outset, through training, design reviews, audits, monitoring and, more broadly, expertise. We are now proposing to back up this offer with the RGESN, enabling us to go even further in setting up or consolidating our customers’ eco-design approach.
  • In addition to using the RGESN to audit and improve a site, we also use it as part of our support for a site creation solution, in order to have more global levers, but also to start thinking about which RGESN criteria can be taken into account directly at that level. This reasoning could subsequently be extended to other tools such as WordPress, Drupal and other CMS. The interest here is manifold:
    • Raise customer and user awareness of the RGESN
    • Reassure customers by taking responsibility for some of the criteria, which could ultimately be a differentiator (we can imagine customers opting for “RGESN-compliant” solutions to meet their legal obligations on the subject more easily)
    • Provide users/customers with the means to create less impactful sites

Conclusion 

The RGESN has already established itself as an essential tool not only for the eco-design of digital services, but also for structuring eco-design approaches. As such, it should help everyone to develop their skills in this area. It remains to be seen how the legal framework will facilitate this evolution and, in time, bring about what we hope will be far-reaching changes in the structures concerned.

What is the environmental impact of video game graphics?

Reading Time: 11 minutes

According to a study by Statista, the video game sector generated over US$155 billion in revenue worldwide in 2021. This figure can be explained by the increase in the number of gaming platforms and the diversification of the types of games available to consumers, as well as by the democratization of the industry thanks to the emergence of free-to-play games. In 2022, video games attracted almost 1.8 billion players across the globe, giving the entertainment experience a social dimension and fostering the emergence of new sectors such as streaming and esports.

However, all these games, albeit virtual, are run on physical hardware, and therefore consume energy. This article presents and compares the energy consumption of different video games and their parameters. To find out how much energy these uses actually consume, we have chosen to evaluate the following video games: Assassin’s Creed Valhalla, Total War Warhammer III, Borderlands 3, Anno 1800 and War Thunder.

We have previously carried out a study on mobile games.

Selection and methodology

These video games were selected because they offer a benchmark. Using these benchmarks as measurement subjects ensures the replicability of our experimental protocol, while eliminating the human factor from the results.

A benchmark is a feature offered by the game that measures the performance of the system (the entire PC) or of one of its components (CPU, GPU, memory, etc.) according to a given scenario and selected settings.

We’ve also taken care to represent several game genres, such as RPG (role-playing game), strategy and simulation.

We measured these video games on a PC with the following configuration:

  • Processor: Intel Core i7-6700
  • Memory: 32 GB DDR4 RAM
  • Graphics card: NVIDIA RTX 3060 12 GB

This equipment was supplied to us by OPP!, a company offering PC and Mac repair and maintenance services, as well as individual component sales.

The screen used is an LG E2441 with the following specifications:

  • Screen technology: LED
  • Screen size: 24”
  • Resolution: 1920×1080

We collected energy metrics using a measurement module connected to our Greenspector Studio software, plugged directly into the PC and monitor power supplies and connected to the mains socket.

Benchmarks were carried out in 2 different graphics configurations:

  • A configuration with maximum settings for the graphics offered by the game
  • A configuration with minimal settings for the graphics offered by the game

6 iterations were performed on each scenario to ensure reliable results.
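Aggregating the iterations can be sketched as follows (the power readings below are illustrative, not the study's raw data):

```javascript
// Sketch: summarize power readings across benchmark iterations; a small
// standard deviation relative to the mean indicates the iterations agree.
// The readings below are illustrative, not the study's raw data.
function summarize(values) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((a, v) => a + (v - mean) ** 2, 0) / values.length;
  return { mean, stdDev: Math.sqrt(variance) };
}

// Six hypothetical power readings in watts:
const { mean, stdDev } = summarize([212, 208, 215, 210, 209, 214]);
console.log(`mean: ${mean.toFixed(1)} W, stdDev: ${stdDev.toFixed(1)} W`);
```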

Benchmarks last between 80 and 240 seconds. These variations do not affect the results presented.

Graphic evolution impacts power

Modern games incorporate higher-quality graphics with ultra-detailed textures, advanced visual effects such as dynamic lighting, real-time shadows and sophisticated particle effects. This graphical complexity requires considerable rendering and graphics processing capabilities.

Gamers are also increasingly opting for high display resolutions for an optimal visual experience. This puts extra pressure on the GPU (Graphics Processing Unit) to render detailed images at ultra-high resolutions.

These GPUs consume more energy with each new generation, as shown below for NVIDIA:

Evolution of minimum system power and maximum GPU power by GPU release date

Developers exploit advanced rendering techniques such as ray tracing to realistically simulate the behavior of light in virtual environments. Although these techniques offer an unprecedented level of realism, they are computationally intensive and require high-end GPUs.

Consumption differences depending on settings

Measurements of average PC power on the lowest and highest graphics configurations for each game show a wide disparity between them.

Total PC power at minimum or maximum setting

Switching from the maximum settings to the lowest settings offered by each game reduces measured power by 45% on average. In the case of Borderlands 3, the reduction even reaches 72%.

In Anno 1800, the benchmark is a panoramic aerial view of the game’s map. This sequence highlights details of the game world, such as landscapes, iconic buildings and animations of everyday life.

Below are graphs of one iteration measured at maximum settings and another at minimum settings. The benchmark sweeps over the city from a zoomed-in aerial viewpoint, then repeats the same trajectory 8 times from increasingly high viewpoints, which explains the 8 peaks on the graph.

Here, the difference between the two settings levels is clearly visible. At both levels, we can also see that the farther the camera gets from the city, the lower the power draw and the shorter each pass of the scenario.

What’s more, when the game is set to maximum, power consumption is at its peak for almost the entire duration of the scenario, whereas measurements taken with the lowest setting show lower and shorter power peaks.

Anno 1800 benchmark power consumption with maximum settings

Anno 1800 benchmark power consumption with minimum settings

A Statista survey conducted in December 2023 revealed that 22% of US adults aged 18 to 29 spent six to ten hours a week playing video games. Overall, respondents in this age group were also more likely than others to be avid gamers, as a total of 8% played video games for more than 20 hours a week on average.

These figures allow us to project overall energy consumption over the playing times of different types of players, assuming the benchmark is representative of in-game consumption. Energy consumption was projected from the measurements taken at the minimum and maximum settings of each game.

The average consumption for one hour of play at minimum settings is 0.168 kWh, and 0.254 kWh at maximum settings. These results are higher than those of the European study on the environmental impact of digital services. The latter shows a consumption of 0.137 kWh for one hour of PC gaming at medium resolution.

Energy consumption (Wh) over 6, 10 and 20 hours of play, at minimum (Min) and maximum (Max) settings:

| Game | 6 h Min | 6 h Max | 10 h Min | 10 h Max | 20 h Min | 20 h Max |
|---|---|---|---|---|---|---|
| War Thunder | 1469.70 | 1460.78 | 2449.50 | 2434.64 | 4899.00 | 4869.28 |
| Anno 1800 | 843.26 | 1352.27 | 1405.43 | 2253.78 | 2810.86 | 4507.56 |
| Borderlands 3 | 522.33 | 1537.53 | 870.55 | 2562.55 | 1741.09 | 5125.09 |
| Assassin’s Creed Valhalla | 1110.49 | 1618.73 | 1850.82 | 2697.88 | 3701.65 | 5395.76 |
| Total War Warhammer III | 1108.08 | 1651.01 | 1846.80 | 2751.68 | 3693.60 | 5503.37 |

Most gamers therefore consume between 1.5 kWh and 2.5 kWh per week, playing between 6 and 10 hours a week. For more involved gamers playing around 20 h a week (2 h 40 a day), the PC and screen consume around 5 kWh weekly. By way of comparison, a conventional refrigerator consumes an average of 3.29 kWh per week.
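The projection behind these figures is a simple multiplication. A sketch, using the average per-hour values reported above (0.168 kWh at minimum settings, 0.254 kWh at maximum):

```python
# Sketch of the weekly-energy projection, using the study's reported
# averages per hour of play: 0.168 kWh (minimum settings) and
# 0.254 kWh (maximum settings).

AVG_KWH_PER_HOUR = {"min": 0.168, "max": 0.254}

def weekly_energy_kwh(hours_per_week: float, settings: str) -> float:
    """Projected weekly energy use (kWh) for a given weekly play time."""
    return hours_per_week * AVG_KWH_PER_HOUR[settings]

for hours in (6, 10, 20):
    low, high = weekly_energy_kwh(hours, "min"), weekly_energy_kwh(hours, "max")
    print(f"{hours} h/week: {low:.2f} to {high:.2f} kWh")
```

At 20 hours a week this yields up to about 5 kWh at maximum settings, consistent with the order of magnitude cited in the article.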

Evolution by release date

On maximum configurations, we note that measured power increases steadily with the release date of these games.

| Game | Release date | Power at maximum settings (W) |
|---|---|---|
| War Thunder | November 2012 | 181.86 |
| Anno 1800 | April 2019 | 214.94 |
| Borderlands 3 | September 2019 | 236.62 |
| Assassin’s Creed Valhalla | November 2020 | 249.46 |
| Total War Warhammer III | February 2022 | 257.70 |

In this context, the maximum configurations of video games reflect this technological evolution. Game developers design their games to take advantage of the latest hardware advances, and this translates into increasingly high demands on components. As a result, to take full advantage of graphics performance and game fluidity, gamers often have to invest in state-of-the-art hardware.

These complex, detailed graphics require real-time rendering, which often relies on the CPU to perform calculations related to physics, artificial intelligence of non-player characters, collision management and other aspects of gameplay.

This is what a technical director of the Total War game explains in an interview with Intel:

“We model thousands of soldiers with a high level of detail applied to each in terms of animations, interactions, pathfinding decisions, etc.”

In video games, pathfinding consists in figuring out how to move a character from point A to point B, taking into account the environment: obstacles, other characters, length of paths, etc.

What’s more, the processor is often juggling numerous tasks simultaneously, depending on what’s displayed on the screen. “Take a scene where two huge fronts with thousands of soldiers are smashing into each other, and you’ve zoomed in quite close,” explains the game’s technical director. “In this situation, the processor is divided mainly between entity agent-based combat, collision mechanisms and building matrix stacks to draw all the entities.”

In other words, the processor must simultaneously manage the presence and interactions of thousands of NPCs (non-player characters).

What’s more, the more advanced the graphics, the greater the demand on the GPU to process data and instructions efficiently, which can lead to bottlenecks and slowdowns if the processor isn’t powerful enough.

In Assassin’s Creed Valhalla, at the lowest settings, the graphics card is used at 46% on average. Conversely, at maximum settings, with water reflections activated and cloud quality at maximum, it is used at 99% during the benchmark.

Optimization vs. graphic quality

We’ve just seen that setting a game to its maximum setting involves high energy consumption. But are the visual effects enhanced? Are all settings relevant to the gaming experience, depending on the PC configuration?

An interesting indicator to answer these questions is the number of frames per second (FPS), as it is often used as an indicator of a game’s fluidity: the higher the FPS, the more fluid and responsive the game appears.

FPS (Frames Per Second) indicates the number of individual images (or “frames”) displayed on screen each second.

The more complex an image is to generate, the longer the processor and graphics card take to render it. So, when the settings exceed the capabilities of the PC configuration, the visual result for the player is not necessarily improved.
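The link between FPS and the per-frame rendering budget can be made explicit with a small sketch (the numbers here are illustrative):

```python
# Frame time is the per-image budget implied by a target frame rate.
# If rendering a frame takes longer than this budget, the displayed FPS
# drops and the game feels less fluid. Numbers here are illustrative.

def frame_time_ms(fps: float) -> float:
    """Time available to render one frame (in ms) at a given frame rate."""
    return 1000.0 / fps

def effective_fps(render_time_ms: float) -> float:
    """Frame rate actually achieved if each frame takes render_time_ms."""
    return 1000.0 / render_time_ms

print(round(frame_time_ms(60), 2))    # 16.67 -> ~16.7 ms budget at 60 FPS
print(round(effective_fps(33.3), 2))  # 30.03 -> ~30 FPS if a frame takes 33.3 ms
```

This is why raising settings beyond what the hardware can render within the frame budget increases power draw without improving fluidity.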

What’s more, gameplay can be impacted by the bottleneck phenomenon.

A bottleneck is a phenomenon produced by a hardware or software component with limited performance compared to other, more powerful components. This means that one part of the system is operating at maximum capacity, while other parts can’t keep up, resulting in a drop in overall performance.

By correctly balancing the hardware configuration and adjusting graphics settings accordingly, gamers can minimize the risk of slowdowns and jerks, delivering a more enjoyable and immersive gaming experience.

Here are some differences between benchmarks run at their maximum settings and then at their minimum:

Implications for hardware and environmental impact

The constant evolution of video games towards ever more immersive and realistic experiences has significant implications for the hardware used. Game developers are seeking to fully exploit the graphics and processing capabilities of new technologies, which translates into higher hardware requirements.

In France, 2020 saw the sale of 2.3 million consoles, 27.5 million complete games (console + PC, physical + dematerialized) and almost 7 million accessories (console + PC). With growth of 10%, the console ecosystem represents 51% of the total video game market, while PC gaming is up 9%. (Source: SELL)

Gamers are looking to stay at the cutting edge of technology to take full advantage of the latest releases. Beyond the financial stakes this may represent, this quest for hardware performance is also very critical from an environmental point of view.

As we’ve seen, all the components in a configuration need to be at roughly the same level of performance for an optimized gaming experience. If a gamer has a high-performance graphics card but a lower-resolution screen, a less powerful processor or an older motherboard, the gaming experience will not necessarily be enhanced and may even be degraded. So, from an optimization point of view, it is less a matter of buying the latest components than of adjusting game settings to your hardware configuration. This extends the components’ life expectancy through reduced stress, and it also improves the gaming experience for users.

Over-solicitation causes components such as the graphics card or processor to heat up to high temperatures due to the amount of calculations they handle, and damages their transistors and chips, thus shortening their lifespan.

According to HP, the lifespan of an average desktop PC is between 2 and 3 years, and that of a gamer PC between 3 and 5 years.

We have no information about the environmental impact of gamer PC manufacturing, but the frequency of release of the latest generation of products, which pushes gamers to renew their PC configuration every year, considerably increases the impact of this industry.

It’s worth noting that game consoles also have a significant carbon footprint.

Climate and sustainability researcher Ben Abraham analyzed the PlayStation 4’s central processing unit using mass spectrometry, revealing the presence of elements such as titanium, whose extraction, refining and processing contribute to greenhouse gas emissions.

This observation underlines the challenge of making the production of these devices sustainable, with decades needed to achieve this goal.

The importance of measurement

Video game publishers play a crucial role in reducing the industry’s environmental footprint. To do so, it is imperative to take energy consumption into account throughout the game development process.

First and foremost, measuring energy consumption enables game publishers to understand the environmental impact of their products. This includes not only the direct energy consumption of the devices on which the games are run, but also the carbon footprint associated with game servers, updates and downloads.

Secondly, this awareness enables game developers to design game mechanics and graphics that optimize energy efficiency. For example, by minimizing complex visual effects that require high computing power, games can reduce their energy consumption while still offering an immersive gaming experience.

The subject of the environmental footprint of video games is increasingly being taken on board by publishers, which is encouraging. Initiatives such as Ukie’s Green Games Guide and Ecran d’après offer practical advice and best practices for reducing the environmental impact of game design and development. Similarly, tools such as Microsoft’s Xbox Sustainability Toolkit or Jyros, the environmental impact measurement tool dedicated to the video game industry in France, provide developers with concrete ways of assessing and improving the sustainability of their games.

However, it is important to generalize these practices and integrate them more systematically throughout the industry. Too often, the environmental aspect is relegated to the background, while the emphasis is placed on the performance and aesthetics of games. It is therefore essential that publishers take greater account of the environmental implications of their design and development decisions.

Study limits 

In the context of this study, it is important to recognize certain limitations that could affect the scope and representativeness of the results obtained:

  1. Manufacturer/designer partnerships: It is possible that some video games have established partnerships with hardware manufacturers to optimize their performance on specific configurations. These agreements could distort benchmark results by favoring certain brands or models of components. These results may alter comparisons between games, but not comparisons between configurations within the same game.
  2. Benchmark scenario not necessarily representative of game modes: Benchmark scenarios used to evaluate video game performance may not reflect actual gameplay conditions. For example, a benchmark may focus on specific game sequences that do not necessarily represent general gameplay. As a result, the results obtained may not be fully representative of the overall gaming experience.
  3. No measurement of multiplayer or online play: This study focuses primarily on the performance of single-player games, and does not take into account aspects related to multiplayer or online play. Consequently, data exchanges between game servers and clients, as well as network performance, are not taken into account in the analysis. This could limit a complete understanding of hardware requirements for an optimal online gaming experience.

Conclusion  

In conclusion, this study highlights the growing impact of video games on computer hardware performance. With the constant evolution of graphics and functionality, modern games require increasingly powerful hardware configurations to deliver an optimal gaming experience. This raises important questions about the sustainability and energy efficiency of computer equipment, as well as consumer hardware choices. Ultimately, it is crucial for both publishers and gamers to strike a balance between video game performance and the sustainability of the technology industry to ensure a more sustainable future.

What are the links between cybersecurity and eco-design?

Reading Time: 5 minutes

What do printers, connected cars and airliners have in common?

These systems are playgrounds for the ingenuity of cybercriminals, who exploit the slightest security loophole to infiltrate networks or take control of our most critical systems. Just as a drug lord like El Chapo escaped from his high-security prison through its least secure spot (the toilet), a hacker will always look for the most vulnerable point to attack you. As these attacks can be dramatic for the person or company that falls victim to them, it is essential to think carefully about the subject.

In this article, we will mention a few stories of surprising computer attacks. This will enable us to question our choices when it comes to implementing new features. These misadventures all have a common cause: an increase in the attack surface.

The multiplication of access points is a risk factor

In recent years, we’ve all seen objects that communicate with the outside world appear in our living rooms. From connected voice assistants to smart thermostats, these objects provide more or less useful services. The business world is no exception to this rule. Whether as part of the Industry 4.0 vision, or simply to facilitate remote communication, these connected systems are playing an increasingly important role.

Unfortunately, some devices pose major risks. Combining a low level of security with a connection to a company’s internal network, connected objects are a goldmine for malicious individuals. And they don’t hold back.

In its article, 01net shows how a group of Russian hackers is attacking connected objects to target businesses: https://www.01net.com/actualites/un-groupe-de-hackers-russes-cible-les-objets-connectes-pour-s-attaquer-aux-entreprises-1743886.html

What’s more, these connected objects often have access to private data. Imagine someone turning on your webcam remotely, or accessing the microphone on your soup mixer. Worse still, imagine a malicious individual taking control of a child’s toy and using it to contact him or her: https://www.france24.com/fr/20170228-hackers-ont-pirate-peluches-connectees-fait-fuiter-messages-denfants-a-leurs-parents

The proliferation of these objects poses a real social problem that we cannot ignore.

From an environmental point of view, the distribution of these systems also has significant impacts. From mineral extraction to distribution, the production of IT systems generates significant CO2 emissions, not to mention other impacts such as soil pollution and the erosion of biodiversity.

For all these reasons, the purchase of a new connected device should not be taken lightly. The question is: do we really need it?

How can an ancillary feature turn into a Trojan horse?

New connected objects aren’t the only systems that can be attacked: existing software can be as well.

Nor is it just a question of resources. Aviation, one of the world’s most financially powerful industries, which has invested considerable resources in security, has also been the victim of criminal acts.

In this article, we won’t be discussing the impacts of flying, but rather the specific subject of in-flight entertainment.

The many films and series available bring undeniable benefits for users: boredom reduction, keeping children occupied, forgetting about stress (and the fact that you’re in an aircraft that burns thousands of liters of fuel per hour) …

Nevertheless, the seat-back screen is not a system totally isolated from the rest of the aircraft. For example, pausing the video during a cabin-crew announcement necessarily implies communication between the entertainment unit and at least part of the rest of the aircraft.

And this link can be used to support an attack.

Chris Roberts, a cybersecurity specialist, demonstrated this by successfully modifying the thrust of one of the engines via the entertainment system: https://www.01net.com/actualites/un-hacker-aurait-pris-le-controle-d-un-avion-en-vol-grace-a-son-systeme-de-divertissement-654810.html

In reality, it’s extremely difficult to totally isolate one system from another.

This story is just one example among others.

The following attack is also an interesting one, as it illustrates the problem with a well-known developer philosophy: “Why do it? Because we can.”

Hackers have taken advantage of a security flaw in a service of Meta’s flagship social network. The functionality in question allowed users to see how their profile was viewed by another user. Admittedly, this is of interest to the user, but it is not essential to the smooth operation of the social network. On the other hand, the consequences of an attack are extremely damaging, both for users and for the company, whose image is tarnished.

When the group became aware of the flaw, they immediately removed the feature. The question then arises: did users even notice its disappearance?

From a general point of view, we can list a few disadvantages of the multiplication of possibilities offered by a digital service:

  • dispersion of resources that could have been allocated to securing the key services of the application or website
  • implementation of little-used features that receive little attention from the development team and are therefore more vulnerable
  • the need to drop compatibility with older versions of Android or iOS, and consequently a reduction in the number of potential users
  • an increase in the application’s weight due to additional code or embedded media, increasing its environmental impact

Taking into account the associated risks, we must always ask ourselves: is the comfort it brings really worth the impact it causes?

It’s also worth remembering that cybersecurity is an integral part of digital sustainability. As a designer of digital services, it is therefore our duty to protect users. Implementing security mechanisms is an important part of this, but we also need to think globally, encompassing all functionalities.

Malicious individuals will try to get into every nook and cranny of your system. By increasing the number of functions, you are giving them new doors that they will be happy to open.

Finally, all these attacks show us that digital sufficiency is not only a useful tool in the context of the ecological transition, but is also of interest in the fight against cybercrime.

Conclusion 

In short, digital sufficiency is proving to be our unexpected ally in the daily battle for IT security. Before rushing off to buy the latest gadget or design a new feature, let’s ask ourselves the following 2 questions:

  1. Is it useful?
  2. Is the risk worth the benefit?

In some cases, the answer is obviously yes. The seatbelt makes the car heavier and therefore increases fuel consumption, but it considerably reduces the number of deaths on the roads. The reduction in comfort was worth it.

In many cases, the answer is the opposite. Today’s cars can reach speeds well in excess of 150 km/h, yet it is forbidden to exceed 130 km/h. This measure, taken in France in 1974 in response to the 1973 oil crisis, was the result of a balancing act between individual freedoms on the one hand, and the collective effort needed to counter the consequences of the crisis on the other. The extra speed wasn’t worth the risk.

This central consideration in any decision must be at the heart of a development team’s questioning.

Today, only the benefits of a feature are highlighted. This overlooks:

  • User security
  • The financial cost of a computer attack
  • Damage to the image of the company that suffers a computer attack
  • The environmental impact of this functionality
  • Loss of compatibility with certain users
  • And many more…

33 years after the introduction of compulsory rear seat belts, the question of discomfort versus safety is no longer an issue in the automotive world. It must also become a reflex for digital service design teams in the IT world.

What is the correlation between eco-design and editorial sobriety?

Reading Time: 4 minutes

An ecodesign approach for digital services can only be successful if all project stakeholders are involved at all stages of the project lifecycle. Sometimes, despite all the efforts made to apply eco-design principles to the creation of a website, environmental impacts can increase due to elements outside the defined scope. In particular, it’s essential to involve those who will be producing content for the site. It’s not all that simple. Some best practices can be technically automated, while others require you to keep in mind all the content proposed, as well as its durability.
This article suggests a number of best practices aimed at facilitating content management with a view to reducing the impact (environmental and otherwise) of proposed content.

Further reading

Ferréole Lespinasse has already written extensively on this subject: https://www.sobriete-editoriale.fr/
The INR (Institut du Numérique Responsable) reference framework has a category dedicated to content: https://gr491.isit-europe.org/?famille=contenus
The same goes for the RGESN (Référentiel Général d’Ecoconception de Services Numériques): https://ecoresponsable.numerique.gouv.fr/publications/referentiel-general-ecoconception/#contenus

Best practices in editorial sobriety

Integrate as little non-textual content as possible

Context

Each piece of integrated content will generate requests and data transfers. It is therefore important to integrate as little as possible, while maintaining the attractiveness of your publications. Once only essential content remains, it’s time to integrate it as efficiently as possible (see below).

Most often, in terms of impact: video > podcast > animated image > static image > text

Please note that animated GIF images can be very heavy, in addition to posing accessibility problems.

The INRIA (Institut national de recherche en informatique et en automatique) MOOC offers a simple activity to help you understand these impacts.

How to do it?

  • Limit the number of contents, taking into account their respective impacts
  • Avoid purely decorative content as much as possible (e.g. stock images or carousels)
  • Keep accessibility in mind

Reduce the weight of videos

Context

Especially in the age of social networks, video is often favored as a communication channel.

Today, video represents 60% of global data flows.

How to do it?

Reduce the size of audio files

Context

Particularly with podcasts, audio content is multiplying on the web.

How to do it?

  • Favor MP3, OGG or AAC formats
  • Use audio files that are as concise as possible
  • Rather than directly integrating the content on the page, integrate a clickable thumbnail leading to it

Reduce the weight of images

Context

Overall, on web pages, images are the source of the majority of data transferred [EN].

How to do it?

  • Favor the WebP format and other formats adapted to the web
  • Offer images with a size and quality adapted to user terminals
  • Optimize images using a tool (example: Squoosh)
  • Load text by default and images only on demand

Tutorial (in English) on image optimization.
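The “size and quality adapted to user terminals” recommendation can be sketched as a variant-selection rule, mirroring what the HTML srcset/sizes mechanism does. This is a hypothetical illustration: the breakpoints and file names are invented:

```python
# Hypothetical sketch: pick the smallest pre-generated image variant that
# still covers the requesting device's width, mirroring what the HTML
# srcset/sizes mechanism does. Widths and file names are illustrative.

VARIANTS = {
    320: "photo-320.webp",
    640: "photo-640.webp",
    1280: "photo-1280.webp",
    1920: "photo-1920.webp",
}

def best_variant(device_width: int) -> str:
    """Smallest variant at least as wide as the device, else the largest."""
    suitable = [w for w in VARIANTS if w >= device_width]
    return VARIANTS[min(suitable)] if suitable else VARIANTS[max(VARIANTS)]

print(best_variant(600))   # photo-640.webp
print(best_variant(2400))  # photo-1920.webp: nothing wider, serve the largest
```

Serving the 320 px variant to a phone instead of the 1920 px original avoids transferring data that the screen cannot display anyway.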

Limit the impact of third-party content

Context

It is easy to integrate content from other sites (YouTube or Dailymotion videos; Twitter, Facebook or Instagram posts or feeds).

Their direct integration often results in numerous requests (especially trackers) and data transferred.

How to do it?

Adopt sober management of publications

Context

Beyond the design of each publication, it is important to keep the whole set of publications in mind. The goal is to keep content relevant and up to date, and to prevent it from being drowned out in the crowd, which in turn helps improve organic search ranking.

How to do it?

  • Rely on concrete indicators: number of visits, number of arrivals on the site via this page, bounce rate, etc.
  • Update older posts that are still of interest. Possibly take advantage of this to change the format: the video becomes an article
  • Combine publications similar in their themes: informative articles are aggregated into a reference article
  • Delete posts that are no longer seen or no longer relevant (outdated content or relating to past events)
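The indicator-driven audit described above can be sketched as a simple filter. All field names, dates and thresholds here are illustrative assumptions, not part of any particular CMS:

```python
# Hypothetical sketch of an indicator-driven content audit: flag posts
# that are old or rarely visited for review (update, merge or delete).
# Field names, dates and thresholds are illustrative assumptions.

from datetime import date

posts = [
    {"title": "Guide 2018", "last_update": date(2018, 5, 1), "monthly_visits": 4},
    {"title": "Popular how-to", "last_update": date(2023, 9, 12), "monthly_visits": 850},
    {"title": "Event recap 2019", "last_update": date(2019, 6, 3), "monthly_visits": 0},
]

def needs_review(post, today=date(2024, 6, 1), max_age_days=730, min_visits=10):
    """A post needs review if it is older than max_age_days or rarely read."""
    age_days = (today - post["last_update"]).days
    return age_days > max_age_days or post["monthly_visits"] < min_visits

to_review = [p["title"] for p in posts if needs_review(p)]
print(to_review)  # ['Guide 2018', 'Event recap 2019']
```

The flagged posts are candidates for the actions listed above: update, merge into a reference article, or delete.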

To go further, it is also possible to:

  • Set an expiration date for publications (examples: hot content vs. cold content, an unpublish date for temporary content)
  • Audit a site’s publications [EN]
  • Publish content in a reasoned and relevant way, particularly for its distribution on social networks and in newsletters. The latter must themselves be the subject of an eco-design and accessibility approach, which could alone fill a whole article.

Propose explicit labels for links

Context

When browsing content, it is common to come across links that enrich the content in question. To avoid unpleasant surprises for users, the labels of these links must be as explicit as possible. The benefit for the user experience is obvious, but the point is also to prevent users from loading content that is not useful to them, or that their device or internet connection does not allow them to use in good conditions.

The criteria for this good practice are mostly derived from the OPQUAST (Open Quality Standards) rules. It is worth emphasizing here once again the need to offer accessible links (and, more generally, accessible content).

How to do it?

Conclusion

We have discussed here what can be done to ensure that content is as light as possible. While certain actions rely mainly on contributors, it is ultimately important that content management tools such as CMSs (Content Management Systems) integrate features to assist them. This may involve, for example, automating certain technical optimizations, visualizing the environmental impacts of the content produced, or facilitating a more global content management approach (document expiry, visualization of page views, etc.). Some publishers have already taken such an initiative; it remains to be hoped that it will become systematic.


The legislative framework for the eco-design of digital services

Reading Time: 4 minutes

In France, the accessibility of digital services has had a legislative framework for several years now (initiated by article 47 of the 2005-102 law of 11 February 2005 [FR] and specified in decree no. 2019-768 of 24 July 2019 [FR]). This is based primarily on the RGAA [FR] (Référentiel Général d’Amélioration de l’Accessibilité – General Accessibility Improvement Reference Framework). The eco-design of digital services, which has been discussed in France for over 15 years, has gained considerable momentum in recent years. However, the subject is still struggling to establish itself, or even to take precise shape within organisations. The legislative framework has been taking shape since 2021 and should enable the eco-design of digital services to take hold over the next few years. The aim of this article is to shed some light on the subject.

A quick reminder

ADEME (Agence de l’Environnement et de la Maîtrise de l’Energie) and ARCEP (Autorité de régulation des communications électroniques, des postes et de la distribution de la presse) are working together on the environmental impact of digital technology. Their work covers, in particular, the estimation of these impacts on a French scale, as well as best practices and prospects. This information can be found here: https://www.arcep.fr/nos-sujets/numerique-et-environnement.html [FR]

Ecodesign [FR] can be defined as an approach that integrates the reduction of environmental impacts right from the design stage of a digital service, with a global vision of the entire life cycle, via continuous improvement.

A digital service [FR] is a set of human, software and hardware resources needed to provide a service.

Consequently (but we’ll come back to this in a later article), talking about an eco-designed website can be perceived as a misuse of language. As part of an eco-design approach, we need to take an interest in all the site’s digital services (or at least a representative sample), through continuous improvement and by covering all the stages in the project’s lifecycle. All this goes much further than simply measuring a sample of pages on a site that is already online.

The laws

In France, there are currently 2 main laws: the AGEC law (Anti-Gaspillage pour une Économie Circulaire) and the REEN law (Réduction de l’Empreinte Environnementale du Numérique).

The AGEC law [FR] briefly addresses the subject, but this requirement does not yet seem to have been dealt with exhaustively. On this subject, see the Guide pratique pour des achats numériques écoresponsables from the Mission interministérielle Numérique Écoresponsable [FR].

Even if certain elements still need to be clarified, the REEN law [FR] goes further by mentioning (among other things):

  • The need to train engineering students in digital-related courses in the eco-design of digital services. But there is also a need to raise awareness of digital sobriety from an early age.
  • The creation of an observatory on the environmental impact of digital technology, via ADEME (Agence de l’Environnement et de la Maîtrise de l’Énergie) and ARCEP (Autorité de régulation des communications électroniques, des postes et de la distribution de la presse).
  • A general reference framework for the eco-design of digital services, setting criteria for the sustainable design of websites, to be implemented from 2024. ARCEP has since confirmed that this framework will be based on the RGESN (Référentiel général d’écoconception de services numériques [FR]): https://www.arcep.fr/actualites/actualites-et-communiques/detail/n/environnement-091023.html [FR] A public consultation, launched in October 2023, aims to consolidate this framework and the practices around it, with a view to wider adoption from early 2024.
  • The fight against the various forms of obsolescence, as well as actions to promote re-use and recycling.
  • Reducing the impact of data centres (in particular by monitoring the efficiency of energy and water consumption) and networks. The decree is currently being published [FR].
  • Requiring municipalities and groups of municipalities with more than 50,000 inhabitants to draw up and implement a Responsible Digital Strategy by 2025. This strategy must include elements relating to the eco-design of digital services. A number of guides have been published to help establish this strategy, including this one: https://www.interconnectes.com/wp-content/uploads/2023/06/web-Guide-methodologique_V8.pdf [FR]

All of this is accompanied by the establishment of the HCNE (High Committee for Eco-responsible Digitisation), various roadmaps and an eco-responsible digital acceleration strategy, all detailed on this page: https://www.ecologie.gouv.fr/numerique-responsable [FR]

What’s next?

Once all these elements have been defined, the question arises of what remains to be done.

In 2024, the REEN law will require public websites to be designed in a sustainable way. By 2025, local authorities with more than 50,000 inhabitants will have to have integrated this dimension into their Responsible Digital Strategy.

Greenspector has been involved in the eco-design of digital services for several years. This evolution in the legislative framework coincides with our involvement in projects at an increasingly early stage, sometimes even from the expression of need. This inevitably requires changes in practices, including the introduction of ideation workshops that take into account the environmental footprint of a service. More and more often, the RGESN is used as a reference to guide the approach throughout the project. This reference framework is ideal for this type of support, but it also provides a basis for managing eco-design as a continuous improvement process.

This way of rethinking support for the eco-design of digital services also makes it possible to move towards greater impact reduction levers and to involve more types of profiles in the projects supported.

As the process begins with public institutions, it is to be hoped that companies will follow suit. In fact, some have already begun the process of complying with the RGESN, not just in anticipation of a possible change in the legislative framework affecting them, but also because these standards provide a long-awaited framework for the eco-design approach.

To support all these efforts, financial aid is available for both companies [FR] and local authorities [FR].

On all the issues raised here, France has made great strides. Now it’s up to other countries to follow suit. In September, the W3C (World Wide Web Consortium) published its WSG [FR] (Web Sustainability Guidelines). They are now out for public consultation with a view to making further progress on the subject and perhaps eventually establishing web standards. They are also accompanied by discussions on the best way to introduce levers directly at institutional level. In Europe, some countries, notably Belgium and Switzerland, are federating around structures similar to the INR. It is to be hoped that the RGESN and other elements currently in place in France can be adapted to other countries.

A closer look

Reading Time: 4 minutes

Just ten years ago, the subject of the environmental impact of digital technology was confined to a handful of specialists. Over the past few years, however, the subject has gained considerable momentum, particularly in France but also internationally. While some people are (rightly) concerned about the preponderance of discourse around net zero and carbon neutrality, this trend is merely a symptom of a biased approach to the subject.

Reducing a global crisis to a technical problem

The climate emergency is a key issue that has gained enormous momentum in recent years. The digital sector has not been spared, and studies and tools have made many people aware of the issue. The problem is alarming, but also complex, which is why some aspects have been lost along the way in favor of broader awareness.

In the case of digital services, it is understood that an LCA (Life Cycle Assessment) is an excellent way of estimating environmental impacts, but the process can prove cumbersome and costly: defining the scope, selecting the indicators, collecting and analyzing the data. This complexity is all the more difficult to accommodate when you want results quickly and, preferably, easily communicated. So, to gain in efficiency, some choose to measure only part of their digital services, thanks to easy-to-use tools. In just a few clicks, you have your answer and can share it.

Sound familiar? It’s called technological solutionism, as expounded by Evgeny Morozov in his seminal work “To Save Everything, Click Here”.

This is also why solutions are being developed that analyze code to suggest ways of improving it to reduce its environmental impact. Some are even beginning to rely on artificial intelligence for this purpose.

It’s also what prompts some to optimize where their code will be executed, to move towards a location where energy has less impact from an environmental point of view (taking into account, of course, only greenhouse gas emissions). And what can’t be avoided or reduced can always be compensated for.

In the end, it’s all very human. Faced with a complex and urgent problem, we try to simplify and adopt or find a quick solution. That’s not a bad thing, but we can’t stop there. All the more so when some people rely on claims of “net zero” and carbon neutrality to artificially draw a finish line that can be reached via clever calculations and investments, whereas the problem is systemic by nature.

The risk here is of optimizing one indicator while degrading others that we didn’t have in mind (for example, requesting a data center presented as carbon neutral without taking into account its impact on water resources). As a result, we’re increasingly asking ourselves whether a sober site is necessarily ugly, without realizing that it’s not always accessible. Or really sober, for that matter.

Reminder

The environmental impacts of digital technology are not limited to greenhouse gas emissions. As we see in LCA, the indicators to be taken into account are much more numerous and varied. Little by little, we are also having to take into account the criticality of certain mineral resources, as well as that of water (as we saw recently with ChatGPT and Google’s data centers).

The environmental impact of digital services doesn’t just come from the code. In fact, according to GreenIT.fr, only around 20% of the impact comes from the code, which makes perfect sense: through code, we seek to improve efficiency (doing better with less). The real levers for reduction are to be found in the other stages of the lifecycle, notably design, strategy and content production. In this way, we can move towards sobriety for good.

Finally, the impacts of digital technology are not only environmental, and this is the heart of Responsible Digital. We need to keep in mind the impact on the individual (via accessibility, security, personal data management, the attention economy, ethics and inclusion). So, managing the climate emergency can only be done with an intersectional approach.

But how?

The technical approach is not necessarily bad in itself. It’s a good thing to have effective solutions to improve the efficiency of digital services (as long as we keep in mind the possible side-effects). Sometimes, it’s even an excellent starting point for taking initial action, initiating a continuous improvement process and getting to grips with the subject.

On the other hand, it’s essential to go further. This is what we see today in movements around Sustainable UX, responsible communication and even responsible digital marketing, for example. We are also seeing the emergence of resources and books on “green service design” and systemic design.

This is also the reason why the GreenIT collective’s 115 best practices have evolved over time, and why other, more comprehensive reference frameworks have emerged, such as RGESN and GR491.

Beyond this, it is also important to ask ourselves more general questions about what we eco-design, and how the services we create can induce more environmentally-friendly behavior.

Conclusion

As we’ve already seen when examining the offerings of web hosting providers, the reality of the environmental impact of digital technology is more complex than it might seem. The problem won’t be solved with a single click, and perhaps that’s just as well. In fact, it’s an opportunity to rethink digital technology, the way we use it and the way we think about it. These constraints may well give rise to a digital world that is more respectful not only of the environment, but also of individuals.

What is the environmental footprint of social networking applications? 2023 Edition

Reading Time: 8 minutes

Introduction 

The uses and functionalities of social networks are expanding, as are their communities and the time spent on our screens.
Trends, corporate marketing and new channels of influence are all factors that are multiplying user connection and usage time.

We Are Social’s Digital Report France 2023 estimates that 92.6% of French people are connected to the Internet. This represents an increase of 1% compared to 2022, or 600,000 people; 80.5% of them are present on social networks.
The environmental impact generated by social networks is evolving with the increase in the number of people and time spent on applications. This implies a greater level of responsibility on these massively used digital services to assess and reduce their generated impacts. Is there an eco-responsible social network in the world? How can we raise the awareness of application publishers, and perhaps even their users? To answer these 2 questions, there’s nothing like a little consumption measurement and impact projection.

As not all these networks work in the same way, we chose to measure a use case common to all of them: browsing and reading a news feed in the 10 most popular social network mobile applications in France.

 
Methodology

Choice of social networks studied

The 10 most popular social networking applications among the French are: Facebook, Instagram, LinkedIn, Pinterest, Reddit, Snapchat, TikTok, Twitch, Twitter and YouTube. We have used We Are Social statistics from January 2023 to project environmental impacts.

Given the use case selected, we’ve focused on social networks with a news feed, which excludes messaging applications such as WhatsApp, Messenger, iMessage, Skype, Discord and Telegram. You’ll probably find them in a future article 😉

User path definition

We built the user journey as a news feed scrolling scenario with the following steps:

  • Step 1: launch the application
  • Step 2: read the news feed without scrolling (30 s)
  • Step 3: scroll the news feed with pauses
  • Step 4: put the application in the background (30 s)

This path consists of a 2-second scroll followed by a 1-second read (pause), all repeated and weighted over a 1-minute duration.
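The alternation described above can be sketched as a short script. This is an illustrative model only (not the actual Greenspector Test Runner API): the step names and the `build_feed_scenario` helper are assumptions for the example.

```python
# Illustrative sketch of the measured journey: 2 s scrolls alternating
# with 1 s reading pauses, repeated to fill the 1-minute scroll step.
SCROLL_S, PAUSE_S, TOTAL_S = 2, 1, 60

def build_feed_scenario(total_s=TOTAL_S):
    """Return (action, duration) pairs alternating scroll and pause."""
    steps, elapsed = [], 0
    while elapsed + SCROLL_S + PAUSE_S <= total_s:
        steps += [("scroll", SCROLL_S), ("pause", PAUSE_S)]
        elapsed += SCROLL_S + PAUSE_S
    return steps

scenario = build_feed_scenario()  # 20 scroll+pause cycles over 60 s
```

In other words, the one-minute scroll step corresponds to 20 cycles of scrolling then pausing.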

Regarding Snapchat, its design forced us to use a click scenario rather than a scroll one, without changing the pause and content display times. What’s more, the news feed chosen is the Stories page, which is not the application’s home page. To keep the scenarios comparable, the step of accessing the Stories page was not measured on this path and is therefore not included in the impact generated.

The pauses in scrolling through the news feed simulate the most realistic reading behavior possible.
This path does not reflect the most frequent uses of these platforms (reading a post or its rich content, watching a video, reacting, exchanging messages, etc.), but it does give us an indication of the level of sobriety of the applications.

For this study, data was measured using our Greenspector Test Runner solution, which enables automated tests to be carried out locally on smartphones.

We measured resource consumption (energy, memory, data) and response times. These data were then used to calculate the environmental impact of the applications.

Please note that the methodology used in this study compares only the scrolling of the most common news feeds. This means that the comparison is not necessarily equivalent, as some news feeds focus on video scrolling and others on multimedia posts (text, image, video, animated gif, etc.).

Measurement context

  • Samsung Galaxy S10, Android 10
  • Network: Wi-Fi
  • Brightness: 50%
  • Tests carried out over at least 3 iterations to ensure reliability of results

Assumptions used for environmental projections

  • User location: 100% France
  • Server location: 100% worldwide (in the absence of information for each application)
  • Devices used: 100% smartphone
  • Server type: 100% complex

The environmental footprint depends on the location of application servers, their type, the location of users and the type of devices they use. We have decided to study the use of applications only on smartphones and on the share of French users.

Top and flop apps in France according to results

The graph below ranks the various social networking applications according to the environmental footprint of the path we defined above.

Ranking of the environmental impact of mobile social networking applications:

  1. LinkedIn: 0.47 gEqCO2/min
  2. Twitch: 0.51 gEqCO2/min
  3. Twitter: 0.52 gEqCO2/min
  4. Facebook: 0.63 gEqCO2/min
  5. Snapchat: 0.65 gEqCO2/min
  6. Pinterest: 0.66 gEqCO2/min
  7. Instagram: 0.87 gEqCO2/min
  8. YouTube: 0.87 gEqCO2/min
  9. Reddit: 0.92 gEqCO2/min
  10. TikTok: 0.96 gEqCO2/min

The measurements were taken by Greenspector on 13/04/2023.

The least sober application

TikTok comes last in the ranking, which is no great surprise: the application is very energy-hungry, consuming 22.4 mAh at launch, and it exchanges a lot of data as the news feed scrolls. This enormous exchange is due in particular to videos that start playing automatically and to the many advertisements present in the application.

The application preloads a wide range of content so that, even offline, the user can still access videos. TikTok loads around 5 MB of data in the 30 s following launch, equivalent in this test to 10 preloaded videos.

The most sober application

LinkedIn is the least impactful application according to our results. It exchanges a very low volume of data when the application loads, as well as when scrolling through the news feed. This score is largely explained by the fact that the social network focuses on text-based posts with few photos and videos. What’s more, LinkedIn consumes 13.9 mAh of energy, 15% less than the other applications on the panel.

 

The other applications preload less content, and often a smaller volume of data. A preloaded video consumes more energy and generates more data exchange than a preloaded text post.

One-year projection of the impact of the 2 applications most used by the French

According to the We Are Social annual report, the average time spent on social networks is 1h55 per day. Projected over one year, the environmental impact represents 20 to 40 kg eqCO2 depending on the social network. For the least sober network, this is equivalent to around 185 km by car.

According to ADEME’s Impact CO2 website, which offers an online converter, approximately 200 g CO2eq correspond to 1 km by car. This includes direct emissions, vehicle construction (manufacture, maintenance and end-of-life) and the production and distribution of fuel and electricity. Infrastructure construction (roads, railways, airports, etc.) is not included in this calculation.
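The annual range above can be re-derived with simple arithmetic. This is a sketch assuming the per-minute footprints measured earlier (LinkedIn 0.47, TikTok 0.96 gEqCO2/min) and the 1h55/day average; the car equivalence uses ADEME’s ~200 g CO2eq/km figure, which lands in the same order of magnitude as the article’s 185 km (the small gap presumably comes from rounding).

```python
# Back-of-the-envelope check of the 20-40 kg eqCO2/year range.
DAILY_MINUTES = 115            # 1 h 55 per day (We Are Social average)
CAR_G_PER_KM = 200             # ADEME Impact CO2 approximation

def yearly_kg(g_per_min, daily_minutes=DAILY_MINUTES):
    """Project a per-minute footprint (gEqCO2/min) over a year, in kg."""
    return g_per_min * daily_minutes * 365 / 1000

low = yearly_kg(0.47)                    # LinkedIn, most sober: ~20 kg
high = yearly_kg(0.96)                   # TikTok, least sober: ~40 kg
km_by_car = high * 1000 / CAR_G_PER_KM   # ~200 km for the least sober
```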

We have chosen to compare the 2 applications most used by the French, namely Facebook, which has around 38.1M users, and Instagram, which has around 30.5M users.

Facebook 

The report states that 52 million people in France are present on social networks. Facebook is the most popular social network among 16-64 year-olds (73.3%). If we multiply Facebook’s environmental impact by the number of French users present on this platform (approx. 38.1M), this represents more than 24 tonnes eqCO2/min (or the production of 773 smartphones/min). That’s almost 1M tonnes of eqCO2 per year!

Instagram 

Instagram is the 2nd most popular social network among 16-64 year-olds after Facebook. If we multiply Instagram’s environmental impact by the number of French users present on this platform (58.6%, i.e. around 30.5M), this represents more than 26.5 tonnes eqCO2/min (or the production of 853 smartphones/min). That’s over 1.1M tonnes of eqCO2 per year!

We can see that despite a gap of almost 8 million users, Instagram has a greater carbon impact than Facebook.
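The two France-scale projections can be reconstructed with the figures quoted above. This is a sketch: it assumes the per-minute footprints from the ranking (0.63 and 0.87 gEqCO2/min), the user counts given here, and the 1h55/day average for the annual figures; rounding explains small differences with the article’s numbers.

```python
# France-scale projection: all French users scrolling one minute at once,
# then projected over a year at 115 min/day.
DAILY_MINUTES = 115

def tonnes_per_minute(users, g_per_min):
    """Convert a per-user, per-minute footprint in grams to tonnes eqCO2."""
    return users * g_per_min / 1e6

fb_min = tonnes_per_minute(38.1e6, 0.63)   # ~24 t eqCO2/min
ig_min = tonnes_per_minute(30.5e6, 0.87)   # ~26.5 t eqCO2/min
fb_year = fb_min * DAILY_MINUTES * 365     # ~1M t eqCO2/year
ig_year = ig_min * DAILY_MINUTES * 365     # ~1.1M t eqCO2/year
```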

It’s worth pointing out that the amount of time devoted to social networking varies according to the audience concerned. Some people spend less time on them, while others spend considerably more, sometimes up to 8 hours a day.

The table below projects the carbon impact according to time spent.

What about international projection?

With an average time spent on social networks of 2 hours and 31 minutes across all networks, we estimate the consumption of these applications worldwide.

Facebook has 2.958 billion users worldwide, making it once again the most popular network. The daily consumption of a user spending an average of 2h31 on this network would be around 95g eqCO2. For the almost 3 billion Facebook users who spend an average of 2h31 a day on this social network, the platform would have an environmental footprint of more than 281,000 tonnes eqCO2/day, or more than 102 million tonnes eqCO2 a year!

Internationally, Instagram has around 2 billion users. Per day, the consumption of a user spending 2h31 on Instagram would produce around 132g eqCO2. On the scale of 2 billion users, this would represent 262,000 tonnes eqCO2/day, or almost 96 million tonnes per year.
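The worldwide figures follow the same arithmetic. This sketch assumes the 2h31 (151 min) daily average and the user counts quoted above; again, rounding explains small deviations from the article.

```python
# World-scale projection at 151 min/day per user.
WORLD_DAILY_MINUTES = 151

def world_projection(users, g_per_min):
    """Return (g eqCO2/user/day, tonnes/day, tonnes/year)."""
    per_user_g = g_per_min * WORLD_DAILY_MINUTES
    per_day_t = per_user_g * users / 1e6
    return per_user_g, per_day_t, per_day_t * 365

fb_g, fb_day, fb_yr = world_projection(2.958e9, 0.63)  # ~95 g, ~281k t/day, ~103M t/yr
ig_g, ig_day, ig_yr = world_projection(2e9, 0.87)      # ~131 g, ~263k t/day, ~96M t/yr
```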

And what happens if we use a dark theme?

We carried out our measurements a second time with the applications in dark mode, so as to be able to compare the energy impacts generated.

The measurements were carried out on a Samsung S10, equipped with AMOLED technology, on which a dark pixel is actually a partially switched-off pixel, which explains why dark modes reduce power consumption. Conversely, when the screen uses LCD technology, color has no influence on consumption, which explains why dark mode is no more energy-efficient than light mode; see our article here.

Screenshot of the LinkedIn feed in light mode versus dark mode

Nowadays, more and more phones are equipped with AMOLED screens, and it’s worth activating dark mode to reduce power consumption and slow battery discharge.

In this study, we noticed that only 8 of the 10 applications studied offered a dark mode. Snapchat and TikTok didn’t, so we excluded them from these measurements. Since their interfaces are based almost entirely on scrolling videos and photos, only a few pages such as messaging would lower the energy consumption anyway.

Application | Energy in light mode (mAh / 1 min) | Energy in dark mode (mAh / 1 min) | Reduction
Twitter | 12.2 | 8.5 | 31%
LinkedIn | 11.7 | 8.5 | 28%
Facebook | 12.5 | 9.3 | 26%
Twitch | 10.0 | 7.5 | 25%
Pinterest | 11.3 | 8.8 | 22%
Instagram | 13.2 | 11.8 | 11%
YouTube | 13.4 | 11.9 | 10%
Reddit | 14.1 | 13.1 | 8%

Activating dark mode clearly reduces the energy consumption measured on the battery: relative to their light mode equivalents, the applications consume on average 20% less energy, and the battery discharge rate drops by an average of 18%.
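The ~20% average can be checked directly from the light/dark energy figures in the table above (this sketch recomputes each reduction rate from the raw mAh values, so individual percentages may differ by a point from the rounded ones shown).

```python
# Light vs dark energy (mAh over 1 minute of the measured path).
energy_mah = {
    "Twitter":   (12.2, 8.5),
    "LinkedIn":  (11.7, 8.5),
    "Facebook":  (12.5, 9.3),
    "Twitch":    (10.0, 7.5),
    "Pinterest": (11.3, 8.8),
    "Instagram": (13.2, 11.8),
    "YouTube":   (13.4, 11.9),
    "Reddit":    (14.1, 13.1),
}

reduction_pct = {
    app: (light - dark) / light * 100
    for app, (light, dark) in energy_mah.items()
}
average_reduction = sum(reduction_pct.values()) / len(reduction_pct)  # ~20%
```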

On text-heavy applications such as Twitter, LinkedIn and Facebook, dark mode is more energy-efficient, as inverting the colors of a block of text produces thin white writing on a black background. Images and videos, on the other hand, do not have their colors inverted, so there is little difference when displaying multimedia content.


Conclusion

In this study, we observe that the GHG impact of the most impactful platform is around twice that of the least impactful one.

Applications with a lot of multimedia content consume a lot of energy and require a lot of data exchange over the network to display this content. Text-based content, on the other hand, is much easier to load and consumes much less energy.

In conclusion, although social networks facilitate the exchange of and access to information, they are not as virtual as we might think, and they raise the question of our relationship to the consumption of these applications. Are we really using them to communicate and inform ourselves, or rather to feed on a stream of information and content that is generally neither desired nor expected?

At a time when climate change is a matter of urgency, it’s time to examine our relationship with our screens and adopt eco-friendly gestures, such as reducing time spent online and activating dark mode on mobile applications.

If you’re an application publisher, you also have a role to play! Here are a few ways in which you can reduce your impact:

  • Default to dark mode when downloading the application
  • Avoid massive pre-loading of heavy content
  • Avoid auto-starting videos or auto-re-launching at the end of videos

Sources  

For social network usage statistics:

https://wearesocial.com/fr/blog/2023/02/digital-report-france-2023-%f0%9f%87%ab%f0%9f%87%b7/

For equivalents in terms of carbon impact:

https://impactco2.fr/

Best practice: optimizing fonts

Reading Time: 4 minutes

In recent years, the use of fonts on the web has exploded (both in terms of the number of existing fonts and the number of sites using them).

As usual, the Web Almanac is a mine of information, especially via the chapter dedicated to fonts. We learn that the two main suppliers of web fonts are Google and Font Awesome, the latter providing icons. Beyond the potential cost in terms of performance and environmental impact, some countries have already established that using these services can contravene the GDPR (General Data Protection Regulation).

Proportion of websites using web fonts

Let’s see what good practices can reduce the impact of fonts on the web.

Existing reference systems

Fonts are mentioned in the UX/UI family of the RGESN (Référentiel Général d’écoconception de services numériques):

  • 4.10 – Does the digital service use mostly operating system fonts?

They are also found in the GR491 (Responsible Digital Service Design Reference Guide), and the 115 web ecodesign best practices also mention them.

Good practices

Objectives

In order to reduce the impacts of fonts, several best practices are applicable:

  • Give preference to standard/system fonts: this avoids additional requests
  • Use an optimal compression format (today, this is WOFF2). Online tools such as Everything Fonts can handle the conversion.
  • Limit the number of variants used, or use a variable font
  • Load only the characters actually used (for example via a subset)

When?

These good practices should be implemented from the visual design stage onwards, in order to favor standard fonts as much as possible. If this is not possible, limit the number of variants to be loaded. Finally, when fonts are integrated, use the WOFF2 format and variable fonts, and make sure to load only the characters and languages actually used.

Ease of implementation

If the site is already online, it can be complicated to change the font used. On the other hand, the technical optimizations are easy to implement (format, variable font, subset).

Estimated gains

These best practices reduce the number of HTTP requests and the volume of data transferred.

Specific cases

Google Fonts

To avoid problems with the GDPR, it is recommended to host Google Fonts yourself.
Variable versions are not available for all Google Fonts, but some creators offer them free of charge. In addition, the Google API allows you to create a subset directly with a request of this type: https://fonts.googleapis.com/css?family=Montserrat&subset=latin

Icons

Icon fonts are quite common, but using them directly may mean loading many icons that will never be used. The best approach is to handle each icon directly in SVG format: in this form, they can be embedded straight into the HTML (without any additional HTTP request). If an icon font must be kept for practical reasons, limit the file to the icons actually used.

Case study

As part of the support provided to Docaposte’s teams for their corporate site, fonts were a project in their own right.

The fonts used here are two Google Fonts: Montserrat and Barlow. Since the site is already online, it would be complicated to impose standard fonts.

To avoid violating the GDPR and to improve site performance, fonts are hosted directly on Docaposte’s servers. In a second phase, a dedicated subdomain could be set up to eliminate the need for cookies.

The integration in the form of a variable font requires some additional adjustments, especially in the style sheets. In the meantime, it was decided to apply two best practices:

  • Serve the files in WOFF2 format rather than WOFF
  • As the site is only offered in French and English, create a subset keeping only the Latin alphabet

Original requests

Requests after Subset and conversion to woff2

The WOFF2 format offers on average 30% better compression than WOFF, and even more compared with other formats such as TTF.

This change of format, combined with the subset, reduced the total weight of the fonts from just over 400 kB to just under 90 kB, a reduction of about 78%.

To go further

DOM as a metric for monitoring web sobriety?

Reading Time: 3 minutes

Choosing the right metric to assess impacts is critical in a sobriety approach.

We have validated the use of energy in our tools (see https://greenspector.com/fr/pourquoi-devriez-vous-mesurer-la-consommation-energetique-de-votre-logiciel/ and https://greenspector.com/fr/methodologie-calcul-empreinte-environnementale/ for more details). However, we also use and measure other metrics, such as CPU. This metric can be complex to measure, so some tools or teams rely instead on other, more technically accessible elements. The CPU is an interesting metric for measuring the resource footprint on the terminal side: we have carried out measurements on several hundred sites, and it is clear that the CPU is the most important metric for analysing the impact of software. This is why models that use only the data exchanged to calculate the impact of the terminal are not consistent; CPU-based models (such as PowerAPI) are preferable.

However, it is necessary to be rigorous in analysing this metric, as there may be interpretation biases (see this example of criticism of the CPU metric). Even more scrutiny is required on the way the metric is obtained, particularly when the CPU is modelled rather than measured. This is the case, for example, with methods that project web CPU usage from DOM elements.

This is based on the assumption that the structure of the DOM has an impact on the resource consumption of the terminal: the more complex the DOM, the more processing it requires from the browser, the more resources (CPU and RAM) it uses and the more environmental impact it creates.

Assuming that the hypothesis of a correlation between DOM complexity and environmental impact is valid, the metric most often used is the number of elements. Yet a DOM with many elements may be complex, but not systematically so. To account for the complexity of the DOM, it would be necessary to consider its architecture, in particular its depth and the types of nodes (not all nodes have the same impact on the browser). The choice of the number of DOM elements is therefore debatable.
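To illustrate why element count and depth are independent dimensions, here is a small sketch using Python’s standard `html.parser`; the two HTML fragments are toy examples with similar element counts but very different nesting depths.

```python
from html.parser import HTMLParser

class DomStats(HTMLParser):
    """Count opened elements and track the maximum nesting depth."""
    def __init__(self):
        super().__init__()
        self.count = self.depth = self.max_depth = 0
    def handle_starttag(self, tag, attrs):
        self.count += 1
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)
    def handle_endtag(self, tag):
        self.depth -= 1

def dom_stats(html):
    parser = DomStats()
    parser.feed(html)
    return parser.count, parser.max_depth

flat = "<div>" + "<p>x</p>" * 10 + "</div>"   # 11 elements, depth 2
deep = "<div>" * 10 + "x" + "</div>" * 10     # 10 elements, depth 10
```

A metric based on element count alone rates these two fragments as nearly identical, even though the browser handles the deeply nested one quite differently.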

But is the choice of DOM complexity a viable assumption? There are several criticisms of this.

The DOM is a raw structure that is not, on its own, sufficient for the browser to display the page. The style is combined with the DOM to create the CSSOM, so complex styles can greatly affect the CSSOM even with a simple DOM. The layout tree is then the structure that manages the actual display (typography, sizes…), and it is much more complex for browsers to handle.

A DOM can also be modified after its creation, forcing the browser to recalculate the layout (reflow) and redraw the page (repaint). This can happen several times during and after loading. The depth of the DOM (and not just the number of elements) can influence this, but it is not the only factor: the loading and execution of JS code must also be taken into account.

Independently of the DOM, resource consumption can be affected by various processes on the terminal, in particular all the JS processing executed when the page is loaded. This cost is currently the main CPU cost on the web: you can have a DOM with only 100 elements (not many) and still ship an overly complex mass of JS.

Graphics animations will increase resource consumption without necessarily impacting the DOM. Even if most of this processing is handled by the GPU, the resource impact is not negligible. We can also put in this category the launching of videos, podcasts (and more generally media files) and ads.

There are also many other sources of resource consumption: ungrouped network requests, memory leaks, etc.

The DOM metric should therefore be used with great care. It is best used as a software quality metric that indicates “clean HTML”. Reducing the number of DOM elements and simplifying the DOM structure may be a good sobriety practice, but not a KPI for sobriety reduction or CO2 calculation.