DOM as a metric for monitoring web sobriety?

Reading Time: 3 minutes

Choosing the right metric to assess impact is critical in a digital sobriety approach.

We have validated the use of energy in our tools (see https://greenspector.com/fr/pourquoi-devriez-vous-mesurer-la-consommation-energetique-de-votre-logiciel/ and https://greenspector.com/fr/methodologie-calcul-empreinte-environnementale/ for more details). We do, however, also use and measure other metrics, such as CPU. CPU can be complex to measure, so some tools and teams turn to other, more technically accessible proxies. Yet CPU is an interesting metric for assessing the resource footprint on the terminal side: having measured several hundred sites, it is clear to us that CPU is the most important metric for analysing the impact of software. This is why models that rely on the data exchanged to calculate the terminal's impact are not consistent; CPU-based models (such as Power API) are preferable.

However, this metric must be analysed rigorously, as there can be interpretation biases (see, for example, criticism of the CPU metric). Even more scrutiny is needed on the way the metric is obtained, particularly when the CPU is modelled rather than measured. This is the case, for example, with methods that project web CPU usage from DOM elements.

This approach rests on the assumption that the structure of the DOM drives the resource consumption of the terminal: the more complex the DOM, the more work the browser has to do to process it, the more resources (CPU and RAM) it uses, and the greater the environmental impact.

Even assuming that the hypothesis of a correlation between DOM complexity and environmental impact is valid, the metric most often used is the number of elements. A DOM with many elements may be complex, but it is not systematically so. To capture the complexity of the DOM, one would need to account for its architecture, in particular its depth and the types of nodes it contains (not all nodes have the same cost for the browser…). The choice of the number of DOM elements is therefore debatable.
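To make the distinction concrete, here is a minimal sketch (to be run in a browser console; the function name is ours, not a standard metric) that reports both the element count and the maximum depth of a page's DOM:

```typescript
// Minimal sketch: comparing the raw element count with a depth measure.
// Illustrative only; this is not a Greenspector measurement method.
function domStats(root: Element = document.documentElement): { count: number; maxDepth: number } {
  let count = 0;
  let maxDepth = 0;
  const walk = (node: Element, depth: number): void => {
    count++;
    maxDepth = Math.max(maxDepth, depth);
    for (const child of Array.from(node.children)) {
      walk(child, depth + 1);
    }
  };
  walk(root, 1);
  return { count, maxDepth };
}

console.log(domStats());
// Two pages can report the same count with very different depths and node types,
// which is exactly the complexity that the count alone hides.
```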

But is DOM complexity itself a viable assumption to build on? Several criticisms can be raised.

The DOM is a raw structure that is not, by itself, sufficient for the browser to display the page. The stylesheets are combined with the DOM to build the CSSOM, so complex styles can greatly inflate the CSSOM even when the DOM is simple. The browser then builds the layout tree, the structure used to manage rendering (typography, sizes…), and this work is much more costly for browsers to handle.
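As a hedged illustration (the selectors and the timing approach are purely illustrative, not a measurement method), a tiny DOM paired with a heavy stylesheet can still make the style and CSSOM work expensive:

```typescript
// Sketch: the DOM stays tiny (~100 elements), the style work does not.
const tinyDom = document.createElement("div");
for (let i = 0; i < 100; i++) {
  tinyDom.appendChild(document.createElement("span"));
}
document.body.appendChild(tinyDom);

// A stylesheet full of descendant selectors inflates the CSSOM and the matching work,
// even though the DOM itself remains very simple.
const style = document.createElement("style");
style.textContent = Array.from({ length: 2000 }, (_, i) =>
  `body div span:nth-child(${(i % 100) + 1}) { color: rgb(${i % 255}, 0, 0); }`
).join("\n");

const t0 = performance.now();
document.head.appendChild(style);
// Reading a computed style forces a style recalculation, making the cost visible.
void getComputedStyle(tinyDom.lastElementChild as Element).color;
console.log(`style recalc: ${(performance.now() - t0).toFixed(1)} ms`);
```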

The DOM can also be modified after it is created. When this happens, the browser recalculates the layout and repaints the page (reflow and repaint), which can occur several times during loading and afterwards. The depth of the DOM (rather than the number of elements) influences this cost, but it is not the only factor: the loading and execution of JS code must also be taken into account.
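A classic example is layout thrashing, sketched below (the selected elements and loop are illustrative): interleaving style writes with layout reads forces the browser to recompute layout on every iteration, whereas batching reads and writes triggers it only once.

```typescript
const items = Array.from(document.querySelectorAll<HTMLElement>("li"));

// Costly: each offsetHeight read right after a style write forces a synchronous reflow.
items.forEach((item) => {
  item.style.width = "50%";
  console.log(item.offsetHeight); // layout is recomputed here, for every element
});

// Cheaper: batch all reads, then all writes, so layout is recomputed once.
const heights = items.map((item) => item.offsetHeight);
items.forEach((item, i) => {
  item.style.width = `${heights[i] / 2}px`;
});
```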

Independently of the DOM, resource consumption is affected by various other processes on the terminal, in particular all the JS processing executed when the page loads. This is currently the main CPU cost on the web: a page can have a DOM of 100 elements (not many) and still ship a bloated mass of JavaScript.
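A minimal sketch of this situation (the Long Tasks API used below is available in Chromium-based browsers; support varies): a page with only 100 DOM elements whose script nevertheless blocks the main thread.

```typescript
// Build a deliberately small DOM: 100 list items.
const list = document.createElement("ul");
for (let i = 0; i < 100; i++) {
  const li = document.createElement("li");
  li.textContent = `item ${i}`;
  list.appendChild(li);
}
document.body.appendChild(list);

// Report tasks that block the main thread for more than 50 ms.
new PerformanceObserver((entries) => {
  for (const entry of entries.getEntries()) {
    console.log(`long task: ${entry.duration.toFixed(0)} ms`);
  }
}).observe({ type: "longtask", buffered: true });

// A deliberately wasteful computation: the DOM is tiny, the CPU cost is not.
let noise = 0;
for (let i = 0; i < 50_000_000; i++) {
  noise += Math.sqrt(i);
}
console.log(noise);
```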

Graphics animations also increase resource consumption without necessarily changing the DOM. Even if most of this processing is handled by the GPU, the resource impact is not negligible. Playing videos and podcasts (and media files more generally), as well as ads, falls into the same category.
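A hedged sketch (canvas size and drawing code are illustrative): a single canvas element animated with requestAnimationFrame repaints continuously without a single DOM mutation.

```typescript
const canvas = document.createElement("canvas");
canvas.width = 800;
canvas.height = 600;
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d")!;

let hue = 0;
function frame(): void {
  // One DOM node, zero DOM mutations per frame, yet this runs ~60 times per second
  // and keeps the CPU/GPU busy for as long as the tab stays visible.
  hue = (hue + 1) % 360;
  ctx.fillStyle = `hsl(${hue}, 80%, 50%)`;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```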

There are many other sources of resource consumption as well, such as ungrouped network requests and memory leaks.
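Both are easy to produce without touching the DOM. The sketch below (the API endpoints and element id are hypothetical) shows ungrouped requests next to a batched alternative, and a listener that keeps a detached subtree alive.

```typescript
// 1. Ungrouped requests: ten separate fetches instead of one batched call
//    keep the network stack (and the radio on mobile) active far longer than necessary.
async function loadItemsOneByOne(ids: number[]): Promise<unknown[]> {
  return Promise.all(ids.map((id) => fetch(`/api/item/${id}`).then((r) => r.json())));
}
async function loadItemsBatched(ids: number[]): Promise<unknown[]> {
  const response = await fetch(`/api/items?ids=${ids.join(",")}`);
  return response.json();
}

// 2. Memory leak: a listener captures a removed element, so the subtree can never be
//    garbage collected and RAM grows even though the visible DOM stays small.
const detached = document.getElementById("old-widget");
window.addEventListener("resize", () => {
  console.log(detached?.clientWidth); // `detached` stays referenced forever
});
detached?.remove();
```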

The DOM should therefore be used with great care as a metric. It is best treated as a software quality indicator of “clean HTML”. Reducing the number of DOM elements and simplifying the DOM structure may be good sobriety practice, but it is not a KPI for sobriety reduction or CO2 calculation.