Category: Digital sobriety

The Top 10 myths of frugal ICT

Reading Time: 5 minutes

I have been working in Green IT for more than 8 years, and lately I have seen several studies and initiatives emerge. This is a very positive sign: it shows there is real momentum to change the impact of ICT. Every action is worth taking in the face of the climate emergency, whether small scale, such as simple awareness-raising, or larger scale, such as optimizing a website with millions of visitors.

However, it’s important to avoid any greenwashing and to understand the real impact of the good practices being promoted (are they really all green?).

Myth 1 – Fast software is frugal software.

False

Fast software is software that displays quickly. That says nothing about its frugality. On the contrary, some practices adopted to achieve a fast display actually work against sobriety, for example loading scripts after the page is displayed: the page shows up quickly, but many processes keep running in the background and consume resources.
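To illustrate, here is a minimal TypeScript sketch of that pattern (the URLs and endpoints are invented): the page paints quickly because the heavy work is postponed until after the load event, but that work still runs and still consumes CPU and network.

```typescript
// Page "displays fast", but heavy work is merely postponed, not removed.
window.addEventListener("load", () => {
  // Inject third-party widgets after first paint: good for perceived speed...
  const script = document.createElement("script");
  script.src = "https://example.com/heavy-widgets.js"; // hypothetical bundle
  document.body.appendChild(script);

  // ...but the deferred code still runs, polls and consumes resources.
  setInterval(() => {
    fetch("/metrics", { method: "POST", body: "ping" }); // steady background traffic
  }, 5_000);
});
```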

Myth 2 – Optimizing the size of requests and the weight of the page makes the software more frugal.

True and false

True, because fewer resources will actually be used on the network and on servers, which means less environmental impact. It is a step in the right direction.

False, because evaluating how frugal a software is cannot rely only on this type of technical metric. Other elements can have an equally important impact. A carousel on a home page, for example, can be quite light in terms of weight and requests (if well optimized) yet still have a strong impact on client-side resource consumption (CPU, graphics rendering…).
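A minimal sketch of why: a carousel like the one below weighs almost nothing on the wire, yet its animation loop runs roughly 60 times per second for as long as the page is open (class names and timing here are illustrative).

```typescript
// A carousel can weigh a few kilobytes yet keep the CPU/GPU busy:
// this loop repaints continuously, even when nobody is looking.
const slides = Array.from(document.querySelectorAll<HTMLElement>(".slide"));
let offset = 0;

function animate(): void {
  offset = (offset + 0.5) % (slides.length * 100);
  slides.forEach((slide, i) => {
    // Continuous style recalculation and compositing on every frame.
    slide.style.transform = `translateX(${i * 100 - offset}%)`;
  });
  requestAnimationFrame(animate); // schedules itself forever
}
requestAnimationFrame(animate);
```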

Myth 3 – Automated checking via tools makes me green

True and false

True, because it is important to measure things: measurement lets us know objectively where we stand, and improve.

False, because the evaluation rests on technical elements only. There is a bias: we only measure what we can automate. This is the criticism that can be made, for example, of Lighthouse (a tool available in Chrome) regarding accessibility: one can build a totally inaccessible site that still scores 100. The same criticism applies to the tools used in ecodesign. For example, the website http://www.ecoindex.fr/ is an interesting tool to start the process; however, its calculation is based on three technical elements: the size of the page, the number of requests, and the size of the DOM. These are important elements of a page’s impact, but several others can matter too: CPU processing by scripts, graphics processing, how heavily the radio cell is solicited… all elements that can create false positives.
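To make the bias concrete, here is a deliberately naive score in the spirit of such tools; the weights and scale are invented and are not the actual EcoIndex formula. A page can score well on these three metrics while its scripts saturate a CPU core.

```typescript
// Hypothetical simplified score: it only sees three technical metrics,
// so CPU-heavy scripts or radio-cell behaviour are invisible to it.
interface PageMetrics {
  weightKb: number; // transferred size of the page
  requests: number; // number of HTTP requests
  domSize: number;  // number of DOM elements
}

function naiveEcoScore(m: PageMetrics): number {
  const penalty = m.weightKb / 100 + m.requests / 5 + m.domSize / 100;
  return Math.max(0, 100 - penalty); // 100 = "light" page
}

// A small, light page scores well even if its JavaScript burns a CPU core:
// a possible false positive.
console.log(naiveEcoScore({ weightKb: 500, requests: 20, domSize: 300 }));
```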

Measurement software will be a useful complement 😉

Myth 4 – My software uses open-source and free code, so I’m green

False

Free software is software in its own right. It suffers from the same obesity as other software, and will therefore potentially consume just as much. On the other hand, free software has a stronger capacity to integrate good efficiency practices. It still remains to implement them, or at least to start evaluating the impact of the solution…

Myth 5 – The impact is mostly in the datacenter, or in the features, or in…

True and false

Every software is different, in its architecture, its use, its implementation, its functions… No serious study can support the generalization that one domain has more impact than another. In some cases the impact is mostly in the datacenter (computation-heavy software, for example), in other cases on the user side (mobile applications, for example). Likewise, some software is obese because of its multiple functionalities, while other software is obese because of bad coding or an overweight external library.

Myth 6 – Ecodesign requires a structured and holistic approach

True and false

True, because it is indeed necessary to involve all the actors in the company (developers, but also Product Owners and business departments) and to have a coherent strategy.

However, starting process and product improvement through small, isolated actions is also very positive. Software heaviness has reached such a state that any isolated positive action is worth taking.

Both approaches are complementary. Holding off on certain practices while waiting for a structured approach (which can be slow to put in place) would be dangerous for the optimization and competitiveness of your software.

Myth 7 – Green coding does not exist, optimization is premature…

False

This argument has existed since the dawn of (software) time. Existing code, legacy code, libraries… avenues for optimization are numerous. My various audits and team coaching engagements have shown me that optimization is possible and that the gains are significant. To believe otherwise would be a mistake. And beyond optimization, learning to code greener is a learning approach useful to all developers.

Myth 8 – My organization is certified green (ISO, responsible ICT, Lucie…), so my product is green.

False

All these certifications do help ensure that you are on the right track to produce more respectful software. Far be it from me to say they aren’t useful. However, it must not be forgotten that these are organization-oriented certifications. In a structured industry (agriculture, a factory…), the company’s deliverables are closely aligned with its processes: certifying an organic (AB) farm ensures that its produce is indeed organic.

In the world of software, however, it is not so simple: the quality of deliverables fluctuates a lot, even with control processes in place. Besides, an organization potentially consists of a multitude of teams that do not share the same practices.

It is therefore necessary to check the quality of the software products themselves, and to do so continuously. This approach complements certification, but it is mandatory; otherwise we risk discrediting the label (or even sliding into greenwashing).

Myth 9 – Optimizing energy is useless; what matters is CO2 equivalent

False

Ecodesign work is mainly based on reducing CO2 equivalent (along with other indicators such as eutrophication…) over the entire life cycle of the ICT service. It is therefore important to take this metric into account; without it, we risk missing some of IT’s impacts. However, in the same spirit as myths 5 to 7, no optimization should be dismissed: we must understand where a given software’s impacts actually lie. Integrating the energy question into teams is nonetheless urgent. In some cases, energy consumption in the use phase is only part of the impact (compared to embodied “gray” energy, for example), but in many cases high energy consumption is a symptom of obesity. Moreover, for software running on mobile devices (mobile applications, IoT), energy consumption has a direct impact on device renewal, through battery wear.
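To make the link concrete, here is a back-of-the-envelope sketch converting use-phase electricity into CO2 equivalent; the grid intensity is an assumed round value, as real intensities vary widely by country and over time.

```typescript
// Rough link between use-phase energy and CO2e.
const GRID_INTENSITY_G_CO2E_PER_KWH = 300; // assumption; varies by grid

function useCO2eGrams(deviceWatts: number, hoursOfUse: number): number {
  const kWh = (deviceWatts * hoursOfUse) / 1000;
  return kWh * GRID_INTENSITY_G_CO2E_PER_KWH;
}

// Use-phase electricity is only part of the story: embodied ("gray")
// emissions from manufacturing the device are not captured here.
console.log(useCO2eGrams(2, 365)); // e.g. a device drawing ~2 W, 1 h/day for a year
```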

Myth 10 – I compensate so I’m green

False

It is possible to offset one’s impact through various programs (financing renewable energy sources, reforestation…). That is a very good action, but it complements an ecodesign process; it does not replace it. The sequence of actions matters: first I optimize what I can, then I offset what remains.

Conclusion

Frugal ICT is simple because it is common sense. Given the diversity of the software world, however, the findings and good practices are not so simple. The good news is that, given how generally bloated software is and how far behind we are on optimization, any action taken will be positive. So don’t worry, start the process; you just need to be aware of a few pitfalls. Be critical, evaluate yourself, measure your software!

The software world is destroying itself … Manifesto for a more sustainable development

Reading Time: 21 minutes

The world of software is in a bad state, and if we do not act, we may regret it. Environment, quality, exclusion… Software Eats The World? Yes, a little too much.

The software world is in a bad state. Well, on the surface, everything is fine. How could a domain carrying so many economic promises for the well-being of humanity go wrong? Asking the question would mean challenging all of that. So everything is fine. We move forward, and we don’t ask ourselves too many questions.

The software world is getting worse. Why? Twenty years of experience in the software world as a developer, researcher and CTO have let me rub shoulders with many fields, and this feeling has grown year after year. I have spent the last 6 years in particular trying to push practices and software quality tools that educate developers about the impact of software on the environment. You have to be severely motivated to try to improve the software world: good practices do not spread as easily as the latest JavaScript framework. The software world is not permeable to improvement, or only superficially, not deeply.

The software world is getting worse. Everything is slow, and it is not heading in the right direction. Some voices are rising; I invite you to read “Software disenchantment”. Everything is unbearably slow, everything is BIG and BLOATED, everything eventually becomes obsolete… The size of websites is exploding: a website now weighs as much as the game Doom. The phenomenon affects not only the web but also IoT and mobile… Did you know? It takes 13% of a CPU, when idle, just to render a blinking cursor…

This is not the message of an old developer tired of constant change and nostalgic for the good old days of the floppy disk… It is rather a call for a deep questioning of the way we see and develop software. We are all responsible for this “non-efficiency” (developers, project managers, salespeople…). Saying that everything is fine would not be reasonable, but saying that everything is going wrong without proposing any improvement would be even less so.

Disclaimer: you will probably jump, cry FUD, troll, contradict… while reading this article. That’s fine, but please, read it all the way through!

We’re getting fat (too much)

Everything grows: the size of applications, the amount of data stored, the size of web pages, the memory of smartphones… Phones now have 2 GB of memory; exchanging a 10 MB photo by email is now common… It might not be an issue if all that software were used, effective and efficient… but this is not the case; I refer you to “Software disenchantment” for the details. It is difficult to say how many people share this feeling of heaviness and slowness, and at the same time everyone has gotten used to it. It’s computer science. Like the bugs: “your salary has not been paid? Arghhh… it must be a computer bug”. IT is slow, and we cannot help it. If we could do anything about it, we would have already solved the problem.

So everyone gets used to the slowness. All is Uniformly Slow Code. We sit on it, and everything is fine. Being efficient today means reaching the user experience that matches this uniform slowness. We only get rid of what is too visibly slow: a page that takes more than 20 seconds to load is too slow, but 3 seconds is fine. 3 seconds? With the multicore processors in our phones and PCs, and data centers all over the world connected by great communication technologies (4G, fiber…), isn’t that a bit strange? Measured against the profusion of resources mobilized for the result, 3 seconds is huge, especially since bits move through our processors on the scale of nanoseconds. So yes, everything is uniformly slow, and that suits everyone (in appearance, at least). Web performance (follow the hashtag #perfmatters) is necessary, but it is unfortunately an area that does not go far enough. Or maybe thinking in this area cannot go further because the software world is not permeable or sensitive enough to these topics.

There are now even practices that consist not in solving the problem but in working around it, and it has become a field in its own right: working on “perceived performance”, or how to use the user’s perception of time to put mechanisms in place rather than optimizing. The field is fascinating from a scientific and human point of view; from the point of view of performance and software efficiency, a little less so. “Let’s find plenty of mechanisms so we don’t have to optimize too much!”

All of this would be acceptable in a world with modest demands on the performance of our applications. The problem is that, in order to absorb this non-performance, we scale: vertically by adding ultra-powerful processors and more memory, horizontally by adding servers. Thanks to virtualization, we have been able to accelerate this arms race! Except that under the bits there is metal, and metal is expensive, and it pollutes.

Yes, it pollutes: it takes a lot of water to make electronic chips, chemicals to extract rare earths, not to mention the round trips around the world… The uniform slowness has a definite cost. We will come back to it later.

We need to return to greater efficiency, to challenge hardware requirements, to redefine what performance is. As long as we are satisfied with this uniform slowness, kept in check only by solutions that stop things from getting even slower (like adding hardware), we will not move forward. Technical debt, a notion now largely assimilated by development teams, is unfortunately not suited to this problem (we will come back to this). What we have is a debt of hardware resources and a mismatch between the user’s need and the technical solution. We are talking here about efficiency, not just performance: efficiency is a story of measuring waste. ISO defines efficiency through the sub-characteristics time behaviour, resource utilization and capacity. Why not push these concepts further?

We are (too) virtual

One of the problems is that software is considered “virtual”. And that is the problem: “virtual” describes what has no effect (“that which is only potential, in a state of mere possibility, as opposed to what is actual”, according to Larousse). Maybe it comes from the early 80s, when the term “virtual” was used to talk about the digital (as opposed to the world of hardware). “Numérique” refers to the use of numbers (the famous 0s and 1s), but apparently that is not enough, and it still evokes the material a little too much. So let’s use the term “digital”! Digital versus numérique is a French debate that may seem silly, but it matters for the issue we are discussing: “digital” hides the material part even more.

But it should not be hidden: digital services are indeed made of code and hardware, of 0s and 1s traveling over real equipment. We cannot program while forgetting that. A bit that stays on the processor and a bit that crosses the earth do not take the same time, nor use the same resources:

Developing Java code for a J2EE server and for an Android phone is definitely not the same thing. Specific structures exist for processing data on Android, yet generic patterns are still used. Developers have lost the link with the hardware, which is unfortunate, because it is exciting (and useful) to know how a processor works. Why has this happened? Abstraction and specialization (we will get to this later). By losing this insight, we lose one of the strengths of development. This link is still strong among hackers and embedded developers, but unfortunately less and less present among other developers.

DevOps practices could restore this lost link. Here too, we often do not go all the way: DevOps usually focuses on managing the deployment of a software solution on a mixed infrastructure (hardware and some software). It would be necessary to go further, for instance by surfacing consumption metrics and discussing execution constraints… rather than scaling just because it is easier.

We can always justify this distance from the hardware: productivity, specialization… but we must not confuse separation with forgetting. Separating trades and specializing, yes; forgetting that there is hardware under the code, no! A first step would be to teach hardware courses in schools: the fact that a school teaches programming does not make serious awareness of the equipment and how it operates unnecessary.

We are (too) abstract

We are too virtual and too far from the hardware because we wanted to abstract ourselves away from it. The multiple layers of abstraction have made it possible to stop worrying about hardware problems and to save time… but at what price? The price of heaviness and of forgetting the hardware, as we have seen, but there is much more. How can one understand the behavior of a system with call stacks more than 200 frames deep?

Some technologies are useful, but they are now used by default everywhere. This is the case, for example, of ORMs, which have become systematic: no thought is given to their relevance at the start of a project. The result: we have added a layer that consumes resources and must be maintained, and developers who are no longer used to writing native queries. That would not be a problem if every developer knew exactly how the abstraction layers work (how does Hibernate work, for example?). Unfortunately, we rely on these frameworks blindly.
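As an illustration of what such a layer can hide, here is a simulated sketch of the classic “N+1 queries” trap; the db object and its query counter are invented for the example and do not represent any real ORM API.

```typescript
// Simulated database: `query` just counts round trips.
let queryCount = 0;
const db = {
  orders: [{ id: 1, customerId: 10 }, { id: 2, customerId: 11 }],
  customers: new Map([[10, "Ada"], [11, "Grace"]]),
  async query<T>(label: string, run: () => T): Promise<T> {
    queryCount++;
    console.log(`SQL #${queryCount}: ${label}`);
    return run();
  },
};

async function namesNaive(): Promise<string[]> {
  // What a lazy-loading ORM often does behind your back:
  const orders = await db.query("SELECT * FROM orders", () => db.orders);
  const names: string[] = [];
  for (const o of orders) {
    // one extra round trip PER ROW
    names.push(await db.query(`SELECT name WHERE id=${o.customerId}`,
      () => db.customers.get(o.customerId)!));
  }
  return names; // 1 + N queries in total
}

// A single JOIN (one query) would return the same data; knowing how the
// abstraction works is what lets you notice the difference.
namesNaive().then((n) => console.log(n, `${queryCount} queries`));
```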

This is very well explained by Joel Spolsky in “The Law of Leaky Abstractions”:

And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder. (…) Ten years ago, we might have imagined that new programming paradigms would have made programming easier by now. Indeed, the abstractions we’ve created over the years do allow us to deal with new orders of complexity in software development that we didn’t have to deal with ten or fifteen years ago (…) The Law of Leaky Abstractions is dragging us down.

We believe (too much) in a miracle solution

The need for abstraction is linked to another flaw: we are forever waiting for miracle tools, the silver bullet that will improve our practices once and for all. The ideal language, the framework that will make us go even faster, the miraculous dependency-management tool… Each new framework arrives with the same promise: to save development time, to be more efficient… And we believe it, and we rush in. We give up frameworks we had invested in, on which we had spent time… and we move to the newest one. This is currently the case with JS frameworks. The history of development is paved with frameworks that were forgotten, unmaintained, abandoned… We are the “champions” of reinventing what already exists. If we kept a framework long enough, we would have the time to master it, optimize it, understand it. But this is not the case. And don’t tell me that if we had not repeatedly reinvented the wheel, we would still have stone wheels… Innovating would mean improving the existing frameworks.

It is the same with package managers: Maven, NPM… In the end, we reach dependency hell. The link with abstraction? Rather than managing dependencies by hand, we add an abstraction layer: the package manager. And the side effect is that we integrate external code we do not control (too) easily. Again, we will come back to this.

With languages, it is the same story. To be clear, I am not advocating that we stay on assembly and C… Take the Android world: for over 10 years, developers have been working on Java tools and frameworks. And then, as if by magic, the community’s new language is Kotlin. Imagine the impact on existing applications (if they have to migrate): tools to recreate, good practices to rediscover… For what gain?

Today the Android team is excited to announce that we are officially adding support for the Kotlin programming language. Kotlin is a brilliantly designed, mature language that we believe will make Android development faster and more fun. (Source)

We will come back later to the “fun” …

Honestly, we see no slowdown in technology renewal cycles; the pace remains frenetic. We will find the Grail one day. The problem, then, is the stacking of technologies: since none of them ever really dies and pieces of each are kept, we develop yet more layers to adapt to and keep maintaining those pieces of code and libraries. The problem is not legacy code itself; it is the glue we develop around it. Indeed, as quoted in the “software disenchantment” article:

@sahrizv :
2014 – #microservices must be adopted to solve all problems related to monoliths.
2016 – We must adopt #docker to solve all problems related to microservices.
2018 – We must adopt #kubernetes to solve all the problems with Docker.

In the end, we spend our time solving internal technical problems, looking for tools to solve the problems we ourselves add, adapting to these new tools, adding overlays (see the previous chapter)… and we have improved neither the intrinsic quality of the software nor our ability to meet the needs it must serve.

We do not learn (enough)

In the end, the frantic pace of change does not allow us to stabilize on a technology. Old developer that I am, I admit the switch from Java to Kotlin for Android discouraged me. It may answer real challenges, but when I think back to the time spent learning, setting up the tools… One must move forward, but not start again from scratch every time. In other fields it is common to learn continually and stay curious, but within a framework of iteration, experimenting in order to improve. This is not the case in programming. Or at least in some areas of programming, because on some technologies developers do keep iterating (.NET, J2EE…). But that, apparently, is not as much fun…

So yes, we do learn: we spend our time on tutorials, getting-started guides, conferences, meetups… only to really experiment with 10% of it, in a side project or a POC that will surely end up as a production project.

Since no solution really dies and new ones keep arriving, we end up with projects juggling a multitude of technologies, along with the skills to match… Then we are surprised that the developer recruitment market is clogged. No wonder: there are plenty of developers, but it is difficult to find a React developer with 5 years of experience who also knows Go. The market is fragmented, like the technologies. That may be good for developers, because it creates scarcity and drives rates up, but it is not good for the project!

To come back to the previous chapter (believing in miracle tools…), we see this in the JavaScript world with “JS fatigue”: the developer has to carve a path through the world of JavaScript and its related tools. That is the price of the multitude of tools. The approach is understandable (see for example a very good explanation of how to manage it). However, this continuous learning of technologies crowds out the learning of transverse domains: accessibility, agility, performance… Indeed, what proves that the tools and languages we choose today will still be there in 4 years? That Rust or Go won’t have replaced them in 2? Nothing points to a trend.

We have (too much) fun and we do not question ourselves (enough)

Unless it is to question one technology in order to promote another. Trolling is common in our world (and, I confess, I indulge in it too). But it only ever pits one technology against another, continuing the infernal cycle of tool and language renewal. A real challenge would be to ask ourselves sincerely: are we going in the right direction? Is what I am doing sustainable? Is it quality? Questioning is not easy, though, because it gets associated either with trolling (rightly or wrongly) or with a backward-looking image. How do you dare criticize a trend associated with technological progress?

Few voices rise against this state of affairs (“Software disenchantment”, “Against software development”…), and that is a shame, because questioning is a healthy practice for a professional field. It allows us to “perform” even better.

We do not question ourselves because we want to have fun. Fun matters: if you get bored at work, you get depressed. But we cannot, on the pretext of wanting fun all the time, keep changing our tools continuously. There is an imbalance between developer experience and user experience. We want fun, but what does it actually bring the user? A “happier” product? No, we are not performers. The same goes for the effort put into reducing build times and other developer comforts. These matter, but the effort must always be balanced: speeding up my build is worthwhile only if I use the time gained to improve the user experience. Otherwise it is just tuning for one’s own pleasure.

We must accept criticism, practice self-criticism, and stop hiding behind excuses. Technical debt is an important concept, but if it becomes an excuse for bad refactoring, and above all for jumping to the latest fashionable technology, we might as well stay in debt. We must also stop the wars between chapels: what is the point of defending one’s language against another? And let’s stop repeating that “premature optimization is the root of all evil”: that saying comes from the computer science of the 1970s, when everything was optimized. Today there is hardly any premature optimization left; the phrase is just an excuse to do nothing and carry on as before.

We are (badly) governed

We do not question the ethics of our field or its sustainability… perhaps because our field has no real code of ethics (unlike doctors or lawyers). But are we truly free as developers if we are incapable of self-criticism? Are we perhaps enslaved to causes set by others? The problem is not simple, but we carry a responsibility in any case. Without a code of ethics, the strongest and most dishonest win. Buzz and user-manipulation practices are ever more widespread: without dark patterns, your product is nothing. The biggest players (GAFA…) did not get where they are for nothing.

Is the solution political? Should we legislate to better govern the software world? We can see with the latest legislative responses to concrete problems (GDPR, cookie and privacy notifications…) that the source of the problem is not being addressed. Maybe because politicians do not really understand the software world.

It would be better if the software world structured itself, adopted a code of ethics, self-regulated… But in the meantime, the rule of the strongest prevails, at the expense of better structure, better quality, real professionalization…

If this structuring does not happen, developers will lose control over what they do. And the profession’s lack of ethics is already criticized from outside. Rachel Coldicutt (@rachelcoldicutt), director of DotEveryone, a UK think tank promoting more responsible technology, encourages non-IT graduates to take an interest in these issues. To follow on from that article, this would be in keeping with the history of computing, a domain born of the military world, where engineers and developers are trained to follow decisions and orders.

A statement that echoes, in particular, the one made by David Banks (@da_banks) in the insolent “The Baffler”. Banks emphasizes how much the world of engineering is linked to authoritarianism. The reason certainly lies in history: the first engineers were of military origin and designed siege weapons, he briefly recalls, and they are still trained to “connect with the decision-making structures of the chain of command”. Unlike doctors or lawyers, engineers have no ethical authorities overseeing them and swear no oath. “That’s why the engineers excel in the outsourcing of reproaches”: if it does not work, the fault lies with everyone else (users, managers…). “We teach them from the beginning that the most moral thing that they can do is build what they are told to build to the best of their abilities, so that the will of the user is carried out accurately and faithfully.”

With this vision in mind, we can ask ourselves: can we integrate good practices and ethics into our field in a structural, internal way?

Development, like any organization, follows (too many) absurd decisions

The software world fits into a traditional organizational system: large groups, outsourcing to digital or IT companies, web agencies… all follow the same IT project management techniques, and all head into the wall. No serious analysis is made of a software’s total cost of ownership (TCO), of its impact on the company, of its profit or its quality… What counts is release speed (time to market), feature overload, immediate productivity. First of all because people outside this world know too little about the technicality of software: it is virtual, therefore simple. But that is not the case. Business schools and other management factories offer no development courses.

We keep trying to estimate IT projects as if they were simple projects, while movements like #NoEstimates propose innovative approaches. Projects keep failing: the CHAOS Report announces that only 30% of projects go well. And in the face of this bad governance, technical teams keep fighting over technologies. Collateral damage: quality, ethics, the environment… and ultimately the user. It would not be so critical if software did not have such a strong impact on the world. Software eats the world… and yes, we are what it eats…

One may question the benevolence of companies: are they interested only in their profit, whatever the price, leaving the software world in these doldrums? The answer may come from sociology. In his book “Les Décisions Absurdes”, Christian Morel explains that individuals can collectively make decisions that run totally counter to their purpose, in particular through self-legitimization of the solution.

Morel illustrates this with the “Kwai River Bridge”, where a hero zealously builds a masterpiece for his enemy before destroying it.

This phenomenon of the “Kwai River Bridge”, where action is self-legitimized and becomes its own ultimate goal, exists in reality more than one might think. Decisions become meaningless because they have no purpose other than the action itself. “It was fun”: this is how business executives express themselves, with humor and relevance, when one of them has built a “bridge of the Kwai River” (…) Action as a goal in itself supposes the existence of abundant resources (…) But when resources are abundant, the organization can bear the cost of human and financial means that run with the sole objective of functioning. And the software world globally provides the means to operate: gigantic fundraising rounds, libraries that allow very quick releases, infinite resources… With this abundance, we build a lot of Bridges of the Kwai River.

In this context, the developer is responsible for the direction he follows amid this abundance.

Development is (too) poorly steered

If these absurd decisions happen, it is not only the developer’s fault but the organization’s. And organization means management (in its various forms). Going back to Morel’s book, he describes a cognitive trap into which managers and technicians often fall. It happened with the Challenger shuttle, launched despite the known problem of a faulty seal: the managers underestimated the risks and the engineers failed to prove them, each side blaming the other for not providing enough scientific evidence. This is often what happens in companies: some developers raise warnings, but management does not take them seriously enough.

The same thing happened in many organizations that wanted to quickly develop universal mobile applications. Here, the miracle solution (there it is again) adopted by decision-makers was the Cordova framework: no need to recruit specialized iOS and Android developers, web code could be reused… The simple calculation (or lack of one) showed only benefits. On the technical side, however, it was clear that native applications were much simpler and more efficient. Five years later, conferences are full of feedback on failed projects of this type being restarted from scratch in native code. The link with Challenger and cognitive traps? Management teams underestimated the risks and the actual cost, and did not take the technical teams’ comments into account. The technical teams, for their part, had not sufficiently substantiated and proven the ins and outs of such a framework.

At the same time, and this brings us back to the earlier causes (silver bullets, having fun…), we need real engineering and real analysis of technologies. Without it, technical teams will always go unheard by management. Tools and benchmarks exist but are still too little known; for example, the Technology Radar, which classifies technologies in terms of adoption.

It is also important that company management stops believing in miracle solutions (back to the “virtual” cause). Costs, TCO (Total Cost of Ownership) and the risks of technology choices really have to be calculated. We keep choosing BPM and low-code solutions that generate code, but their hidden risks and costs are significant. According to ThoughtWorks:

“Low-code platforms use graphical user interfaces and configuration in order to create applications. Unfortunately, low-code environments are promoted with the idea that this means you no longer need skilled development teams. Such suggestions ignore the fact that writing code is just a small part of what needs to happen to create high-quality software—practices such as source control, testing and careful design of solutions are just as important. Although these platforms have their uses, we suggest approaching them with caution, especially when they come with extravagant claims for lower cost and higher productivity.”

We divide (too much) … to rule

This phenomenon of absurd decisions is reinforced by the complex fabric of software development: historically non-digital companies outsource to digital or IT companies, which outsource to freelancers… The sharing of technical responsibility and management becomes ever more complex, and absurd decisions multiply.
But it does not end there. The use of open source can also be seen as a sort of outsourcing, as can the use of frameworks: we become mere passive consumers, conveniently relieved of many problems (which nonetheless impact resources, quality…).

All the easier because the field is exciting, and side projects and time spent on open source outside office hours are common… The pursuit of “fun” and the hours spent end up benefiting organizations more than developers. It becomes difficult to quantify a project’s real cost. And that would not be a problem if the resulting software were top-notch. But it does not improve quality; on the contrary, the extended organization made up of large groups, IT companies, freelancers and communities knows no limit in building those famous bridges of the Kwai River.

The developer is no longer a craftsman of code but a pawn in a system open to criticism from a human point of view. And none of it shows: everything is fine, we are having fun. In appearance only, because some areas of software development go further and make this exploitation much more visible, such as video games, where working hours explode.

A better professionalization, a code of ethics, or anything similar would be useful here. It would make it possible to put safeguards around abuses and (directly or indirectly) questionable practices. But I have never heard of a developers’ guild or any other body that could defend such a code.

We (too) often lose sight of the final goal: the user

And so all this clumsiness (overweight software, poor quality…) ends up with the users. Since we have to release software as quickly as possible, do not try to solve internal inefficiencies, and do not put more resources into quality, we make mediocre software. And since we have so many tools to monitor users and detect what happens on their side, we figure it does not matter. That would be fine if those tools were well used, but the mass of information collected (on top of the bugs users report) is only weakly exploited. Too much information, difficulty pinpointing the real source of the problem… we get lost, and in the end it is the user who suffers. All software is now in perpetual beta testing. Why bother with extra quality, as long as the user doesn’t ask for it? And we come back to the first chapter: software that is uniformly slow… and poor.

Taking a step back, everyone can feel this every day, at the office or at home. Fortunately, we are saved by users’ lack of awareness of the software world: to them it is a genuinely virtual and magical world they have grown used to. We put tools in their hands without any explanatory leaflet. How can one evaluate the quality of a software, its environmental risks, its security problems… without even rudimentary notions of computer science?

21st-century computer science is what agribusiness was to 20th-century consumers. For reasons of productivity, we pushed mediocre solutions based on short-term calculations: ever-faster time to market, ever-rising profit… intensive agriculture, junk food, pesticides… with significant impacts on health and the environment. Consumers now know (more and more) the disastrous consequences of these excesses, and the agri-food industry has to reinvent itself technically, commercially and ethically. When users understand the ins and outs of technical choices, the software industry will have to deal with the same issues. Indeed, the return to common sense and good practices is not a simple thing for agribusiness either. In IT, we are starting to see it with the consequences for user privacy (but we are only at the beginning).

It is important to reintroduce the user into software design thinking (and not just through UX and marketing workshops…). We need to rethink every aspect of software: project management, the impacts of the software, quality… This is the goal of some movements (software craftsmanship, software eco-design, accessibility…), but their practices are still far too confidential. Whose fault is it? We come back to the causes of the problem: we please ourselves on one side (development) and seek only profit on the other (management). Convenient for building bridges of the Kwai River… and where are the users (us, actually)?

We kill our industry (and more)

We are going the wrong way. The computer industry already made mistakes in the 1970s, with non-negligible impacts. The exclusion of women from IT is one of them. Not only has it been fatal for parts of the industry, but one may also ask how relevant answers can be designed when half the population is absent from IT and representativeness is so low. The way back is now hard to find…

But the impact of the IT world does not stop there. The source and model of a large part of computing come from Silicon Valley. Set aside Silicon Valley’s winners, and the local population faces rising prices, social downgrading, poverty… Mary Beth Meehan’s book puts this into images:

“The flight to a virtual world, whose net utility is still difficult to gauge, coincides with the break-up of local communities and the difficulty of talking to each other. Nobody can say whether Silicon Valley prefigures, in miniature, the world that is coming; not even Mary, who nevertheless closes her work on the word ‘dystopia’.”

In its race for technical progress, the software world is also building up its… environmental debt.

Examples abound, but the voices are still too weak. Maybe we will find the silver bullet; maybe the benefits of software will erase its wrongs… Nothing suggests it for now, quite the contrary, because it is difficult to criticize the software world. As Mary Beth Meehan says:

“My work could just as easily be swept aside or seen as leftist propaganda. I would like to think that by showing what we have decided to hide, we have achieved something, but I am not very confident. I do not think people who disagree with us at the outset could change their minds.”

On the other hand, if the voices multiply, and if they come from people who know software (developers, architects, testers…), the system can change. The developer is neither craftsman nor hero: he is just a cog in a machine that has lost its meaning. So, it is time to get moving…

TechForGood or GoodForTech ?

Reading Time: 3 minutes

There was much talk before and after the Viva Technology 2018 event about the #TechForGood concept. The idea? Encourage the « tech giants » (and the less-giant ones) to contribute, through their technological solutions, to the achievement of societal or environmental progress.

Continue reading “TechForGood or GoodForTech ?”

Apple planned obsolescence explained (for dummies and others)

Reading Time: 2 minutes

At the end of 2017, Apple suffered a wave of bad buzz, accused of intentionally slowing down older iPhones. This fed the whole discussion on planned obsolescence, a debate very much black or white: mean manufacturer versus sweet consumer. Or even the reverse (which surprises me more): obsolescence dismissed as a concept invented by NGOs.

Let’s start from the beginning:

Our phones’ batteries are now mainly based on lithium-ion technology. A battery’s chemistry degrades with the number of charge/discharge cycles: after 500 cycles, it retains only 80% of its capacity (though the phone’s OS rescales the level so that it still displays a “100%” charge). This means that if you have a 3000 mAh battery, after 500 cycles you really only have 2400 mAh.
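In code, the fade described above could be sketched like this (a linear model, which is a simplification of real battery chemistry):

```typescript
// Rough model of lithium-ion capacity fade: ~80% of initial capacity
// left after ~500 full charge/discharge cycles, assumed linear.
function remainingCapacityMah(initialMah: number, cycles: number): number {
  const fadePerCycle = 0.2 / 500; // 20% lost over 500 cycles (assumption)
  return initialMah * Math.max(0, 1 - fadePerCycle * cycles);
}

console.log(remainingCapacityMah(3000, 500)); // ≈ 2400 mAh, as in the text
```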

Battery ageing usually goes hand in hand with a loss of power, especially in handling peak loads. You may have encountered this: your phone or PC is at 10% battery and all of a sudden the level drops, and, as it usually happens at a low battery level, the phone shuts down without warning. This is what Apple describes on its blog.

To limit this phenomenon, Apple reduces peak loads by capping the CPU frequency, which makes peaks less frequent. However, a phone has other big consumers (GPS, radio cell…), and we may even wonder whether Apple slows down other components too.

But first, let’s go back to this whole cycle story. Is it inevitable? The number of cycles is directly tied to the phone’s consumption level, which itself depends on a few things:

  • Hardware consumption
  • OS consumption
  • Your usage (calls, video, etc.)
  • Applications consumption

Manufacturers usually make efforts on the first two points. Usage is yours to manage, though very little guidance is communicated about it. As for application consumption, it is not inevitable (reducing it is precisely GREENSPECTOR’s goal). The sketch below shows how consumption adds up into cycles.
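A rough sketch of how consumption turns into cycles; the figures are assumptions for illustration:

```typescript
// Every mAh drawn by the hardware, the OS, your usage and the applications
// adds up to charge cycles, which age the battery.
function cyclesPerYear(dailyDrawMah: number, capacityMah: number): number {
  return (dailyDrawMah / capacityMah) * 365;
}

// A phone consuming 1.5 full charges per day reaches the ~500-cycle mark
// (80% capacity) in under a year...
console.log(cyclesPerYear(4500, 3000)); // ≈ 547 cycles
// ...while trimming consumption by a third buys several extra months.
console.log(cyclesPerYear(3000, 3000)); // = 365 cycles
```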

Once you have that in mind, is Apple responsible for obsolescence? If so, is it planned obsolescence? First off, the actual cause of obsolescence is distributed: is Apple responsible for the overconsumption of certain applications? Some manufacturers try to tackle the issue by pointing the finger at power-hungry applications (https://greenspector.com/en/articles/2017-12-12-why-you-should-care-about-your-application-impact-on-battery-life/). Apple doesn’t show much zeal on this point. Application designers: 0, Apple: 0.
When it comes to usage, Apple provides the strict minimum in terms of communication. After all, it is far more hype to communicate on the launch of an animated emoji, and it kind of makes sense: users love it. It is also more interesting for the media: the endless-queue stories at every new release never get old. Apple: 0, Media: 0, Users: 0.

Overall, 0 for everyone, so a shared obsolescence! However, the most debatable thing about Apple’s handling of battery ageing (which is real) isn’t the slowdown itself; it is the lack of communication. Users are smart enough to understand a message like “Your battery is getting old, we recommend slowing down your phone: Yes / No (I’d rather change the battery)”. But again, this doesn’t meet the “hype” requirements, and the product would look “too technical” (which it is). Since Apple sends no alert about the slowdown, users cannot take responsibility and act either way: they do not have the facts needed to assess the situation. It is most likely this missing information that will push them toward the renewal option. And in that case, yes, Apple is practicing planned obsolescence.

End of life: software-induced obsolescence and wastes?

Reading Time: 3 minutes

The end of life stage of a software can be tough to apprehend and manage; this is why today we are focusing exclusively on this stage of the life cycle, in an article that completes our software LCA series.

Find the complete Methodological Guide to software LCA as well as a use case on assessment of an application’s environmental impacts.

Software hides to die (end of life & obsolescence)

The end of life stage of a software is especially hard to apprehend in a life cycle assessment, for the two following reasons:

1) Obsolescence as such doesn’t actually exist for software. Theoretically, a software is endlessly usable, as long as hardware exists to run it. Software doesn’t wear out and doesn’t break down because it has become too old. As a consequence, we cannot predetermine a software’s lifetime the way we can for a physical product whose components degrade over time. The only explanations for software obsolescence are external to the software itself:

  • user’s choice to delete it,

  • maintenance policy of a version,

  • obsolescence of hardware supporting the software,

  • obsolescence of other software interacting with the software we analyze (operating system, database…),

  • disappearance of the user’s need

  • etc.

2) A software doesn’t seem to generate any physical waste at its end of life. When we decide to stop using it (or can no longer use it), it is simply deleted from the terminal on which it is installed, without generating any physical waste. In the worst case, leftover files uselessly occupy disk space. But in reality, looking closer, we do find physical waste: waste from the design and development stage (CD, packaging, paper user guide if the software was boxed), accounted for in other stages of the analysis, and more specifically hardware-related waste (computers, smartphones, tablets, network equipment…) arising from the hardware required to make the software work.

But the question is: how does software contribute to generating waste? Quite simply through its direct or indirect impact on hardware obsolescence:

Replacement or software update requiring new equipment:

For the same user need, if the software goes through a major update or is replaced by another software, and if this operation requires additional physical resources (more powerful machines, different technologies), then the older hardware can be considered software-induced waste. This is a phenomenon of « hardware obsolescence » caused by software renewal: the software is responsible for the waste. A mature software (with no functional evolution) has no reason to generate waste… but what software doesn’t evolve, right? The resource requirements of new software versions will need to be watched.

Faulty uninstallation:

An uninstallation process that is badly designed, or badly executed, can contribute to obsolescence as well: registry keys are left behind, temporary files too; and if the software modified the system, a residual footprint remains and weighs the system down.

Side effects of uninstallation on other software:

You also need to pay attention to other software, as an uninstallation can make them obsolete: dependencies can exist, which can lead to a cascade of obsolescence (I update my software => it requires a newer version of the database manager => this manager requires an OS update => the newer OS no longer has the printer driver => I have to change printers…).

As we can see, the end of life stage of a software can be tricky to apprehend. Nonetheless, it is useful to conduct a quick first estimation to determine its relative weight compared to the other stages.

This series on software LCA is now complete; you can find the previously published articles on the same subject on the blog:

You can also have access to the Methodological Guide to software LCA, downloadable for free on our website, as well as a use case on environmental impacts of an application.

Software eco-design: what is the life cycle of a software?

Reading Time: 4 minutes

The goal of this article is to present the software life cycle. For each issue that life cycle assessment raises and that we identify here, we will indicate the approach we believe most appropriate, particularly for evaluating environmental impacts.

Find the complete Methodological Guide to software LCA as well as a use case on assessment of an application’s environmental impacts.

Most manufactured goods’ life cycles, analyzed with an LCA (Life Cycle Assessment), can be considered to consist of the following six stages:

  • Product development
  • Raw material extraction
  • Production and packaging process
  • Logistics and distribution process
  • Product use
  • End of life (disassembling, transportation, sorting, recycling, waste).

Nonetheless, if this life cycle makes sense for a standard tangible product, it is not really suited to software. Indeed, as an « intangible good », software does not directly require any raw material extraction. Its production phase does not work like a manufacturing process repeated N times to produce N copies of a product: it should rather be seen as a one-off stage creating a version of the software that is theoretically reproducible and reusable endlessly.

Upstream transportation and distribution

Regarding upstream transportation (logistics), if the software is made of modules developed at other sites, you should account, as far as possible, for the transfer from those sites to the site where the modules are aggregated. As a first approximation, these impacts can be considered negligible, as they are likely to represent less than 5% of the total impact.

If distribution to the end user is done by download over the internet, the environmental impact of this download should be taken into account. If distribution uses a tangible medium (DVD, USB key…), the production and transportation of these media should be included in the calculation as well.

Software installation can be attached to the use phase. Maintenance can be treated as production overcosts. A software’s end of life seems non-existent, or at least without impact; we will see later how wrong that assumption is. We will have to include the program removal process and the data destruction or retrieval associated with uninstallation.
As stated in Green Patterns, the software eco-design reference guide written by Green Code Lab, we can simplify the software life cycle down to 4 stages: production, distribution to the end user, actual use, and end of life/reuse/recycling.

Production

The design and development process is treated as a single stage that produces the software. This phase covers the whole software creation process:

  • needs analysis,
  • design,
  • programming,
  • testing,
  • stabilization,
  • deployment.

Resources associated with corrective maintenance (bug fixes) and functional enrichments are to be included in this stage.
Software is often composed of elements such as frameworks, libraries, etc. In that case, we can consider that the production of these components has a negligible impact, given the number of copies (reuses) made of them.

Distribution to end-user

Several scenarios are possible; we will briefly present three of them.

  • Downloading: software and documentation are distributed electronically. The perimeter must include the issuer (download server) and the recipient (the end user’s computer), as well as the infrastructure used to transfer the files (networks, routers, etc.), by allocating a share of hardware manufacturing and of the energy needed for the download, in proportion to the resources used.
  • Physical shipment: software and documentation are packaged and sent by mail, so the media (CD-ROM, DVD, USB key, documentation) must be taken into account, along with the associated packaging and mailing services.
  • In-store or mail-order license plus download: the user gets the license and the user manual in a local shop or by mail, and downloads the software. The packaging step (manufacturing and transportation) has to be counted, as does the download. In this particular case, the impact of the user’s travel can be rather high and outweigh the other impacts: previous LCAs conducted by the Orange Group on terminals, mobiles, modems and CD-ROMs showed that customers’ trips vary widely and can be very impactful, particularly when made by car (several kilograms of CO2).

Actual use

Use of the software by the end user begins with its installation on their hardware (initial operation), for instance following the download (distribution), and covers the whole use stage of the software on the user’s own hardware.

Perimeter includes:

  • Hardware needed or required to use the software. In this case, we consider the portion of:
    • hardware manufacturing (user’s equipment, network access, server access),
    • the energy used when the hardware is on (user’s equipment and potentially network access and server access), which could automatically integrate the consumption of required software;
    • required software integrating its own resource consumption (OS, virtual machines…). We can isolate the resource consumption of this mandatory software by establishing a baseline value called “idle”, which is the resource consumption of the hardware and its prerequisites before the analyzed software runs; this value can be split into as many values as needed, for instance to isolate the OS from the browser;
  • The software being assessed, including its own power consumption:
    • the data needed to use the software, or created by it, stored on the application’s different resources;
    • the power consumption associated with this data is integrated by default into the equipment.

For example, if we take a Web page, the hardware and software requirements to display the page are: a computer/tablet/smartphone, an OS (Android, Windows, iOS…), a browser (Firefox, Chrome, Edge, Safari…) and potential plugins.

End of life / Reutilization / Recycling

We assume that, at end of life, a software is erased or uninstalled on both the user side and the editor side. Two things must be taken into account at this stage: the end of life of the supporting hardware and that of software-generated data.

  • Supporting hardware end of life: here we face the classic end of life issue of electronic hardware, considered complex and polluting, and thus classified as WEEE (Waste Electrical and Electronic Equipment).
  • Data end of life: the software can be uninstalled properly, following a procedure that deletes all configuration files from the client’s terminal. In this phase, you should also consider the software-generated data created, willingly or not, by the user. Several situations can arise:
    • the user does not wish to retrieve the data they created;
    • the user wants their data back to use it with a similar tool, and a conversion process exists, planned into the new tool during its design and development;
    • the tool does not allow data retrieval and conversion for a new use, in which case the conversion impact for the user must be estimated in this end of life stage.

To finish off this series on software LCA, the next blog article will be about planned obsolescence of software. Indeed, as we saw, anticipating a software’s end of life is not an easy task, which is why we are dedicating a whole article to the topic.

Discover the whole Methodological Guide to software LCA, downloadable for free, as well as a use case on environmental impacts of an application.

Software eco-design: software products features

Reading Time: 4 minutes

This article aims at presenting features specific to software during a life cycle assessment (LCA). For each issue raised by these features, we will explain what approach we recommend to assess environmental impacts.

Find the complete Methodological Guide to software LCA as well as a use case on assessment of an application’s environmental impacts.

Software: tangible or intangible?

Software is a very special type of good:

• It doesn’t produce any direct tangible waste.

• It isn’t connected directly to power supply, hence isn’t seen as « consuming ».

• However, it does have an environmental impact represented by the consumption of resources and energy, due to hardware needs for its development and usage.

The goal of a LCA is to evaluate environmental impacts of manufactured goods, services and processes. But here, the question is: which category does software belong to?

As an initial reaction, it seems obvious that software has similarities with tangible goods, like the ones produced by the traditional industry, as they are materialized by a set of computer data (source code / executable code) that we can trade, own and use to answer a specific need.

However, it is important to distinguish the storage medium and the physical interaction interfaces from the software itself: software is simply a « state » of the storage medium (made of a unique and well-defined sequence of 0s and 1s), a « state » of the network moving data around, a set of « states » of the screens displaying the software’s graphical representation, etc. So, should we consider software more as an intangible good?

To answer these questions, it is key to distinguish the software itself from the service it offers. This way, we can consider software as an intangible good offering one or more specific services (features or content). As an intangible product, its environmental impacts will result from the consumption of resources (human, physical, tangible…) needed for the different phases of its life cycle: manufacturing/development, operating phase, distribution, decline.

Should we isolate software from its operating environment?

It is rather obvious that software doesn’t function by itself, but always in an ecosystem of software it depends on, starting with the OS (operating system), or with which it communicates and interacts. This makes it pretty tough to measure the impacts generated solely by the studied software during the usage phase.

A software’s impact never comes without the hardware and OS it runs on: during an LCA, it isn’t possible to correctly isolate the environmental impacts linked to the OS or the hardware. However, these impacts can be retrieved thanks to comparative LCAs, that is, by comparing the LCAs of two very specific configurations. Let’s illustrate with an example: Software A on Hardware X with OS1 versus Software A on Hardware X with OS2. Conducting sensitivity analyses would likewise allow to assess impact deltas linked to different hardware.
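As an illustration of this comparative reading, here is a small sketch with invented impact values that computes the delta between the two configurations for two common impact categories (the categories and figures are ours, not results from an actual study):

```typescript
// Invented LCA results per configuration, for two impact categories:
// GWP (global warming potential, kg CO2 eq) and primary energy (MJ).
type ImpactResult = { gwpKgCo2e: number; primaryEnergyMJ: number };

const softwareA_onOS1: ImpactResult = { gwpKgCo2e: 42.0, primaryEnergyMJ: 610 };
const softwareA_onOS2: ImpactResult = { gwpKgCo2e: 38.5, primaryEnergyMJ: 575 };

// Only the delta between the two configurations is meaningful here:
// it approximates the impact attributable to the OS change.
const delta: ImpactResult = {
  gwpKgCo2e: softwareA_onOS1.gwpKgCo2e - softwareA_onOS2.gwpKgCo2e,
  primaryEnergyMJ: softwareA_onOS1.primaryEnergyMJ - softwareA_onOS2.primaryEnergyMJ,
};

console.log(`OS1 vs OS2: +${delta.gwpKgCo2e} kg CO2 eq, +${delta.primaryEnergyMJ} MJ`);
```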

IT equipment doesn’t necessarily work only for the studied software. Most of the time, other applications and software are running at the same time on the same equipment and are thus consuming resources. As a consequence, the power consumed by the equipment cannot be attributed to the studied software only. In order to assign a software the energy it actually consumes, the strategy implemented as part of the Web Energy Archive research project was to subtract the energy consumption induced by the OS and specific services such as the antivirus (known as consumption in idle mode) from the whole consumption of the equipment.

Software: what perimeter to consider?

One of the main issues we encounter when we think of the environmental impact evaluation of a software is that it evolves quite a lot from one version to another (corrections, features, etc.), it can have a modular architecture, and it can even run simultaneously on different equipment.

Software keeps changing.

Software breaks down into a variety of versions and sub-versions with different features. We may be tempted to say this doesn’t lead to any major issue, as versions are spaced out in time… but that is rarely the case. Once the official release of a well-identified version is available, software can quickly become the subject of corrective patches or complementary modules, which may be very numerous and frequent; this has been the trend in the last few years.

It is important to differentiate minor evolutions of a software from major ones:

Major evolutions carry new features, or even a complete restructuring of the application.

Minor evolutions mainly carry bug fixes or the addition of minor features.

As talking about a « finished » version of a software is tricky, we suggest limiting the study to the « latest stable version that is the most used ». No matter what version you study, it will have to be mentioned explicitly in the study assumptions. The impact of corrective and/or operational versions, whether minor or major, will be taken into account only through a sensitivity analysis. This means we model the impact of a bug fix or feature evolution by adding resources (HR, paper consumption, hardware…) to the manufacturing/development phase or to a new specific phase (maintenance).

Often, software is modular.

Software itself can be broken down into different modules we choose whether or not to install, or it can offer the possibility to install plugins and add-ons (as is the case for most internet browsers). We cannot model this concept of modularity per se with an LCA, for a simple reason: it would be tough, almost impossible, to identify the specific resources needed for the development of each and every module. You’ll have to consider the most standardized configuration possible; you’ll then be able to run sensitivity analyses in order to assess the impacts of resources needed to develop specific modules (HR, hardware…).

To continue this series on software LCA, the next blog article will cover the life cycle of software. As a matter of fact, the life cycle of a basic tangible product is rather simple to figure out; that isn’t the case for software.

Discover the whole Methodological Guide to software LCA, downloadable for free, as well as a use case on environmental impacts of an application.

Software eco-design: why conduct a software Life Cycle Assessment (LCA)?

Reading Time: 5 minutes

Eco-design consists in taking into account environmental and sanitary impacts during the conception or improvement phases of a product or service. It is increasingly perceived as a value creation process, in all kinds of businesses and areas. This phenomenon is growing as companies become more sensitive to their share of responsibility for the future of the planet and of the next generations. The other reason is that firms realize the numerous benefits they can get out of such a process.

Find the complete Methodological Guide to software LCA as well as a use case on assessment of an application’s environmental impacts.

Why conduct a life cycle assessment of software?

There is one domain where eco-design is still in its introduction phase: the software world, in which most methods and best practices remain to be written. Plus, just like in any other economic area, the benefits perceived by the different actors of the digital world are numerous and very interesting:

Cost reduction

By trying to reduce the resources or raw materials needed to produce a good, eco-design also decreases manufacturing costs. This principle is applicable to software as well. Indeed, in the software production phase, lowering the number of functionalities to develop, the number of workstations to deploy, the quantity of printouts to generate or the energy needed for the software to function reduces the pollution generated by the activity as well as software manufacturing costs.

Anticipation of environmental rules

More and more norms are imposed on companies to make products, and more generally the economy, more virtuous environment-wise. For instance, think of the rules related to Electrical and Electronic Equipment (EEE): RoHS, WEEE, REACH or ErP, all aiming at making products less polluting. We could also mention current or future government attempts to integrate into our economy the costs of environmental deterioration, which as of today aren’t borne by companies (negative externalities): CO2 emission rights, carbon tax, etc. Now that these rules are implemented and others are around the corner, it is safe to say that companies which have already considered the eco-design issue are a step ahead and have a true competitive advantage over other firms.

Product differentiation

« Eco-designing » also means developing a better-quality product that is at the same time more resistant, more durable and more frugal for the user, as these benefits go hand in hand with reducing the product’s environmental impact and/or extending its active life. The user gets the most out of it. For example, power consumption is an actual issue for a datacenter manager, who would greatly appreciate a less energy-consuming software, especially in an area where « Cloud operators » keep appearing on the market and keep optimizing their resource usage. Likewise, battery life is a key stake for smartphone and tablet manufacturers and users, so the battery consumption a software generates is a metric to take into account.

Innovation factor

The French Ministry of Ecology, Environment and Sustainable Development declares on its website (translated into English):
« Eco-design is a spur for innovation, both at the product function level and at the different steps of its life cycle. Having a fresh look to optimize consumption (materials and energy) and to reduce pollution can sometimes lead to brand new ideas for a product’s components, the way it works or the technologies it uses ». This is true for software, but also for any other type of product.

Company’s image

We are in a time when consumers are more and more attentive to the corporate social responsibility (CSR) efforts of companies, and being actively engaged in applying software eco-design principles certainly benefits a company’s image and prestige, with positive financial impacts.

After making these observations, a group of « Green IT » experts founded the Green Code Lab, whose goal is to promote software eco-design and offer tools and methods to facilitate its implementation. As part of the collaboration between Orange and GREENSPECTOR, winner of a call for projects on software eco-design launched by ADEME, both companies brought their expertise together to continue this work, which is the subject of the Methodological Guide to software LCA. We offer here a methodology to conduct a software Life Cycle Assessment (LCA), with the objective of defining a methodology that can be widely diffused in order to support future requests for software impact evaluation.

Indeed, LCA is a central tool and a key element in software eco-design. It is a standardized methodology (ISO 14040 and ISO 14044, among others) which lets you assess the environmental impacts of manufactured goods, services and processes in a very complete way. Examining the pollution generated at every single stage of the product life cycle (conception, production, usage and decline) ensures none of it is forgotten and shows which stage pollutes the most (the one you should focus on first). This effort will vary depending on the company’s decisions, choices and strategic constraints. Having an overall vision of all stages also allows you to make sure that a solution lowering the environmental impact at one stage will not generate more pollution at another stage of the product life cycle (avoiding pollution and/or impact transfers).

As a consequence, the purpose of the Methodological Guide to software LCA is to offer a methodology to conduct a software LCA. Defining a common methodology for this category of products is justified by the fact that software, which is often wrongly considered intangible, holds specific features that differ from « average tangible » products. This intangibility raises questions on the best way to conduct such an analysis on software.

Let’s point out that the social aspect, one of the three pillars of sustainability and one to which the Green Code Lab pays particular attention, isn’t discussed directly in this document (besides the indirect sanitary impacts). However, social LCA methodologies do exist, and what is mentioned here is applicable and transposable to any social and societal impact analysis.

What are the goals of a software LCA?

As stated in more detail in the Methodological Guide to software LCA, the first step in a software LCA is the definition of the goal and field of study. This step is essential as it determines how the next steps of the study are organized, and even the results of the study themselves. That is the reason why LCAs are said to be « goal dependent ». When it comes to software, we can identify several goals:

  • Studying the environmental impacts of a given software (already developed): consumption of non-renewable resources and energy, pollutant emissions (chemical or particulate) in water, air and soil.
  • Determining the most impactful stages in a software life cycle: production/development, usage, transportation and decline. This particular type of study can also be narrowed down to software categories (email clients, word processors, CMS, web pages…).
  • Identifying improvement opportunities for future products, as well as opportunities to reduce impacts on the environment. This aim particularly concerns editors and software creators that are mindful of developing a product with a better environmental quality.
  • Comparing the environmental impacts of several products or software solutions in order to pick the one with the lowest environmental impact. Users (Information System Departments, individuals, etc.) or developers/integrators facing technological choices can therefore use this tool. In the context of a compared LCA (evolution of a software or new product), only the phases that changed between the two versions of the product/service will be calculated. But be careful: comparing two LCAs can be tricky. To be trustworthy, the comparison should be executed with the same software, at the same date, with the same cut-off rules and, if possible, by the same person.

Just like any other product or service LCA, in order to be published, a software LCA must be the subject of an independent critical review. Find out about the specificities and features of software products in the next article, coming (very) soon on our blog: stay tuned!

Discover the whole Methodological Guide to software LCA, downloadable for free, as well as a use case on environmental impacts of an application.

Ethical and responsible Developer’s Week

Reading Time: 4 minutes

Software is everywhere. Because yes, software directly impacts our daily life: uberization, digitalization… but let’s stop the “buzzword-ization” right here. We, developers, are the architects of a virtual world serving real life. Our work has an impact on companies and on life. If we accept to be credited for the numerous benefits realized, then let’s be honest and also recognize the drawbacks.

Social exclusion, diverse impacts on the environment, digital gap… all of these are real effects caused by the software we produce.

But do we really have a choice when facing clients, demand and the typical constraints such as cost and deadlines? Can we code any differently?
Well, the answer is yes! And this is the choice made by numerous companies and individual developers: volunteering projects such as Code for America, eco-design of public software… Being an ethical and green developer is possible, but how can we do so in a very practical way?

Monday: (Re)think software impact

Monday is usually planning poker day: we discuss past events and upcoming tasks. A sustainable software is above all a software friendly to the environment and to people. The beginning of the week is the perfect time to rethink the functionalities you are about to develop. Are they all useful and necessary? Will your ergonomic choices exclude part of the population? Are you going to integrate elements going against users’ interests (tracking…)? It is the right time to discuss this with your client or the product owner. And don’t you worry: ethics and responsibility are pretty contagious.

Tuesday: Switch to « slow connection » mode

Nope, sorry, you cannot stay in your ivory tower with a fiber and 4G connection. Many users don’t have access to 4G coverage (rural areas, developing countries…). Might your application or website be completely unusable over a 2G connection? It is super easy to check: switch your phone to 2G and browse through your website. Spend the day like this and you may develop empathy for the nomad user located in Northern Montana.
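If you’d rather automate the check than fiddle with your phone settings, a throttled browser session works too. Here is a minimal sketch using Puppeteer, assuming Node; the latency and throughput figures are our rough 2G approximations, not an official profile:

```typescript
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Rough 2G-like conditions: throughput in bytes/second, latency in ms.
  await page.emulateNetworkConditions({
    download: (50 * 1024) / 8, // ~50 kbit/s
    upload: (20 * 1024) / 8,   // ~20 kbit/s
    latency: 800,              // round-trip delay
  });

  const start = Date.now();
  await page.goto('https://example.org', { waitUntil: 'load', timeout: 120_000 });
  console.log(`Loaded in ${Date.now() - start} ms under 2G-like conditions`);

  await browser.close();
})();
```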

Wednesday: Switch to low-tech

Happy to be back on a good connection? But did you realize your users don’t necessarily own the latest up-to-date smartphones, like a Galaxy S28 or an iPhone 15? You develop on a killer platform and emulator so, of course, it’s fast. Here’s another mini challenge: let’s switch to “low tech” mode for the day! Choose a smartphone that has less than 2 CPUs, borrow the intern’s PC to browse through your site… Chrome’s development tools are pretty helpful too: you can emulate a visit from a single CPU, as sketched below. If you then lighten your app, not only will you gain more potential users, you will also avoid feeding others’ urge to replace their hardware “just because” their equipment is too slow. Yup, obsolescence isn’t (only) created by manufacturers.
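Chrome’s developer tools expose the same CPU throttling programmatically. A minimal sketch, again with Puppeteer; the 4x factor is our arbitrary pick for “low tech” mode:

```typescript
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // A factor of 4 makes the page's scripts run roughly 4 times slower,
  // approximating a low-end device on a developer machine.
  await page.emulateCPUThrottling(4);

  const start = Date.now();
  await page.goto('https://example.org', { waitUntil: 'load' });
  console.log(`Loaded in ${Date.now() - start} ms with 4x CPU throttling`);

  await browser.close();
})();
```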

Thursday: Lose your senses

Chances are you are a super-developer with all 5 senses working in full force; so imagine losing one, or more, of those precious senses. Use a screen reader, close your eyes and listen to your site… Not easy, huh? Accessibility is very important: if you don’t pay attention to it, you end up excluding part of the population. At the end of the day, you will probably want to implement a few additional accessibility rules. Here’s a useful reference: Accessiweb.
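Manual testing with a screen reader remains irreplaceable, but you can catch some low-hanging fruit automatically. A minimal sketch assuming the @axe-core/puppeteer package, which checks WCAG-derived rules (not the Accessiweb reference itself):

```typescript
import puppeteer from 'puppeteer';
import { AxePuppeteer } from '@axe-core/puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.org');

  // Run axe-core's accessibility rules against the loaded page.
  const results = await new AxePuppeteer(page).analyze();
  for (const v of results.violations) {
    console.log(`${v.id} [${v.impact}]: ${v.help} (${v.nodes.length} element(s))`);
  }

  await browser.close();
})();
```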

Friday: Don’t send to prod directly, measure energy consumption first

First off, never send to prod on a Friday (if you’re ever in doubt, check here). So you might as well just do some double-checking. By now, your actions have probably reduced your application’s overall impact on the environment. In order to be sure, and to avoid future mistakes, you need to M.E.A.S.U.R.E. Yes, measure! Maybe you think resource consumption isn’t a real issue and it’ll be just fine “‘cause you were careful”… well, guess what? Intuition isn’t enough in this case. As for your users, know that they do watch their battery percentage and data consumption very closely. And if, like any good performance addict, you focus only on display speed, you might miss an energy reduction that could easily be achieved… You could even increase consumption which, really, isn’t a good idea. So: 1) Measure, then 2) Act.
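What “measure, then act” can look like in practice: a minimal CI gate sketch, assuming your measurement tool (whichever it is) exports the scenario’s energy to an environment variable; the variable name and the budget value are invented for the example:

```typescript
// Hypothetical CI gate: fail the pipeline when the measured energy of the
// reference test scenario exceeds the budget agreed for the release.
const budgetMWh = 120; // invented budget, in milliwatt-hours per scenario
const measuredMWh = Number(process.env.SCENARIO_ENERGY_MWH); // exported by your tool

if (!Number.isFinite(measuredMWh)) {
  console.error('No measurement found: refusing to ship blind.');
  process.exit(1);
}
if (measuredMWh > budgetMWh) {
  console.error(`Energy regression: ${measuredMWh} mWh > ${budgetMWh} mWh budget`);
  process.exit(1);
}
console.log(`Within budget: ${measuredMWh} mWh <= ${budgetMWh} mWh`);
```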

Saturday: Git pull request – the good word

Coding is life, so on weekends you work on open source side projects. Why not try to discuss with the rest of the community the improvements you learned about this week? Open source is a favorable environment for these best practices.

Sunday: Be responsible – and a proud one!

Usually, at typical family dinners, you who “work in IT” are asked to solve Uncle Bob’s printer issues. Because, no, they still don’t get what you do for a job. In their defense though, when you tried to explain to Aunt Lucy how awesome the newest Angular 2 features were, you kinda ruined the vibe.

But this time it’s different, we promise. The week that just went by changed something. Today, you can proudly tell what you did, what it changed, how beneficial it is and how good it will be for the environment and for society. This time, I can assure you your folks’ faces will light up, and you might find in their approving looks the appearance of a true respect for and understanding of your job as a developer, which you now proudly practice and embrace. The week after will be different though!

Software eco-design: towards a Happy Sobriety in the Digital world

Reading Time: 6 minutes

Digital is booming! We consume more and more services and information in digital formats, whenever and wherever we feel like it. These services and contents keep growing in number and in size. This omnipresence leads to a sharp increase in resource consumption in datacenters; however, the effects are even more significant across all deployed products such as laptops, tablets, smartphones, internet boxes and connected objects. Dresden University estimated that, by 2030, the Internet as a whole would consume as much energy as the whole of humanity did in 2008!

The most striking digital evolutions are both the amount of data we produce and store every single minute, and the hardware miniaturization allowing easy access to information and services. To achieve this, we constantly need more intelligence embedded in hardware that keeps getting smaller. Such changes are only doable through new optimizations: components, batteries, datacenter cooling… and now software! Because the natural entropy of software is an actual thing, and we even have a name for it: « BLOATWARE »!

The « bloatware » phenomenon

Nowadays, developers are no longer taught to watch the resources they use, neither in school nor in companies. Development teams’ main goal is to deliver the expected functionalities within a predefined schedule (which is already good!). In order to be productive, we assemble, integrate and reuse existing libraries. When dealing with slow software, we always end up increasing hardware capacity to counterbalance the lack of software efficiency… Unfortunately, nothing is measured throughout the development cycle, which would at least allow teams to react early enough, before correction costs become prohibitive. This hardware addition comes at the expense of autonomy, ecology and, after all, common sense.
Some might even say this is a form of offshoring for the digital world, when investing locally in software efficiency would reduce the need for hardware produced on the other side of the world, in social, sanitary and environmental conditions that are not always decent.

The Code Vert project was born!

As a company dedicated to a more respectful and virtuous digital environment, we had an intuition: by applying eco-design principles to the software « manufacturing » process, it should be possible to lower energy and resource consumption when the software is being used. However, this intuition had to be verified, and that’s what we did in the context of « Code Vert ».

This project, launched in 2012, lasted 30 months. Thanks to it, we validated the gains resulting from a better use of coding instructions in an IT program (good « green » patterns, or « Green Patterns »). In addition, we got a better overview of what our solution, GREENSPECTOR, would look like, the goal being to accompany developers in their practical implementation of software eco-design. Since then, we have enriched the tool with power consumption measurement features, which lets developers see real gains (nothing like monitoring progress to measure results!), thus going further than the merely « theoretical » application of a good pattern; it also allows detecting overconsumption that would be impossible to spot by simply analyzing the source code.

Companies’ commitment to software eco-design

What is the point for a company of engaging in this software sobriety path? Examples have been multiplying lately and, soon enough, digital eco-design will be fully integrated into the good work habits of all development teams. Would Facebook have found its viable business model a couple of years ago if it hadn’t halved its servers’ power consumption thanks to a software optimization strategy, preventing it from having to build a new datacenter?

More recently, Facebook launched its mobile application Facebook Lite, intended for emerging markets, by reducing both the volume of exchanged data and the energy consumed (hence extending battery life). Around us, we are starting to see digital eco-responsibility criteria appear in calls for tender issued by major French companies, and even good-pattern reference documents.

Software eco-design stakes and gains

In a world where wisely moderating mobile use is not an option (it might change one day, who knows?), this process answers a pressing demand from the end consumers we all are: the battery life of the mobile, embedded and connected hardware that follows us everywhere! Indeed, autonomy is one of the main criteria when choosing a smartphone model. In early 2017, most mobile manufacturers claimed their focus was to develop the best battery life on the market for their latest smartphone. The user’s interest here doesn’t lie in making money, but in mobility, productivity, usability… overall, in a better user experience. Software optimization then becomes an essential cog in the manufacturers’ quest for autonomy, at least for those who want to grow their market share.

Other gains can be even more interesting, particularly in the field of connected objects. Software eco-design allows a reduction in maintenance frequency, as well as a longer life for deployed products (sometimes greedy for rare resources), even with a higher level of service. The first service we can offer in the IoT domain is to provide the object’s power consumption profile based on actual use (not on the manufacturer’s data, when it is even available…) in order to integrate it very practically into the solution’s economic model. This process has to include the measurement of resources (energy, data, memory…) while the object is actually functioning. In the end, no matter the motives, we aim at making the object at least as performant for the user, at a lower operating cost, while limiting the stress caused by energy and resource needs, too often not renewable enough.

For those who don’t see visible or substantial gains despite the potential costs of implementing this eco-design, you can try to comply with an online reference of good Green patterns so you can, at least for now, communicate on exemplarity and encourage this « eco-system ». Nantes Métropole was the first local authority to get its website dedicated to energy transition labeled, so it could publicly communicate on its interest in this eco-responsibility process applied to websites.

Good patterns in terms of software eco-design!

How did we manage to lay the foundations of good patterns for efficient design and development? As part of the Web Energy Archive project, the Green Code Lab organization measured the power consumption of over 700 websites. What came out is a correlation between resource consumption and website complexity (scripts, number of requests per page…). Other findings demonstrated that, on average, a page open in a minimized tab (not displayed on the screen, hence with no interaction) draws about 1 watt of power on a workstation that doesn’t even display the page (and that doesn’t include the consumption of the requests sent to servers and the network traffic they generate!). Avoiding such useless consumption is pretty easy: just ask the developer to stop data processing whenever the user isn’t looking at the browser tab, as sketched below.
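A minimal sketch of that good pattern, using the standard Page Visibility API; the periodic refresh work is a placeholder:

```typescript
// Stop periodic work when the tab is hidden, resume it when visible again.
let timer: number | undefined;

function startRefreshing(): void {
  timer = window.setInterval(() => {
    // Placeholder for the page's periodic work (polling, animations…).
    console.log('refreshing content…');
  }, 5_000);
}

function stopRefreshing(): void {
  if (timer !== undefined) {
    window.clearInterval(timer);
    timer = undefined;
  }
}

document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    stopRefreshing(); // tab minimized or in background: stop consuming
  } else {
    startRefreshing();
  }
});

startRefreshing();
```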

In 2017, over 50% of web information and services were accessed on a mobile phone, under constraints of service quality, smartphone battery and data allowances limited by the phone plan… All good reasons for a web content editor to take an interest in this subject and watch its pages, at the risk of otherwise losing the audience and consumers who are looking for instantaneousness.
Companies are slowly starting to integrate this process into their software factories, with interesting gains they had forgotten in past years. In a « digital factory » or « mobile factory », the main stake is not to miss the digital transformation of the organization, by providing a flawless user experience (understand here: mobility, performance and productivity), key aspects of a project’s success.

Conclusion

Let’s fantasize a bit. Wouldn’t it be possible to take this logic further and try to « save the planet » from the danger the digital evolution has generated? This would give developers a new meaning and purpose in their job: they could finally intervene practically, at their own level, to limit the ecological impact of what they produce through their daily tasks. A way to avoid “throwing up” code with no responsibility nor morals, and to add value to the job?

It is true that revising an organization’s application portfolio in order to carry out retro eco-design doesn’t necessarily make sense from an economical point of view in the short term. But digitalization is just starting: the chapters that remain to be written are far more numerous than the ones already published. Let’s bet companies won’t have the choice but to integrate this new frugal dimension in order to stay competitive in a world of scarce resources. As Pierre Rabhi describes at length in his book, we are heading towards a « Happy Sobriety » in the Digital world!