Digital Sobriety Expert
Author of the books «Green Patterns», «Green IT - Gérer la consommation d’énergie de vos systèmes informatiques», ...
Speaker (VOXXED Luxembourg, EGG Berlin, ICT4S Stockholm, ...)
Founder of Green Code Lab, the national association for software ecodesign
The GREENSPECTOR team is glad to announce that its newest release is ready: version 2.4.0 Olive! With this release, you get, for every test step, Android system metrics (in addition to the resource and energy metrics). This lets you analyze the behavior of the application more finely and identify design issues. Likewise, you can now measure several packages and distinguish transmitted data from received data. Details on the improvements below.
By making digital solutions more accessible, the number of mobile application users increases significantly, and with it the engagement rate of the service offered. This metric is essential to an application’s success. Without that accessibility, the risk that users simply don’t use the service is high. And if the engagement rate is low, the application’s revenue can be heavily impacted.
The success and adoption of an application are too often reduced to ASO (App Store Optimization) or SEO (Search Engine Optimization): if the application ranks well, then users will follow.
Why do we have this impression of universal, high-speed access?
Operators communicate heavily about network speed and coverage. For instance, they advertise 98% 4G population coverage; for them it is a differentiating asset.
Announcements of new technologies reinforce the idea that current ones are already widely deployed: 5G communication makes 4G sound outdated. This is the case of Xiaomi, which recently unveiled the first 5G smartphone.
Moreover, the deployment of current technologies seems to have no limit: 4G is even on the Moon.
The feeling of fast connections is also due to the fact that decision makers and application designers work in areas with perfect connection conditions (urban areas, offices with fiber…). The analytics tools used by application publishers don’t help either: how do you know a user exists if he never manages to connect? The abundance of data makes connection problems too difficult to pinpoint.
What is the reality on the users’ side?
Since 2000, ARCEP has published a digital barometer reporting figures on actual mobile usage in France. Its 2018 edition presents the following observations:
61% of mobile owners use 4G networks (compared to 42% in 2016)
This figure drops to 51% in municipalities with less than 2000 inhabitants.
We can see that we are still far from the promise of 100% coverage. This is confirmed by OpenSignal, which performs real-world measurements through user tests.
According to these measurements, France has a coverage rate of 68%. Interestingly, the average connection speed, at 25 Mbps, is not among the best either. This points to an important factor to take into account: the quality of the infrastructure (operators, antennas…). Countries, and even different areas within the same city, are unequal in terms of 4G coverage:
For example, here is the network coverage map of downtown Nantes (France, 44):
We notice that there are very few areas in dark green (good network coverage) and many areas in red (bad coverage). In the end, network coverage and user throughput are highly variable. The observation is the same locally and globally. Overall, users are unsatisfied with their 4G connection.
Technology deployment is accelerating, so will everything soon be back to normal?
Belief in the evolution of technology could make us think that this situation is temporary and that everything will soon sort itself out. This is the reassuring message from the operators, not only in France but also in countries that seem less advanced in terms of deployment.
It’s also the message of politicians who announce high-speed connectivity for all. However, these are largely announcements and empty promises; the reality is much more complex.
On the one hand, even when new technologies are deployed, access for all takes longer than announced. Many users are still on 2G, and for many reasons. As we have seen, 100% coverage is impossible, and some areas will always remain white zones. New buildings with full-metal structures act as Faraday cages that block radio waves and complicate network access. This is only one example among a multitude of similar situations.
And even with 100% coverage, users’ equipment would also have to keep up: everyone would need a latest-generation smartphone, and users aren’t necessarily willing to change that often. In any case, refurbishment tends to keep older-generation equipment on the market. One French person in three says they have bought a used phone, and there is little chance those phones will support 5G anytime soon.
The promises of new technologies are also often overestimated. The arrival of 5G is associated with more speed, but it isn’t that simple: 5G will above all enable new uses (IoT, for example) and relieve congestion on the 4G network… In other words, if you expect to reach all mobile users, you must assume that a large part of them won’t be in perfect connection conditions.
What is the impact on the business of your application or services?
Mobile usage has become widespread in many areas, M-Commerce for example: according to the French ARCEP study, 61% of mobile users make purchases with their smartphone. Statistics are broadly similar in other countries; in England, it’s 41%. An application’s revenue is based on its number of users, but if users have a bad experience, or can’t even complete a purchase because of it, the expected revenue will not materialize.
Lack of performance is one of the criteria for user disengagement. As we have seen, a large part of your users will be in non-optimal connection conditions; in those conditions the application will probably be less responsive, or in some cases unusable. For reference, with our GREENSPECTOR tool we have measured loading times of up to 4 minutes for some applications on 2G. In the end, the risk that the user uninstalls the application is high; the 30-day uninstall rate is already 28%, and it’s even higher in some countries, such as developing countries, for reasons of storage space and application weight. In those contexts, connection problems strongly affect retention. This matches what is observed on the web performance side, where 53% of visitors leave a site if it doesn’t load in less than 3 seconds (Chrome Dev Summit 2017).
Impacts other than economic ones?
If your application isn’t functional for slow connections, some users will not be able to use your services. You will therefore exclude users. And this exclusion doesn’t go in the direction of the social pillar of sustainable development which, among other things, requires the inclusion of all populations.
In the same way, if your application works poorly with slower networks, it will consume more battery. And here is the environmental pillar that you will not respect.
How to act?
Don’t wait for feedback from your users or from your monitoring tools to act. A user who can’t connect to your application may simply never show up in your data. It’s therefore necessary to anticipate and detect potential performance issues.
1) When designing and writing requirements, require and specify that your solution remains usable under limited connection conditions. This can simply be expressed as: “my application, or this particular feature, must load in less than 3 s on a 2G connection”.
2) Test your solution under limited connections (2G, 3G…), automatically or manually (see the sketch below).
3) You can monitor user performance through monitoring tools. Be careful however because it’s very possible that many users aren’t visible at all from these tools.
Solutions 1 and 2 are the solutions we advocate and use at GREENSPECTOR. Solution 3 is possible with GREENSPECTOR by measuring the solution immediately after going into production.
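To make point 2 concrete, here is a minimal sketch of what an automated check against the “3 s on 2G” budget could look like. It assumes OkHttp; the throttling interceptor, the ~6 KB/s figure and the URL are illustrative assumptions, not part of the GREENSPECTOR tooling (on an emulator, the built-in network speed settings can serve the same purpose).

```kotlin
import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response
import okhttp3.ResponseBody.Companion.toResponseBody

// Roughly simulates a 2G-class link (~50 kbit/s, i.e. ~6 KB/s) by delaying
// each response in proportion to its size. Illustration only.
class SlowNetworkInterceptor(private val bytesPerSecond: Long = 6_000) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val response = chain.proceed(chain.request())
        val body = response.body ?: return response
        val bytes = body.bytes()                          // buffer the whole payload
        Thread.sleep(bytes.size * 1000L / bytesPerSecond) // crude bandwidth simulation
        return response.newBuilder()
            .body(bytes.toResponseBody(body.contentType()))
            .build()
    }
}

fun main() {
    val client = OkHttpClient.Builder()
        .addInterceptor(SlowNetworkInterceptor())
        .build()
    val start = System.currentTimeMillis()
    client.newCall(Request.Builder().url("https://example.org/").build())
        .execute()
        .use { it.body?.string() }
    val elapsed = System.currentTimeMillis() - start
    // The budget from point 1: this screen's data must load in under 3 s on a 2G-class link.
    check(elapsed < 3_000) { "Load took $elapsed ms, over the 3 s budget on a simulated 2G link" }
}
```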
The software world is going wrong, and if we do nothing, we may regret it. Environment, quality, exclusion… Software Is Eating The World? Yes, a little too much.
The software world is going wrong. Well, on the surface, everything is fine; how could a field carrying so many economic promises for the well-being of humanity go wrong? Asking the question would mean challenging all of that. So everything is fine, we keep moving forward, and we don’t ask ourselves too many questions.
The software world is going wrong. Why? Twenty years of experience in the software world as a developer, researcher and CTO have let me rub shoulders with different fields, and this feeling has been growing year after year. I have spent the last 6 years in particular trying to push practices and software quality tools to educate developers about the impact of software on the environment. You have to be seriously motivated to try to improve the software world: good practices don’t spread as easily as the latest JavaScript framework. The software world is not permeable to improvements, or at least only to superficial ones, not deep ones.
This is not the message of an old developer tired of constant change and nostalgic for the good old days of the floppy disk… It is rather a call for a deep questioning of the way we see and develop software. We are responsible for this “non-efficiency” (developers, project managers, salespeople…). To say that everything is fine would not be reasonable, but to say that everything is going wrong without proposing any improvement would be even less so.
Disclaimer: you will probably jump, cry FUD, troll, or disagree while reading this article. That’s fine, but please read it all the way through!
We’re getting fat (too much)
Everything grows: the size of applications, the amount of data stored, the size of web pages, the memory of smartphones… Phones now have 2 GB of memory; exchanging a 10 MB photo by email is now common… It might not be an issue if all software were useful, effective and efficient… but this is not the case; I refer you to the article “The disenchantment of software” for more detail. It is hard to say how many people share this feeling of heaviness and slowness, and at the same time everyone has gotten used to it. It’s computing. Like bugs: “your salary has not been paid? Arghhh… it must be a computer bug”. IT is slow and we can’t help it; if we could do anything about it, we would already have solved the problem.
So everyone gets used to slowness. Everything is Uniformly Slow Code. We sit on it and everything is fine. Being efficient today means reaching a user feeling that matches this uniform slowness. We only get rid of what would be too visible. A page that takes more than 20 seconds to load is too slow; on the other hand, 3 seconds is fine. 3 seconds? With the multicores in our phones and PCs and with data centers all over the world, all connected by great communication technologies (4G, fiber…), isn’t that a bit strange? If we look at the profusion of resources mobilized for the result, 3 seconds is huge. Especially since bits travel through our processors on nanosecond timescales. So yes, everything is uniformly slow, and that suits everyone (at least, in appearance). Web performance (follow the hashtag #perfmatters) is necessary, but it is unfortunately an area that does not go far enough. Or maybe thinking in this area cannot go further because the software world isn’t permeable or sensitive enough to these topics.
There are now even practices that consist not in solving the problem but in working around it, and this has become a field in its own right: working on “perceived performance”, or how to use the user’s perception of time to put mechanisms in place so that we don’t need to optimize. The field is fascinating from a scientific and human point of view. From the point of view of performance and software efficiency, a little less: “let’s find plenty of mechanisms so that we don’t have to optimize too much!”
All of this would be acceptable in a world with modest demands on the performance of our applications. The problem is that, to absorb this lack of performance, we scale: vertically by adding ultra-powerful processors and more memory, horizontally by adding servers. Thanks to virtualization, which has allowed us to accelerate this arms race! Except that under the bits there is metal, and metal is expensive, and it pollutes.
Yes, it pollutes: it takes a lot of water to build electronic chips, chemicals to extract rare earths, not to mention the round trips around the world… Yes, uniform slowness has a real cost. But we will come back to it later.
We need to go back to more efficiency, to challenge hardware requirements, to redefine what performance means. As long as we are satisfied with this uniform slowness, relying on solutions that merely keep things from slowing down further (like adding hardware), we won’t move forward. Technical debt, a notion now largely assimilated by development teams, is unfortunately not suited to this problem (we will come back to this). What we have is a debt of hardware resources and a mismatch between the user’s need and the technical solution. We are talking here about efficiency, not just performance. Efficiency is a matter of measuring waste. ISO defines Efficiency with the sub-characteristics Time behaviour, Resource utilization and Capacity. Why not push these concepts further?
We are (too) virtual
One of the problems is that software is considered “virtual”. And this is the problem: “virtual” defines what has no effect (“that which is only potential, in a state of mere possibility, as opposed to what is actual”, according to Larousse). Maybe it comes from the early 80s, when the term “virtual” was used to speak about the digital world (as opposed to the world of hardware). “Numérique” refers to the use of numbers (the famous 0s and 1s), but apparently that isn’t enough and still includes a little too much hardware, so let’s use the term “digital”! The digital/numérique debate in France may seem silly, but it matters for the issue we are discussing: “digital” hides the material part even more.
But it should not be hidden: digital services are indeed made of code and hardware, of 0s and 1s traveling over real hardware. We can’t program while forgetting that. A bit that stays on the processor and a bit that crosses the Earth do not take the same time, nor use the same resources:
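As a rough illustration, here are the commonly cited orders of magnitude (the classic “latency numbers every programmer should know”; approximate figures for illustration, not measurements from this article):

```kotlin
// Approximate, commonly cited orders of magnitude - illustrative only.
const val L1_CACHE_NS = 1L                               // bit that stays next to the core
const val MAIN_MEMORY_NS = 100L                          // bit fetched from RAM
const val DATACENTER_ROUND_TRIP_NS = 500_000L            // ~0.5 ms within a data center
const val INTERCONTINENTAL_ROUND_TRIP_NS = 150_000_000L  // ~150 ms across the globe

fun main() {
    // Crossing the Earth costs roughly 100 million times more than staying in the cache.
    println(INTERCONTINENTAL_ROUND_TRIP_NS / L1_CACHE_NS)
}
```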
Developing Java code for a J2EE server or for an Android phone is definitely not the same thing. Specific structures exist to process data in Android, yet the generic patterns are still used. Developers have lost the link with the hardware, which is a pity because it is fascinating (and useful) to know how a processor works. Why have we lost it? Because of abstraction and specialization (we’ll come back to this). By losing this insight, we lose one of the strengths of development. This link is still strong among hackers and embedded-computing developers, but unfortunately less and less present among other developers.
DevOps practices could restore this link. Here again, we often don’t go all the way: usually DevOps focuses on managing the deployment of a software solution on a mixed infrastructure (hardware and some software). It would be necessary to go further, for instance by reporting consumption metrics or discussing execution constraints… rather than just “scaling” because it is easier.
We can always justify this distance from the hardware: productivity, specialization… but we must not confuse separation with forgetting. Separating trades and specializing, yes; forgetting that there is hardware under the code, no! A first step would be to teach courses about hardware in schools. Just because a school teaches programming doesn’t mean a serious awareness of the hardware and how it works isn’t necessary.
We are (too) abstract
We are too virtual and too far from the hardware because we wanted to abstract ourselves away from it. The multiple layers of abstraction have made it possible not to worry about hardware problems and to save time… but at what price? That of heaviness and of forgetting the hardware, as we have seen, but there’s much more. How can you understand the behavior of a system with call stacks more than 200 levels deep?
Some technologies are useful but are now used by default. This is the case, for example, of ORMs, which have become systematic: no thought is given to their relevance at the start of a project. The result: we have added a layer that consumes resources, that must be maintained, and developers who are no longer used to writing native queries. That wouldn’t be a problem if every developer knew exactly how these abstraction layers work (how does Hibernate work, for example?). Unfortunately, we rely on these frameworks blindly.
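As an illustration of the kind of cost that can hide behind such a layer, here is a minimal, hypothetical JPA/Hibernate-style sketch (entities and query are invented for the example): the code looks like one query, but lazy loading silently turns it into N+1 round trips to the database.

```kotlin
// Hypothetical entities - a minimal sketch of the hidden cost an ORM can add when used blindly.
import javax.persistence.Entity
import javax.persistence.EntityManager
import javax.persistence.FetchType
import javax.persistence.Id
import javax.persistence.OneToMany

@Entity
class Customer(
    @Id val id: Long = 0,
    @OneToMany(fetch = FetchType.LAZY) val orders: List<PurchaseOrder> = emptyList()
)

@Entity
class PurchaseOrder(@Id val id: Long = 0, val amount: Double = 0.0)

// Looks like a single query, but lazy loading fires one extra SELECT per customer
// (the classic "N+1" pattern): N+1 round trips instead of one.
fun totalOrders(em: EntityManager): Int =
    em.createQuery("select c from Customer c", Customer::class.java)
        .resultList
        .sumOf { it.orders.size }   // each access to `orders` triggers its own SQL query

// A single hand-written query does the same job in one round trip:
//   select count(o) from PurchaseOrder o
```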
And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder. (…) Ten years ago, we might have imagined that new programming paradigms would have made programming easier by now. Indeed, the abstractions we’ve created over the years do allow us to deal with new orders of complexity in software development that we didn’t have to deal with ten or fifteen years ago (…) The Law of Leaky Abstractions is dragging us down.
We believe (too much) in a miracle solution
The need for abstraction is linked to another flaw: we are always waiting for miracle tools, the silver bullet that will finally improve our practices. The ideal language, the framework to go even faster, the miracle dependency-management tool… Each new framework makes the same promise: save development time, be more efficient… And we believe it, so we rush in. We give up the frameworks we had invested in and spent time on, and we move to the newest one. This is currently the case with JS frameworks. The history of development is paved with frameworks that are forgotten, unmaintained, abandoned… We are the “champions” of reinventing what already exists. If we kept a framework long enough, we would have time to master it, to optimize it, to understand it. But this is not the case. And don’t tell me that if we hadn’t repeatedly reinvented the wheel, we would still have stone wheels… Innovating would mean improving the existing frameworks.
It’s the same story with package managers: Maven, NPM… In the end, we reach dependency hell. The link with abstraction? Rather than managing these dependencies by hand, we add an abstraction layer, the package manager. And the side effect is that we integrate external code we don’t control far too easily. Again, we will come back to this.
With languages, same story. Careful, I’m not advocating sticking with assembler and C… Take the Android world: for over 10 years developers have been building tools and Java frameworks, and then, as if by magic, the community’s new language is Kotlin. Imagine the impact on existing applications (if they have to migrate), the tools that need to be recreated, the good practices to be rediscovered… For what gain?
Today the Android team is excited to announce that we are officially adding support for the Kotlin programming language. Kotlin is a brilliantly designed, mature language that we believe will make Android development faster and more fun. (Source)
We will come back later to the “fun” …
Honestly, we see no slowdown in technology renewal cycles; the pace is still frenetic. We’ll find the Grail one day. The problem then is the stacking of these technologies: since none of them ever really dies and parts of each are kept, we develop yet more layers to adapt to and keep maintaining these pieces of code and libraries. The problem is not legacy code itself, it is the glue we develop around it. Indeed, as the article on “software disenchantment” puts it:
@sahrizv : 2014 – #microservices must be adopted to solve all problems related to monoliths. 2016 – We must adopt #docker to solve all problems related to microservices. 2018 – We must adopt #kubernetes to solve all the problems with Docker.
In the end, we spend our time solving internal technical problems, looking for tools to solve the problems we ourselves add, adapting to these new tools, adding overlays (see the previous chapter)… and we haven’t improved the intrinsic quality of the software or met the needs any better.
We do not learn (enough)
In the end, the frantic pace of change does not allow us to stabilize on a technology. I admit that, as the old developer I am, I was discouraged by the switch from Java to Kotlin for Android. It may address some real challenges, but when I think back to the time spent learning, setting up tooling… We must keep evolving, but not start again from scratch. It is normal, in any field, to keep learning and to be curious, but it should stay within a framework of iteration: experiment and improve. This is not the case in programming, or at least not in some areas of it, because for some technologies developers do continue to iterate and experiment (.NET, J2EE…). But that, indeed, is not considered as “fun”…
In the end, we do learn: we spend our time on tutorials, getting-started guides, conferences, meetups… only to actually experiment 10% of it on a side project or a POC, which will surely end up becoming a production project.
As no solution ever really dies and new ones keep coming, we end up with projects juggling a multitude of technologies, and the associated skills too… Then we’re surprised that the developer recruitment market is tight. No wonder: there are plenty of developers, but it’s hard to find a React developer with 5 years of experience who also knows Go. The market is fragmented, like the technologies. That may be good for developers, because it creates scarcity and pushes prices up, but it’s not good for the project!
To come back to the previous chapter (believing in miracle tools…), we see it in the JavaScript world with “JS fatigue”: the developer has to find his way through the world of JavaScript and its related tools, and that is the price of this multitude of tools. This is an understandable situation (see, for example, a very good explanation of how to manage it). However, this continuous learning of technologies crowds out the learning of cross-cutting domains: accessibility, agility, performance… Indeed, what guarantees that the tools and languages we choose today won’t have changed in 4 years? That Rust or Go won’t have in 2? Nothing suggests a trend.
We have (too much) fun and we don’t question ourselves (enough)
Unless it is to question one technology only to push another. Trolling is common in our world (and, I confess, I do it too), but it only ever pits one technology against another and keeps the infernal cycle of tool and language renewal going. A real challenge would be to ask ourselves sincerely: are we going in the right direction? Is what I am doing sustainable? Is it quality work? But questioning ourselves is not easy, because it is associated either with trolling (rightly or wrongly) or with a backward-looking image. How dare you criticize a trend associated with technological progress?
Few voices rise against this state of affairs (The disenchantment of software, Against software development…), and it’s a shame, because questioning is a healthy practice for a professional field. It allows us to “perform” even better.
We don’t question ourselves because we want to have fun. Fun is important: if you get bored in your job, you get depressed. On the other hand, we cannot, under the pretext of wanting fun all the time, keep changing our tools. There is an imbalance between the developer experience and the user experience. We want fun, but what does it really bring to the user? A “happier” product? No, we are not actors. One can also question the effort put into reducing build times and other developer conveniences. It is important, but we must always balance our efforts: speeding up my build time is only worthwhile if I use the time gained to improve the user experience. Otherwise it is just tuning for my own pleasure.
We need to accept criticism, to practice self-criticism and to stop hiding behind excuses. Technical debt is an important concept, but if it is an excuse for doing bad refactoring, and especially for switching to the latest fashionable technology, we might as well keep the debt. We must also stop the holy wars: what is the point of defending one’s language against another? Let’s stop repeating that “premature optimization is the root of all evil…”; that comes from the computing of the 70s, where everything was optimized. Today there is hardly any premature optimization left; the phrase is just an excuse to do nothing and carry on as before.
We are (badly) governed
We don’t question ourselves about the ethics of our field or about its sustainability… This may be because our field has no real code of ethics (unlike doctors or lawyers). But are we truly free developers if we are incapable of self-criticism? Perhaps we are enslaved to a cause driven by other people? The problem is not simple, but in any case we bear responsibility. Without a code of ethics, the strongest and the most dishonest make the rules. Buzz and practices designed to manipulate users are more and more widespread: without dark patterns, your product will be nothing. The biggest players (the GAFA…) did not get there by chance.
Is the solution political? We would have to legislate to better govern the software world. We can see with the latest legislative responses to concrete problems (GDPR, cookie and privacy notifications…) that the source of the problem is not addressed. Maybe because politicians do not really understand the software world.
It would be better if the software world structured itself, put a code of ethics in place, self-regulated… But in the meantime, it is the law of the strongest that prevails… at the expense of better structuring, better quality, a real professionalization…
If this structuring doesn’t happen, developers will lose control over what they do. And the profession’s lack of ethics is already being criticized from outside. Rachel Coldicutt (@rachelcoldicutt), director of DotEveryOne, a UK think tank that promotes more responsible technology, encourages graduates who are not computer scientists to take an interest in these issues. To continue with that article, this would be very much in line with the history of computing, a field born of the military world, where engineers and developers are trained to follow decisions and orders.
A statement that echoes, in particular, the one made by David Banks (@da_banks) in the insolent The Baffler. Banks emphasized how much the world of engineering is linked to authoritarianism. The reason certainly lies in history: the first engineers came from the military and designed siege weapons, he recalls briefly, and they are still trained to “plug into the decision-making structures of the chain of command”. Unlike doctors or lawyers, engineers have no ethical authority overseeing them and swear no oath. “That’s why engineers excel at outsourcing blame”: if it doesn’t work, the fault lies with everyone else (users, managers…). “We teach them from the beginning that the most moral thing they can do is build what they are told to build to the best of their abilities, so that the will of the user is carried out accurately and faithfully.”
With this vision, we can ask ourselves the question: can we integrate good practices and ethics in a structural and internal way in our field?
Development, like any organization, follows (too many) absurd decisions
The software world fits into a traditional organizational system: large groups, outsourcing to IT services companies, web agencies… All follow the same IT project management techniques, and everyone drives straight into the wall. No serious analysis is made of the overall cost of a piece of software (TCO), of its impact on the company, of its profit, of its quality… What matters is the speed of release (time to market), feature overload, immediate productivity. First because people outside this world know too little about the technical nature of software: it is virtual, therefore simple. But this isn’t the case. Business schools and other management factories have no development courses.
We keep trying to quantify IT projects like ordinary projects, while movements like #NoEstimates propose innovative approaches. Projects keep failing: the CHAOS Report announces that only 30% of projects go well. And in the face of this bad governance, technical teams keep struggling with technologies. Collateral damage: quality, ethics, the environment… and ultimately the user. It would not be so critical if software did not have such a strong impact on the world. Software eats the world… and yes, we are eating it…
One can question companies’ benevolence: are they only interested in their profit, whatever the price, leaving the software world in these doldrums? The answer may come from sociology. In his book Les Décisions Absurdes, Christian Morel explains that individuals can collectively make decisions that run totally counter to their purpose, in particular through the self-legitimization of the solution.
Morel illustrates this phenomenon with the “bridge on the River Kwai”, where a hero zealously builds a structure for his enemy before destroying it.
This “bridge on the River Kwai” phenomenon, where the action legitimizes itself, where the action becomes the very goal of the action, exists in reality more than one might think. Morel explains that such decisions are meaningless because they have no purpose other than the action itself. “‘It was fun’: this is how business executives express themselves, with humor and accuracy, when one of them has built a ‘bridge on the River Kwai’ (…) Action as a goal in itself presupposes the existence of abundant resources (…) But when resources are abundant, the organization can bear the cost of human and financial means that run with the sole objective of keeping running.” And the software world, on the whole, has the means to keep running: gigantic fundraising rounds, libraries that allow releasing very quickly, seemingly infinite resources… In this abundance, we build a great many bridges on the River Kwai.
In this context, the developer shares responsibility for the abundance-driven direction that he follows.
Development is (too) badly steered
If these absurd decisions happen, it is not only the developer’s fault but the organization’s. And who says organization says management (in its various forms). If we go back to Morel’s book, he describes a cognitive trap into which managers and technicians often fall. This was the case for the Challenger shuttle, launched despite the known problem of a faulty seal: the managers underestimated the risks and the engineers failed to prove them. Each side blamed the other for not providing enough scientific evidence. This is often what happens in companies: warnings are raised by some developers, but management does not take them seriously enough.
This has also happened in many organizations that wanted to develop universal mobile applications quickly. In this case, the miracle solution (here we go again) adopted by decision-makers was the Cordova framework: no need to recruit specialized iOS and Android developers, ability to reuse web code… A simple calculation (or rather, not so simple) showed only benefits. On the technical side, however, it was clear that native applications were much simpler and more efficient. Five years later, conferences are full of feedback on the failures of this type of project and on rewrites from scratch in native code. The link with Challenger and cognitive traps? Management teams underestimated the risks and the actual cost and did not take the technical teams’ remarks into account; the technical teams, for their part, had not sufficiently documented and proven the ins and outs of such a framework.
At the same time, coming back to the previous causes (silver bullets, having fun…), we need real engineering and real analysis of technologies. Without this, the technical teams will never be heard by management. Tools and benchmarks exist but are still too little known; for example, the Technology Radar, which classifies technologies in terms of adoption.
It is just as important that company management stops believing in miracle solutions (back to the “virtual” cause). You really have to calculate the costs, the TCO (Total Cost of Ownership) and the risks of technology choices. We keep choosing BPM and low-code solutions that generate code, but the hidden risks and costs are significant. According to ThoughtWorks:
“Low-code platforms use graphical user interfaces and configuration in order to create applications. Unfortunately, low-code environments are promoted with the idea that this means you no longer need skilled development teams. Such suggestions ignore the fact that writing code is just a small part of what needs to happen to create high-quality software—practices such as source control, testing and careful design of solutions are just as important. Although these platforms have their uses, we suggest approaching them with caution, especially when they come with extravagant claims for lower cost and higher productivity.”
We divide (too much) … to rule
This phenomenon of absurd decisions is reinforced by the complex fabric of software development: historically non-digital companies outsource to digital companies, IT services companies outsource to freelancers… The sharing of technical and management responsibility becomes even more complex, and absurd decisions more numerous. And it doesn’t end there: the use of open source can also be seen as a sort of outsourcing, and the same goes for the use of frameworks. We become passive consumers and offload many concerns (which have an impact on resources, quality…).
This is all the easier because the field is exciting, and side projects and time spent on open-source work outside office hours are common… This search for “fun” and the time spent ultimately benefit organizations more than developers. It is difficult, in this case, to quantify the real cost of a project. And that still wouldn’t be a problem if we ended up with top-notch software. It doesn’t change the quality; on the contrary, the extended organization made up of large groups, IT services companies, freelancers and communities knows no limits when it comes to building the famous bridges on the River Kwai.
The developer is no longer a craftsman of code but a pawn in a system that is questionable from a human point of view. None of this is visible; everything is fine and we have fun. In appearance only, because some areas of software development go further and make this exploitation much more visible, such as the video game industry, where working hours explode.
Better professionalization, a code of ethics or anything similar would be useful in this situation: it would make it possible to put safeguards against excesses and against practices that are (directly or indirectly) open to criticism. But I have never heard of a developers’ guild or any other movement that could defend such a code.
We (too) often lose sight of the final goal: the user
And so all this clumsiness (overweight software, lack of quality…) ends up reaching users. Since we have to release software as quickly as possible, since we don’t try to fix internal inefficiencies, and since we don’t put more resources into quality, we make mediocre software. But we have so many tools for monitoring and observing users, for detecting what happens directly on their side, that in the end we think it doesn’t matter. That might be true if the tools were actually well used, but the mass of information collected (in addition to the bugs reported by users) is only weakly exploited. Too much information, difficulty identifying the real source of the problem… we get lost, and in the end it is the user who suffers. All software is now in permanent beta testing. Why bother with quality, as long as the user doesn’t demand it? And we come back to the first chapter: software that is uniformly slow… and mediocre.
Taking a step back, everyone can feel this every day, at the office or at home. Fortunately, we are saved by users’ lack of awareness of the software world. It is a truly virtual and magical world that they have grown used to. We put tools in their hands without any instructions. How can you evaluate the quality of a piece of software, the risks for the environment, the security problems… without even rudimentary notions of computing?
Computing in the 21st century is what agribusiness was for consumers in the 20th. For the sake of productivity, we have pushed mediocre solutions with short-term calculations: ever faster time to market, constantly rising profits… intensive agriculture, junk food, pesticides… with significant impacts on health and on the environment. Consumers now know (more and more) the disastrous consequences of these excesses, and the agri-food industry has had to reinvent itself, technically, commercially and ethically. When users understand the ins and outs of technical choices, the software industry will have to deal with the same issues. Indeed, the return to common sense and good practices is not easy for agribusiness; in IT, we are starting to see it with the consequences for users’ privacy (but we are only at the beginning).
It is important to bring the user back into software design thinking (and not just through UX and marketing workshops…). We need to rethink every aspect of software: project management, the impacts of the software, quality… That is the goal of several movements (software craftsmanship, software eco-design, accessibility…), but their practices remain far too confidential. Whose fault is that? We come back to the causes of the problem: we enjoy ourselves on one side (development) and we chase only profit on the other (management). Convenient for building bridges on the River Kwai… but where are the users in all this (us, actually)?
We kill our industry (and more)
We are heading in the wrong direction. The computer industry already made mistakes in the 1970s, with non-negligible impacts. The exclusion of women from IT is one of them. Not only has this been fatal for some parts of the industry, but we can ask how we can now provide relevant answers when only half of the population is represented in IT, with very low representativeness. The path back is now hard to find.
But the impact of the IT world does not stop there. The source and model of a large part of computing come from Silicon Valley. Setting Silicon Valley’s winners aside, local people face rising prices, downgrading, poverty… Mary Beth Meehan’s book puts this into images:
“The flight to a virtual world whose net utility is still difficult to gauge would coincide with the break-up of local communities and the difficulty of talking to each other. Nobody can say whether Silicon Valley prefigures in miniature the world that is coming, not even Mary, who nevertheless ends her work on the word ‘dystopia’.”
In its race towards technical progress, the software world is also creating its own environmental debt…
There are many examples, but the voices are still too weak. Maybe we will find the silver bullet, that the benefits of the software will erase its wrongs… nothing shows that for now, quite the contrary. Because it is difficult to criticize the world of software. As Mary Beth Meehan says:
“My work could just as easily be swept away or seen as leftist propaganda. I would like to think that by showing what we have decided to hide, we have served something, but I am not very confident. I do not think people who disagree with us in the first instance could change their minds. “
On the other hand, if more and more voices rise, and they come from people who know software (developers, architects, testers…), the system can change. The developer is neither a craftsman nor a hero: he is just a cog in a world without meaning. So it’s time to get moving…
Test automation is often considered an additional cost within development teams, for various reasons:
The team needs to ramp up on a particular tool
Writing a test takes longer than running it manually
Tests have to be maintained over time
…
Mobile development, with its lower project costs and shorter development times, doesn’t help the move to automated testing: the benefits aren’t necessarily weighed properly against the cost of automation. In the end, mobile application automation efforts often fall by the wayside or are postponed until too late in the project. This is a common mistake, because the benefits of test automation for mobile applications are numerous.
Mobile applications are applications like the others: complex, technical …
Mobile applications are seen as requiring little development and low costs… This isn’t always the case. We are no longer in the situation of a few years ago, when mobile application projects were Proofs of Concept and other early-stage experiments. Mobile applications have since undergone the natural entropy of any software project: stronger security constraints, integrated libraries and SDKs, modular architectures, multiple interactions with backend servers…
This maturity (combined with software entropy) no longer allows tests to be left aside. Industrializing the tests, and in particular automating them, makes it possible to ensure the level of quality mobile projects need. Without it, failure is assured.
Failure is no longer possible
Combined with this growing complexity of mobile projects, applications have become business-critical: they are the new showcases of brands and organizations. And given the fast development cycles, a project failure (delays, bugs detected late by users…) can be fatal to the company’s reputation. Especially since a single bad user experience can simply lead to uninstallation, abandonment of the application, or a negative review on the stores.
The level of quality has to be there, and automated tests are a must to keep the performance of your application under control.
To test is to doubt, and doubt is good
A quality development team, a rigorous process and manual tests could help ensure this quality. Would testing call the team’s skills into question? No: just as the tightrope walker’s stress is what gets him across the ravine, doubt is good for quality. An SDK with unexpected behavior, an unwanted regression… better to insure against them with tests.
Automation makes Test Driven Development (TDD) possible
Anticipating automation makes it easier to move towards Test Driven Development practices. Writing tests before development is quite possible in mobile projects: with or without tooling, it is worthwhile to automate a specified scenario and run it throughout development.
And even without going as far as Test Driven Development, having tests that closely follow development will catch problems as early as possible.
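As a minimal sketch of what such an automated scenario could look like with Espresso (the activity name and the R.id identifiers are hypothetical and would be adapted to your project):

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// LoginActivity and the R.id.* identifiers are placeholders for the example.
@RunWith(AndroidJUnit4::class)
class LoginScenarioTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun userCanLogInAndSeeTheirDashboard() {
        // The scenario is written from the specification, before (or alongside) the code.
        onView(withId(R.id.email)).perform(typeText("user@example.org"))
        onView(withId(R.id.password)).perform(typeText("secret"))
        onView(withId(R.id.login_button)).perform(click())
        onView(withId(R.id.dashboard_title)).check(matches(isDisplayed()))
    }
}
```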
Platform fragmentation cannot be managed with manual tests
Testing manually on a single device is no longer enough to ensure an application works properly. The diversity of hardware and software configurations is a source of bugs: different screen sizes, manufacturer overlays… Automation makes it possible to run tests in parallel on different devices and to detect these potential bugs. That way, we avoid turning end users into the application’s beta testers!
Master the regressions in maintenance
The first release of the application is only the beginning of its life cycle: 80% of the development effort goes into maintenance and evolution, so you have to plan for the long run. By automating, we avoid introducing regressions into the application, since the tests are run systematically with each evolution.
Automation allows performance metrics
Finally, automation makes it possible to track requirements other than functional ones: the non-functional requirements. Combined with measurement tools, automated tests can report new metrics: performance, resource consumption…
This is the strategy GREENSPECTOR recommends to its users: by integrating the GREENSPECTOR API into their automated tests, they can track, at each test campaign, the efficiency and performance of their developments. The cost of automation is then largely covered by the benefits.
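To give an idea of the principle (this is a generic illustration using standard Android APIs, not the GREENSPECTOR API), a test step can be wrapped so that its duration and the data it exchanges are recorded alongside the functional result:

```kotlin
import android.net.TrafficStats
import android.os.Process
import android.os.SystemClock

// Generic illustration: record how long a test step takes and how much data it exchanges.
data class StepMetrics(val name: String, val durationMs: Long, val rxBytes: Long, val txBytes: Long)

fun <T> measureStep(name: String, step: () -> T): Pair<T, StepMetrics> {
    val uid = Process.myUid()
    val rxBefore = TrafficStats.getUidRxBytes(uid)
    val txBefore = TrafficStats.getUidTxBytes(uid)
    val start = SystemClock.elapsedRealtime()

    val result = step()   // e.g. one of the Espresso interactions from the previous sketch

    val metrics = StepMetrics(
        name = name,
        durationMs = SystemClock.elapsedRealtime() - start,
        rxBytes = TrafficStats.getUidRxBytes(uid) - rxBefore,
        txBytes = TrafficStats.getUidTxBytes(uid) - txBefore
    )
    return result to metrics
}
```

Each campaign can then log or upload these step metrics and compare them from one version to the next.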
Is your phone battery draining too fast? When you bought your smartphone, the seller promised you 3 days of battery life. One month later, you recharge it every day (the phone, not the seller). Maybe the seller (and probably the manufacturer with him) lied to you about the phone’s actual capabilities? Since the Volkswagen case, we are no longer sure of anything… But the problem is probably more complex: you may not be using your smartphone in the best way, or you may be choosing services that don’t really suit your needs.
Note: an article to read with a few tracks to listen to - music related to the topic of the article and chosen according to taste (with a distant connection to the subject, I admit):
The music or the autonomy of the phone, you must choose!
We’ve come a long way from listening to music on an MP3 player or even a cassette player (for the older among us). Now, as for many other uses, the smartphone has replaced the dedicated players. We no longer swap batteries; we charge the smartphone’s battery. Especially since everyday life offers many moments for listening to music: on public transport, at the office, in the car, in the evening to fall asleep…
And there are many ways to listen to it: apps like Deezer, live radio, YouTube, locally stored MP3s… But which one drains your battery the least?
Measurement protocol
This study was done on a Samsung Galaxy S7 smartphone. Some results may vary on different devices, however they provide a trend for the digital services that have been evaluated.
I want to choose the service that consumes the least
Say you always keep the screen on and you particularly like skipping between tracks. Assume you are on Wi-Fi and the volume is at a medium level.
Depending on which service you use, you can lose up to 3.5 hours of battery life. It’s usually best to play an MP3 stored on the phone: no internet connection is used, so less consumption results. However, we see that apps like Google Play tend to consume much more battery than playing music over the Internet (Deezer, [Spotify](https://play.google.com/store/apps/details?id=com.spotify.music), YouTube…). So be careful when choosing the player for your MP3 files; Android will in any case show you the most power-hungry applications, so you can make your choice.
Internet radio isn’t the best option; pre-downloaded podcasts are preferable. The internet connection, combined with the browser, means that radio players aren’t the most efficient.
What if I lower the audio volume?
Does the sound volume affect battery consumption?
Yes and no: from low to medium levels, consumption doesn’t vary much. However, if you use your smartphone as a speaker to broadcast sound around you, consumption will be higher (2 h 20 less autonomy). If you want louder sound, headphones are the better choice, but don’t expect a significant reduction in consumption from a headset either.
And if I’m mobile with 4G?
Is listening to music on public transport, where there is no Wi-Fi network, a good idea?
Well, it comes at a cost, and a significant one. It’s best to connect to a Wi-Fi hotspot whenever possible.
Summary of consumption while listening to music on the YouTube application, everything over 4G.
And if I turn off my screen?
Some applications can keep playing with the screen off (background mode). If so, you will save battery power. The ranking between services hardly changes; you simply reduce consumption. Deezer actually becomes the most power-hungry: it seems the background processing isn’t sufficiently optimized within the application.
So what?
Usage varies with tastes, habits and smartphones… and so does energy consumption. The autonomy in continuous playback can thus range from 4 h to 13 h.
It’s also possible to change habits to increase autonomy. To go further: identifying the power-hungry applications in the OS settings and challenging publishers by asking them to improve are both possible.
Is that green?
Yes: lower energy consumption reduces the load on the battery, so you go through fewer charge cycles, the battery lasts longer (it’s the charge/discharge cycles that wear it out), and you extend the overall life of the battery (which is very polluting to produce), and even of your smartphone.
This study was conducted with the GREENSPECTOR tool, which measures the energy consumption of applications on real devices. For more information on methodologies and tools, we invite you to browse this blog.
The internet browser is one of the most critical applications of your smartphone. It allows you to access a multitude of services (Social Network, news, games …). It’s even more so when you don’t want to download an app and prefer to use a mobile site instead. Your browser is used almost continuously on your phone. It’s therefore responsible for some of the decline in the battery life of your smartphone.
It’s therefore important to choose the best browser if you want to increase the life of your smartphone.
Ranking
Find the methodology of this ranking at the end of the article.
The estimated autonomy across browsers ranges between 6 h 15 and 7 h 26. The difference may seem small, but over the total life of your smartphone you will stress the battery less and ultimately push back its obsolescence. Not to mention more battery left at the end of the day!
The browsers
Top 1: Brave
A newcomer on the market, Brave makes privacy its flagship cause: it automatically blocks ads and trackers. This seems to pay off in terms of autonomy, since Brave takes the lead with an estimated 7 h 26 min of continuous web use.
Top 2: Firefox Focus
Mozilla publishes Firefox Focus, a browser focused on privacy: privacy-friendly defaults, tracker blocking… It seems that, just like for Brave, this strategy pays off in autonomy.
Top 3: Dolphin
Much less known than Chrome or Firefox, Dolphin is nevertheless widely downloaded. With features similar to Chrome or Firefox, it’s a challenger to take very seriously.
Top 4: Opera
The Opera browser has an ad blocker and the ability to create a personalized home page with a news feed. A Mini version exists but could not be evaluated in this study. Its autonomy is quite good, but 30 minutes behind our number 1, Brave.
Top 5: Ecosia
Based on Chromium, this German browser funds sustainable development actions (such as tree planting) with the searches carried out by its users. Even if its autonomy isn’t the worst, it’s a pity that a browser that wants only the good of the planet isn’t better placed in the ranking!
Top 6: Samsung internet
This is the browser pre-installed on all Samsung phones: Samsung Internet. Consumption close to that of Ecosia, probably because the browser is also based on Chromium.
Top 7: Chrome
Chrome, Google’s Android browser, is one of the most used. Its home page offers a selection of press articles. It sits in the bottom 3 of this ranking, with some heaviness inherited from the solution’s history and from privacy not being a priority.
Top 8: Microsoft Edge
Microsoft’s new engine is now available on Android. Its place near the bottom of the ranking may be due to the engine not being well adapted to Android.
Top 9: Firefox
Published by Mozilla, this browser claims to be trustworthy on privacy. We are, however, disappointed by its place in this ranking.
Conclusion
The choice of a browser should not be based only on autonomy, but it is an important criterion. We observe that the 3 historical, established players (Microsoft, Google and Mozilla) are at the bottom of the ranking. This is probably due to the age of these applications and therefore to overweight code (bloatware, or “obésiciel”). But we can also see a performance race that was run at the expense of autonomy; perhaps the competition between the three has not been beneficial to their improvement. We remember, for example, the Microsoft Edge benchmark… against only Chrome and Firefox. Maybe the arrival of serious new browsers will change the game.
Note that the differences in autonomy between browsers are also related to certain features, such as the news feeds on the default home pages. The user does, however, have some room for manoeuvre: disable these features if you don’t use them.
Note also that the open-source Chromium base found in several browsers (Brave, Samsung Internet, Ecosia…) isn’t necessarily the most optimized (most of those browsers sit in the middle of the ranking). Optimizing the core (and potentially better integration by publishers) would reduce the consumption of several browsers at once. Here we see the potential of open source that isn’t fully exploited.
A highlight of this benchmark: the newcomers with a real stance on privacy are at the top of the ranking (Brave and Firefox Focus). Respect for privacy and respect for the environment go in the same direction; this is a good signal for users.
Methodology
We measure the phone’s real energy consumption with the GREENSPECTOR tool. A Samsung Galaxy S7 smartphone was used for this benchmark.
The methodology aims to reproduce a journey representative of real usage, and we study how each browser behaves on that same journey. The journey lasts 5 minutes and is run 4 times to obtain reliable measurements.
Beforehand, the phone is prepared:
Some apps, like YouTube, Google Chrome or Twitter, offer a black background instead of a white one. This “Night” mode, or dark theme, intended for night-time use, makes reading easier in the dark, rests your eyes and, above all, keeps you from turning into a lighthouse. But what about your battery?
For this efficiency battle, we chose to compare the “Day” and “Night” modes of the Twitter app. To enable the “Night” mode:
Click on the bubble of your profile at the top left of your screen
Click on the moon icon at the bottom left of your screen
You can also enable or disable the “Night” mode from Settings:
Go to “Settings and Privacy”
Go to “Display and Sound”
Click on “Night mode”
And here is the result of the “Night” mode versus the “Day” mode:
What is the gain in energy?
Over 30 seconds, the “Day” mode consumes 3.92 mAh and the “Night” mode 2.98 mAh: the “Night” mode therefore consumes about 23% less. Why? The measurement was made on a Samsung Galaxy S7, which has an AMOLED screen. These screens consume much less on dark colors; see our explanatory article about it on this blog.
Does that make a difference to the autonomy of my phone? Absolutely! With “Day” mode enabled, 1 hour of social networking (including Twitter) discharges the battery by 15%, whereas in “Night” mode the discharge is only 11%.
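As a back-of-the-envelope check (assuming the Galaxy S7’s nominal capacity of about 3,000 mAh), a straight extrapolation of the 30-second figures lands in the same range as the values above; the small differences come from rounding and from the exact capacity assumed:

```kotlin
fun main() {
    val batteryMah = 3_000.0          // Galaxy S7 nominal capacity (assumption)
    val dayMahPer30s = 3.92
    val nightMahPer30s = 2.98

    // Relative saving of Night mode over Day mode
    val savingPercent = (dayMahPer30s - nightMahPer30s) / dayMahPer30s * 100   // ≈ 24 %

    // 120 thirty-second slots in one hour of use
    val dayDischargePercent = dayMahPer30s * 120 / batteryMah * 100            // ≈ 16 % per hour
    val nightDischargePercent = nightMahPer30s * 120 / batteryMah * 100        // ≈ 12 % per hour

    println("Saving: $savingPercent %, Day: $dayDischargePercent %/h, Night: $nightDischargePercent %/h")
}
```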
This is a victory for the “Night” mode!
Note: The measurements were done simply with the GREENSPECTOR tool on a Samsung Galaxy S7. I invite you to browse this blog for more information on tools and methodology.
Knowing a smartphone’s battery life matters because it’s one of the first purchase criteria. It’s even more critical for fleets of devices (in B2B) in companies: poor autonomy leads to lost productivity or customer dissatisfaction. You therefore need a good strategy for choosing your devices as well as the applications that will run on them.
Traditional approaches to estimate battery life
A first approach is to rely on data provided by manufacturers. However, the limit of this approach is that those data are based on usage scenarios that aren’t necessarily representative of yours. The risk is to have a reality of autonomy far removed from what you have estimated. Especially since some features (such as taking pictures for example ..) may not be optimized on a certain type of smartphone and will be very used in your use. This criticism is also valid for tests carried out by external laboratories.
A second approach is to run the target scenario on real devices. Tools exist to launch benchmarks automatically, but you can also run the tests manually. The advantage is a realistic battery-life figure. The problem is that these tests are very time-consuming and leave no room for error: if you want to change a parameter on the smartphone (brightness …) or add an application to test, you must restart the entire test campaign.
One last approach is to let users do the testing: you wait for user feedback, or you use fleet deployment tools (MDM) to collect the information. This has the advantage of being inexpensive, but there is a hidden cost: if there is a problem, dissatisfaction and lost productivity will inevitably follow, and you may end up locked into a device that needs to be replaced.
The innovative approach of GREENSPECTOR
To meet this need to control the battery life of devices (and to choose the right device as early as possible), GREENSPECTOR proposes an approach based on real but unitary measurements combined with a projection algorithm. The process consists of 3 stages:
Measurement of the main features on the devices to be evaluated
Configuring a target scenario
Data projection and analysis
Use case
We want to estimate the battery life for a typical usage pattern on a Samsung Galaxy S7 smartphone. This can be a business usage but also an intensive personal one:
30 minutes of internet browsing
30 minutes of social network
30 minutes of telephone conversation
30 minutes of taking pictures
10 minutes of video recording
30 minutes of e-mail
30 minutes of videoconference
30 minutes of Microsoft Word
10 minutes of train reservation
30 minutes of geolocation
This scenario is deliberately generic but we could add a specific application or an exotic use …
Functionality measurement
We use GREENSPECTOR’s Free Runner module, which allows manual tests to be performed. These actions could be automated, but for this article we focus on quick, exploratory-style testing. If a larger benchmark were needed, automation would be worthwhile.
For each step of the scenario (browsing, taking a photo …), we launch the Free Runner module and perform a representative real-use sequence for 1 minute.
The GREENSPECTOR module sends the measurements directly to the GREENSPECTOR server. In the end it took us just over 10 minutes to get all the measurements. If we want a little more precision (or representativeness), we can do more iterations.
At this stage, the most consuming features or applications can be identified.
Implementation of the budget strategy
In the GREENSPECTOR interface, on the Budget tab, you can initiate a battery-life projection:
You are guided through the budget configuration. The first step is to specify the battery life you want to achieve. If you manage a fleet of devices, you probably want your users to have at least 9 hours of battery life so they can finish the day without recharging.
GREENSPECTOR then offers you the possible steps of the scenario. They come from the measurements you have done previously.
The trickiest step now is to specify how many times, or for how long, you want each action to happen. For example, we use 30-minute target durations, so you have to enter this value for each step. No worries, it can be changed later.
You can then validate and let the algorithm run. No time for a coffee: the projection results are immediate:
Analysis of the results of the algorithm
The first warning in the window means that, based on your scenario and the real measurements, the projection shows that the 9-hour battery-life target will not be met.
This information is found in the projection graph:
The 1st bar is the usable energy capacity over 9 hours: the Samsung Galaxy S7 battery has a capacity of 3000 mAh.
The 2nd bar is the energy distribution (the unit budgets) per feature required to meet the battery-life target.
The 3rd bar is the consumption projection based on the real measurements.
We can see that the projected consumption based on the real measurements is 3300 mAh, while the capacity of the phone is 3000 mAh. We will see below how to correct this.
The notion of unit budget appears on the graph and on the right-hand side. This is the ideal distribution of energy consumption across features to meet the battery-life target. Here are the main principles of the algorithm:
To stay close to real usage, the algorithm adds a period of inactivity corresponding to what the user would do between actions (Idle foreground).
A deep sleep period is added, corresponding to long-term inactivity of the phone (Idle background).
The idle periods that you define (for example, an idle corresponding to a lunch break) are given a budget based on the phone’s reference consumption.
Actions are assigned a budget corresponding to a maximum consumption of twice (x2) the reference consumption.
In the end, the unit budget of each action is the amount of energy that a single action must not exceed. You can then check against the actual measurement whether the action consumes too much:
We see here that the consumption of browsing is significant and exceeds its budget. This feature contributes to missing the desired battery-life target.
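To make these principles concrete, here is a minimal Kotlin sketch of how such a projection and budget check could be computed. The data class, the figures and the per-step application of the “2x reference” rule are our own illustrative assumptions, not GREENSPECTOR’s actual algorithm.

// Illustrative sketch only: names, numbers and the per-step "2x reference" rule
// are assumptions for the sake of the example.
data class Step(val name: String, val minutes: Int, val measuredMAhPerMinute: Double)

fun checkBudgets(
    steps: List<Step>,
    capacityMAh: Double,
    targetHours: Double,
    referenceMAhPerMinute: Double
) {
    val activeMinutes = steps.sumOf { it.minutes }
    // Idle foreground/background are assumed to fill the rest of the target period
    val idleMinutes = targetHours * 60 - activeMinutes
    val projected = steps.sumOf { it.minutes * it.measuredMAhPerMinute } +
            idleMinutes * referenceMAhPerMinute
    println("Projected over $targetHours h: ${"%.0f".format(projected)} mAh (capacity: $capacityMAh mAh)")

    for (step in steps) {
        // Unit budget: twice the reference consumption over the step duration
        val budget = 2 * referenceMAhPerMinute * step.minutes
        val actual = step.measuredMAhPerMinute * step.minutes
        if (actual > budget) {
            println("${step.name} exceeds its budget: ${"%.1f".format(actual)} mAh > ${"%.1f".format(budget)} mAh")
        }
    }
}

Fed with the unitary measurements (converted to mAh per minute) and the 30-minute scenario durations, such a check flags the steps, like browsing here, whose measured consumption exceeds their unit budget.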
In the end, you can analyze the data outside GREENSPECTOR and for example visualize the battery discharge curve:
How to achieve adequate battery life?
A first option is to replace the phone: you may simply have chosen the wrong device for your usage. Ideally, battery-life projections make it possible to benchmark candidate devices early and avoid a late replacement.
Then again, maybe the scenario isn’t realistic enough and the usage itself needs to be rethought: is videoconferencing on mobile really viable? Unfortunately, this approach is usually dismissed, because we always want more digital services on our devices. The following approaches will then be more appropriate.
The unit budgets will be useful for applying a better strategy:
For system applications (the camera, for example), we can study whether different settings offer optimizations that reduce consumption and keep it within the defined budget.
For other applications, such as browsers, we can benchmark alternatives. It’s likely, for instance, that videoconferencing solutions aren’t all equal in terms of energy consumption.
For applications developed by a third party on your behalf, you can add a criterion to the specifications requiring the desired budget to be met.
Finally, for applications or websites developed internally, you can integrate GREENSPECTOR and its budgets into the software factory to optimize the consumption of your applications as early as possible, and thus detect energy and performance problems before your users do.
Android 9 Pie (API level 28) introduces a new battery management feature: the Adaptive Battery. Depending on how the user uses each application, the system will restrict certain mechanisms for it.
New feature on Android 9 Pie : Adaptive Battery
The system prioritizes resource usage based on how frequently and how recently applications have been used. 5 classes of applications (buckets) have been implemented:
Active: The application is currently in use or was used very recently. Some additional criteria are also taken into account: a started activity, a foreground service, the user tapping a notification …
Working set: The application is used regularly but is not currently active. For example, a social network application will be assigned to Working set. An application used indirectly will also be in this class.
Frequent: The application is used frequently but not necessarily daily.
Rare: An application used only occasionally, for example an airline flight reservation application for an individual.
Never: The application has been installed but has never been used.
The algorithm is based on artificial intelligence (AI), so the learning phase will likely take several days. Many applications will probably end up in the Frequent or Rare buckets. System or Google applications (Maps, Camera…) will probably be in Working set, while typical applications (banking, travel…) may be classified as Frequent. The implementation of the algorithm will also depend on the smartphone manufacturer.
Adaptive Battery restrictions
Depending on the buckets, several restrictions will be put in place:
Job
Alarm
Network
Firebase Cloud Messaging
This means that several features of your applications could be impacted:
One way to avoid being downgraded is to have the user add your application to the battery optimization (Doze) whitelist. Applications from this whitelist are exempt from restrictions (see the sketch after this list).
If your application does not have a launcher activity, think about implementing one if possible.
It’s important that your users can interact with notifications.
Don’t overwhelm your users with too many notifications, otherwise they could block them and your application will be downgraded.
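Regarding the whitelist point above, here is a minimal Kotlin sketch (the helper name is ours) that checks whether the application is already exempt from battery optimizations and, if not, opens the corresponding system settings screen. Note that store policies may restrict when an application is allowed to request this exemption directly.

import android.app.Activity
import android.content.Context
import android.content.Intent
import android.os.PowerManager
import android.provider.Settings

// Hypothetical helper (API 23+): check the battery optimization whitelist and,
// if the app is not on it, send the user to the system screen where they can add it.
fun askForBatteryWhitelist(activity: Activity) {
    val powerManager = activity.getSystemService(Context.POWER_SERVICE) as PowerManager
    if (!powerManager.isIgnoringBatteryOptimizations(activity.packageName)) {
        // Opens the "Battery optimization" settings list; the user still has to
        // find and exempt the app manually.
        activity.startActivity(Intent(Settings.ACTION_IGNORE_BATTERY_OPTIMIZATION_SETTINGS))
    }
}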
Testing difficulty
It will be difficult to predict which class your app will be assigned to. Usage will likely be fragmented, so your application may end up in any of the 5 classes. If you want to know your application’s current class (though only as shaped by your own usage), you can use the API:
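For example, here is a minimal Kotlin sketch using UsageStatsManager.getAppStandbyBucket() (available since API 28); the helper name and the logging are ours.

import android.app.usage.UsageStatsManager
import android.content.Context
import android.util.Log

// Minimal sketch (API 28+): read the standby bucket the system has assigned
// to the calling application and log a human-readable name.
fun logStandbyBucket(context: Context) {
    val usageStatsManager =
        context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
    val bucketName = when (usageStatsManager.appStandbyBucket) {
        UsageStatsManager.STANDBY_BUCKET_ACTIVE -> "Active"
        UsageStatsManager.STANDBY_BUCKET_WORKING_SET -> "Working set"
        UsageStatsManager.STANDBY_BUCKET_FREQUENT -> "Frequent"
        UsageStatsManager.STANDBY_BUCKET_RARE -> "Rare"
        else -> "Other / unknown"
    }
    Log.d("AdaptiveBattery", "Current standby bucket: $bucketName")
}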
It’s however necessary to test your application in all of the different cases. For this you can place the application in the desired class using ADB:
adb shell am set-standby-bucket packagename active|working_set|frequent|rare
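To read back the bucket currently assigned by the system, the matching command can be used; it returns the numeric bucket value (10 = active, 20 = working_set, 30 = frequent, 40 = rare):

adb shell am get-standby-bucket packagename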
Obviously, this new testing need will increase the duration of test campaigns.
Note that if you have a multi-APK application, the APKs may not all be in the same class. It’s therefore important to think about a suitable test strategy.
Does the Adaptive battery really reduce battery consumption?
Since the announcement of this feature (associated with the Artificial Intelligence buzzword), much speculation about how it works has circulated: Android would learn the most used applications and deliver significant energy gains… Google announced a 30% CPU gain on application launches. This figure may be true, but in a Google-centric context; the real-world gain is more likely around 5%. The implementation of Adaptive Battery is in fact more modest: depending on usage, some processing, especially in the background, is delayed. This allows, for example, when the user has little battery left, a task to be postponed in the hope that it will run once the device is charging. Note that a postponed task is in no way suppressed. (Source). Adaptive Battery will allow higher gains as developers rely more on alarms and jobs. An Artificial Intelligence that drastically reduces energy consumption may be Android’s goal, but we are only witnessing the beginnings.
Each new version of Android has brought more energy management features (Doze, Adaptive Battery …). However, the gains for end users are hard to quantify; in any case, it all comes down to the battery life of our smartphones, and we have yet to witness a game-changing improvement there. What these novelties do bring is additional visibility on the applications the system detects as heavy consumers. The consequence can be severe: once alerted, the user may choose to uninstall those applications.
So… what can we do?
It’s difficult right now to predict how the Adaptive Battery system will perceive applications and sort them into buckets (frequently used, rarely used …). However, three points are of the utmost importance:
An efficient, fluid and well-designed application will probably be used more often. Beyond the good practices given in this article, it’s important to maintain a high level of quality: more testing, strong quality control, gathering resource-consumption and energy metrics …
Background tasks scheduled via alarms and jobs, as well as network processing, are targeted by Android. It’s important to design an effective application architecture and to test the behavior of these tasks under various conditions: different network connections, fragmented platforms … (see the sketch after this list).
OS vendors and device manufacturers are still looking for mechanisms to prevent applications from using too much battery. As an application developer, it’s critical to anticipate this issue: the key to the problem is application design. If applications don’t improve their behavior, systems will keep putting constraining, and somewhat inefficient, mechanisms into place.
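As an illustration of the second point above, here is a minimal Kotlin sketch of a deferrable background task expressed with WorkManager (Android Jetpack); the worker name and the constraints are illustrative assumptions, not a prescription. Letting the system schedule such work within its maintenance windows is generally more robust to bucket restrictions than exact alarms.

import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Hypothetical deferrable task: synchronize data whenever the system decides to run it.
class SyncWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // ... perform the network synchronization here ...
        return Result.success()
    }
}

// Enqueue the work with constraints so the system can batch it efficiently
// (only on an unmetered network and while charging, in this illustration).
fun scheduleSync(context: Context) {
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED)
        .setRequiresCharging(true)
        .build()
    val request = OneTimeWorkRequestBuilder<SyncWorker>()
        .setConstraints(constraints)
        .build()
    WorkManager.getInstance(context).enqueue(request)
}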