In the digital services industry, time-to-market has become a crucial criterion of competitiveness. Development practices are increasingly agile and deployments increasingly frequent. DevOps methods emerged in this context to answer this need.
According to the excellent DevOps for Dummies, the approach relies on:
- Continuous testing
- Shorter improvement cycles
- Close monitoring of production quality
A successful DevOps approach is based on control metrics that make it possible to adjust practices quickly. The aim is to avoid the “tunnel effect” that leads to late discovery of functional defects and bugs. The same goes for resource consumption: when application slowness and energy overconsumption are discovered too late (after release to production), they damage the application’s image and increase the cost of fixing them. Among the metrics to follow in an agile DevOps approach, the application’s performance and its energy consumption are both crucial, as they strongly affect the final quality of the application.
Establishing DevOps practices creates the opportunity to implement continuous efficiency measurement. Metrics are chosen according to the project’s priorities: performance, memory, energy… Just as DevOps relies on continuous integration and continuous deployment, we add continuous measurement.
Listed below are the key elements of this approach:
Implementing DevOps requires validating that the application works properly once it has passed integration: the build is OK, the unit tests pass, now what? This calls for automated functional tests. Even though these tests do not cover everything at first, they should at least verify the application’s essential features (cf. the minimum viable product notion). These tests can be complemented with load tests to make sure the features behave correctly regardless of the number of users.
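As an illustration, a minimal automated functional check of this kind could look like the following sketch in Python (the 1,000 ms budget is an example value, not a prescription):

```python
import time
import urllib.request

def smoke_test(url: str, budget_ms: float = 1000.0) -> dict:
    """Load a page once and report HTTP status plus elapsed time.

    A minimal functional check: the page must answer with 200
    and stay within the response-time budget.
    """
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        status = resp.status
        resp.read()  # consume the body so transfer time is included
    elapsed_ms = (time.monotonic() - start) * 1000
    return {
        "status": status,
        "elapsed_ms": round(elapsed_ms, 1),
        "within_budget": elapsed_ms <= budget_ms,
    }
```

In a CI pipeline, a non-200 status or a blown budget would simply fail the step.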
Setting up automated functional tests is an opportunity to measure consumed resources (responsiveness, energy consumption, memory consumption…) precisely. This echoes load tests, where we check both that the features work properly and, for instance, the server’s CPU consumption. Nowadays it is possible, and highly recommended, to include this “resources” dimension in every type of test, including functional ones.
Conversely, a DevOps approach without automated tests rules out continuous consumption measurement. This lack of test automation is usually due to limited process maturity. The need for continuous measurement therefore reinforces the case for test automation: a virtuous circle of process improvement.
Continuous measurement without functional tests
Continuous measurement is still possible without automated tests. Simple preliminary tests (such as launching a web page or a mobile application) can be run to obtain the first measurements. For web performance, for example, we can cite WebPageTest. For measuring battery and resource consumption, we implemented the GREENSPECTOR Benchmark Runner: a feature that launches a standardized, pre-automated test.
The test scenario opens the application (or the URL) on a mobile device, waits (to measure consumption in the idle state), scrolls down, and then sends the application to the background. This ready-to-run test gives teams without automated tests their first continuous results. Even though the functional side is not validated, resource and energy consumption are measured in the application’s different states.
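For illustration, the steps of such a scenario could be scripted with standard Android `adb` commands. This is only a sketch, not GREENSPECTOR’s actual implementation; the package name and swipe coordinates are hypothetical:

```python
import subprocess
import time

def scenario_commands(package: str) -> list:
    """Build the adb commands for: launch, scroll, send to background."""
    return [
        # Step 1: open the application via its launcher intent
        ["adb", "shell", "monkey", "-p", package,
         "-c", "android.intent.category.LAUNCHER", "1"],
        # Step 3: scroll down (swipe from bottom to top of the screen)
        ["adb", "shell", "input", "swipe", "500", "1500", "500", "300"],
        # Step 4: put the application in the background
        ["adb", "shell", "input", "keyevent", "KEYCODE_HOME"],
    ]

def run_scenario(package: str, idle_seconds: int = 10) -> None:
    """Replay the scenario on a connected device."""
    launch, scroll, background = scenario_commands(package)
    subprocess.run(launch, check=True)
    time.sleep(idle_seconds)  # Step 2: idle, measure consumption at rest
    subprocess.run(scroll, check=True)
    subprocess.run(background, check=True)
```

The measurement tooling would sample energy and resource counters while each step runs.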
Measurement platforms
A key element of continuous measurement is measuring in an environment as close to production as possible. This is also an essential point of the DevOps process. To achieve it, the measurements must be executed on well-defined platforms:
- Load tests in pre-production (an iso-production environment)
- Web performance tests on numerous types of platforms
- Efficiency tests on mobile devices (smartphones, tablets) or PCs.
For tests on mobile devices, the process can be run on one’s own test bench. Online device-lab services (such as Saucelab, Xamarin Test Cloud, Perfecto Mobile…) allow tests to be run on various real mobile devices or, more frequently, emulated ones. Here at GREENSPECTOR, we provide the Power TestCloud service, which measures energy consumption on real devices, because only real devices yield trustworthy resource consumption measurements.
Ideally, mixing local and remote devices gives a comprehensive panel of platforms: local devices are more representative and configurable, while online devices come in larger numbers and their management is externalized.
Diversity also comes from varying the settings: running tests over different connection types (2G, 3G, WiFi…) provides more data and insight into how the system behaves in each case. This is one of the main advantages of continuous measurement: automating the measurement process makes it possible to cover many more configurations.
Controlling the measures
Once the tools are installed and the tests run, you still have to analyze your data! To do that, it is important to set limits that keep resources under control: a “budget” of resources or battery consumption. This is the process proposed by the RAIL methodology (Response, Animation, Idle, Load) for performance control.
The model specifies:
- Respond to user input in less than 100 ms
- Produce each animation frame in 10 ms
- Group work into 50 ms blocks to maximize idle time
- Deliver content in less than 1,000 ms
These limits enable continuous measurement control: while following the metrics, if a limit is exceeded, a warning can be sent to the development team; in serious cases, deployment is blocked.
We use the same process for efficiency by implementing “energy budgets”. For instance, a given feature should not double the smartphone’s discharge rate.
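Such budgets can be enforced mechanically in the pipeline. Here is a sketch of a CI gate; the millisecond limits are the RAIL figures quoted above, while the metric names and the energy value are illustrative assumptions:

```python
# Budgets: the four RAIL limits plus an illustrative energy budget.
BUDGETS = {
    "response_ms": 100,     # react to user input
    "frame_ms": 10,         # produce one animation frame
    "idle_chunk_ms": 50,    # maximum size of a work block
    "load_ms": 1000,        # deliver content
    "discharge_uah": 2000,  # example energy budget per test scenario
}

def check_budgets(measures: dict, budgets: dict = BUDGETS) -> list:
    """Return the names of the exceeded budgets; an empty list means go."""
    return [name for name, limit in budgets.items()
            if measures.get(name, 0) > limit]

def gate(measures: dict) -> bool:
    """CI gate: warn on every overrun, block deployment if there is any."""
    overruns = check_budgets(measures)
    for name in overruns:
        print(f"WARNING: budget exceeded for {name}")
    return not overruns  # False blocks the deployment
```

The pipeline would feed the measured values into `gate` after each run and fail the stage when it returns `False`.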
These limits are important acceptance criteria in the DevOps deployment cycle. They save the team time when analyzing measurements and give it confidence about launching the solution into production: no performance or resource surprises left for end users to discover.
A continuous improvement approach
Implementing continuous measurement leads to an ongoing improvement process, as DevOps practices recommend. Indeed, the more deployments there are, the better the team understands the impact of each feature. Every functional or technical addition to the application makes the evolution of resource consumption visible. For example, if an animation is added to a mobile application, the measurements will show a significant increase in energy consumption. The team can then adjust immediately: removal, an alternative (a progress bar, for instance), a discussion with the marketing team or the product owner… This understanding of feature impacts is reinforced by insight into resource data: continuous measurement makes metrics such as performance, CPU, energy, and exchanged data volume as familiar to the team as other metrics like test coverage.
We can cite Facebook’s sophisticated approach, which implements performance measurements to spot regressions in every code commit. Without going that far, continuous improvement is within reach of any team!
Continuous improvement comes with time, by gradually reducing the alert thresholds. Once the team has the limits under control (no overruns, better quality…), it can lower them collectively (with the product owner and the project’s stakeholders…) to reduce resource consumption and offer users an improved version of the application.
Reaching a high maturity level (automation of all functional cases, tests on numerous platforms…) can take time. The process needs to be progressive, and the first actions simple. For example, running technical tests in Jenkins, on a single platform, without any threshold, will give you your first continuous measurements. Just go for it!