With any transformation to Agile and Scrum, it is essential to take a good look at the tools and processes you are using, to ensure you maximise the value and benefits of the Agile methods and frameworks you are implementing. Take, for example, the idea of a Definition of Done: how do we stick to it and maintain a consistently high level of quality across all our teams and products, while rapidly developing new features through an iterative, responsive process?
Flexibility is key. Making sure that any new or existing tools you use are able to support any member of your Scrum team in building any potential feature is vital. This helps to break down silos and ensure that progress isn't halted because you are waiting for certain resources or expertise to become available.
Empowering Scrum teams to build and provision their own development and test environments has numerous benefits. It not only reduces bottlenecks, but also allows teams to gain familiarity with, and 'mechanical sympathy' for, these environments, along with a better understanding of how the application behaves in production. At Aquila Heywood, using Docker and 'containerising' our deployments has been a massive help in this area. Docker has enabled our Scrum teams to test almost anything, anywhere, without being reliant on external resources or a handful of individuals.
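As a minimal sketch of what this self-service looks like in practice, any team member can spin up a disposable, containerised database for a feature under test. The image, port and credentials below are illustrative assumptions, not the specific stack described in the article:

```shell
# Spin up a throwaway database container for local feature testing.
# (postgres:16, the port and the password are illustrative only.)
docker run -d --rm \
  --name feature-test-db \
  -e POSTGRES_PASSWORD=dev-only \
  -p 5432:5432 \
  postgres:16

# ... run tests against localhost:5432 ...

# Tear down; --rm above means the container and its state are discarded.
docker stop feature-test-db
```

Because the container is ephemeral, no shared environment is blocked and nothing needs to be cleaned up by hand afterwards.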
If you have a desire to automate as much as possible to increase productivity, you can quickly reach the stage of having huge numbers of builds and corresponding configurations. Utilising builds as code, through Jenkins Pipelines, provides us with fine-grained control over what we build and how we build it. Storing this code in version control means builds can be reproduced on any machine and even enables the building of global libraries. This allows different builds to share common components or configuration, providing control and increasing the quality and maintainability of the code.
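To illustrate builds as code, a declarative Jenkinsfile of roughly this shape lives in version control alongside the application. The shared-library name `build-common` and the Gradle commands are hypothetical placeholders for whatever common components and build steps a team actually shares:

```groovy
// Jenkinsfile — a minimal declarative Pipeline sketch.
// 'build-common' and the build commands are illustrative assumptions.
@Library('build-common') _

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // placeholder build step
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'
                // Publish results so every reproduction of this build
                // reports in the same way.
                junit 'build/test-results/**/*.xml'
            }
        }
    }
}
```

Because the pipeline definition is just a file in the repository, any machine with Jenkins and the shared library can reproduce the build, and changes to the build itself are reviewed like any other code.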
However, with increased productivity there is also a big ramp-up in the number of tests being written to ensure quality. So how do you maintain the quality of the tests themselves? This is crucial because, as the number of tests increases, so do the potential risks and costs of any tests that don't work.
Where this is perhaps most relevant is in UI or integration testing, especially where a database and different sets of test data are involved. Ensuring data hygiene between tests, so that they are sufficiently isolated from one another and remain stable, can be problematic. At Aquila Heywood, initially just using Docker and containerised databases helped enormously but, as the total number of tests increased, so did the amount of time spent restoring data. Adding ZFS-based volumes to the equation gives us almost instant rollback of any data used in a test. This means that our build resources spend more time on actual testing and giving valuable feedback, rather than on setting up data.
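The ZFS mechanism behind that instant rollback can be sketched in two commands. The pool and dataset names (`tank/testdata`) and the snapshot label are illustrative:

```shell
# Snapshot the dataset once the baseline test data has been loaded.
# Snapshots are copy-on-write, so this is effectively instantaneous.
zfs snapshot tank/testdata@baseline

# ... run a test that mutates the data ...

# Roll the dataset back to the snapshot. ZFS only discards the blocks
# written since the snapshot, rather than restoring a full data dump,
# so the next test starts from clean data almost immediately.
zfs rollback tank/testdata@baseline
```

The contrast with reloading a database dump between tests is that rollback cost is proportional to the data *changed* by a test, not to the total size of the dataset.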
While all this quality control and flexibility is great, there also needs to be a focus on speed to ensure that feedback loops are kept as small as possible. With all these new quality checks, it is easy for the flow of a feature's progress toward 'Done' to be disrupted by long build times, intermittent errors or delays in receiving feedback.
UI testing is, again, a good example of where this is critical. These tests tend to be slower and more error-prone than unit or integration tests, so we are continuously looking for ways to run them faster. Some of the biggest wins for us in this area have come from parallelisation of BDD Cucumber tests. This includes automatically spreading the feature-file load over whatever hardware is available, while still collating the results in one easy-to-find location. This has decreased build times and improved feedback loops.
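The parallel run can be sketched with Jenkins's `parallel` block. The static two-batch split below is a simplification (a real setup would divide the feature files dynamically across the agents available), and the `cucumber` invocation and directory names are illustrative assumptions:

```groovy
// Sketch: running Cucumber feature files in parallel on separate agents.
pipeline {
    agent none
    stages {
        stage('UI tests') {
            parallel {
                stage('Batch 1') {
                    agent any
                    steps {
                        sh 'cucumber features/batch1 --format junit --out reports'
                        // Each branch publishes its results to the same
                        // build, so they collate into one report.
                        junit 'reports/*.xml'
                    }
                }
                stage('Batch 2') {
                    agent any
                    steps {
                        sh 'cucumber features/batch2 --format junit --out reports'
                        junit 'reports/*.xml'
                    }
                }
            }
        }
    }
}
```

Because every branch publishes through the same `junit` step, a failing scenario surfaces in a single build report regardless of which agent ran it, keeping the feedback loop in one easy-to-find place.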