At Aquila Heywood, we've been on our Agile transformation journey for a couple of years now, so it's a good time to pause and reflect on some of the changes that make this process a journey rather than a destination. Now, anyone can suggest an improvement that can make us more effective, and we've formalised this further by adopting Retrospectives within our Scrum practice. 'Retros' enable Scrum teams to review what went well, what didn't, and how to improve at the end of every two-week Sprint. We also track the 'waste' in each Sprint (effort spent that did not contribute toward a Sprint goal), and this is often discussed in the Retros.
We hadn't been carrying out Retros for very long before we noticed that we were coming up with two kinds of action: 'soft' actions, which a team can simply agree to adopt straight away, and 'hard' actions, which need some investment before they pay off.
The trouble is that general suggestions and 'hard' Retro actions usually need additional resources, additional technical work or additional cost. It's true that, if we skipped this additional work, we could spend more time working on the Sprint itself, but we would be less effective and we wouldn't achieve as much. It's probably helpful to give some examples:
So how do we strike the right balance between working on the Sprint and working on Sprinting faster?
We have discovered that there are two kinds of 'CI': Continuous Integration and Continuous Improvement. You can usually work out which one is being talked about from the context but, at Aquila Heywood, we do spend a lot of time Continuously Improving the Continuous Integration pipeline!
So why 5%? Things don't get better by themselves: you have to make time to make things better. We've accepted this, so we allocate half a day from every Sprint to CI (Continuous Improvement), for every person in every Scrum team. Or, put another way, we take 5% of the total effort that would otherwise be available to a Sprint (half a day out of the ten working days in a two-week Sprint) and use it for Continuous Improvement. It's an arbitrary amount, but everyone has agreed to it, and that counts for something.
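To make the arithmetic concrete, here is a minimal sketch (in Python, with an invented team size) of how the half-day rule works out to 5% of Sprint capacity:

```python
# Hypothetical figures: a two-week Sprint has 10 working days.
SPRINT_DAYS = 10
CI_DAYS_PER_PERSON = 0.5  # half a day per person, per Sprint

team_size = 6  # invented for illustration

total_capacity = team_size * SPRINT_DAYS    # 60 person-days
ci_budget = team_size * CI_DAYS_PER_PERSON  # 3 person-days
ci_fraction = ci_budget / total_capacity    # 0.05

print(f"CI budget: {ci_budget} person-days ({ci_fraction:.0%} of capacity)")
# -> CI budget: 3.0 person-days (5% of capacity)
```

Note that the fraction comes out the same whatever the team size: it is fixed by the half-day-per-person rule.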
We guard against spending this time unwisely by requiring CI tasks to deliver measurable (or at least observable) savings in the development cycle, which also helps us to choose what to tackle next. We call this 'quantifying the benefit', but it can be interpreted quite widely: the measurement is not always about speed; sometimes it is about code size or quality.
Sometimes an earlier improvement lays the foundations for later improvements to build on, and quantifying the benefit of each step in its own right can be difficult: for example, consolidating several test approaches into one more consistent approach that is then better positioned for another round of changes. In these cases, attempts to quantify the benefit can be a bit woolly, but even these justifications can be given numbers, if you accept that we're just looking for a numerical basis for comparison.
So, despite the difficulties, if you start out with a rule that every CI suggestion must have a quantified benefit before it can be considered, you can see more easily which one has the greatest value and should be tackled next.
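As an illustration of that rule (a hypothetical sketch, not our actual tooling), a backlog of CI suggestions can be ranked mechanically once each one carries a quantified benefit; the titles and numbers below are invented:

```python
from dataclasses import dataclass

@dataclass
class CISuggestion:
    """A Continuous Improvement suggestion with a quantified benefit."""
    title: str
    cost_days: float               # estimated effort to implement
    saving_days_per_sprint: float  # estimated saving once implemented

    @property
    def payback_sprints(self) -> float:
        """Sprints needed before the saving repays the cost."""
        return self.cost_days / self.saving_days_per_sprint

# Invented examples, for illustration only.
backlog = [
    CISuggestion("Parallelise the test suite", cost_days=2.0, saving_days_per_sprint=0.5),
    CISuggestion("Automate release notes", cost_days=1.0, saving_days_per_sprint=0.1),
    CISuggestion("Cache build dependencies", cost_days=0.5, saving_days_per_sprint=0.25),
]

# Tackle first the suggestion that pays for itself soonest.
for s in sorted(backlog, key=lambda s: s.payback_sprints):
    print(f"{s.title}: pays back in {s.payback_sprints:.0f} Sprint(s)")
```

The useful side effect of the rule is structural: a suggestion without a quantified benefit cannot even be entered into the comparison.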
If you apply this rule to all suggestions, even those that are intuitively obvious and with which everyone agrees, you'll find that you sometimes reject them: they might still be good ideas, but they turn out not to be as valuable as they first seemed, or at least not right now.
So how do you know your improvement work has been worthwhile?
Everything must deliver some value, which sets the expectation up-front that the actual benefit will be measured. Occasionally it's not as much as we had hoped, but sometimes it's even more, and it all goes to show that we're moving in the right direction.
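One lightweight way to keep that expectation honest is to record the estimate alongside the measured outcome; again, this is a hypothetical sketch rather than our reporting tool, with invented numbers:

```python
# Hypothetical record of one completed CI task, ready for a Sprint review.
improvement = {
    "title": "Parallelise the test suite",
    "estimated_saving_days_per_sprint": 0.5,
    "measured_saving_days_per_sprint": 0.7,  # measured over the following Sprints
}

ratio = (improvement["measured_saving_days_per_sprint"]
         / improvement["estimated_saving_days_per_sprint"])
print(f"{improvement['title']}: delivered {ratio:.0%} of the estimated benefit")
# -> Parallelise the test suite: delivered 140% of the estimated benefit
```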
By reporting the CI work that was done, and the benefit actually obtained, within our Sprint reviews, we retain the buy-in from our product owners and sponsors. When everyone sees the return on the investment, they're happy for us to continue spending 5% on it.