
Getting better at improving. How hard can it be?

Graeme Macfarlane

Group Chief Architect

discusses the merits of optimising improvement

17 July 2018

At Aquila Heywood, we've been on our Agile transformation journey for a couple of years now, so it's a good time to pause and reflect on some of the changes that make this process a journey rather than a destination. Now, anyone can suggest an improvement that can make us more effective, and we've formalised this further by adopting Retrospectives within our Scrum practice. 'Retros' enable Scrum teams to review what went well, what didn't, and how to improve at the end of every two-week Sprint. We also track the 'waste' in each Sprint (effort spent that did not contribute toward a Sprint goal), and this is often discussed in the Retros.

We hadn't been carrying out Retros for very long before we noticed that we were coming up with two kinds of action:

  • 'Soft' Retro actions – mostly behavioural, involving individuals (people) and interactions
  • 'Hard' Retro actions – mostly technical, involving processes and tools (software)

The trouble is that general suggestions and Hard Retro actions usually need additional resources or technical work, or incur additional cost. It's true that, if we didn't do this additional work, we might be able to spend more time working on the Sprint, but we wouldn't be as effective and we wouldn't achieve as much. It's probably helpful to give some examples:

  • A team reported a couple of days' waste because it had to set up a new environment manually before starting some testing, resulting in a Hard Retro action for automated environment provisioning.
  • A team found itself waiting for automated tests to run in our continuous integration build, so it raised a Hard Retro action to speed up the feedback loop using temporary databases that didn't have to be reset to a stable start point for each test.
  • A team spent time updating code that turned out to be no longer used, so we created metrics to report 'dead' code on a dashboard visible to all and, therefore, encourage its removal.

So how do we get the right balance between working on the Sprint, and working on Sprinting faster?

1. Add 5% of CI² to every Sprint

We have discovered that there are two kinds of 'CI': Continuous Integration and Continuous Improvement. You can usually work out which is being talked about from the context but, at Aquila Heywood, we do spend a lot of time Continuously Improving the Continuous Integration pipeline!

So why 5%? Things don't get better by themselves: you have to make time to make things better. We've accepted this, so we allocate half a day from every Sprint to CI (Continuous Improvement), for every person in every Scrum team. Or, put another way, we take 5% of the total effort that would otherwise be available to a Sprint, and use this for Continuous Improvement. It's an arbitrary amount, but everyone has agreed to it, and that counts for something.
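The arithmetic behind the half-day-equals-5% figure can be sketched as follows (the team size and the ten-working-day Sprint are illustrative assumptions):

```python
# Sketch of the 5% Continuous Improvement allocation described above.
# Sprint length and team size are illustrative assumptions, not fixed values.

WORKING_DAYS_PER_SPRINT = 10   # a two-week Sprint
CI_DAYS_PER_PERSON = 0.5       # half a day per person per Sprint

def ci_allocation(team_size: int) -> dict:
    """Return total Sprint effort, CI effort, and the CI share for a team."""
    total_effort = team_size * WORKING_DAYS_PER_SPRINT
    ci_effort = team_size * CI_DAYS_PER_PERSON
    return {
        "total_person_days": total_effort,
        "ci_person_days": ci_effort,
        "ci_share": ci_effort / total_effort,  # 0.5 / 10 = 5%, whatever the team size
    }

print(ci_allocation(team_size=7))
```

Note that the share is 5% regardless of team size, which is what makes the rule easy to apply uniformly across every Scrum team.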

2. Quantify the benefit

Our CI time is precious and must not be misused: it is concerned with how we build our products, not with what they do. The improvement we're after from CI is primarily for our Development teams, although of course our customers also benefit from it indirectly. Usually, the improvement doesn't actually change our product: our Product team manages the priority conflicts in a backlog of changes that we want to make to the product. This means that if we find ourselves making functional product changes in our CI time, we've probably strayed beyond CI into changes that aren't necessarily the ones our Product team wants next.

We protect ourselves from this by requiring CI tasks to give measurable (or at least observable) savings in the development cycle, and this helps us to choose what to tackle next. We call this 'quantifying the benefit', but it can be interpreted quite widely – the measurement is not always just about speed, but sometimes it can be about code size or quality.

Sometimes, earlier improvements lay the foundations for later improvements to build on, and quantifying the benefit of each in its own right can be difficult. For example, consolidating several test approaches into one more consistent approach leaves us better positioned for another round of changes. In these cases, attempts to quantify the benefit can be a bit woolly, but even these justifications can be quantified, if you accept that we're just looking for a numerical basis:

  • '… may allow a reduction in licences for third-party tools'
    In the last three Sprints, we updated tests using a tool that is licensed for just a few concurrent users, and almost every Sprint needs it at some stage. Removing the dependency on this tool will avoid delays of up to an hour when there is no licence available at the time that it is needed, as happened in three of the last five Sprints.
  • '… should make some merges easier'
    Each team typically completes 999 file merges in each Sprint, about 99 of which can be simplified or avoided by doing X. Although the time saved for each one is small, the work is error-prone and may require rework to fix bugs. The team estimates this will save 3-4 hours per Sprint.

So, despite the difficulties, if you start out with a rule that every CI suggestion should have a quantified benefit before it can be considered, you can see more easily which has the greatest value and should be tackled next.
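That rule lends itself to a very simple prioritisation: once every suggestion carries a number, ranking is mechanical. A minimal sketch (the suggestion names and hour figures below are invented for illustration):

```python
# Illustrative sketch: rank CI suggestions by their quantified benefit.
# The suggestions and hours-saved figures are invented examples.

suggestions = [
    {"title": "Automate environment provisioning", "hours_saved_per_sprint": 8.0},
    {"title": "Speed up CI test feedback loop", "hours_saved_per_sprint": 5.5},
    {"title": "Simplify error-prone file merges", "hours_saved_per_sprint": 3.5},
    {"title": "Tidy the wiki"},  # no quantified benefit yet
]

# A suggestion without a quantified benefit cannot be considered at all,
# so filter those out before ranking.
quantified = [s for s in suggestions if "hours_saved_per_sprint" in s]
ranked = sorted(quantified, key=lambda s: s["hours_saved_per_sprint"], reverse=True)

for s in ranked:
    print(f'{s["hours_saved_per_sprint"]:>5.1f} h/Sprint  {s["title"]}')
```

The filtering step is the point: an unquantified suggestion isn't rejected forever, it simply can't compete for the 5% until someone puts a number on it.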

If you apply this rule to all suggestions, even those that are intuitively obvious and with which everyone agrees, you'll find that sometimes you will reject them: they might still be good ideas, but they turned out not to be as valuable as they first seemed, or at least not right now.

Proof in the pudding

So how do you know your improvement work has been worthwhile?

Everything must deliver some value, which sets the expectation up-front that the actual benefit should be measured. Occasionally, it's not as much as we had hoped, but sometimes it is even more, and it all goes to show that we're moving in the right direction.

By reporting the CI work that was done, and the benefit actually obtained, within our Sprint reviews, we retain the buy-in from our product owners and sponsors. When everyone sees the return on the investment, they're happy for us to continue spending 5% on it.

Graeme Macfarlane is Group Chief Architect at Aquila Heywood, the largest supplier of life and pensions administration software solutions in the UK.
