Agile Is More Transparent

In a series of posts I’m examining some of the claimed benefits of agile methods – are they justified? Here are the posts so far:

  • My first post looked at the cost of development with agile.
  • The second discussed speed.
  • The third addressed quality.
  • The fourth looked at claims that agile is unpredictable.

This post will examine the transparency of agile vs. other methods. The main contention is that because agile delivers potentially releasable, working software at the end of each iteration, actual progress in delivering business value is directly visible – there is no need to rely on proxy metrics derived from the process, such as lines of code, defect counts, hours worked, story points executed or features coded.

In waterfall or ad hoc development there are no points (other than major milestones such as a product release) at which the real value delivered can be measured – so proxy measurements like those mentioned above become necessary. But there are several issues with relying on them:

  • The numbers can be ‘gamed’: The old adage that ‘what gets measured gets managed’ is well recognised, not only by managers but by development teams too. As managers try to manage the team through measurements, the team will often try to manage the managers through those same measurements! As a result, the numbers may not reflect the full picture when there are delays or other issues – and these may not become apparent until it’s too late to react to them.
  • Many metrics measure inputs, when it is outputs that are of more interest: Managing a project by the effort expended, rather than the value generated, will always lead to problems. Traditional project management focuses on time, resources and cost. Although the hours spent coding, the lines of code generated, the defects found and so on may all be linked to the value generated, they are highly unreliable proxies for it, and are often downright misleading.
  • Defining, collecting and reporting on these derived metrics can be very time-consuming. Many project and portfolio managers spend the majority of their time on such work.
  • Because these metrics are intrinsically tied to the ‘plan’, they become harder to interpret when the plan changes. For example, if I plan 100 hours of work for a feature, and a requirements change means it takes just 50 hours, how do I account for that in my metrics – when we deliver the feature, are we running behind? (The sketch after this list makes this concrete.)
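
To make that last point concrete, here is a minimal sketch in Python – all names and numbers are invented, following the 100-hour example above – showing how an effort-based metric and a value-based metric can disagree about the very same feature:

```python
# Hypothetical figures from the example above: a feature budgeted at
# 100 hours ships after only 50 hours thanks to a requirements change.
planned_hours = 100       # effort originally budgeted for the feature
spent_hours = 50          # actual effort after the requirements change
feature_delivered = True  # the working feature has shipped

# Input-based view: progress as the share of planned effort consumed.
effort_progress = spent_hours / planned_hours        # 0.5 -> "50% done"

# Output-based view: progress as value actually delivered.
value_progress = 1.0 if feature_delivered else 0.0   # 1.0 -> 100% done

print(f"By effort burned:   {effort_progress:.0%} complete")
print(f"By value delivered: {value_progress:.0%} complete")
```

Read off the hours alone, the team looks only half done; read off the delivered feature, the work is finished. The metric is anchored to a plan that no longer exists.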

There can be little argument that the most direct, reliable measure of progress is business value delivered, normally in the form of value to the customer. But there are some things to watch for when moving in this direction:

  • Output vs. Outcome: Delivering working software on a regular basis is no guarantee that it is of VALUE. Measuring OUTPUT may just drive faster, more regular delivery of software with little value. It is the OUTCOME we should ideally measure – the actual value derived from the software. But this can’t be reliably measured until the product is on the market or in use – a conundrum for sure.
  • Iterative methods like RUP deliver features in increments – however, they lack the focus on delivering working, POTENTIALLY RELEASABLE software at each iteration, so proxy measurements must still be substituted for measures of real VALUE – the software has no value until it’s working.
  • This approach underscores the importance of the ‘Definition of Done’ (DoD) in agile – the development team must adopt an agreed definition that really does result in potentially releasable software at the end of each iteration. I often see iterations where the coding and feature testing are complete, but code review, integration, performance testing, etc. are delayed so they can be done more efficiently for multiple features at a time. This is fine provided they are completed within the same iteration – and provided the story points ARE NOT credited until they are done. Only once all steps are complete, and a strict DoD is adhered to, has value been delivered and can credit be taken (see the sketch after this list).
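
As a purely illustrative sketch of that crediting rule – the DoD items, story names and point values below are all invented – a story contributes its points to the iteration's total only when every item in the agreed Definition of Done is complete:

```python
# A hypothetical five-item Definition of Done.
DEFINITION_OF_DONE = {
    "coded", "feature tested", "code reviewed",
    "integrated", "performance tested",
}

stories = [
    {"name": "login",  "points": 5,
     "done": {"coded", "feature tested", "code reviewed",
              "integrated", "performance tested"}},
    {"name": "search", "points": 8,
     "done": {"coded", "feature tested"}},  # review and integration deferred
]

def credited_points(stories):
    """Credit a story's points only when every DoD item is complete."""
    return sum(s["points"] for s in stories
               if DEFINITION_OF_DONE <= s["done"])  # subset check

print(credited_points(stories))  # 5 -- 'search' earns no credit yet
```

Under this rule a partially finished story is worth exactly nothing to the iteration's total, which is what keeps the reported progress honest.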

As focus shifts from management by effort to management by value, and as iteration costs fall through automated testing, builds, integration and deployment, delivering real value in the form of potentially releasable software becomes ever more achievable – and brings much-needed transparency.