Attack Risk before it Attacks You!

Risk undermines predictability in software development – every development effort includes some unknowns. Some development is relatively risk free – for example, writing another driver that is 90% the same as one done previously, just using a different interface signature. But most projects involve considerable risk – technology you haven’t used before, a new domain, or a backend system you’ve never had to integrate with. Often these unknown aspects are left until late in the project – we naturally tend to do the easiest things first, which gives us a feeling of progress and puts off the ‘hard’ work till another day. But this approach just stores up risk in the project. We often see the consequences at the end of a project or release cycle, when the problems getting our system working only appear as we try to integrate all the individual bits, or finally tackle that tricky feature we’ve been avoiding. That’s when the risk in the project attacks our schedule – so many projects seem to be progressing fine until the last 10-20%, when the slippages begin to show.
Agile and lean attack the risk before it can attack you – by including risk as well as value in your prioritisation strategy, the risky bits are addressed early in the project. And by insisting that potentially releasable, working software is delivered from every iteration/sprint, you ensure that risk is dealt with as early as possible.
While plan-driven, waterfall methods attempt to improve predictability by ‘planning’ away risk, incremental approaches like agile and Kanban improve it by attacking risk early in the cycle. They swap what can be an illusion of predictability for a more pragmatic approach to managing risk.

Agile Tour 2010 comes to Dublin

The Agile Tour is coming to Dublin on October 14th – a great chance to network with other local practitioners, get feedback on your experiences with agile and learn from others. I’ll be talking about adopting agile at the enterprise level – how do you justify it, what does it mean for the organisation, and how do you develop a cohesive adoption strategy?

UPDATE: The Agile Tour event was a great success! So much so that we’re considering running it again in other cities around Ireland.

My presentation slides are available under a Creative Commons License.

Agile is more Transparent

In a series of posts I’m examining some of the claimed benefits of agile methods – are they justified? Here are the posts so far:

  • My first post looked at the cost of development with agile
  • The second discussed speed.
  • The third addressed quality.
  • The fourth looked at claims of the unpredictability of agile.

This post will examine the transparency of agile vs. other methods. The main contention is that because agile delivers potentially releasable, working software at the end of each iteration, there is implicit visibility of actual progress in delivering business value – there is no need to rely on other metrics derived from the process, such as lines of code, defect counts, hours worked, story points executed or features coded.

In waterfall or ad-hoc development, there are no iterations (other than major milestones such as a product release) where the real value delivered can be measured – therefore, proxy measurements like those mentioned above are necessary. But there are several issues with using these:

  • The numbers can be ‘gamed’: The old adage ‘what gets measured gets managed’ is well recognised, not only by managers but by development teams too. As managers try to manage the team by measuring them, the team will often try to manage the managers through those same measurements! In effect, the numbers may not reflect the full picture where there are delays or other issues – these may not become apparent until it’s too late to react to them.
  • Many metrics are confined to measuring inputs, when it’s the output that is of more interest: Managing a project based on the effort expended, rather than the value generated, is always going to lead to problems. Traditional project management focuses on time, resources and cost expenditure. Although the hours spent coding, the lines of code generated, the defects found, etc. may all be linked to the value generated, they are highly unreliable in this regard, and often downright misleading.
  • Defining, collecting and reporting on these derived metrics can be pretty time-consuming. There are many project and portfolio managers who spend the majority of their time on such work.
  • Because these metrics are intrinsically linked to the ‘plan’, it becomes more difficult to measure them if the plan changes. For example, if I plan 100 hours work for a feature, and a requirements change means it takes just 50 hours, how do I account for that in my metrics – when we deliver the feature, are we running behind?

There can be little argument that the most direct, reliable measure of progress is business value delivered, normally in the form of value to the customer. But there are some things to watch for when moving in this direction:

  • Output vs. Outcome: Even by delivering working software on a regular basis, there is no guarantee it is of VALUE. Measuring outPUT may just drive faster, more regular delivery of software with little value. It is the outCOME we should ideally measure – the actual value derived from the software. But this can’t be reliably measured until the product is on the market or in use – a conundrum for sure.
  • Iterative methods like RUP deliver features in increments – however, there isn’t the same focus on delivering working, POTENTIALLY RELEASABLE software at each iteration – therefore proxy measurements must still be substituted for measures of real VALUE – the software hasn’t any value until it’s working.
  • This approach underscores the importance of the ‘Definition of Done’ (DoD) in agile – the development team must adopt an agreed definition that really does result in potentially releasable software at the end of each iteration. I often see iterations where the coding and feature testing are complete, but code review, integration, performance testing, etc. are delayed so they can be done more efficiently for multiple features at a time. This is fine as long as they are completed within the same iteration – and as long as the story points are NOT credited until they are done. Only after all steps are complete, and a strict DoD is adhered to, has value been delivered and can credit be taken.
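The ‘no credit until done’ rule in the last bullet can be sketched as a simple gate. This is a minimal illustration, not any real tool’s model – the DoD step names and the story structure are invented for the example:

```python
# Hypothetical sketch of a Definition-of-Done gate: a story's points
# are credited only when EVERY DoD step is complete. Step names and
# story fields are illustrative assumptions, not from any real tool.
DOD = {"coded", "feature tested", "code reviewed", "integrated", "performance tested"}

def credited_points(stories):
    """Sum the points of stories that satisfy the full DoD."""
    return sum(s["points"] for s in stories if DOD <= s["done"])

sprint = [
    {"points": 5, "done": set(DOD)},                     # truly done: counts
    {"points": 3, "done": {"coded", "feature tested"}},  # review etc. deferred: no credit
]
print(credited_points(sprint))  # → 5
```

The design point is that a story contributes zero value until it clears every step – partial completion is never pro-rated.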

As focus shifts from management by effort to management by value, and as iteration costs decrease through automated test, build, integration and deployment, delivering real value, in the form of potentially releasable software, becomes more achievable and leads to much needed transparency.

Agile is more predictable

In a series of posts I’m going to examine some of the claimed benefits of agile methods – are they justified? My first post looked at the cost of development with agile, the second discussed speed, while the third addressed quality. Here I look at claims of the unpredictability of agile.

What do we mean when we say agile methods sacrifice predictability for adaptability? Here I want to explore this commonly held belief – is it true?

In this context, predictability normally means the ability to deliver to a predetermined plan – predictable from the customer’s point of view. The scope agreed is delivered with the necessary quality at the time and cost agreed.

But research shows that waterfall very rarely achieves this objective – in fact, the record for plan-driven approaches is woeful! The now-famous Standish CHAOS report states that 31% of projects were cancelled before completion, while in 53% costs nearly doubled the original budget. In fact, only 16% came in on time and on budget. This report dates from 1995 – when plan-driven development was well established and agile methods had yet to make their appearance.

I would argue that in contrast agile methods bring predictability in several ways:

1) Timeboxed – Because agile treats scope rather than time as variable, ‘something’ will be delivered at the pre-determined milestone – the schedule does not slip.

2) Agile addresses high risk items early, like testing and end to end integration. These often prove to be the reasons plan-driven methods are so un-predictable.

3) Even if some scope is not delivered at the planned milestone, agile uses prioritisation by value to ensure the most important features are delivered (45% of features are never used, according to the same Standish report).

4) Agile borrows from queueing theory to devise techniques that reduce variability in the development process – smoothing workflow through small, equally sized user stories greatly reduces queueing time and bottleneck formation, delivering more reliable throughput.

5) By delivering potentially releasable code in each iteration, agile provides more visibility into real progress compared to the plan-driven approaches – this provides a more reliable basis for stakeholders to revise plans based on the reality of the project rather than any now incorrect plan.

6) Agile methods use inspect and adapt to adjust plans to emerging reality – regular reality checks mean more reliable predictions of milestone deliveries are possible.  In systems engineering terminology, such closed-loop systems are less reliant on every component of the system working in order to be predictable – instead they can compensate for unreliable components.

7) Agile methods encourage parallel work on various tasks and user stories – rather than analysis preceding development followed by test, these are performed concurrently. Again, queueing theory shows us that this greatly reduces the variability of the process, thereby increasing predictability.
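The queueing-theory claims in points 4 and 7 can be illustrated with a toy simulation. This is a sketch under invented assumptions (a single-server FIFO queue, random arrivals, mean ‘story size’ of 1.0, roughly 80% utilisation), not a model of any real team:

```python
import random

# Toy single-server work queue: with the SAME average story size and
# arrival rate, highly variable story sizes produce much longer waits
# than small, evenly sized stories. All numbers are illustrative.
def mean_wait(service_sampler, n_jobs=20000, arrival_rate=0.8, seed=7):
    rng = random.Random(seed)
    arrival, server_free, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(arrival_rate)    # random arrivals
        start = max(arrival, server_free)           # queue if server busy
        total_wait += start - arrival
        server_free = start + service_sampler(rng)  # work the story
    return total_wait / n_jobs

# Both cases have mean story size 1.0, so the same average load.
uneven = mean_wait(lambda r: r.expovariate(1.0))  # highly variable sizes
even   = mean_wait(lambda r: 1.0)                 # equal-sized stories
print(f"mean wait – uneven: {uneven:.2f}, even: {even:.2f}")
```

At this load, classic queueing results (e.g. the Pollaczek–Khinchine formula) predict the evenly sized case waits roughly half as long, and the simulation shows the same large gap.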

Plan-driven methods can lend an illusion of control and predictability – and can serve political or covert roles by avoiding ‘whole-team’ accountability. But both experience and theory highlight that predictability is, contrary to common belief, not their strong point.

Agile delivers better Quality

In a series of posts I’m going to examine some of the claimed benefits of agile methods – are they justified? My first post looked at the cost of development with agile, while the second discussed speed. Here we address quality. Commentators often say agile methods are not suitable for life- or mission-critical systems – for some reason they believe the required quality cannot be delivered reliably using agile. But there is a growing opinion that agile methods can deliver even better quality than planned approaches. I believe it is not anything inherent in agile methods that leads to lower quality; rather, it is a lack of discipline in applying the method. Discipline is required to ensure high quality in any method, agile or not. Unfortunately, agile is perceived by some (unjustly) as not demanding discipline, and hence not ensuring consistent quality. Here are some reasons I think an informed, mindful and disciplined agile method can be considered more reliable in terms of quality than planned methods:

  • Quality is built in – not added afterwards: Iterative development encourages continuous testing – at a minimum, every iteration. In agile methods with automated tests, this increases to daily, hourly, or even every time the code base changes. With test-driven development, tests are written and executed even before development begins. This encourages quality coding from the outset, and the repeated, up-front emphasis on testing engenders a quality culture in the team.
  • Waterfall methods separate responsibility for development and test – they encourage an antagonistic relationship between developers, under pressure to churn out features, and QA, who have the lonely job of policing the quality of the system.
  • QA is normally the last major task in a waterfall model, following requirements, analysis, design and development. Therefore, it is usually conducted in limited time and under severe pressure at the end of the project – conditions not conducive to rigorous and insightful test.

But quality is not confined to the coded implementation itself:

  • The quality of requirements is improved through close customer collaboration and face-to-face interaction. Detailed requirements are determined only when required, just before implementation, and therefore involve less ‘future gazing’. Requirements are described in a user-centric fashion, and can be better understood by the customer than the more technology-centric descriptions usual in traditional methods.
  • The quality of the plan, and therefore the predictability of the project, is improved through continuous replanning for each iteration, by addressing high-importance and high-risk items early on (e.g. integration), and by the transparency offered by measuring progress through ‘done’ features and the value they represent to the business.
  • The quality of experience for stakeholders is improved: customers who get what they want earlier, sponsors who get happier customers, product management who get better visibility and more options for managing the development, and developers and testers who get a more motivating and sustainable work experience.

Taken together, I believe agile can deliver better quality software than planned methods. Agile is rooted in lean thinking, a set of philosophical tools that helped Japanese companies reach new levels of manufacturing and product development quality over several decades.  However, discipline in their application, as with any method, is not optional.

Agile is faster…

In a series of posts I’m going to examine some of the claimed benefits of agile methods – are they justified? My first post looked at the cost of development with agile. Here we address the speed of development.

Agile methods are often portrayed as delivering software faster than traditional, planned methods. Here are some of the ways agile gets value to market faster:

  • Note I use the word ‘value’ above rather than software – traditional methods don’t deliver any value to the customer until the entire product release is completed. But through incremental delivery, agile delivers the most valuable software features much earlier than this.
  • Through timeboxing development, agile facilitates concurrent design, test and development, allowing the entire end-to-end process to be radically compressed.
  • By breaking requirements into small ‘user stories’ it eliminates ‘batch dependencies’, where one feature can’t be completed or released until others in the requirement are also complete – the value ‘flows’.
  • Delivering the highest priority items first often means features included ‘just-in-case’ are never developed – they are superseded by new, higher priority items as the business context evolves.
  • Using smaller, evenly sized user stories and short iterations reduces variability, and thereby queueing times, in complex systems such as software development teams.
  • Methods such as Scrum and XP allow the team to work faster by explicitly eliminating distractions and interruptions – for example, in Scrum the ScrumMaster is tasked with shielding the team from interruptions and ensuring the sprint backlog is not changed in any way during the sprint. Other sources of distraction, such as being assigned to multiple projects at the same time or changing team composition, are discouraged.
  • Finally, although there is an up-front cost in providing automated test, continuous integration and other automated processes, these save time over the duration of the project and allow the team to focus on delivering value with greater speed.