Risk undermines predictability in software development – every development effort includes some unknowns. Some development is relatively risk free – for example, writing another driver that is 90% the same as one done previously, just using a different interface signature. But most efforts involve considerable risk – technology you haven’t used before, a new domain, or a backend system you’ve never had to integrate with. Often these unknown aspects are left too late in the project – we naturally tend to do the easiest things first, which gives us a feeling of progress and puts off the ‘hard’ work till another day. But this approach just stores up risk in the project. We often see it at the end of a project or release cycle, when the problems getting our system working only appear as we try to integrate all the individual bits, or finally tackle that tricky feature we’ve been avoiding. That’s when the risk in the project attacks our schedule – so many projects seem to be progressing fine until the last 10-20%, when the slippages begin to show.
Agile and lean attack the risk before it can attack you – by including risk as well as value in your prioritisation strategy, risky bits are addressed early in the project. And by insisting that potentially releasable, working software is delivered from every iteration/sprint, you ensure that risk is dealt with as early as possible.
While plan-driven, waterfall methods attempt to improve predictability by ‘planning’ away risk, incremental approaches like agile and Kanban improve it by attacking risk early in the cycle. They swap what can be an illusion of predictability for a more pragmatic approach to managing risk.
The Agile Tour is coming to Dublin October 14th – great chance to network with other local practitioners, get feedback on your experiences with agile and learn from others. I’ll be talking about adopting agile at the enterprise level – how do you justify it, what does it mean for the organisation and how do you develop a cohesive adoption strategy.
UPDATE: Agile Tour event was a great success! So much so we’re considering running it again in other cities around Ireland.
My presentation slides are available under a Creative Commons license.
In a series of posts I’m examining some of the claimed benefits of agile methods – are they justified? Here are the posts so far:
- My first post looked at the cost of development with agile
- The second discussed speed.
- The third addressed quality.
- The fourth looked at claims of un-predictability of agile.
This post will examine the transparency of agile vs. other methods. The main contention is that because agile delivers potentially releasable, working software at the end of each iteration, there is implicit visibility of actual progress in delivering business value – there is no need to rely on other metrics derived from the process, such as lines of code, defect counts, hours worked, story points executed or features coded.
In waterfall or ad-hoc development, there are no iterations (other than major milestones such as a product release) where the real value delivered can be measured – therefore, proxy measurements like those mentioned above are necessary. But there are several issues with using these:
- The numbers can be ‘gamed’: The old adage ‘what gets measured gets managed’ is well recognised, not only by managers but by development teams too. As managers try to manage the team by measuring them, the team will often try to manage the managers through those same measurements! In effect, the numbers may not reflect the full picture where there are delays or other issues – these may not become apparent until it’s too late to react to them.
- Many metrics are confined to measuring inputs, when it’s output that is of more interest: Managing a project based on the effort expended, rather than the value generated, is always going to lead to problems. Traditional project management focuses on time, resource and cost expenditure. Although the hours spent coding, the lines of code generated, the defects found, etc. may all be linked to the value generated, they can be highly unreliable in this regard, and are often just downright misleading.
- Defining, collecting and reporting on these derived metrics can be pretty time consuming. There are many project and portfolio managers who spend the majority of their time on such work.
- Because these metrics are intrinsically linked to the ‘plan’, it becomes more difficult to measure them if the plan changes. For example, if I plan 100 hours work for a feature, and a requirements change means it takes just 50 hours, how do I account for that in my metrics – when we deliver the feature, are we running behind?
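The last point above can be made concrete with a little arithmetic. Here is a minimal sketch – the function names and numbers are purely illustrative, not from any real project tool – of the same project state scored by effort consumed versus by value delivered:

```python
# Hypothetical sketch: the 100-hours-planned / 50-hours-actual scenario
# scored two ways. All names and figures are illustrative.

def progress_by_effort(hours_spent, hours_planned):
    """Proxy metric: fraction of planned effort consumed."""
    return hours_spent / hours_planned

def progress_by_value(features_delivered, features_planned):
    """Direct metric: fraction of planned features actually delivered."""
    return features_delivered / features_planned

# A requirements change means the feature took 50 of the 100 planned hours.
effort = progress_by_effort(hours_spent=50, hours_planned=100)
value = progress_by_value(features_delivered=1, features_planned=1)

print(f"Effort-based progress: {effort:.0%}")  # looks like we're only halfway
print(f"Value-based progress:  {value:.0%}")   # the feature is actually done
```

The effort-based number suggests the work is half finished; the value-based number shows the feature is done – the plan changed, and only one of the two metrics coped with that.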
There can be little argument that the most direct, reliable measure of progress is business value delivered, normally in the form of value to the customer. But there are some things to watch for when moving in this direction:
- Output vs. Outcome: Even by delivering working software on a regular basis, there is no guarantee it is of VALUE. Measuring outPUT may just drive faster, more regular delivery of software with little value. It is the outCOME we should ideally measure – the actual value derived from the software. But this can’t be reliably measured until the product is on the market or in use – a conundrum for sure.
- Iterative methods like RUP deliver features in increments – however, there isn’t the focus on delivering working, POTENTIALLY RELEASABLE software at each iteration – therefore proxy measurements must still be substituted for measures of real VALUE – the software hasn’t any value until it’s working.
- This approach underscores the importance of the ‘Definition of Done’ (DoD) in agile – the development team must adopt an agreed definition that really does result in potentially releasable software at the end of each iteration. I often see iterations where the coding and feature testing are complete, but code review, integration, performance testing, etc. are delayed so they can be done more efficiently for multiple features at a time. This is fine provided they are completed within the same iteration – and provided the story points ARE NOT credited until they are done. Only after all steps have been completed, and a strict DoD adhered to, has value been delivered and can credit be taken.
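The strict-DoD crediting rule above can be sketched in a few lines – the step names, story names and point values here are hypothetical, not from any real backlog tool:

```python
# Illustrative sketch of a strict Definition of Done: story points are
# credited only when every agreed step is complete. Steps and stories
# are hypothetical examples.

DEFINITION_OF_DONE = {"coded", "feature-tested", "code-reviewed",
                      "integrated", "performance-tested"}

stories = [
    {"name": "login", "points": 5,
     "steps_done": {"coded", "feature-tested", "code-reviewed",
                    "integrated", "performance-tested"}},
    {"name": "search", "points": 8,
     "steps_done": {"coded", "feature-tested"}},  # review/integration deferred
]

def credited_points(stories, dod):
    """Only stories meeting the full DoD count towards delivered value."""
    return sum(s["points"] for s in stories if dod <= s["steps_done"])

print(credited_points(stories, DEFINITION_OF_DONE))  # only 'login' counts: 5
```

The ‘search’ story, however nearly finished, contributes nothing until every DoD step is done – that is exactly the discipline that keeps the progress figure honest.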
As focus shifts from management by effort to management by value, and as iteration costs decrease through automated test, build, integration and deployment, delivering real value, in the form of potentially releasable software, becomes more achievable and leads to much needed transparency.
In a series of posts I’m going to examine some of the claimed benefits of agile methods – are they justified? My first post looked at the cost of development with agile, while the second discussed speed. Here we address quality. Commentators often say agile methods are not suitable for life- or mission-critical systems – for some reason they believe the required quality cannot be delivered reliably using agile. But there is a growing opinion that agile methods can deliver even better quality than planned approaches. I believe it’s not anything inherent in agile methods that can lead to lower quality, rather it is a lack of discipline in applying the method. Discipline is required to ensure high quality in any method, agile or not. Unfortunately agile is perceived by some (unjustly) as not demanding discipline, and hence not ensuring consistent quality. Here are some reasons I think an informed, mindful and disciplined agile method can be considered more reliable in terms of quality than planned methods:
- Quality is built in – not added afterwards: Iterative development encourages continuous testing – at a minimum every iteration. In agile methods with automated tests, this is increased to daily, or even hourly, or every time the code base changes. With test-driven development, tests are developed and executed even before the development begins. This encourages quality coding from the outset, and the repeated and up-front emphasis on testing engenders a quality culture in the team.
- Waterfall methods separate responsibility for development and test – they encourage an antagonistic relationship between developers under pressure to churn out features and QA who have the lonely job of policing the quality of the system.
- QA is normally the last major task in a waterfall model, following requirements, analysis, design and development. Therefore, it is usually conducted in limited time and under severe pressure at the end of the project – conditions not conducive to rigorous and insightful test.
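The test-first cycle mentioned in the first bullet above can be sketched very simply – `parse_version` is a hypothetical example function, invented here purely to show the red/green rhythm:

```python
# A minimal test-driven development sketch. The test below was conceptually
# written FIRST: run against a missing or stubbed implementation it fails
# ("red"), and only then is code written to make it pass ("green").

def test_parses_three_components():
    # Step 1 ("red"): this assertion exists before the implementation does.
    assert parse_version("2.10.3") == (2, 10, 3)

def parse_version(s):
    # Step 2 ("green"): the simplest implementation that passes the test.
    major, minor, patch = s.split(".")
    return int(major), int(minor), int(patch)

test_parses_three_components()  # passes now - quality built in from the start
```

The point isn’t the trivial function – it’s the order of events: the test defines ‘done’ before a line of production code is written, which is quality designed in rather than inspected in afterwards.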
But quality is not confined to the coded implementation itself:
- The quality of requirements is improved through close customer collaboration and face-to-face interaction. Detailed requirements are determined only when required, just before implementation, and therefore involve less ‘future gazing’. Requirements are described in a user-centric fashion, and can be better understood by the customer than the more technology-centric descriptions usual in traditional methods.
- The quality of the plan, and therefore the predictability of the project, is improved through continuous replanning for each iteration, by addressing high importance and high risk items early on (eg integration) and by the transparency offered by measuring progress through ‘done’ features and the value they represent to the business.
- The quality of experience for stakeholders is improved: customers who get what they want earlier, sponsors who get happier customers, product management who get better visibility and more options for managing the development, and developers and testers who get a more motivating and sustainable work experience.
Taken together, I believe agile can deliver better quality software than planned methods. Agile is rooted in lean thinking, a set of philosophical tools that helped Japanese companies reach new levels of manufacturing and product development quality over several decades. However, discipline in their application, as with any method, is not optional.
In a series of posts I’m going to examine some of the claimed benefits of agile methods – are they justified? My first post looked at the cost of development with agile. Here we address the speed of development.
Agile methods are often portrayed as delivering software faster than traditional, planned methods. Here are some of the ways agile gets value to market faster:
- Note I use the word ‘value’ above rather than software – traditional methods don’t deliver any value to the customer until the entire product release is completed. But through incremental delivery, agile delivers the most valuable software features much earlier than this.
- Through timeboxing development, agile facilitates concurrent design, test and development allowing the entire end to end process be radically compressed.
- By breaking requirements into small ‘user stories’ it eliminates ‘batch dependencies’, where one feature can’t be completed or released until others in the requirement are also complete – the value ‘flows’.
- Delivering the highest priority items first often means features included ‘just-in-case’ are never developed – they are superseded by new higher priority items as the business context evolves.
- Using smaller, evenly sized user stories and short iterations reduces variability and thereby queuing times in complex systems such as software development teams
- Methods such as Scrum & XP allow the team to work faster by explicitly eliminating distractions and interruptions – for example, in Scrum the ScrumMaster is tasked with shielding the team from interruptions and ensuring the sprint backlog is not changed in any way during the sprint. Other sources of distraction, such as being assigned to multiple projects at the same time, and changing team composition, are discouraged.
- Finally, although there is an up-front cost in providing automated test, continuous integration and other automated processes, these save time over the duration of the project and allow the team to focus on delivering value with greater speed.
In a series of posts I’m going to critically review the common value propositions of agile – do the claims stand up to scrutiny?
Claim 1: Agile makes for cheaper software.
There are a number of justifications for this claim.
- No more kitchen sink – with waterfall, a common pattern is the tendency of the customer to squeeze in all possible requirements at the beginning of the process, since the ‘change control process’ will make it very difficult to add requirements afterwards, and any changes would give the development team a justification to slip the schedule. So lots of features, poorly understood and unlikely to be needed, are included up front. This leads to more complexity in the design, which now has to take into account more features; it delays the start of development, as it takes longer to get signed-off requirements; and it puts features with low or even no current business value on an equal priority for delivery with critical functions (since all must be delivered in a big release). The software delivered in the end therefore has more functionality than it would under agile – and it costs more to produce.
- Invest only in artifacts with business value: Agile values working software over documentation, but also values documentation with business value over that which is merely used to ‘run the project’. So time spent developing requirements specs, test plan documents, status reports, etc. can be spent instead on great user guides, operations guides, regression test suites, etc.
- Lower Opportunity Costs: Because agile delivers value earlier, it reduces lost opportunity costs and enables the business to address opportunities in the market more quickly.
- Lower Product Lifetime Costs: The up-front investment in continuous integration, automated test suites and refactored, well-structured code has been shown to be rewarded several times over in reduced ongoing maintenance costs throughout the software’s life, even extending that life beyond the point where ‘dirtier’ code becomes uneconomical to maintain.
- Delayed Investment: By returning business value from early in the development lifetime, an earlier return on the investment is realised, and further investment can be delayed until it’s really necessary.
- Global vs. Local Optimisation: A key lean principle is global over local optimisation. Large batches rather than continuous flow, specialisation rather than skills redundancy, and other traditional tactics to increase resource utilisation aim to achieve optimisation of a particular task set such as development or testing. But it is the optimisation of the whole that is important, and agile cuts costs by focusing on this global optimisation.
Are there other ways agile cuts costs? If you know more please share them…
Next blog I’ll look at how agile can speed time to value.