Attack Risk before it Attacks You!

Risk undermines predictability in software development – every development effort includes some unknowns. Some development is relatively risk free – for example, writing another driver that is 90% the same as one done previously, just using a different interface signature. But most work involves considerable risk – technology you haven’t used before, a new domain, or a backend system you’ve never had to integrate with. Often these unknown aspects are left until late in the project – we naturally tend to do the easiest things first, because it gives us a feeling of progress and puts off the ‘hard’ work until another day. But this approach just stores up risk in the project. We often see the consequences at the end of a project or release cycle, when the problems getting our system working only appear as we try to integrate all the individual bits, or finally tackle that tricky feature we’ve been avoiding. That’s when the risk in the project attacks our schedule – so many projects seem to be progressing fine until the last 10-20%, when the slippages begin to show.
Agile and lean attack risk before it can attack you – by including risk as well as value in your prioritisation strategy, the risky bits are addressed early in the project. And by insisting that potentially releasable, working software is delivered from every iteration/sprint, you ensure that risk is dealt with as early as possible.
While plan-driven, waterfall methods attempt to improve predictability by ‘planning’ away risk, incremental approaches like agile and Kanban improve it by attacking risk early in the cycle. They swap what can be an illusion of predictability for a more pragmatic approach to managing risk.

How agile are you?

One of the big culprits in failed agile teams is the tendency to cherry-pick the practices that seem to ‘fit with the way you work’, ‘with the way we do things around here’. Agile explicitly calls for methods to be customised depending on context. But this can often be misconstrued as selecting the bits that are compatible with how you work now – leading to no fundamental change in the way you work. Examples are iterations as long as the release cycle, calling the project manager a ScrumMaster without any change in role, and considering a feature or story ‘done’ when it has been coded and passed to QA. This leads to a cargo cult adoption where the team adopts the language and some ceremonies of agile without understanding the fundamentals of how it works. No wonder the benefits are elusive…

When assessing how well teams have adopted agile methods like scrum, the approach is usually compliance-based – an evaluation of how closely the team follows the defined method, whether customised or not. There are three fundamental difficulties with this:

1) The way in which agile practices are implemented in a team has a great bearing on whether they support or constrain agility – for example, a daily stand-up meeting that spends 45 minutes getting status updates from everyone is really not going to help a self-organising team co-ordinate their actions for the day. Even a 10-minute stand-up where the three standard questions are posed can be ineffective if the team doesn’t engage and feel ownership of it. Therefore, assessment by compliance evaluates, well…, compliance – not agility, which is probably what you actually want to know.

2) Since each project and team implements their development method differently (a finding consistently reported in research), and since that implementation evolves over time, using compliance as the basis for assessment hinders inter-team comparisons – akin to comparing apples and oranges. Much of the value in assessing a team is being able to benchmark and compare against other teams as a way to identify possible paths to improvement. Without the ability to compare effectively, the assessment just isn’t all that valuable.

3) Most methods already used by teams have some really good aspects. Moving to agile should preserve these (unless it replaces them with something even better). If attempts to be compliant with some textbook method cause these to be lost, then we’re really ‘throwing out the baby with the bathwater’.

To overcome these, my colleagues and I have been developing an assessment that looks at agility from first principles – regardless of what method the team is using. Of course, agility is a complex concept with many facets, such as creativity, responsiveness, simplicity and quality. By focusing on how any given method contributes to these facets, we can assess how it contributes to or detracts from agility as a whole. We can also compare very different methods, like scrum, XP or indeed waterfall. And we can make recommendations that preserve what’s valuable in what you do today while tackling the areas that promise improvement.

In another post I’ll discuss an alternate assessment technique I use – rather than assessing agility, this one identifies barriers to adoption and helps map out an adoption strategy tailored to a team, project and organisation.

Agile Tour 2010 comes to Dublin

The Agile Tour is coming to Dublin on October 14th – a great chance to network with other local practitioners, get feedback on your experiences with agile and learn from others. I’ll be talking about adopting agile at the enterprise level – how you justify it, what it means for the organisation and how you develop a cohesive adoption strategy.

UPDATE: The Agile Tour event was a great success! So much so that we’re considering running it again in other cities around Ireland.

My presentation slides are available under a Creative Commons license.

Agile is more Transparent

In a series of posts I’m examining some of the claimed benefits of agile methods – are they justified? Here are the posts so far:

  • My first post looked at the cost of development with agile
  • The second discussed speed.
  • The third addressed quality.
  • The fourth looked at claims of unpredictability in agile.

This post will examine the transparency of agile vs. other methods. The main contention is that because agile delivers potentially releasable, working software at the end of each iteration, there is implicit visibility of actual progress in delivering business value – there is no need to rely on other metrics derived from the process, such as lines of code, defect counts, hours worked, story points executed or features coded.

In waterfall or ad-hoc development, there are no iterations (other than major milestones such as a product release) where the real value delivered can be measured – therefore, proxy measurements like those mentioned above are necessary. But there are several issues with using these:

  • The numbers can be ‘gamed’: The old adage ‘what gets measured gets managed’ is well recognised, not only by managers but by development teams too. As managers try to manage the team by measuring them, the team will often try to manage the managers through those same measurements! In effect, the numbers may not reflect the full picture where there are delays or other issues – these may still not become apparent until it’s too late to react to them.
  • Many metrics are confined to measuring inputs, when it is outputs that are of more interest: Managing a project based on the effort expended, rather than the value generated, is always going to lead to problems. Traditional project management focuses on time, resources and cost expenditure. Although the hours spent coding, the lines of code generated, the defects found, etc. may all be linked to the value generated, they can be highly unreliable in this regard, and are often just downright misleading.
  • Defining, collecting and reporting on these derived metrics can be very time-consuming. There are many project and portfolio managers who spend the majority of their time on such work.
  • Because these metrics are intrinsically linked to the ‘plan’, they become more difficult to interpret if the plan changes. For example, if I plan 100 hours of work for a feature, and a requirements change means it takes just 50 hours, how do I account for that in my metrics – when we deliver the feature, are we ahead or behind?

There can be little argument that the most direct, reliable measure of progress is business value delivered, normally in the form of value to the customer. But there are some things to watch for when moving in this direction:

  • Output vs. Outcome: Even by delivering working software on a regular basis, there is no guarantee it is of VALUE. Measuring outPUT may just drive faster, more regular delivery of software with little value. It is the outCOME we should ideally measure – the actual value derived from the software. But this can’t be reliably measured until the product is on the market or in use – a conundrum for sure.
  • Iterative methods like RUP deliver features in increments – however, there isn’t the same focus on delivering working, POTENTIALLY RELEASABLE software at each iteration – therefore proxy measurements must still be substituted for measures of real VALUE, because the software has no value until it’s working.
  • This approach underscores the importance of the ‘Definition of Done’ (DoD) in agile – the development team must adopt an agreed definition that really does result in potentially releasable software at the end of each iteration. I often see iterations where the coding and feature testing are complete, but code review, integration, performance testing, etc. are delayed so they can be done more efficiently for multiple features at a time. This is fine as long as they are completed within the same iteration – and as long as the story points are NOT credited until they are done. Only after all steps have been completed, and a strict DoD adhered to, has value been delivered and can credit be taken (the sketch below illustrates the idea).
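
To make the DoD point concrete, here is a minimal sketch in Python. The checklist items, story names and point values are hypothetical – every team agrees its own DoD – but the key behaviour is that a story contributes nothing to the iteration’s delivered value until every item is satisfied.

```python
from dataclasses import dataclass, field

# Hypothetical Definition of Done - illustrative only; each team agrees its own.
DEFINITION_OF_DONE = {"coded", "feature tested", "code reviewed",
                      "integrated", "performance tested"}

@dataclass
class Story:
    name: str
    points: int
    completed_steps: set = field(default_factory=set)

    def is_done(self) -> bool:
        # 'Done' means every DoD item is satisfied - not just coded and passed to QA.
        return DEFINITION_OF_DONE <= self.completed_steps

def credited_points(stories):
    # Only fully 'done' stories count towards the iteration's delivered value.
    return sum(s.points for s in stories if s.is_done())

stories = [
    Story("export report", 5, {"coded", "feature tested", "code reviewed",
                               "integrated", "performance tested"}),
    Story("audit log", 3, {"coded", "feature tested"}),  # review/integration pending
]
print(credited_points(stories))  # -> 5, not 8
```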

As focus shifts from management by effort to management by value, and as iteration costs decrease through automated test, build, integration and deployment, delivering real value, in the form of potentially releasable software, becomes more achievable and leads to much needed transparency.

Agile is more predictable

In a series of posts I’m going to examine some of the claimed benefits of agile methods – are they justified? My first post looked at the cost of development with agile, the second discussed speed, while the third addressed quality. Here I look at claims that agile is unpredictable.

What do we mean when we say agile methods sacrifice predictability for adaptability? Here I want to explore this commonly held belief – is it true?
In this context, predictability normally means the ability to deliver to a predetermined plan – predictable from the customer’s point of view. The scope agreed is delivered with the necessary quality at the time and cost agreed.

But research shows that waterfall very rarely achieves this objective – in fact the record for plan-driven approaches is woeful! A now famous Standish report states that 31% of projects were cancelled before completion, while in 53% of projects costs nearly doubled the original budget. In fact only 16% came in on time and on budget. This report was from 1995 – when plan-driven development was well established and agile methods had yet to make their appearance.

I would argue that in contrast agile methods bring predictability in several ways:

1) Timeboxing – Because agile treats scope, rather than time, as the variable, ‘something’ will be delivered at the predetermined milestone – the schedule does not slip.

2) Agile addresses high risk items early, like testing and end-to-end integration. These often prove to be the reasons plan-driven methods are so unpredictable.

3) Even if some scope is not delivered at the planned milestone, agile uses prioritisation by value to ensure the most important features are delivered (Standish research suggests that around 45% of features are never used).

4) Agile borrows from queueing theory to devise techniques that reduce variability in the development process – smoothing workflow through small, evenly sized user stories greatly reduces queuing time and bottleneck formation, delivering more reliable throughput (a standard queueing result after this list makes the link concrete).

5) By delivering potentially releasable code in each iteration, agile provides more visibility into real progress than plan-driven approaches – this gives stakeholders a more reliable basis for revising plans based on the reality of the project rather than a now-outdated plan.

6) Agile methods use inspect and adapt to adjust plans to emerging reality – regular reality checks mean more reliable predictions of milestone deliveries are possible.  In systems engineering terminology, such closed-loop systems are less reliant on every component of the system working in order to be predictable – instead they can compensate for unreliable components.

7) Agile methods encourage parallel work on tasks and user stories – rather than analysis preceding development followed by test, these activities are performed concurrently. Again, queueing theory shows that this greatly reduces the variability of the process, thereby increasing predictability.
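
Points 4 and 7 above lean on queueing theory, so it is worth making the link explicit. A standard result – Kingman’s approximation for a single-server queue, not anything specific to agile – relates expected waiting time to utilisation and variability:

$$
W_q \;\approx\; \left(\frac{\rho}{1-\rho}\right)\left(\frac{C_a^2 + C_s^2}{2}\right)\tau
$$

Here $W_q$ is the expected time a work item queues, $\rho$ is utilisation, $\tau$ is the mean service time, and $C_a$ and $C_s$ are the coefficients of variation of arrival and service times. Small, evenly sized stories shrink $C_s$ (and smoother flow shrinks $C_a$), which cuts queuing time directly; the $\rho/(1-\rho)$ factor means the benefit is largest when the team is heavily loaded.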

Plan-driven methods can lend an illusion of control and predictability – and can serve political or covert purposes by avoiding ‘whole-team’ accountability. But both experience and theory show that predictability is, contrary to common belief, not their strong point.

The Agile Innovation Dilemma

One of the seminal events for the development of the agile software development movement was the 1986 publication in HBR of “The New New Product Development Game” by Nonaka and Takeuchi. Describing lean production principles applied to new product development, the paper introduced the metaphor of a rugby team, where a clear goal, overlapping skill sets and joint accountability allow teams to dynamically adapt and self-organise to achieve their objectives despite unforeseen setbacks and challenges. From this, the term scrum was used by Sutherland and Schwaber in 1995 to describe an incremental, team-based approach to software development. In this way, agile development and innovating new products share a common lineage.
Agile methods have long been advocated as a support for innovation. They explicitly call for self-reflection and improvement of the method through retrospectives. Close customer contact and an understanding of the business problem to be solved can help the development team arrive at more innovative solutions than if they were coding to a static functional specification. Proponents have written of ‘hyper-productive’ scrum exhibiting ‘punctuated equilibrium’ leading to discontinuous or radical innovations. However, many agile practitioners offer an alternative view – the intense focus on delivering small increments of customer-centric features in the immediate future undermines their ability to seek out alternative solutions, to ‘think outside the box’. This is what I refer to as the “agile innovation dilemma”. So what has gone wrong?

Agile methods promise, first and foremost, agility – that is, flexibility in responding to changing requirements, technology and markets. But in promoting agile, more traditional values such as productivity, quality and time to market are often to the fore. In efforts to realise these benefits, agile implementations often become exercises in micro-managing development teams, with local optimisations dominating. These show up in many of the ‘patterns of failure’ seen in agile projects, and I’ll be describing some of these in future posts. Unfortunately, they tend to militate against innovation, resulting in ‘pressure cooker’ development environments where all focus is on delivering the next increment of functionality, where the product owner is breathing down your neck, and where doing the simplest thing takes precedence over doing the ‘right’ thing.

When did you last hear of agile being introduced to enhance the innovation of a team?

Agile delivers better Quality

In a series of posts I’m going to examine some of the claimed benefits of agile methods – are they justified? My first post looked at the cost of development with agile, while the second discussed speed. Here we address quality. Commentators often say agile methods are not suitable for life- or mission-critical systems – for some reason they believe the required quality cannot be delivered reliably using agile. But there is a growing opinion that agile methods can deliver even better quality than planned approaches. I believe it is not anything inherent in agile methods that leads to lower quality; rather, it is a lack of discipline in applying the method. Discipline is required to ensure high quality in any method, agile or not. Unfortunately, agile is perceived by some (unjustly) as not demanding discipline, and hence not ensuring consistent quality. Here are some reasons I think an informed, mindful and disciplined agile method can be considered more reliable in terms of quality than planned methods:

  • Quality is built in – not added afterwards: Iterative development encourages continuous testing – at a minimum every iteration. In agile methods with automated tests, this increases to daily, or even hourly, or every time the code base changes. With test-driven development, tests are written and executed even before the implementation begins (a minimal sketch of this test-first rhythm follows this list). This encourages quality coding from the outset, and the repeated, up-front emphasis on testing engenders a quality culture in the team.
  • Waterfall methods separate responsibility for development and test – they encourage an antagonistic relationship between developers under pressure to churn out features and QA, who have the lonely job of policing the quality of the system.
  • QA is normally the last major phase in a waterfall model, following requirements, analysis, design and development. Therefore, it is usually conducted in limited time and under severe pressure at the end of the project – conditions not conducive to rigorous and insightful testing.
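
As a concrete illustration of the test-first point above, here is a minimal sketch in Python. The function name and discount rule are invented purely for illustration; the point is the order of events – the test exists (and fails) before the implementation does, and then acts as a safety net for later refactoring.

```python
# Step 1: write the test first. Running it at this point fails,
# because apply_discount does not exist yet.
def test_discount_applied_over_threshold():
    assert apply_discount(order_total=120.0) == 110.0  # 10 off orders over 100
    assert apply_discount(order_total=80.0) == 80.0    # no discount below 100

# Step 2: write the simplest implementation that makes the test pass.
def apply_discount(order_total: float) -> float:
    return order_total - 10 if order_total > 100 else order_total

# Step 3: refactor freely - the test keeps the behaviour honest.
if __name__ == "__main__":
    test_discount_applied_over_threshold()
    print("tests pass")
```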

But quality is not confined to the coded implementation itself:

  • The quality of requirements is improved through close customer collaboration and face-to-face interaction. Detailed requirements are determined only when required, just before implementation, and therefore involve less ‘future gazing’. Requirements are described in a user-centric fashion, and can be better understood by the customer than the more technology-centric descriptions usual in traditional methods.
  • The quality of the plan, and therefore the predictability of the project, is improved through continuous replanning for each iteration, by addressing high-importance and high-risk items early on (e.g. integration), and by the transparency offered by measuring progress through ‘done’ features and the value they represent to the business.
  • The quality of experience for stakeholders is improved: customers who get what they want earlier, sponsors who get happier customers, product management who get better visibility and more options for managing the development, and developers and testers who get a more motivating and sustainable work experience.

Taken together, I believe agile can deliver better quality software than planned methods. Agile is rooted in lean thinking, a set of philosophical tools that helped Japanese companies reach new levels of manufacturing and product development quality over several decades.  However, discipline in their application, as with any method, is not optional.

“Agile Methods and Cloud Computing” or “When is a Project not a Project”

“Agile Methods strive to deliver valuable, quality software faster and cheaper, while maintaining flexibility to harness change for competitive advantage.”

“Cloud Computing strives to deliver valuable, quality software faster and cheaper, while maintaining flexibility to harness change for competitive advantage.”

Coming from entirely different perspectives – one of process, the other of technology – agile and cloud share many of the same aims.

Traditional waterfall methods are based on the idea of a “project” – with a defined deliverable, objective, duration and budget. At the beginning of the project, all these dimensions are agreed and the job of the Project Manager is to “work the plan” – to strive to make emerging reality match the predetermined plan. Much has been written about the shortcomings of this approach (“the future ain’t what it used to be”) and how trying to predict and plan for future events based on past experience becomes increasingly difficult as the rate of change in the business and technology landscapes accelerates.

Agile methods emerged as an attempt to treat constant, unpredictable change in a more pragmatic way – accept that it’s going to happen and work in a way that not only delivers in turbulent environments, but even ‘embraces change for competitive advantage’. As these methods have evolved, the emphasis has increasingly moved from “projects” to “increments” – from trying to deliver big lumps of functionality in one go, to delivering little bits in a constant ‘flow’. This has been made possible by reducing the ‘transaction costs’ of testing, building and deploying systems through continuous integration, automated testing, etc. Practices like test-driven development and timeboxed iterations mean software is never far from being production ready. This means the “project” can be released, re-directed or even terminated at very short notice, while still delivering value to the customer in line with the investment made.

This move from “project”-centric to “increment”-centric development, together with the further reduction in the transaction costs of deploying to cloud-hosted platforms, makes agile and cloud particularly good bedfellows. Of course, this move away from large project-based development has profound implications for other organisational processes, and even structures. Pressure to adapt the organisation will only increase with the emergence of flexible technology platforms such as cloud computing.

So, when is a project not a project? When it’s one or more increments!

Agile is faster…

In a series of posts I’m going to examine some of the claimed benefits of agile methods – are they justified? My first post looked at the cost of development with agile. Here we address the speed of development.

Agile methods are often portrayed as delivering software faster than traditional, planned methods. Here are some of the ways agile gets value to market faster:

  • Note I use the word ‘value’ above rather than ‘software’ – traditional methods don’t deliver any value to the customer until the entire product release is completed. Through incremental delivery, agile delivers the most valuable software features much earlier than this.
  • Through timeboxing development, agile facilitates concurrent design, test and development, allowing the entire end-to-end process to be radically compressed.
  • By breaking requirements into small ‘user stories’, it eliminates ‘batch dependencies’ where one feature can’t be completed or released until others in the requirement are also complete – the value ‘flows’.
  • Delivering the highest priority items first often means features included ‘just in case’ are never developed – they are superseded by new, higher priority items as the business context evolves.
  • Using smaller, evenly sized user stories and short iterations reduces variability, and thereby queuing times, in complex systems such as software development teams (the small simulation after this list illustrates the effect).
  • Methods such as Scrum and XP allow the team to work faster by explicitly eliminating distractions and interruptions – for example, in scrum the ScrumMaster is tasked with shielding the team from interruptions and ensuring the sprint backlog is not changed in any way during the sprint. Other sources of distraction, such as being assigned to multiple projects at the same time or changing team composition, are also discouraged.
  • Finally, although there is an up-front cost in providing automated test, continuous integration and other automated processes, these save time over the duration of the project and allow the team to focus on delivering value with greater speed.
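
The point above about evenly sized stories and queuing can be illustrated with a small, self-contained simulation – a single ‘server’ (the team) working through stories that arrive at random intervals. The numbers are invented; the only thing that changes between the two runs is the variability of story size, not the average.

```python
import random

def average_wait(service_times, mean_arrival_gap=1.25):
    """Single-server FIFO queue: returns the mean time a story waits
    before work on it starts."""
    now, server_free_at, waits = 0.0, 0.0, []
    for service in service_times:
        now += random.expovariate(1.0 / mean_arrival_gap)  # next story arrives
        start = max(now, server_free_at)                    # wait if team is busy
        waits.append(start - now)
        server_free_at = start + service
    return sum(waits) / len(waits)

random.seed(1)
n = 100_000
even = [1.0] * n                                        # evenly sized stories
lumpy = [random.choice([0.2, 1.8]) for _ in range(n)]   # same mean size, high variance

print(f"evenly sized stories:    average wait ~ {average_wait(even):.2f}")
print(f"highly variable stories: average wait ~ {average_wait(lumpy):.2f}")
```

With the same average story size and the same arrival rate, the high-variance run shows noticeably longer waits – purely because of variability.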

Agile is cheaper…

In a series of posts I’m going to critically review the common value propositions of agile – do the claims stand up to scrutiny?
Claim 1: Agile makes for cheaper software.
There are a number of justifications for this claim.

  • No more kitchen sink – with waterfall, a common pattern is the customer squeezing in all possible requirements at the beginning of the process, since the ‘change control process’ will make it very difficult to add requirements afterwards, and any change would give the development team a justification to slip the schedule. So lots of features, poorly understood and unlikely to be needed, are included up front. This leads to more complexity in the design, which now has to take more features into account; it delays the start of development, as it takes longer to get requirements signed off; and it puts features with low or even no current business value on equal priority with critical functions (since all must be delivered in one big release). The software delivered in the end therefore has more functionality than it would with agile – and it costs more to produce.
  • Invest only in artifacts with business value: Agile values working software over documentation, but it also values documentation with business value over documentation merely used to ‘run the project’. So time spent developing requirements specs, test plan documents, status reports, etc. can be spent instead on great user guides, operations guides, regression test suites, etc.
  • Lower Opportunity Costs: Because agile delivers value earlier, it reduces lost opportunity costs and enables the business to address opportunities in the market more quickly.
  • Lower Product Lifetime Costs: The up-front investment in continuous integration, automated test suites and refactored, well-structured code has been shown to be repaid several times over in reduced ongoing maintenance costs throughout the software’s life, even extending that life beyond the point where ‘dirtier’ code becomes uneconomical to maintain.
  • Delayed Investment: By returning business value from early in the development lifetime, an earlier return on the investment is realised, and further investment can be delayed until it’s really necessary (a rough numeric sketch after this list illustrates the effect of earlier delivery).
  • Global vs. Local Optimisation: A key lean principle is global over local optimisation. Large batches rather than continuous flow, specialisation rather than skills redundancy, and other traditional tactics to increase resource utilisation aim to optimise a particular task set, such as development or testing. But it is the optimisation of the whole that matters, and agile cuts costs by focusing on this global optimisation.
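
The opportunity-cost and delayed-investment points above can be made tangible with a rough, back-of-the-envelope sketch. All the figures here are invented – it is the shape of the result, not the numbers, that matters: increments that ship monthly start earning (discounted) value immediately, while a big-bang release earns nothing until the end.

```python
MONTHLY_VALUE = 10_000   # value the full feature set earns per month (assumed)
DISCOUNT = 0.01          # monthly discount rate (assumed)
MONTHS = 12

def discounted(value, month):
    return value / (1 + DISCOUNT) ** month

def incremental_return():
    # One twelfth of the feature set goes live each month and keeps earning.
    total = 0.0
    for month in range(1, MONTHS + 1):
        live_fraction = month / MONTHS
        total += discounted(live_fraction * MONTHLY_VALUE, month)
    return total

def big_bang_return():
    # Nothing earns until everything ships in the final month.
    return discounted(MONTHLY_VALUE, MONTHS)

print(f"incremental delivery, value earned in year one: {incremental_return():,.0f}")
print(f"big-bang delivery, value earned in year one:    {big_bang_return():,.0f}")
```

Even before allowing for learning, reduced risk or the option to stop early, simply returning value sooner makes a material difference.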

Are there other ways agile cuts costs? If you know more please share them…

In the next post I’ll look at how agile can speed time to value.