Attack Risk before it Attacks You!

Risk undermines predictability in software development – every development effort includes some unknowns. Some development is relatively risk-free – for example, writing another driver that is 90% the same as a previous one, just using a different interface signature. But most involves considerable risk – technology you haven't used before, a new domain, or a backend system you've never had to integrate with. Often these unknown aspects are left till late in the project – we naturally tend to do the easiest things first, which gives us a feeling of progress and puts off the 'hard' work till another day. But this approach just stores up risk in the project. We often see it at the end of a project or release cycle, when the problems getting our system working only appear as we try to integrate all the individual bits, or finally tackle that tricky feature we've been avoiding. That's when the risk in the project attacks our schedule – so many projects seem to be progressing fine until the last 10-20%, when the slippages begin to show.
Agile and lean attack the risk before it can attack you – by including risk as well as value in your prioritisation strategy, risky bits are addressed early in the project. And by insisting that potentially releasable, working software is delivered every iteration/sprint, you ensure that risk is dealt with as early as possible.
While plan-driven, waterfall methods attempt to improve predictability by 'planning' away risk, incremental approaches like agile and Kanban improve it by attacking risk early in the cycle. They swap what can be an illusion of predictability for a more pragmatic approach to managing risk.

Moving from Authority to Responsibility

Lean Thinking, which underlies agile methods like Scrum and XP, has as one of its central pillars "respect for people". Agile reflects this in its emphasis on 'whole team' accountability, collaboration and self-organisation. All these factors lead agile to see team members from a more 'humanistic' point of view – they are more than just resources that can be swapped in and out of projects, the sort of 'bean-counting' mindset that gives rise to the 'mythical man-month'. Agile takes teams and individuals much more seriously, calling for long-lived, cross-functional teams that are allowed the time, latitude and autonomy to gel into a high-performing whole. This change is well summed up by the phrase 'Move from Authority to Responsibility'. Where organisational structure is based on individual authority rather than joint responsibility, it leads to fragmentation, isolation of roles, hand-offs, friction and, ultimately, poor organisational performance.

It's great to have really simple and memorable phrases to guide our day-to-day decisions, and I think this is one of those: Build Responsibility, not Authority.

How agile are you?

One of the big culprits in failed agile teams is the tendency to cherry-pick those practices that seem to 'fit with the way you work' – 'the way we do things around here'. Agile explicitly calls for methods to be customised depending on context. But often this is misconstrued as selecting the bits that are compatible with how you work now – leading to no fundamental change in the way you work. Examples are iterations as long as the release cycle, calling the project manager a ScrumMaster without any change in role, and considering a feature or story 'done' when it has been coded and passed to QA. This leads to a cargo-cult adoption where the team adopts the language and some ceremonies of agile without understanding the fundamentals of how it works. No wonder the benefits are elusive…

When assessing how well teams have adopted agile methods like Scrum, the approach is usually compliance-based – an evaluation of how closely the team follows the defined method, whether customised or not. There are three fundamental difficulties with this:

1) The way in which agile practices are implemented in a team has a great bearing on whether they support or constrain agility – for example, a daily stand-up meeting that spends 45 minutes getting status updates from everyone is really not going to help a self-organising team co-ordinate their actions for the day. Even a 10-minute stand-up where the three standard questions are posed can be ineffective if the team doesn't engage and feel ownership of it. Therefore, assessment by compliance evaluates, well…, compliance – not agility, which is probably what you actually want to know.

2) Since each project and team implements their development method differently (a consistent finding from extensive research on method tailoring), and since that implementation evolves over time, using compliance as the basis for assessment hinders inter-team comparisons – akin to comparing apples and oranges. A lot of the value in assessing a team is so you can benchmark against other teams as a way to identify possible paths to improvement. Without the ability to compare effectively, the assessment just isn't all that valuable.

3) Most methods already used by teams have some really good aspects. Moving to agile should preserve these (unless they're replaced with something even better). If attempts to be compliant with some textbook method cause these to be lost, then we're really 'throwing out the baby with the bathwater'.

To overcome these, my colleagues and I have been developing an assessment that looks at agility from first principles – regardless of what method the team is using. Of course, agility is a complex concept with many facets, such as creativity, responsiveness, simplicity and quality. By focusing on how any given method contributes to these facets, we can assess how it contributes to or detracts from agility as a whole. We can also compare very different methods, like Scrum, XP or indeed waterfall. And we can make recommendations that preserve what's valuable in what you do today while tackling those areas promising improvement.
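To make the facet idea concrete, here is a minimal sketch of how such an assessment might aggregate scores. The facet names come from the post; the 1-5 scale, the example practice and its ratings are purely illustrative assumptions, not the actual instrument.

```python
# Hypothetical facet-based agility assessment: rate how a practice
# contributes to each facet on a 1-5 scale, then average onto one
# comparable score (scale and example ratings are assumptions).
FACETS = ["creativity", "responsiveness", "simplicity", "quality"]

def agility_score(ratings):
    """Average a practice's per-facet ratings into a single comparable number."""
    return sum(ratings[f] for f in FACETS) / len(FACETS)

# A daily stand-up might score high on responsiveness but low on creativity:
daily_standup = {"creativity": 2, "responsiveness": 5,
                 "simplicity": 4, "quality": 3}
print(agility_score(daily_standup))  # 3.5
```

Because the score is method-agnostic, the same scale can be applied to a Scrum ceremony, an XP practice or a waterfall gate review, which is what makes cross-method comparison possible.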

In another post I’ll discuss an alternate assessment technique I use – rather than assessing agility, this one identifies barriers to adoption and helps map out an adoption strategy tailored to a team, project and organisation.

Losing and Finding the big picture

Working with agile teams, I see a common underlying issue emerge time and again: keeping the big picture in mind. Here are some examples:

  • Agile looks to emergent architecture – it's illogical to define architectural design at the beginning of a project if you accept that the requirements (business and technical) cannot be determined reliably up front. Attempts to do this usually lead to over-specification – to avoid any constraints emerging from the architecture, we tend to engineer a highly complex solution that caters for all eventualities. Agile has a notion of emergent design – because software is 'soft' we can change it (e.g. by refactoring) – we don't have to get it 'right first time' as in the construction industry, where the architecture metaphor came from. Although there is a cost in reworking architecture as we go along, there is also a saving in not over-specifying it up front. But agile doesn't call for no architectural design (as Christopher Alexander – the architect whose pattern languages inspired the design pattern concept – puts it: "good design requires keeping the big picture in mind"). Instead, agile looks to late elaboration, keeping our options open as long as possible and allowing us to learn as the project progresses. Remember, we know least about the project at the beginning – so why lock in our architecture prematurely? Perhaps the term 'architecture' has outlived its usefulness…
  • Agile expresses requirements as small fragments – user stories are the common basis for expressing requirements in agile projects. But user stories are explicitly independent, small and testable (see Bill Wake's INVEST criteria for effective user stories). Other approaches to agile/lean requirements are Minimum Marketable Features (MMFs), Minimum Valuable Features (MVFs) and Business Value Increments (BVIs). All of these attempt to delineate a 'manageable' slice of functionality that can be delivered in a short cycle time and will deliver value to the customer. But how these small units of functionality link to the bigger picture can get lost. How does my story contribute to this epic, or theme, or goal? Jeff Patton proposes the User Story Map as a mechanism to tie stories to the bigger picture. Another approach is to ensure stories are derived from the bigger picture – epics, themes – rather than being defined first and then grouped into larger areas of functionality.
  • Agile doesn't make it easy to scale to big projects – methods like Scrum and XP are specified at the small-team level, and it's hard to scale them to big enterprise projects (though more and more success stories are emerging). Small teams working on a sub-set of a big project can lose sight of the big picture. What's often forgotten here is that scaling problems are not specific to agile – no method makes it easy to scale system development. While traditional methods address scaling by freezing design early and relying on documents to communicate details between teams (both techniques rife with problems), agile relies on face-to-face communication (e.g. in a scrum of scrums) and continuous integration/automated test (where the code for the entire system is built, deployed and tested many times a day). From my experience, the agile approach holds far more promise, especially where the extent and rate of churn are increasing and the ability to rapidly deliver software is becoming a strategic capability.

In summary, focusing on the small bits (as agile does) does not mean losing sight of the big picture. This is not an either/or situation. We can work incrementally while keeping an eye on the big picture.

Stop Starting & Start Finishing

Words are worlds. Using different words to differentiate things can mask their many similarities. Distinguishing between agile and lean software development methods is a case in point – these are seen by many as distinctly different approaches to organising development, when in fact they are underpinned by the same philosophies and theories, and share more similarities than differences.

But pigeon-holing things with a label can also be a powerful tool for gaining focus and for communicating. Take the title of this post – Stop Starting and Start Finishing – for me this is a powerful phrase for communicating the essence of lean: reducing WIP (work in progress). As an agile coach I've found I can talk for hours about kanban, WIP, value streams, flow, pull, etc. But the 'aha' moment often comes when I use this phrase. I think its power lies not only in exposing the heart of lean in common language, but in acting as a really easy decision rule for the team – it captures the key action the team must take to start reducing that pile of WIP, get value flowing, get to DONE, move towards pull, etc.
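The arithmetic behind the phrase is Little's Law: average cycle time = average WIP ÷ average throughput. A tiny sketch makes the point – the numbers are made up purely for illustration:

```python
# Little's Law: average cycle time = average WIP / average throughput.
# Illustrative numbers only -- not from any real team.

def cycle_time_weeks(wip, throughput_per_week):
    """Average time a work item spends in progress, via Little's Law."""
    return wip / throughput_per_week

# Same team, same throughput (5 items/week), two WIP policies:
print(cycle_time_weeks(20, 5))  # 4.0 weeks -- start everything, finish slowly
print(cycle_time_weeks(5, 5))   # 1.0 week  -- stop starting, start finishing
```

With throughput unchanged, the only lever that shortens the wait for DONE is starting less – which is exactly what the phrase tells the team to do.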

If I had 5 seconds to teach agile/lean, I’d just say this phrase. If I had 50 seconds, I’d say it 10 times!

Think I’m DONE, so I’ll finish.

Hand-offs kill agility

I've been working with several large organisations recently, all interested in adopting agile in new development groups. But not before they build in the barriers that make agile exceedingly difficult, or even impossible. I'm talking organisational structure here. Once an organisation is formed, it can be very difficult to change. Once employees have been carved up into different functions/departments (i.e. fiefdoms), it can be a real bun-fight to get them working together effectively.

The traditional organisation has a business organisation, developers, QA, maybe data architects, DBAs, UI design, an SOA expert group, etc. But delivering a feature requires all these groups to work together. And if you have an iteration cycle of 2 weeks, that means they all need to work together with no time wasted handing off work to each other. As each group wrestles with its own priorities (driven by its own metrics and reward systems), they generally fail to gel as a cohesive team, with miscommunication, queuing and differing priorities the norm. Finger-pointing and increasing CYA bureaucracy are often the result.

Some say matrix organisations are the answer – bring the necessary experts together on a project-by-project basis. But this means a lot of chopping and changing as projects start and finish, grow and shrink, etc. Some research suggests teams reach optimum performance only after they've been working together for around four years – much longer than the average project.

So why do we divide people up into distinct 'departments' – is there more to it than ease of centralised management? And if we are pursuing self-organisation, is there still a good argument for it? There's certainly a significant cost…

What is our goal?

Reading Eliyahu Goldratt's famous book The Goal last night (a novelised – is that a word? – explanation of the Theory of Constraints) and reflecting on some recent discussions on the agile manifesto, it seems the idea of what we are trying to achieve as agile developers is still not resolved.
1) In The Goal, Goldratt argues the goal of any business is to make money – anything that contributes to that is productive; anything that doesn't is non-productive.
2) Lean thinking focuses on creating value – but value for whom is a little vague. If it is customer value, then the customer gains – but not necessarily the business (offering products below cost might be great for customer value but detrimental to the business). So should we interpret lean value as 'business value' – which may have constituent parts such as customer value, strategic value, employee value, etc.?
3) The agile manifesto values "working software over comprehensive documentation" and states its "highest priority" is delivering "valuable software". But valuable to whom? Working software isn't necessarily valuable, and valuable software doesn't necessarily make money.

This argument may seem pretty esoteric, but aligning software development with the end goal of the business is essential to success. There are many corporate skeletons out there that were renowned for their technology but failed commercially (e.g. Digital Equipment Corp). So how do we integrate "making money", "delivering value" and "valuable, working software" into a robust, operationalised framework for running our software development operations? Food for thought…

Motivation and agile teams

Love this video summary of Dan Pink's great book, Drive. It explains a lot about why agile teams get so much valuable work done, and feel so good doing it (of course, this relies on agile being implemented in a 'genuine' manner with real autonomy and empowerment – not just as a facade or a means to micro-manage). Watch it – well worth the 11 minutes!

Agile is more Transparent

In a series of posts I’m examining some of the claimed benefits of agile methods – are they justified? Here are the posts so far:

  • My first post looked at the cost of development with agile
  • The second discussed speed.
  • The third addressed quality.
  • The fourth looked at claims that agile is unpredictable.

This post will examine the transparency of agile vs. other methods. The main contention is that because agile delivers potentially releasable, working software at the end of each iteration, there is implicit visibility of actual progress in delivering business value – there is no need to rely on metrics derived from the process, such as lines of code, defect counts, hours worked, story points executed or features coded.

In waterfall or ad-hoc development, there are no iterations (other than major milestones such as a product release) where the real value delivered can be measured – therefore, proxy measurements like those mentioned above are necessary. But there are several issues with using these:

  • The numbers can be 'gamed': the old adage 'what gets measured gets managed' is well recognised, not only by managers but by development teams too. As managers try to manage the team by measuring them, the team will often try to manage the managers through those same measurements! In effect, the numbers may not reflect the full picture where there are delays or other issues – these may not become apparent till it's too late to react to them.
  • Many metrics are confined to measuring inputs, when it is output that is of more interest: managing a project based on the effort expended, rather than the value generated, is always going to lead to problems. Traditional project management focuses on time, resource and cost expenditure. Although the hours spent coding, the lines of code generated, the defects found, etc. may all be linked to the value generated, they can be highly unreliable in this regard, and are often just downright misleading.
  • Defining, collecting and reporting on these derived metrics can be pretty time-consuming. There are many project and portfolio managers who spend the majority of their time on such work.
  • Because these metrics are intrinsically linked to the 'plan', it becomes more difficult to measure them if the plan changes. For example, if I plan 100 hours of work for a feature, and a requirements change means it takes just 50 hours, how do I account for that in my metrics – when we deliver the feature, are we running behind?
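The accounting puzzle in that last bullet can be shown in a few lines. The numbers come from the example above; treating "hours spent ÷ hours planned" as the effort-based progress figure is an illustrative simplification, not a standard formula:

```python
# Hypothetical feature from the bullet above: planned at 100 hours,
# delivered after only 50 because a requirements change shrank the work.
planned_hours, actual_hours = 100, 50

# Effort-based tracking says the feature is only half "earned"...
effort_progress = actual_hours / planned_hours   # 0.5

# ...while value-based tracking counts the delivered feature in full.
features_planned, features_delivered = 1, 1
value_progress = features_delivered / features_planned   # 1.0

print(effort_progress, value_progress)
```

The same delivered feature reads as "50% done" on an effort dashboard and "100% done" on a value dashboard – which is why plan-derived metrics mislead once the plan moves.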

There can be little argument that the most direct, reliable measure of progress is business value delivered, normally in the form of value to the customer. But there are some things to watch for when moving in this direction:

  • Output vs. outcome: even delivering working software on a regular basis is no guarantee that it is of VALUE. Measuring outPUT may just drive faster, more regular delivery of software with little value. It is the outCOME we should ideally measure – the actual value derived from the software. But this can't be reliably measured till the product is on the market or in use – a conundrum for sure.
  • Iterative methods like RUP deliver features in increments – however, there isn't the same focus on delivering working, POTENTIALLY RELEASABLE software at each iteration – therefore proxy measurements must still be substituted for measures of real VALUE, because the software hasn't any value until it's working.
  • This approach underscores the importance of the 'Definition of Done' (DoD) in agile – the development team must adopt an agreed definition that really does result in potentially releasable software at the end of each iteration. I often see iterations where the coding and feature testing are complete, but code review, integration, performance testing, etc. are delayed so they can be done more efficiently for multiple features at a time. This is fine as long as they are completed within the same iteration – and as long as the story points are NOT credited until they are done. Only after all steps are complete, and a strict DoD is adhered to, has value been delivered and can credit be taken.
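The "no credit until DONE" rule can be made mechanical by gating story-point credit on the full checklist. A minimal sketch – the checklist items and story structure are hypothetical, not a prescribed DoD:

```python
# Hypothetical Definition of Done; every step must be complete within
# the iteration before a story's points are credited.
DEFINITION_OF_DONE = ["coded", "feature tested", "code reviewed",
                      "integrated", "performance tested"]

def credited_points(story):
    """Credit a story's points only when every DoD step is complete."""
    if all(story["completed"].get(step, False) for step in DEFINITION_OF_DONE):
        return story["points"]
    return 0

story = {"points": 5,
         "completed": {"coded": True, "feature tested": True}}
print(credited_points(story))  # 0 -- coded and tested, but not DONE

story["completed"].update({"code reviewed": True, "integrated": True,
                           "performance tested": True})
print(credited_points(story))  # 5 -- full credit only at DONE
```

All-or-nothing credit is the point: partial credit for "coded and passed to QA" is exactly what lets undone work hide inside apparently finished iterations.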

As focus shifts from management by effort to management by value, and as iteration costs decrease through automated test, build, integration and deployment, delivering real value in the form of potentially releasable software becomes more achievable and brings much-needed transparency.