One of the big culprits in failed agile teams is the tendency to cherry-pick those practices that seem to ‘fit with the way you work’, ‘with the way we do things around here’. Agile explicitly calls for methods to be customised depending on context, but this is often misconstrued as selecting the bits that are compatible with how you work now – leading to no fundamental change in the way you work. Examples include iterations as long as the release cycle, renaming the project manager a ScrumMaster without any change in role, and considering a feature or story ‘done’ when it has been coded and passed to QA. The result is a Cargo Cult adoption: the team takes on the language and some ceremonies of agile without understanding the fundamentals of how it works. No wonder the benefits are elusive…
When assessing how well teams have adopted agile methods like scrum, the approach is usually compliance based – an evaluation of how closely the team follows the defined method, whether customised or not. There are three fundamental difficulties with this:
1) The way in which agile practices are implemented in a team has a great bearing on whether they support or constrain agility – for example, a daily stand-up meeting that spends 45 minutes gathering status updates from everyone is really not going to help a self-organising team co-ordinate their actions for the day. Even a 10-minute stand-up where the three standard questions are posed can be ineffective if the team doesn’t engage and feel ownership of it. Assessment by compliance therefore evaluates, well…, compliance – not agility, which is probably what you actually want to know.
2) Since each project and team implements their development method differently (a consistent finding in software engineering research), and since that implementation evolves over time, using compliance as the basis for assessment hinders inter-team comparisons – akin to comparing apples and oranges. A lot of the value in assessing a team is being able to benchmark against other teams as a way to identify possible paths to improvement. Without the ability to compare effectively, the assessment just isn’t all that valuable.
3) Most methods already used by teams have some really good aspects. Moving to agile should preserve these (unless replacing them with something even better). If attempts to be compliant with some textbook method cause these to be lost, then we’re really ‘throwing out the baby with the bathwater’.
To overcome these difficulties, my colleagues and I have been developing an assessment that looks at agility from first principles – regardless of what method the team is using. Of course agility is a complex concept with many facets, such as creativity, responsiveness, simplicity and quality. By focusing on how any given method contributes to these facets, we can assess how it contributes to or detracts from agility as a whole. We can also compare very different methods, like scrum, XP or indeed waterfall. And we can make recommendations which preserve what’s valuable in what you do today while tackling those areas promising improvement.
In another post I’ll discuss an alternative assessment technique I use – rather than assessing agility, it identifies barriers to adoption and helps map out an adoption strategy tailored to a team, project and organisation.