Our Agile Disciplines

23 June 2016 by Martin Aspeli | Agile, Design, Engineering, Retail

Martin's second blog post in a series of three on our development philosophy.

In a previous article, we described the core values of our Agile software development approach: the things we instill in each team member, and that we use to guide our decisions as we tailor our approach to individual clients and projects. In this article, we will describe our core disciplines: the activities we perform continuously, throughout every project, regardless of the specifics of the development process we are following.

Like all aspects of our methods, our core disciplines evolve over time. At the time of writing, we have identified six:

1. Risk management

IT projects often carry a great deal of risk, in terms of both the quality and cost of the solution, and the wider impact on the client organisation. Therefore, effective risk and issue management is essential.

On our projects, everyone is responsible for risk identification. We have a rule that there is never a penalty for raising a risk, however trivial or detailed, but there are serious consequences for sitting on one. If we are aware of a risk early, we can mitigate it.

We believe that risk and issue management should go hand-in-hand with requirement and deliverable evolution. Where possible, risks and issues should be explicitly linked to the requirements whose delivery they may affect.

2. Change management

We know that solution requirements are rarely, if ever, fully understood up front. As the project evolves, so does the business's understanding of its requirements. This means that change is not only inevitable - it is often desirable. A requirement that was incorrectly deemed unimportant should not be ignored if it is later discovered to be business critical, and vice versa.

The ways in which we write and track requirements; design, develop and test software; and manage system acceptance are all built on the realisation that change is inevitable. At a basic level, we aim to:

  • Minimise the negative impact of change, by minimising interdependencies in the way we write requirements, delivering work in full verticals rather than having many requirements in progress simultaneously, and completing the highest-priority requirements first so that these drive any required changes to requirements of lesser importance, rather than the other way around.
  • Minimise the cost of change, by having lightweight, effective change control processes that give us quick visibility of the impact any given change request might have and avoid the cost of unnecessary process overhead.
  • Maximise the timeliness of change, by giving frequent opportunities for feedback, allowing the client to see the system develop throughout the project, and providing opportunities for corrective action early on.

3. Quality assurance

A significant portion – often a majority – of the costs of enterprise IT systems is incurred not during development, but in support and maintenance over the lifetime of a product. Compromise on quality up front, and the long-term cost increase outweighs any perceived short-term gain. Poor or ad-hoc quality assurance can also make releases less predictable, as the number of serious issues found late in the testing cycle tends to increase significantly.

We consider quality from two perspectives:

  • External quality is a measure of how well the solution meets functional and non-functional business requirements, treating the system as a "black box". External quality is assured through effective and ongoing requirements management; requirements tracing from definition through to design, build and test; quality assurance by the business analysis team; and user acceptance testing performed by the client.
  • Internal quality is a measure of how well the system has been built. This impacts ongoing maintenance and support costs, as well as the cost of future enhancements and integration with other systems. Internal quality is assured through a combination of automated and manual unit, integration and system tests, code review and static code analysis. Key metrics include the number of defects raised over the lifetime of the project and the rate at which defects are being raised and closed; a small sketch of tracking these rates follows this list.
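
To make the defect-rate metric concrete, here is a minimal sketch of how such a trend might be computed. The defect records, dates and names are entirely hypothetical; on a real project this data would come from the issue tracker.

```python
from collections import Counter
from datetime import date

# Hypothetical defect records: (id, date raised, date closed or None if still open).
defects = [
    ("D-1", date(2016, 5, 2), date(2016, 5, 9)),
    ("D-2", date(2016, 5, 4), None),
    ("D-3", date(2016, 5, 16), date(2016, 5, 20)),
    ("D-4", date(2016, 5, 17), None),
]

def week_of(d):
    """Return the ISO (year, week) a date falls in, e.g. (2016, 18)."""
    iso = d.isocalendar()
    return (iso[0], iso[1])

# Count defects raised and closed per week; a widening gap between the two
# is an early warning that quality is slipping.
raised_per_week = Counter(week_of(raised) for _, raised, _ in defects)
closed_per_week = Counter(week_of(closed) for _, _, closed in defects if closed)

for week in sorted(set(raised_per_week) | set(closed_per_week)):
    print(week, "raised:", raised_per_week[week], "closed:", closed_per_week[week])
```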

Internal quality is closely related to the concept of technical debt, a leading indicator of project quality. Technical debt is incurred when a deliverable is completed, but has outstanding defects or was built by cutting corners that will make the feature difficult to maintain or build upon. These are often "known issues", although they may be "known unknowns" if the system is inadequately tested.

Left unmanaged, technical debt can crush a project as new functionality has to be built on an unstable foundation, increasing uncertainty, reducing the effectiveness of testing, and undermining trust in the solution. The implementation team is also likely to be frequently distracted by fixing issues not directly related to the requirements they are currently working on.

As a rule of thumb, technical debt should be paid down soon after it is discovered. Like monetary debt, technical debt is subject to compound interest. In the long run, it is a false economy to attempt to deliver more features at the expense of paying down technical debt for features already considered completed.

4. Requirements management

It is common for a project to begin with a phase of requirements capture and validation, usually structured around business stakeholder workshops. The output of this phase is typically a functional requirements document, which is presented for sign-off by the business, and may include process flows, user personas and scenarios, narrative descriptions of features, and wireframes or mock-ups, as applicable.

The functional specification sign-off step can be an important stage in securing funding and buy-in for a project, but it must not be viewed as the end of requirements management. Requirements will be refined, added, removed and changed over the lifetime of the project. This means that we must write requirements in a manner that makes them malleable and maintainable.

To this end, we manage requirements in a hierarchy:

  1. A project vision sets out the business need that justifies the project. It should be captured succinctly and in a form that is agreeable to all stakeholders. If the project vision needs to change, the impact on the project is likely to be significant.
  2. A number of epics are then defined, each describing an area of functionality that is to be delivered. An epic could cover anything from a few days' to a few weeks' worth of work, but will generally encompass multiple user journeys or outcomes.
  3. Epics are broken down into stories, each of which represents a single action or outcome a user may perform with or expect from the system. Stories are the main unit of delivery that developers work on, and the most granular unit of planning. Note that it is not always necessary to break an epic into stories straight away: if an epic is not of high priority or suffers from high uncertainty, it can be left as a ‘large’ placeholder until it can be broken down.
  4. We define acceptance criteria against stories just in time for delivery; these specify precisely what the implementation of a given story must conform to in order to be accepted by the business. The acceptance criteria also form the basis of automated and manual system tests; a short sketch of how a criterion can map onto an automated test follows this list.
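
To illustrate how an acceptance criterion can feed directly into an automated test, here is a minimal sketch in the form of a unit test. The story ("a registered user can reset their password"), the PasswordResetService class and its behaviour are all hypothetical, invented purely for the example.

```python
import unittest


class PasswordResetService:
    """Hypothetical stand-in for the real application code under test."""

    def __init__(self, registered_emails):
        self.registered_emails = set(registered_emails)
        self.sent_resets = []

    def request_reset(self, email):
        # Acceptance criterion: only registered addresses receive a reset email.
        if email not in self.registered_emails:
            return False
        self.sent_resets.append(email)
        return True


class PasswordResetAcceptanceTest(unittest.TestCase):
    """Each test method encodes one acceptance criterion of the story."""

    def setUp(self):
        self.service = PasswordResetService(["alice@example.com"])

    def test_registered_user_receives_reset_email(self):
        self.assertTrue(self.service.request_reset("alice@example.com"))
        self.assertIn("alice@example.com", self.service.sent_resets)

    def test_unknown_address_is_rejected_without_sending_email(self):
        self.assertFalse(self.service.request_reset("mallory@example.com"))
        self.assertEqual(self.service.sent_resets, [])


if __name__ == "__main__":
    unittest.main()
```

Written this way, the criteria double as regression tests: if a later change breaks behaviour the business has already accepted, the build fails rather than the defect surfacing late in user acceptance testing.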

5. Planning

If we take change and uncertainty as a given, we must accept that planning is an ongoing discipline, not primarily something done at the beginning of the project. Software development is a creative, problem-solving process, where no two deliverables are exactly the same. This means that it is difficult – even counter-productive – to plan at the activity level. Instead, we plan based on deliverables (epics and stories) and milestones (iterations or delivery cadence).

The effort put into estimation is subject to the law of diminishing returns: beyond a certain point, additional effort does little to improve the accuracy of the estimates. Therefore, we will tend to plan against two horizons:

  • Medium-term release planning, which aims to indicate when discrete chunks of high level functionality (epics) will be delivered. We will typically produce a release plan at the beginning of the project, and refine it at the end of each iteration.
  • Short-term iteration planning, which provides a detailed picture of the functionality (stories) that will be delivered by the team in an immediately upcoming iteration. We perform iteration planning in conjunction with the customer at the beginning of each iteration.

For both release and iteration planning, we use structured estimation techniques to assign a measure of size (complexity) to each piece of functionality, whether at the epic or story level, and then use historical data and estimates about the team's performance to translate this into time estimates and concrete plans.
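
By way of illustration, the sketch below shows the basic arithmetic involved in translating relative size into a time forecast. The stories, point values and velocity figures are invented for the example rather than drawn from a real plan.

```python
import math

# Hypothetical backlog of stories with relative size estimates (story points).
backlog = {
    "Customer can browse the catalogue": 8,
    "Customer can add an item to the basket": 5,
    "Customer can pay by card": 13,
    "Customer can track an order": 8,
}

# Historical velocity: story points the team completed in each of the
# last three iterations (again, hypothetical figures).
recent_velocities = [11, 14, 12]
average_velocity = sum(recent_velocities) / len(recent_velocities)

total_points = sum(backlog.values())
iterations_needed = math.ceil(total_points / average_velocity)

print(f"Total size: {total_points} points")
print(f"Average velocity: {average_velocity:.1f} points per iteration")
print(f"Forecast: roughly {iterations_needed} iterations to deliver the backlog")
```

A fuller version might forecast a range (for example, using the best and worst recent velocities) rather than a single number, so that the uncertainty in the plan is made explicit.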

6. Continuous delivery

Taken together, the disciplines above allow us to deliver complete verticals of functionality – representing actual business value – on a continuing basis. By automating our build, test and deployment processes as much as possible, we aim to be able to accrue and measure value delivered to the business on a weekly or even daily basis. This requires an investment in tools and in disciplined developers who, as far as possible, fully complete one deliverable before moving on to the next.

Continuous delivery serves several important purposes:

  • The business will be able to give more meaningful feedback when presented with completed functionality; by doing this frequently, we improve the quality and frequency of the feedback, and so ultimately the external quality of the solution.
  • By delivering in manageable chunks, we can control and limit technical debt, improving the internal quality of the solution.
  • If the project has to be cut short or scaled back, it will still have delivered something useful, especially if the business has been proactive in prioritising the most important requirements. This avoids the temptation to "throw good money after bad" in pursuit of a finished project, only to end up with every feature partly delivered and so effectively unusable. It is better to have some features fully delivered and others entirely missing.

Conclusion

These disciplines help us keep the project focused on what the client really needs, maintain our standards of quality, manage risks and issues proactively and deal with project change. If we get them right, the other aspects of our methods become easier, and our projects become more likely to succeed. Get them wrong, or miss them out altogether, and we lose resilience and agility, driving up costs and increasing the risk of project failure.

In practical terms, these are the areas where we value experience the most. We will typically make sure that there is someone on each project responsible for the execution, feedback and continuous improvement of each of these disciplines.

What are your core disciplines?