Requirements: How much is too much?

15 April, 2015 by Martin Aspeli | Agile, Engineering

Martin's final blog post in a series of three on our development philosophy.

In a previous article, we discussed why it is desirable to work in small batches: we want to deliver frequently and incrementally to reduce risk and increase feedback. In enterprise software delivery, however, there is one commonly required document that is responsible for more large-batch dysfunctional behaviour than any other: the functional specification.

Functional specifications are a traditional artefact of “waterfall” development approaches. After a (usually lengthy) requirements analysis phase, a single document of all the solution requirements is produced and presented for sign-off. This is then used as a baseline for design, development, change control, testing and planning.

The challenges

Even in nominally Agile projects, clients often insist on a signed-off, detailed functional specification document. On the surface, this would seem to be in their interest. Used as a contractual document (whether formally or informally), it appears to improve clarity of project outcomes and to allow them to assess value for money before the project is complete.

As we discussed in that previous article, however, this is based on a fallacious assumption: that the requirements can truly be understood up front. Unfortunately, even when clients agree, based on experience, that this is not the case, they often still prefer to have the document and perform change control against it as new requirements emerge, rather than dispense with it completely. The problem with this is twofold:

  • It is inefficient, as the overhead of managing what often amounts to several hundred pages of text can be significant. Moreover, the sign-off is often illusory: the document becomes so complex that no one person can warrant that it is internally consistent and complete.
  • It implies a lengthy up-front requirements analysis phase, and usually leads to further large-batch behaviour in design, development and testing.

An alternative

On our Agile projects, we will typically spend 2-6 weeks on site at the start of a project to confirm scope and get set up for development. In this time, we will produce what we call a functional guide: a short document (10-15 pages) describing at a high level the strategic goals behind the project and the value the client hopes to derive from it. We may also describe high-level business processes and other relevant context.

We will then produce an initial requirements catalogue consisting of epics and user stories (see the discussion of requirements management as one of our core disciplines), which will be refined over time. In any given week, we will split, combine, replace, re-prioritise and reword several items in the requirements catalogue in collaboration with the client, as our understanding of the business requirements evolves. This is Agile at its best: we are using feedback and learning to iteratively improve our understanding of the solution and, as a result, the solution itself.
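For illustration (the wording here is hypothetical, not taken from a real catalogue), an early, coarse-grained epic might be refined like this as understanding improves:

    Epic: As a user, I can authenticate to the system securely.

    ...later split, after discussion with the client, into:

    Story: As a user, I can log in using my corporate username and password,
      so that I do not have to remember a separate set of credentials.

    Story: As an administrator, I can revoke a user’s access centrally,
      so that leavers lose access to the system immediately.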

Clients in fact tend to like this, but by dispensing with the detailed functional specification, they are left with several uncomfortable questions:

How do we know what we are getting for our money if we don’t put it in writing?

In other words: if the scope isn’t nailed down, how do we know we’re not paying for something we don’t want?

The truth is that the scope is never nailed down. Our shared understanding of it evolves with the project. Writing down some incorrect scope does not make us better able to judge value for money. In fact, the large-batch behaviour and change control overhead that tends to follow from a “Big Requirements Up Front” project reduce value for money.

True value for money can only be ascertained when something valuable has been delivered. That is one reason for delivering incrementally. If the client can see some of their most important requirements being built and tested to production quality within the first month or two of the project, and continuing regularly thereafter, they will have real data on which to base their value-for-money evaluation.

Over time, this shifts the conversation away from paying for a set of features towards paying for a team that is able to deliver value for money even in the face of potentially significant requirements change. For this reason, we will usually write contracts that allow the client to wind down the project after any development iteration: when they feel the team has delivered sufficient value for money, they can initiate a short and orderly handover, potentially saving some of the original budget.

How can we have effective change control without a baseline to work from?

Fixed-price, fixed-scope projects require rigorous change control. Ask for a feature that, whilst it may be critical and even obvious, you did not write down in the original scope, and you will need to pay the incremental cost. From a client’s perspective, there are several problems with this:

  • Evaluating a single proposed change out of context can be difficult. We often see change boards reject changes out of fatigue or a desire to cut costs. This tends to happen late in the project, precisely when the changes that emerge are critical issues discovered during testing.
  • Formal change control processes are expensive. It is not uncommon to pay thousands of pounds just for the change impact assessment, even if the change ends up being rejected.
  • Suppliers tend to be better at this game than clients and usually win the argument about whether something is a change (and so should cost more) or not.

All of that is not to say we don’t need to track change. Decisions that materially impact the scope or shape of the solution need to be discussed and tracked appropriately, and relevant documents updated. We simply want to do this in as lean a way as possible, minimising the cost of change and allowing us to focus on higher-value activities. At the end of the day, it is the quality and efficacy of the final solution that matters, not its fidelity to an outdated requirements document.

How can we prepare for testing?

Most clients want to perform some kind of formal, independent testing of the solutions we build, beyond the primary quality assurance we perform. They often engage testing specialists to do so. These testers are not business users who can make a subjective judgement about whether a solution is fit for purpose, but they are usually very methodical and rigorous in their testing.

The problem comes when a tester sees his (or her) job as comparing what the system does in a given scenario to what the functional specification says it should do. This requires absolute detail in the functional specification. Moreover, testers need time to turn the specification into a series of detailed test scripts, leading to large-batch behaviour, since the specification has to be detailed earlier than would otherwise be required.

Our solution to this is to do some of the testers’ job for them. It goes like this:

  • We write the high level functional guide and initial requirements catalogue to give an initial view of scope and project context, and refine the catalogue throughout the project.
  • Each user story in the requirements catalogue is written in terms of an outcome, not a system behaviour. For example, “As a user, I can log in using my corporate username and password, so that I do not have to remember a separate username and password for the system”. Already, this is starting to look like the outline of a test script.
  • As we prepare a user story for delivery, just-in-time, we flesh it out with acceptance criteria that give absolute clarity of the behaviour the system should exhibit. This takes the shape of one or more scenarios of the form:  

    Scenario: Successful login
    Given the user is on the login screen
    And user ‘jsmith’ with password ‘secret’ exists in the corporate Active Directory server
    When the user enters username ‘jsmith’ and password ‘secret’ to log in
    Then the user is successfully logged in
    And a welcome message is shown


    The idea is that these acceptance criteria are the very tests required to prove compliance with the requirement.
  • We ask the client to sign these off, just-in-time. This is the client’s way of saying, “We warrant that if you can demonstrate that the system behaves as stated in the given scenarios, we will be happy that the requirement has been delivered.” 

    This step is important: without it, we’d be marking our own homework. In some projects, clients may also help us write the stories and acceptance criteria, although we tend to find that they prefer us to do that.
  • We write whatever code we need to meet the requirement and automate the acceptance tests. This is known as Behaviour Driven Development (BDD) or Acceptance Test Driven Development (ATDD). It gives us a robust suite of automated regression tests, which is run every time we make a change to the application, giving us fast feedback if we break something; a sketch of what this automation can look like follows this list.
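To illustrate that last step, here is a minimal sketch of step definitions for the login scenario above, using Python and the behave BDD framework. The framework choice and the `context.app`, `context.login_page` and `context.corporate_directory` helpers are assumptions for this example, standing in for whatever UI driver and test fixtures a real project would use:

    # features/steps/login_steps.py
    # A sketch of behave step definitions matching the login scenario.
    # The helpers hung off `context` are hypothetical; a real project would
    # wire up a browser driver and a test directory fixture in environment.py.
    # Step text must match the feature file wording exactly.
    from behave import given, when, then

    @given("the user is on the login screen")
    def step_on_login_screen(context):
        context.login_page = context.app.open_login_page()

    @given("user '{username}' with password '{password}' exists in the corporate Active Directory server")
    def step_user_exists(context, username, password):
        context.corporate_directory.add_user(username, password)

    @when("the user enters username '{username}' and password '{password}' to log in")
    def step_log_in(context, username, password):
        context.login_page.log_in(username, password)

    @then("the user is successfully logged in")
    def step_logged_in(context):
        assert context.app.current_user() is not None

    @then("a welcome message is shown")
    def step_welcome_shown(context):
        assert "Welcome" in context.app.page_text()

Because the step text mirrors the signed-off scenario word for word, a failing step points directly at the requirement that has regressed.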

By using the signed-off acceptance criteria as the basis for their own test scripts, testers get the same level of detail that they would get from a functional specification. By asking the client to sign them off at the last responsible moment, in small batches, we avoid the waste of up-front definition and improve the quality and accuracy of the requirements as captured, because we are able to harness as much project learning as possible.

How can future projects understand the capabilities of the system?

As systems are maintained, extended, integrated with, decommissioned and replaced, it is often desirable to have a single document that describes what the system does, to avoid a forensic exercise of trying to gain this knowledge by using every feature of the system or reading its source code. A complete functional specification would seem to be a good document to serve this purpose.

Or is it? A functional specification is usually not produced with this as its primary purpose, and so may not have the right structure to act as post-project documentation. Furthermore, the true state of the system usually evolves after the functional specification is written, so the document may be out of date, or the current behaviour may only be pieced together by cross-referencing several separate change control documents.

Our approach is, again, to use the requirements catalogue and acceptance criteria as the primary documentation of the system as built. It should be more accurate and complete, because the detailed documentation (the acceptance criteria) is defined just-in-time, as the relevant code is about to be written, and because the test automation approach alerts us immediately to discrepancies between the behaviour of the system and the signed-off requirement. The functional guide can serve to put this into context. If further documentation is required, this can be produced separately and prioritised relative to other work as required.

Agile documentation

A key principle underlies all of this – what Scott Ambler calls Agile Documentation: Only write down what is unlikely to change.

When there is great uncertainty or fluidity, committing our current understanding to text creates overhead or confusion as the information that is written down can easily become outdated. Conversations, models and diagrams are more appropriate forms of communication in this scenario. Since knowledge is a rapidly depreciating asset (i.e. we tend to forget things, and information tends to be dependent on the context in which it is discussed), working in small batches and moving from requirements analysis to design to build and test in short order improves efficiency and accuracy.