Executable Specifications: An Agile Core Practice
Test-Driven Development (TDD)
The steps of test-first development (TFD) are overviewed in the UML activity diagram of Figure 1. The first step is to quickly add a test: basically just enough code to fail. Next you run your tests, often the complete test suite, although for the sake of speed you may decide to run only a subset, to ensure that the new test does in fact fail. You then update your functional code to make it pass the new test. The fourth step is to run your tests again. If they fail, you need to update your functional code and retest. Once the tests pass, the next step is to start over.
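The TFD loop above can be sketched in code. The article's own examples use JUnit-style tools; as a minimal sketch, here is the cycle in Python's unittest (an xUnit-family framework), where the `Account` class and its `deposit` method are hypothetical names invented for illustration:

```python
import unittest

# Step 1: quickly add a test -- just enough to fail. Before deposit()
# exists, running the suite fails, which is the signal to write code.
class TestAccountDeposit(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account(balance=100)
        account.deposit(50)
        self.assertEqual(account.balance, 150)

# Step 3: write just enough functional code to make the test pass.
# Step 4: rerun the suite; once it is green, start the loop over.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
```

You would run this with `python -m unittest`, first seeing the test fail (red) and then, after adding the functional code, seeing it pass (green).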
I like to describe TDD with this simple formula:
TDD = Refactoring + TFD.
TDD completely turns traditional development around. When you go to implement a new feature, the first question you ask is whether the existing design is the best design possible to enable you to implement that functionality. If so, you proceed via a TFD approach. If not, you refactor locally to change the portion of the design affected by the new feature, enabling you to add that feature as easily as possible. As a result, you are always improving the quality of your design, thereby making it easier to work with in the future.
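The "refactor first, then TFD" decision can be illustrated with a hypothetical sketch in Python; all class and method names here are invented for the example:

```python
# Before: the fee logic is buried inside withdraw(), so adding a new
# feature (a premium account with no fee) would mean duplicating it.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount + 1.50  # flat withdrawal fee, hard-coded

# Refactor first: extract the fee into a method a subclass can
# override, without changing observable behavior (rerun your tests!).
class RefactoredAccount:
    FEE = 1.50

    def __init__(self, balance=0):
        self.balance = balance

    def withdrawal_fee(self):
        return self.FEE

    def withdraw(self, amount):
        self.balance -= amount + self.withdrawal_fee()

# Now the new feature drops in cleanly, driven by a test written
# first that expects a zero fee for premium accounts.
class PremiumAccount(RefactoredAccount):
    def withdrawal_fee(self):
        return 0.0
```

The refactoring itself adds no behavior; it only reshapes the design so that the new feature becomes a small, testable change.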
Instead of writing functional code first and then your testing code as an afterthought, if you write it at all, you write your test code before your functional code. Furthermore, you do so in very small steps: one test and a small bit of corresponding functional code at a time. A programmer taking a TDD approach refuses to write a new function until there is first a test that fails because that function isn’t present. In fact, they refuse to add even a single line of code until a test exists for it. Once the test is in place, they do the work required to ensure that the test suite now passes (your new code may break several existing tests as well as the new one). This sounds simple in principle, but when you are first learning to take a TDD approach it proves to require great discipline, because it is easy to “slip” and write functional code without first writing a new test.
Tests as Requirements
Figure 2 depicts a customer acceptance test description (shortened for the sake of brevity; it would need to be expanded with more steps to truly validate the functionality described). As you’d expect, the test has instructions for setting it up and then running it. Additionally, a description, a test ID (optional), and the expected results are indicated. Acceptance tests should be fully automated so that you can run them as part of your application’s regression test suite. The FitNesse testing framework is a popular choice for doing so.
|Description|Checking accounts have an overdraft limit of $500. As long as there are sufficient funds (i.e. a balance of -$500 or greater) within a checking account after a withdrawal has been made, the withdrawal will be allowed.|
|Expected Results|Account #12345: Ending balance = -$500|
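The article mentions FitNesse for automating acceptance tests like this one; as a minimal illustrative sketch, the same overdraft rule can also be expressed with Python's unittest, where `CheckingAccount` and its `withdraw` method are hypothetical names standing in for real application code:

```python
import unittest

OVERDRAFT_LIMIT = 500  # dollars, per the acceptance test description

class CheckingAccount:
    """Hypothetical example class; allows withdrawals down to -$500."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if self.balance - amount < -OVERDRAFT_LIMIT:
            raise ValueError("withdrawal would exceed overdraft limit")
        self.balance -= amount

class TestOverdraftLimit(unittest.TestCase):
    def test_withdrawal_to_exactly_minus_500_is_allowed(self):
        account = CheckingAccount(balance=0)
        account.withdraw(500)
        self.assertEqual(account.balance, -500)  # expected ending balance

    def test_withdrawal_past_the_limit_is_rejected(self):
        account = CheckingAccount(balance=0)
        with self.assertRaises(ValueError):
            account.withdraw(501)
```

Automated this way, the acceptance test doubles as an executable requirement: the prose in Figure 2 and the assertions above state the same rule.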
Tests as Design Specifications
Similarly, developer tests can form the majority of your detailed design specification. Developer tests are typically written with a member of the xUnit family of open source tools, such as JUnit or VBUnit. These tests can be used to specify both your application code and your database schema.
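A developer test can specify a database schema just as it specifies application code. As a hedged sketch (the table and column names are invented for illustration), here is a unittest that pins down an `account` table using Python's stdlib sqlite3 module:

```python
import sqlite3
import unittest

# Hypothetical schema under specification; in a real project this DDL
# would live in your migration scripts, not in the test file.
SCHEMA = """
CREATE TABLE account (
    account_id INTEGER PRIMARY KEY,
    balance    NUMERIC NOT NULL DEFAULT 0
);
"""

class TestAccountSchema(unittest.TestCase):
    """A developer test acting as a design specification for the schema."""

    def setUp(self):
        self.db = sqlite3.connect(":memory:")
        self.db.executescript(SCHEMA)

    def tearDown(self):
        self.db.close()

    def test_account_table_has_expected_columns(self):
        columns = [row[1] for row in
                   self.db.execute("PRAGMA table_info(account)")]
        self.assertEqual(columns, ["account_id", "balance"])

    def test_balance_defaults_to_zero(self):
        self.db.execute("INSERT INTO account (account_id) VALUES (1)")
        (balance,) = self.db.execute(
            "SELECT balance FROM account WHERE account_id = 1").fetchone()
        self.assertEqual(balance, 0)
```

Anyone reading these tests can see exactly what columns exist and how defaults behave, which is precisely the role a detailed design specification plays.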
Are Tests Sufficient Documentation?
Very likely not, but they do form an important part of it. For example, you are likely to find that you still need user, system overview, operations, and support documentation. You may even find that you require summary documentation that overviews the business process your system supports. When you approach documentation with an open mind, I suspect you will find that these two types of tests, acceptance tests and developer tests, cover the majority of your documentation needs for developers and business stakeholders. Furthermore, they are a wonderful example of AM’s Single Source Information practice and an important part of your overall efforts to remain as agile as possible regarding documentation.