How I Reviewed Specs @ Microsoft #1 – Purpose

Sunny Sunday morning. Everyone still dead asleep. Perfect time to pick up what I had started months ago: my HIRS@M series. Sorry for keeping you waiting. [:$]

#1 – Purpose

In this post I want to shed light on the purpose of reviewing specs: the reasons I value a review, reasons that are often overlooked IMHO.

And next?

So this is #1. What's next? #2 and #3! Promised. No promise on when, though.

Let me give a sneak preview:

  • #2 – Practice: how did I (we) actually perform a review?
  • #3 – "After Party": once the spec has been reviewed and implemented, what value does it still have?

As I implicitly pointed out in the prologue of this series, good speccing is about driving quality upstream. Good speccing is about making sure you are designing what is needed: not a Pontiac Aztek when a Rolls-Royce is wanted, or indeed a Pontiac Aztek if that is what's asked for.

So what's good speccing?

There are many actions, skills, and requisites that make up and define good speccing, and one of them is spec reviewing.

BTW: if you are looking for the full Monty on specs, I recommend Karl Wiegers' Software Requirements.

Team effort – Multiple Perspectives

As with many disciplines, speccing is a team effort; actually, I would say it should be. For this reason, the multidisciplinary development teams I worked in at MS were involved in getting specs in place right from the start. Whereas the program manager (PM) would primarily gather requirements and write the spec, the first contribution of dev, test and, if available, UA (user assistance, i.e. the documentalist) would be reviewing it. This setup facilitated a multidisciplinary, and as such a multi-perspective, view on the matter. And thus, as Steve McConnell states in Code Complete:

Page 482:

"Better code quality arises from multiple sets of eyes …"

To me it's as plain as that: multiple sets of eyes looking at the same thing. You often do not even need to be an expert on the topic to make a valuable contribution to a spec review. Well-skilled spec writers experience often enough that their initial perspective was too narrow! Missing a wide enough range of perspectives and disciplines in a spec review potentially inflicts problems downstream. Or as Wiegers points out with respect to customer involvement:

Page 33:

"… we know that lack of customer involvement greatly increases the risk of building the wrong product"

At the end of the day it's all about cost. Every hour spent on reviewing pays off:

Page 263:

"Several companies have avoided as much as 10 hours of labor for every hour they invested in inspecting requirements documents …"
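To make that claim concrete, here is a back-of-the-envelope sketch of what a 10:1 avoidance ratio means in hours. The function name and the default ratio are illustrative only; the 10:1 figure is the upper bound Wiegers cites.

```python
# Back-of-the-envelope sketch of Wiegers' claim that inspecting
# requirements documents can avoid up to ~10 hours of downstream
# labor for every hour invested in the inspection itself.
def review_roi(hours_invested, avoidance_ratio=10):
    """Net hours saved: downstream labor avoided minus the review time."""
    hours_avoided = hours_invested * avoidance_ratio
    return hours_avoided - hours_invested

# A 4-hour spec review, at a 10:1 ratio, nets 36 hours of saved labor.
print(review_roi(4))
```

Even at a far more conservative ratio (say 3:1), the review still pays for itself, which is the point of the quote.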

From multiple sources, McConnell calculates the relative costs of fixing defects based on "when they're introduced and detected":

Average cost of fixing defects based on when they're introduced and detected (Code Complete, Table 3-1; costs relative to fixing the defect in the phase it was introduced):

                                      Time Detected
  Time Introduced   Requirements  Architecture  Construction  System Test  Post-Release
  Requirements           1             3            5–10          10          10–100
  Architecture           –             1             10           15          25–100
  Construction           –             –              1           10          10–25

Or in words (page 28):

"In general, the principle is to find an error as close as possible to the time at which it was introduced. The longer the defect stays …, the more damage it causes further down the chain"

The numbers depend, of course, on the type of software development business you're in. When building a solution specifically for a very small group of customers, defects introduced at an early stage will not have consequences as dramatic (in cost) as when building a standard solution for hundreds or thousands of customers, like Dynamics NAV.
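The escalation argument can be sketched in a few lines. The multipliers below are hypothetical placeholders in the spirit of McConnell's phase-based costs, not his exact figures; the function and phase names are mine.

```python
# Illustrative (hypothetical) cost multipliers for a requirements defect,
# in the spirit of McConnell's phase-based defect-cost table: the later
# the defect is detected, the more it costs relative to fixing it during
# the requirements phase itself (e.g. in a spec review).
COST_MULTIPLIER = {
    "requirements": 1,
    "architecture": 3,
    "construction": 7,
    "system test": 10,
    "post-release": 50,
}

def fix_cost(base_hours, detected_phase):
    """Estimated hours to fix a requirements defect found in `detected_phase`."""
    return base_hours * COST_MULTIPLIER[detected_phase]

# A defect that takes 2 hours to fix during spec review grows ever more
# expensive the longer it survives:
for phase in COST_MULTIPLIER:
    print(f"{phase:12s}: {fix_cost(2, phase)} hours")
```

The exact multipliers matter less than their shape: monotonically increasing with each phase the defect survives, which is precisely why review effort belongs upstream.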

Team Effort – In the Know

IMHO it's not only the multitude of perspectives during reviewing that helps drive quality upstream. It's also the simple fact of getting a team engaged. In general, being part of a process from the start creates bigger involvement: as people are in the know, they feel and know they can make a difference, and thereby are set to contribute to the quality of the product. You might say this is true even when the team members concerned do not (always) actively contribute to the spec review. They were there; they got the opportunity.

It's clear that "everyone owns quality," as How We Test Software at Microsoft states (page 392), so you'd better make sure your team is in the best position to carry this weight.


  1. Wiegers, Karl, 2003, Software Requirements, 2nd ed. Redmond, WA: Microsoft Press
  2. McConnell, Steve, 2004, Code Complete, 2nd ed. Redmond, WA: Microsoft Press
  3. Page, Alan, Ken Johnston, and Bj Rollison, 2009, How We Test Software at Microsoft, Redmond, WA: Microsoft Press
