In this chapter
Requirements are contracts.
What they are. A requirement is a written statement that defines, precisely, what the system must do or what constraints it must satisfy. It is not a wish, a preference, or a description of the future. It is a contract between the people who specify what is needed and the people who build it — binding both sides to the same definition of done.
Why they exist. Without contracts, every interpretation is equally valid. Two engineers reading the same vague phrase walk away with different mental models, build different things, and only discover the mismatch at integration or, worse, at acceptance. The cost of arguing about requirements at the start of a project is hours. The cost of arguing about them at the end is the project.
The mental model. Imagine ordering a custom suit. “Make me a nice suit” produces something the tailor thinks is nice, which may not be what you wanted. “Charcoal grey wool, single-breasted, two buttons, sleeves to my wrist bone, trousers no break” produces a suit that matches what you ordered or fails an objective check. Requirements are the second sentence. They eliminate the gap between “what you meant” and “what got built.”
The discipline. Every requirement is testable, traceable, and unambiguous. Testable means you can construct an objective check that says yes or no. Traceable means you know where it came from and what depends on it. Unambiguous means two competent engineers reach the same conclusion when they read it. A requirement that fails any of these three tests is not a requirement — it is a wish in formal clothing.
How to write requirements.
Good requirements have a recognizable shape. The shape exists because it forces every author to confront the parts that vague language hides — what subject, what action, under what condition, with what tolerance. Once you internalize the shape, writing requirements becomes mechanical; the discipline is in refusing to take shortcuts that obscure any of the four parts.
The SHALL statement format.
The aerospace and defense world standardized on a specific verb — SHALL — for binding requirements, and the convention is worth adopting whether or not you work in those domains. SHALL means “this is required.” Other verbs have different meanings: SHOULD is a recommendation, MAY is a permission, WILL is a description of expected behavior. Mixing them up leaks ambiguity into the contract.
Bad: "The system should be efficient and not use too much power."
Good: "The system SHALL consume less than 12W average power and 25W peak power during normal operation, measured at the +28V input over a 1-hour duty cycle."
Why the second one works:
- Subject: The system (clear scope)
- Action: SHALL consume (binding)
- Quantity: <12W average, <25W peak (numbers)
- Conditions: +28V input, 1-hour duty cycle (testable)
Notice what the bad version is missing: every quantity. “Efficient” means whatever the reader thinks it means. “Too much” is undefined. “Should” is non-binding. The good version is testable because it is specific, and it is specific because the author refused to let any of the four parts — subject, action, quantity, condition — remain implicit.
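The four-part shape lends itself to a mechanical first pass. The sketch below is a hypothetical lint check, not a tool from any standard: the keyword lists are illustrative assumptions, and it can only catch a *missing* part, not a wrong one.

```python
import re

# Rough lint pass for the four-part shape: subject, binding verb,
# quantity, condition. Heuristic only -- it flags missing parts, it
# cannot judge whether the parts are right. Keyword lists are
# illustrative, not exhaustive.

def lint_requirement(text: str) -> list[str]:
    """Return a list of problems found in a single requirement statement."""
    problems = []
    if "SHALL" not in text:
        # SHOULD/MAY/WILL are not binding; name them explicitly if found.
        weak = [v for v in ("should", "may", "will")
                if re.search(rf"\b{v}\b", text, re.IGNORECASE)]
        problems.append(f"no SHALL (found: {weak or 'no modal verb'})")
    if not re.search(r"\d", text):
        problems.append("no quantity: nothing here is measurable")
    if not re.search(r"\b(during|when|while|over|at|under|measured)\b",
                     text, re.IGNORECASE):
        problems.append("no condition: unclear when/how this is checked")
    return problems

bad = "The system should be efficient and not use too much power."
good = ("The system SHALL consume less than 12W average power "
        "during normal operation, measured at the +28V input.")

print(lint_requirement(bad))   # flags all three missing parts
print(lint_requirement(good))  # → []
```

A check like this belongs in review tooling, not in place of review: it pushes back on shape, while ambiguity still needs human readers.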
Characteristics of good requirements.
Beyond the shape, four properties separate requirements that survive a project from requirements that don’t. Each one has a failure mode, and the failure mode tells you why the property matters.
Atomic.
One requirement, one statement. A requirement that contains the word “and” is usually two requirements wearing one number, and at acceptance you’ll have to decide whether half-passing is acceptable.
Failure mode: compound requirements that pass partially and fail partially, with no clear answer to whether they pass overall.
Verifiable.
You can construct an objective test. There is a measurement, a procedure, a yes-or-no answer. “User-friendly” is not verifiable; “90% of users complete task X in under 30 seconds” is.
Failure mode: requirements that get checked off based on opinion rather than measurement. Acceptance becomes negotiation.
Unambiguous.
Two competent engineers read it and reach the same conclusion. If you have to explain what a requirement means in conversation, the requirement has failed — you should rewrite it.
Failure mode: the same words producing different implementations on different teams. Discovered at integration.
Complete.
Specifies behavior in all relevant cases — nominal, off-nominal, fault, edge. The error case is part of the requirement, not an oversight.
Failure mode: the system handles the happy path and crashes when reality deviates. Six months later, in the field.
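Two of these four properties can be screened mechanically. A minimal sketch, assuming a hypothetical vague-word list (`VAGUE` is illustrative, not a standard vocabulary):

```python
import re

# Heuristic screens for atomicity and verifiability. An "and" joining
# clauses usually signals a compound requirement; vague adjectives with
# no numbers signal an unverifiable one.

VAGUE = {"efficient", "user-friendly", "robust", "scalable",
         "flexible", "fast", "reliable"}

def is_atomic(text: str) -> bool:
    # "and" between clauses usually means two requirements wearing one number.
    return not re.search(r"\band\b", text, re.IGNORECASE)

def is_verifiable(text: str) -> bool:
    # Needs at least one number, and no adjective standing in for a measurement.
    words = {w.strip(".,").lower() for w in text.split()}
    return bool(re.search(r"\d", text)) and not (words & VAGUE)

print(is_atomic("The system SHALL log events and notify the operator."))  # False
print(is_verifiable("System should be user-friendly."))                   # False
print(is_verifiable("90% of users SHALL complete task X in under 30 seconds"))  # True
```

Unambiguity and completeness resist this kind of automation; they are what the human review pass is for.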
Requirements decomposition.
What it is. The process of taking a high-level mission requirement and breaking it down through layers — system, subsystem, component — until each leaf requirement is small enough that one team can verify it independently. The decomposition tree is what makes a complex project tractable; it’s how a multi-thousand-page specification turns into individual responsibilities a team can act on.
Why it matters. No single engineer can verify a mission-level requirement directly. “The system shall communicate with the ground station” spans antennas, radios, protocols, security, schedules, and ground hardware. You can’t test it as one thing. The decomposition turns that one impossible test into a tree of possible ones, each owned by someone with the expertise to verify it. When the leaf tests pass, the parent passes by construction.
The mental model. An organizational chart for a company. The CEO has a goal that no individual person can execute alone. Each layer below decomposes that goal into specific accountabilities — VPs own divisions, managers own teams, engineers own deliverables. At the bottom of the chart, work happens. Above it, ownership is allocated. Decomposition is the same shape applied to engineering work.
Decomposition example.
Level 1 (Mission): "The satellite shall provide 1 Mbps continuous data to ground users."
Level 2 (System):
- "The communications subsystem SHALL transmit at 1 Mbps"
- "The data storage subsystem SHALL buffer 24 hours of data"
- "The power subsystem SHALL supply power for continuous operation"
Level 3 (Subsystem - Comms):
- "The transmitter SHALL output 1W RF power"
- "The antenna SHALL provide 3 dBi gain"
- "The modulator SHALL implement QPSK"
Level 4 (Component - Antenna):
- "The antenna SHALL be omnidirectional"
- "The antenna SHALL operate at 2.4 GHz"
- "The antenna SHALL withstand -55C to +85C"
Reading the tree. Each level’s requirements derive from the level above. If you change a Level 1 requirement — say, the data rate becomes 10 Mbps — you walk the tree downward and reassess every leaf. Some will be unaffected. Others will need to change. The tree is what makes change tractable; without it, a top-level change becomes a whole-system audit.
How it fails. The most common failure is requirements that aren’t actually decompositions — they’re inventions that someone added without a parent. A leaf requirement that doesn’t derive from any higher-level need is either dead code waiting to happen or a hidden scope addition that nobody approved. The discipline is to require every requirement to point upward to its parent. If a requirement has no parent, it has no business in the spec.
Requirements traceability.
What it is. A directed graph linking each requirement to the design elements that satisfy it, the test cases that verify it, and the higher-level requirements it derives from. In practice it’s a matrix — a spreadsheet, a database, or a tool — that lets you ask three questions and get answers:
- For any requirement, which tests verify it?
- For any test, which requirements does it cover?
- For any design element, which requirements does it implement?
Why it matters. Without traceability, the project loses memory. A change request comes in: “the data rate is now 10 Mbps instead of 1 Mbps.” What does that affect? Without traceability, the only honest answer is “a lot of things, we’ll find out as we go.” With traceability, you can name the affected requirements, design elements, and tests in minutes — and the cost of the change is knowable rather than estimated.
The mental model. Building permits in a city. Each modification to a structure has a permit, the permit references the original plans, the plans reference the building code, and inspectors can walk any chain backward to verify compliance. When you renovate, the existing chain tells you what you’re affecting. When something fails, the chain tells you what was supposed to prevent it. Traceability is the engineering version of that paper trail.
Traceability matrix example.
| Req ID | Requirement | Source | Design Element | Test Case | Status |
|---|---|---|---|---|---|
| SR-001 | 1 Mbps data rate | CR-100 | Comms subsystem | TC-201, TC-202 | ✓ Passed |
| SR-002 | 24-hr buffer | CR-100 | Storage subsystem | TC-205 | In Progress |
| SR-003 | Continuous ops | CR-100, CR-105 | Power subsystem | TC-210, TC-211 | ✓ Passed |
| SR-004 | Encryption | CR-110 | Crypto module | TC-220 | [!] Not Started |
Reading the matrix. Each row says: this requirement comes from this customer request, is implemented by this design element, will be verified by these test cases, and is currently in this state. The status column tells you at a glance which requirements have proven evidence behind them and which are unfinished. The matrix becomes the project’s memory.
What traceability catches.
The matrix earns its keep by surfacing problems that would otherwise hide until late:
- Orphan requirements: rows with no design element. Either the requirement isn’t implemented yet, or it slipped through and nobody noticed.
- Untested requirements: rows with no test case. The system might do what was asked; you have no proof.
- Tests with no requirement: tests that exist but don’t cover anything in the spec. Either the test is wasted effort, or there’s an unwritten requirement that someone is verifying anyway.
- Design elements with no requirement: code or hardware that isn’t justified by the spec. Almost always either dead code or unauthorized scope.
- Conflicts: two requirements that contradict each other become visible when both trace to the same design element.
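Most of these checks reduce to set operations once the matrix is machine-readable. A sketch using rows shaped like the example matrix above; the gap in SR-004's row and the stray test ID are illustrative assumptions:

```python
# Traceability rows as plain dicts: requirement -> design element -> tests.
# The checks mirror the list in the text: orphan requirements, untested
# requirements, and tests that cover nothing in the spec.

matrix = [
    {"id": "SR-001", "design": "Comms subsystem",   "tests": ["TC-201", "TC-202"]},
    {"id": "SR-002", "design": "Storage subsystem", "tests": ["TC-205"]},
    {"id": "SR-004", "design": None,                "tests": []},  # hypothetical gap
]
all_tests = {"TC-201", "TC-202", "TC-205", "TC-300"}  # TC-300 traces to nothing

orphan_reqs   = [r["id"] for r in matrix if r["design"] is None]
untested_reqs = [r["id"] for r in matrix if not r["tests"]]
covered       = {t for r in matrix for t in r["tests"]}
stray_tests   = sorted(all_tests - covered)

print(orphan_reqs)    # ['SR-004'] -- not implemented, or slipped through
print(untested_reqs)  # ['SR-004'] -- no proof behind it
print(stray_tests)    # ['TC-300'] -- wasted effort, or an unwritten requirement
```

Conflict detection is the one check that resists automation: two requirements tracing to the same design element is a prompt for a human to read both, not a defect by itself.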
Types of requirements.
Not all requirements are the same shape. Different kinds answer different questions, and a complete specification names them deliberately rather than mixing them together in one undifferentiated list. The taxonomy below is the one most projects converge on; the names matter less than the discipline of recognizing each kind when you write it.
Functional.
What the system does. The actions, transformations, and behaviors in scope.
Examples: "Decode AIS messages." "Trigger alarm on detection." "Log events to non-volatile storage."
Performance.
How well it does it. Speed, capacity, throughput, accuracy — with numbers.
Examples: "Process 100 messages/sec." "Latency <10ms." "Position accuracy ±5m."
Interface.
How it talks to other systems. Protocols, formats, signaling, timing relationships at the boundary.
Examples: "Send NMEA-0183 over RS-232 at 4800 baud." "Accept TCP connections on port 8080."
Environmental.
Conditions it must survive. Temperature, vibration, humidity, EMI, radiation — with numbers.
Examples: "Operate -40C to +85C." "Withstand 10g vibration." "Tolerate 100 krad."
Safety.
What harm it shall not cause and how it shall fail. Hazard mitigation and failure-mode behavior.
Examples: "Fail to safe state on power loss." "Detect collision <100ms." "Provide manual override."
Quality attributes.
Cross-cutting properties — reliability, maintainability, security, usability — quantified to be testable.
Examples: "MTBF >10,000 hours." "Mean time to repair <30 min." "FIPS-140 compliant."
The discipline of naming the type forces clarity. A requirement labeled “functional” that says “the system shall be reliable” is misclassified — reliability is a quality attribute and the requirement is unverifiable as written. Forcing the type tag pushes back on vague requirements at the moment they’re written, which is much cheaper than catching them at acceptance.
Common requirements mistakes.
The same handful of mistakes show up across most projects, and each has a recognizable signature once you know to look for it.
Confusing requirements with design.
Bad: "Use AES-256 encryption."
Good: "Protect data per NSA Type 1 standards."
The first sentence locks the implementation. The second states the actual need and lets the designer choose the right means. Requirements should specify what and how well, not how.
Missing error cases.
Bad: "Display sensor reading."
Good: "Display sensor reading. Show 'INVALID' when sensor is offline. Show 'OUT OF RANGE' when value exceeds limits."
Real systems spend most of their lives off the happy path. The error case is part of the requirement, not an oversight.
Untestable requirements.
Bad: "System should be user-friendly."
Good: "90% of users SHALL complete task X in <30 seconds without training."
If you can’t construct an objective test, the requirement is a wish.
Scope creep via adjectives.
Bad: "Robust, scalable, flexible system architecture."
Each adjective is a hidden requirement with no acceptance criteria, no priority, and no budget. They sound good in slides and become argument fuel at acceptance. Strip them.
Requirements best practices.
The mechanics of running a healthy requirements process come down to a small number of habits practiced consistently. None of them are intellectually difficult; the difficulty is in not skipping any of them under schedule pressure.
- Start with stakeholder needs: requirements come from real needs, not imagined ones. The first question is always “who needs this and why.”
- One requirement per statement: compound requirements hide complexity and complicate verification. Split them.
- Number every requirement: persistent IDs (SR-001, IR-002) survive renumbering and let you reference requirements unambiguously across documents.
- Maintain traceability religiously: the matrix is your memory. Update it as the design evolves, not as a year-end exercise.
- Review with stakeholders: get sign-off before development. Late discovery of misunderstood requirements is the most expensive bug class there is.
- Version control requirements: they will change, and you’ll need to know what they were when you made earlier commitments.
- Distinguish needs from wants: separate “must have” from “should have” from “nice to have.” Not everything is critical. Treating everything as critical means nothing actually is.
- Test the requirements before testing the system: can each requirement actually be verified with the test approach you have? If not, fix the requirement now, not at acceptance.
Requirements engineering looks tedious from the outside. It is the most leveraged work on a project. Hours invested here save weeks downstream — in misaligned teams, missed acceptance criteria, change-order disputes, and rework. The teams that take requirements seriously ship; the teams that don’t spend the second half of the project relitigating what they thought they agreed to at the start.