Engineering Project Phases
Every phase has a purpose: don't skip ahead. The concept phase isn't the time to write code, and the design phase isn't the time to still be freezing requirements. Each phase builds on the last; shortcuts create problems later.
Phase 0: Concept
Goal: Understand the problem and assess feasibility
- Stakeholder interviews
- Problem definition
- High-level requirements
- Feasibility analysis
- Rough cost/schedule estimate
- Go/no-go decision
Phase 1: Requirements
Goal: Define what success looks like
- Functional requirements
- Non-functional requirements
- Interface definitions
- Acceptance criteria
- Requirements review with customer
Phase 2: Design
Goal: Decide how we'll meet the requirements
- System architecture
- Trade studies (COTS vs custom)
- Interface control documents
- Design reviews (PDR)
- Risk mitigation plans
Phase 3: Implementation
Goal: Build it
- Detailed design & coding
- Unit testing as you go
- Integration (incremental)
- Code reviews & peer checks
- Critical Design Review (CDR)
Phase 4: Verification
Goal: Prove it meets requirements
- System-level testing
- Environmental testing
- Requirements traceability
- Test reports & documentation
- Test Readiness Review (TRR)
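The traceability bullet above can be sketched as a simple gate check: every requirement must trace to at least one test, and any gap blocks TRR. A minimal sketch; the requirement IDs and test names are hypothetical.

```python
# Minimal requirements-traceability check: every requirement must be
# covered by at least one test. All IDs below are hypothetical.
requirements = ["REQ-001", "REQ-002", "REQ-003"]

# Map each test to the requirement(s) it verifies.
test_coverage = {
    "test_logging_rate": ["REQ-001"],
    "test_file_rotation": ["REQ-002"],
}

covered = {req for reqs in test_coverage.values() for req in reqs}
uncovered = [req for req in requirements if req not in covered]

if uncovered:
    print("Traceability gap:", uncovered)  # prints: Traceability gap: ['REQ-003']
```

In practice the same mapping is maintained in the Requirements Traceability Matrix (RTM); automating the gap check keeps it honest.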
Phase 5: Delivery & Support
Goal: Deploy and maintain
- Customer acceptance
- Training & documentation
- Deployment support
- Bug fixes & patches
- Lessons learned
Key Deliverables by Phase
Document what matters: Don't over-document, but don't under-document. Requirements, design decisions, test results—these must be captured. Meeting notes and status emails? Less critical.
| Phase | Critical Deliverables | Review/Gate |
|---|---|---|
| Concept | Concept of Operations (ConOps), Feasibility Study, Cost/Schedule Estimate | Concept Review |
| Requirements | Requirements Specification, Interface Control Documents (ICDs) | Requirements Review (SRR) |
| Design | System Architecture Document, Trade Study Reports, Risk Register | Preliminary Design Review (PDR) |
| Implementation | Detailed Design Docs, Source Code, Unit Test Results | Critical Design Review (CDR) |
| Verification | Test Plans, Test Procedures, Test Reports, RTM | Test Readiness Review (TRR) |
| Delivery | As-Built Documentation, User Manuals, Training Materials | Acceptance Review |
Code Freeze & Release Management
Stop adding features: Code freeze means FREEZE. No new features, only critical bug fixes. Every "just one more thing" delays delivery and destabilizes the build. Discipline here separates professionals from amateurs.
Release Process
Typical Release Timeline
- T-8 weeks: Feature complete. All planned functionality implemented.
- T-6 weeks: Code freeze. Only bug fixes allowed, each requiring approval.
- T-4 weeks: Integration testing complete. System-level tests passing.
- T-2 weeks: Regression testing. Verify nothing broke from bug fixes.
- T-1 week: Release candidate. Final smoke tests, documentation review.
- T-0: Release to customer. Deployment support begins.
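As a quick sketch, the T-minus milestones above can be turned into concrete calendar dates; the release date used here is an arbitrary example.

```python
from datetime import date, timedelta

# Compute calendar dates for the T-minus milestones, counting back
# from a target release date (the date here is illustrative).
release = date(2025, 9, 1)
milestones = {
    "Feature complete": 8,
    "Code freeze": 6,
    "Integration testing complete": 4,
    "Regression testing": 2,
    "Release candidate": 1,
    "Release to customer": 0,
}

for name, weeks in milestones.items():
    print(f"T-{weeks}w  {release - timedelta(weeks=weeks)}  {name}")
```

Publishing the dates this way makes slips visible: if feature complete moves, every downstream milestone moves with it.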
Managing scope creep during freeze: Customer wants a new feature after code freeze? Document it and add it to the backlog for the next release. Don't compromise the current release for "one more thing."
Hotfix vs Patch vs Update
Know the severity: Not all post-release changes are equal. Critical security flaw? Hotfix immediately. Minor UI bug? Wait for next scheduled update. Triage based on impact and risk.
Hotfix
Emergency repair
- Critical bug in production
- Security vulnerability
- System down or data loss risk
- Deploy ASAP (hours/days)
- Minimal testing (focus on fix)
Patch
Important bug fix
- Non-critical bugs affecting users
- Performance issues
- Accumulated small fixes
- Deploy within weeks
- Regression testing required
Update/Release
Planned new version
- New features
- Minor bug fixes
- Performance improvements
- Scheduled deployment
- Full test cycle
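One way to make the triage rule explicit is a small decision function. A sketch only: the two impact flags are assumptions for illustration, not an official taxonomy.

```python
# Sketch of the triage logic above: map an issue's impact to the
# kind of post-release change it warrants.
def triage(critical: bool, affects_users: bool) -> str:
    if critical:            # security flaw, data loss, system down
        return "hotfix"     # deploy ASAP, minimal targeted testing
    if affects_users:       # non-critical bugs, performance issues
        return "patch"      # deploy within weeks, regression tested
    return "update"         # fold into the next scheduled release

print(triage(critical=True, affects_users=False))   # prints: hotfix
```

Encoding the rule, even informally, keeps triage consistent when the pressure is on.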
Anticipating Customer Needs
Think beyond the stated requirement: Customers don't always know what they need. A great engineer spots unstated requirements and addresses them proactively. "The spec says X, but what they actually need is Y."
Example: Data Logger Request
Customer says: "We need to log sensor data to a file."
Stated requirement: Data logging functionality.
Unstated but obvious needs:
- What happens when disk fills up? (rotate logs, alert operator)
- How do they retrieve logs? (USB export, network transfer)
- What if power fails during logging? (flush buffers, resume on reboot)
- Can they search/filter logs? (timestamps, event types)
- How long do they keep data? (retention policy, compression)
Result: Anticipate these needs in the design. The customer will appreciate that you thought ahead.
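Two of the unstated needs above (the disk filling up, and searchable timestamped records) can be sketched with Python's standard rotating log handler. The file name and size limits are illustrative, not from any spec.

```python
import logging
from logging.handlers import RotatingFileHandler

# Size-capped, rotating log: addresses "what happens when the disk
# fills up?" by bounding total log size. Limits are illustrative.
handler = RotatingFileHandler(
    "sensor.log",
    maxBytes=1_000_000,   # cap each file at ~1 MB
    backupCount=5,        # keep at most 5 old files, then overwrite
)
# Timestamps and levels make the logs searchable and filterable.
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s")
)

log = logging.getLogger("sensor")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("temperature=21.7C")
```

Power-fail resilience and retrieval (flush-on-write, USB/network export) would still need explicit design work; this only shows how cheaply the first concerns can be covered.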
ROI & Business Thinking
Engineering decisions are business decisions: Every choice has cost implications. Custom hardware vs COTS? Development time vs recurring cost? These aren't just technical decisions—they affect ROI, time-to-market, and profitability.
Build vs Buy Analysis
Example: Custom Power Supply
COTS Option:
- Cost: $200/unit
- Available now
- Proven reliability
- Vendor support
- 100 units = $20,000 recurring
Custom Option:
- NRE: $50,000 (design + prototyping)
- Unit cost: $80 (at 100 qty)
- 6 months development time
- Maintenance burden on us
- 100 units = $50,000 NRE + $8,000 in units = $58,000
Break-even: At ~417 units, custom becomes cheaper. But consider:
- Will we sell 417+ units? (market analysis)
- Can we afford 6-month delay? (time-to-market)
- Do we have resources for support? (long-term cost)
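The break-even arithmetic above, as a quick sketch:

```python
# Break-even for the trade above: COTS at $200/unit vs custom at
# $80/unit plus $50,000 NRE. Custom wins once the NRE is amortized.
nre = 50_000
cots_unit, custom_unit = 200, 80

# NRE divided by the per-unit saving, rounded up to whole units.
break_even = -(-nre // (cots_unit - custom_unit))
print(break_even)                # prints: 417

# At the planned 100 units, COTS is still far cheaper:
print(100 * cots_unit)           # prints: 20000
print(nre + 100 * custom_unit)   # prints: 58000
```

The same two-line calculation generalizes to most build-vs-buy trades; the hard part is the inputs (realistic NRE and honest volume forecasts), not the math.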
Think like a business: Your job isn't to build the most elegant solution—it's to deliver value to the customer at acceptable cost and risk. Sometimes that means boring COTS components. Sometimes it means custom engineering. ROI drives the decision.
Lessons Learned Process
Actually capture lessons learned: Every project has mistakes and successes. Document them while they're fresh. "We should have..." means nothing if you don't write it down and review it before the next project.
Questions to Ask
What Went Well?
- Which processes worked smoothly?
- What decisions paid off?
- What tools/methods were effective?
- How do we repeat this success?
What Went Wrong?
- What caused delays or rework?
- Which assumptions were wrong?
- What would we do differently?
- How do we prevent this next time?
Process Improvements
- Update templates/checklists
- Add to risk register for similar projects
- Share with team (brown bag lunch)
- Review lessons before next project kickoff