
Reference.

The lookups, red flags, and decision frameworks you’ll come back to. Not a chapter to read end-to-end — a chapter to bookmark and skim when you need an answer faster than you need an explanation.

How to use this page.

The other chapters teach principles. This one collects the lookups, formulas, checklists, and decision frameworks you reach for under time pressure.

Each entry here is short on purpose. If you want the full teaching, follow the links into the relevant chapter. If you just need to remember whether kTB is in dBm or watts, or how to compute a Doppler shift for a low-Earth-orbit pass, this page is where the answer lives.

Project red flags.

Patterns that show up reliably in projects that are about to slip, fail acceptance, or ship something the customer didn’t actually want. None of these are proof of trouble on their own. Two or three together, especially across categories, is signal worth acting on.

Requirements red flags

  • "We'll figure out requirements as we go"
  • Requirements change weekly with no process
  • No one can explain why a requirement exists
  • Requirements conflict with each other
  • "The customer will tell us when they see it"

Design red flags

  • "Let's make it generic for future use cases"
  • No trade studies, just gut feelings
  • Margins below 10% or completely unknown
  • "We'll optimize later" (spoiler: you won't)
  • Architecture changes during implementation
  • "We're doing it that way because the last project did"

Process red flags

  • No code reviews or peer checks
  • Testing happens only at the end
  • The first integration test is the entire system
  • Code review comments are about style; nobody mentions the problem domain
  • Documentation is "we'll catch up later"
  • "The latest version is the one I emailed you"
  • Meetings to plan meetings to discuss meetings
  • Decisions made by committee, no ownership

Team red flags

  • Key person is single point of failure
  • Engineers padding estimates 3-5x
  • Constant firefighting, no time to think
  • "That's not my job" mentality
  • Finger-pointing instead of problem-solving

Schedule red flags

  • Schedule created by management, not engineers
  • No buffer for integration or testing
  • Delivery date set before requirements defined
  • Critical path entirely on one person
  • "We'll catch up next sprint" (you won't)

Customer relationship red flags

  • Customer hasn't seen progress in months
  • Surprises at every demo
  • "They'll be happy with what we deliver"
  • No customer involvement in reviews
  • Different stakeholders want different things

Troubleshooting checklist.

When something doesn’t work, the temptation is to start with the most interesting hypothesis — the bug must be in the new code, in the protocol, in the architecture. Most of the time the bug is in something boring. The discipline is to check the boring things first, in order, before reaching for the interesting ones.

The list below is roughly ordered from cheapest to most expensive to check, and from most likely to least likely as a root cause. It is intentionally beginner-friendly at the top — the experienced engineer skims it, the new engineer learns to skim it, and both benefit from the discipline of not skipping ahead.

Layer 0: Is it actually broken?

  • Did you check the right device? (Wrong board, wrong port, wrong terminal — very common.)
  • Is the symptom new, or did you just notice it?
  • Can you reproduce it deliberately, or is it intermittent?
  • What changed since it last worked?

Layer 1: Power.

  • Is it plugged in? Is the supply on? Is the LED lit?
  • Measure voltage at the input. Is it the rated voltage?
  • Measure voltage at the load. Did it sag under load?
  • Check current draw. Is the regulator current-limiting?
  • Check the fuse, the protection diode, the polarity.

Layer 2: Signal integrity.

  • Put a scope on the signal. Does it look clean?
  • Check rise and fall times at the load — are they within the receiver’s tolerance?
  • Look for ringing, overshoot, undershoot, reflections.
  • Verify termination on transmission lines (50Ω, 75Ω, 100Ω differential).
  • Check for ground bounce, common-mode noise, ground loops.

Layer 3: Cabling and connectors.

  • Is the cable seated all the way? Connector locked?
  • Continuity test the cable end-to-end with a multimeter.
  • Check for bent pins, damaged shielding, contamination.
  • Swap cables. Cables are unreliable for the same reason fasteners are: they’re cheap, they’re moved often, and they fail silently.

Layer 4: Protocol and framing.

  • Capture the protocol — logic analyzer for digital, scope for analog, sniffer for network.
  • Are bits being transmitted at all? At the right rate? With the right framing?
  • Check addresses, IDs, framing markers, checksums, CRC.
  • Verify endianness on both sides. Surprisingly common, surprisingly slow to find; see the sketch after this list.
  • Check timing — setup time, hold time, propagation delays. Logic analyzer with timing mode.
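
A quick way to settle the endianness question is to pack a known value both ways and compare against the bytes in your capture. A minimal Python sketch; the field value and the ICD wording are invented for illustration:

```python
import struct

value = 0x12345678

big = struct.pack(">I", value)     # network / big-endian byte order
little = struct.pack("<I", value)  # little-endian byte order

print(big.hex())     # 12345678
print(little.hex())  # 78563412

# If the wire shows 78 56 34 12 where the ICD says 12 34 56 78,
# one side is packing little-endian against a big-endian spec.
```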

Layer 5: Software state.

  • Is the device in the state you think it’s in?
  • Initialization sequence: did every step succeed? Many drivers fail silently in init.
  • Is an interrupt enabled? Masked? Stuck?
  • Is the buffer overflowing? Underflowing? Wraparound?
  • Check for race conditions, especially at startup or under load.

Layer 6: Configuration.

  • Are you running the version of code you think you are? (See configuration management.)
  • Is the configuration file the one in source control, or the modified one on the test fixture?
  • Build flags, defines, optimization levels — same as last successful build?
  • Has someone “helpfully” updated firmware on the test setup without telling anyone?

Common debug mistakes.

Don’t do this

  • Change multiple things at once
  • Assume it’s the new code without checking
  • Skip basic measurements (voltage, signal, framing)
  • Trust documentation over actual measurement
  • Try to fix it before understanding the failure
  • "Reboot and hope" as a strategy

Do this instead

  • Change one variable, observe, document, repeat
  • Verify the basic path works before chasing complex ones
  • Measure first, then form hypotheses
  • Trust your scope, your meter, your logic analyzer
  • Reproduce the bug deliberately before fixing
  • Write down what you tried and what you observed

Formulas worth knowing.

The handful of formulas below come up often enough in everyday engineering that having them memorized — or at least bookmarked — saves real time. Each is paired with the situation where you reach for it. None of them substitute for understanding the underlying physics; they exist to remind you what you already know.

Power and electrical.

Ohm’s law

When you have any two of voltage, current, and resistance, and need the third.

V = I × R

V in volts, I in amps, R in ohms. The most-used formula in electrical work; commit it to memory.

Power dissipation

Sizing regulators, heatsinks, current-sense resistors, or anything where heat could matter.

P = V × I = I² × R = V² / R

P in watts. Three forms because you usually have two of V/I/R and need to compute heat from whichever pair you have.
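
A worked example, sketched in Python, showing the three forms agree. The 0.1 Ω sense resistor at 3 A is an invented scenario, not from the text:

```python
I = 3.0    # amps through the sense resistor
R = 0.1    # ohms
V = I * R  # volts across it, from Ohm's law

p_vi = V * I       # 0.9 W
p_i2r = I**2 * R   # 0.9 W
p_v2r = V**2 / R   # 0.9 W

print(p_vi, p_i2r, p_v2r)  # all 0.9 W, so spec at least a 1 W part
```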

Battery life (first approximation)

Estimating runtime from battery capacity and average load.

Hours = (Battery_Wh) / (Average_W)

Multiply by an efficiency factor (typically 0.7–0.9) for real-world losses. Battery_Wh = battery_Ah × battery_V.
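
As a sketch in Python (the function name, the 0.8 efficiency default, and the example cell are illustrative assumptions):

```python
def battery_hours(battery_ah, battery_v, avg_load_w, efficiency=0.8):
    """First-cut runtime estimate. The 0.8 efficiency default is an assumed
    mid-range value for converter and temperature losses."""
    battery_wh = battery_ah * battery_v
    return efficiency * battery_wh / avg_load_w

# Example: a 3.5 Ah, 3.7 V cell (about 13 Wh) feeding a 0.5 W average load.
print(battery_hours(3.5, 3.7, 0.5))  # about 21 hours
```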

RF and link budgets.

Free-space path loss

Computing how much signal you lose between two antennas in clear air.

FSPL_dB = 20·log₁₀(d) + 20·log₁₀(f) + 20·log₁₀(4π/c)

d in meters, f in Hz. The constant for d in km and f in MHz is 32.45; for d in km and f in GHz, it’s 92.45.
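
The same formula as a Python sketch, keeping d in meters and f in Hz so the constant term stays explicit; the 100 m, 2.45 GHz example is invented:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB; d in meters, f in Hz."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Example: 2.45 GHz over 100 m of clear air.
print(fspl_db(100, 2.45e9))  # about 80 dB
```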

Thermal noise floor (kTB)

Computing the noise floor of a receiver to figure out the minimum signal you need.

N_dBm = -174 + 10·log₁₀(B) + NF

B in Hz (the receiver bandwidth), NF the receiver’s noise figure in dB. The -174 dBm/Hz is kT at 290 K, the standard reference temperature.
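
A Python sketch of the same calculation; the 25 kHz bandwidth and 5 dB noise figure are example numbers, not from the text:

```python
import math

def noise_floor_dbm(bandwidth_hz, noise_figure_db):
    """Thermal noise floor referred to the receiver input: kTB at 290 K plus NF."""
    return -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db

# Example: a 25 kHz channel through a receiver with a 5 dB noise figure.
print(noise_floor_dbm(25e3, 5))  # about -125 dBm
```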

Doppler shift

Estimating the frequency offset for a moving emitter or receiver, especially low-Earth-orbit satellite passes.

Δf = f × (v / c)

v is the relative line-of-sight velocity, c is the speed of light, f is the carrier frequency. For a 437 MHz amateur satellite at LEO, max v is about 7.5 km/s, giving roughly ±10 kHz.
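
Sketched in Python, reproducing the 437 MHz, 7.5 km/s example from the text:

```python
def doppler_shift_hz(carrier_hz, velocity_mps):
    """First-order Doppler shift for line-of-sight velocity v much less than c."""
    c = 3e8  # speed of light, m/s
    return carrier_hz * velocity_mps / c

# 437 MHz carrier, 7.5 km/s line-of-sight velocity at LEO.
print(doppler_shift_hz(437e6, 7.5e3))  # about 10.9 kHz, so plan for +/- 11 kHz
```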

Carson’s rule (FM bandwidth)

Sizing the bandwidth needed for an FM signal, including spectrum-allocation work and filter design.

BW ≈ 2 × (Δf + f_m)

Δf is the peak frequency deviation, f_m is the highest modulating frequency. Approximation; captures roughly 98% of signal power.
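
A Python sketch; the narrowband voice-FM numbers (5 kHz deviation, 3 kHz audio) are illustrative:

```python
def carson_bandwidth_hz(peak_deviation_hz, max_modulating_hz):
    """Carson's rule: approximate occupied bandwidth of an FM signal."""
    return 2 * (peak_deviation_hz + max_modulating_hz)

# Example: narrowband voice FM, 5 kHz peak deviation, 3 kHz audio.
print(carson_bandwidth_hz(5e3, 3e3))  # 16 kHz
```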

Wavelength

Antenna sizing, transmission line analysis, anything where you care about the physical size of the wave.

λ = c / f

For 2.4 GHz, λ is roughly 12.5 cm. A quarter-wave whip antenna for that band is about 3 cm.
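
The same arithmetic in Python, reproducing the 2.4 GHz numbers above:

```python
def wavelength_m(freq_hz):
    """Free-space wavelength in meters."""
    return 3e8 / freq_hz

f = 2.4e9
print(wavelength_m(f))      # about 0.125 m, i.e. 12.5 cm
print(wavelength_m(f) / 4)  # about 3.1 cm quarter-wave element
```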

Signals and data.

Shannon’s capacity limit

Sanity-checking whether a proposed data rate is physically achievable on the link you have.

C = B × log₂(1 + S/N)

C is channel capacity in bits/sec, B is bandwidth in Hz, S/N is the linear (not dB) signal-to-noise ratio. If your design needs more, the link won’t deliver it.
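
A Python sketch that makes the dB-to-linear conversion explicit; the 1 MHz, 10 dB example is invented:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Upper bound on error-free data rate; note the dB-to-linear conversion."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: 1 MHz of bandwidth at 10 dB SNR.
print(shannon_capacity_bps(1e6, 10))  # about 3.46 Mbit/s ceiling
```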

Nyquist sampling rate

Choosing a sample rate for an ADC or DSP design.

f_s > 2 × f_max

Sample at strictly more than twice the highest frequency in your signal. In practice, oversample by 2–5× to leave room for filtering.
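
A Python sketch; the 2.5× factor on top of the Nyquist minimum and the 20 kHz example are assumptions, so pick your own margin:

```python
def sample_rate_hz(f_max_hz, oversample=2.5):
    """Nyquist requires strictly more than 2 x f_max; the oversample factor
    (assumed 2.5x here) leaves room for a realizable anti-alias filter."""
    return oversample * 2 * f_max_hz

# Example: a 20 kHz signal of interest.
print(sample_rate_hz(20e3))  # 100 kHz sample rate
```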

Bit error rate vs SNR (rough)

Estimating the SNR you need for a target BER on a given modulation.

QPSK: BER = 10⁻⁶ needs Eb/N0 ≈ 10.5 dB; BER = 10⁻³ needs ≈ 6.8 dB.

Different modulations have different curves. The exact numbers come from textbook tables; what you usually need is the order of magnitude.
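
One way to recover those numbers is the textbook AWGN expression for coherent BPSK and Gray-coded QPSK, BER = ½·erfc(√(Eb/N0)); a Python sketch:

```python
import math

def qpsk_ber(ebn0_db):
    """Theoretical Gray-coded QPSK bit error rate in AWGN."""
    ebn0 = 10 ** (ebn0_db / 10)  # dB to linear
    return 0.5 * math.erfc(math.sqrt(ebn0))

print(qpsk_ber(10.5))  # about 1e-6, matching the table value
print(qpsk_ber(6.8))   # about 1e-3
```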

Reliability.

MTBF and reliability

Estimating how often something will fail, or sizing redundancy.

R(t) = e^(-t / MTBF)

Reliability R(t) is the probability of surviving to time t with no failure. For t = MTBF, R = 0.37; you only have about a 37% chance of running for the full MTBF without a single failure.
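
A Python sketch; the 50,000-hour MTBF and one-year mission are invented example numbers:

```python
import math

def reliability(t_hours, mtbf_hours):
    """Probability of surviving t hours with no failure (constant failure rate)."""
    return math.exp(-t_hours / mtbf_hours)

# Example: a unit with a 50,000 hour MTBF over a one-year mission (8760 h).
print(reliability(8760, 50_000))    # about 0.84
print(reliability(50_000, 50_000))  # about 0.37, as noted above
```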

Series and parallel reliability

Sizing redundant subsystems.

Series: R = R₁ × R₂ × ... ; Parallel: R = 1 - (1-R₁) × (1-R₂) × ...

Two units in series with R = 0.95 each give R = 0.90. Two in parallel give R = 0.9975. Redundancy is expensive and effective.
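
Both rules as a Python sketch, reproducing the 0.95-per-unit example above:

```python
def series(*r):
    """All units must work: multiply the individual reliabilities."""
    out = 1.0
    for x in r:
        out *= x
    return out

def parallel(*r):
    """At least one unit must work: multiply the failure probabilities."""
    fail = 1.0
    for x in r:
        fail *= (1 - x)
    return 1 - fail

print(series(0.95, 0.95))    # 0.9025
print(parallel(0.95, 0.95))  # 0.9975
```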

Decision framework: when to act.

When a project shows red flags, the question isn’t “is this bad” (it usually is) but “what action does the severity warrant?” The matrix below maps observed signals to action levels. The action levels are deliberately tiered — not every problem is a fire, and treating everything as a fire drains the meaning out of the response.

| Severity | What you observe | Action | Timeframe |
| --- | --- | --- | --- |
| Critical | Multiple red flags. Stakeholder loss of confidence. Schedule slipping with no recovery plan. | Stop, assess, restructure. Bring in outside perspective. | Days |
| Serious | Specific red flags emerging. Team dysfunction. Quality dropping. | Address directly with team and stakeholders. Document concerns. | Weeks |
| Concerning | Subtle warning signs. Things are off but unclear what. | Increase observation. Talk to team members. Watch for patterns. | Sprints |
| Routine | Normal project friction. Expected challenges. | Continue working through standard processes. | Ongoing |

The complexity check. Before adding any subsystem, interface, or layer of architecture, ask: is this complexity solving the problem in front of me, or inherited from a different problem? If the honest answer is not the first, you're inheriting solutions designed for someone else's constraints. (See complexity by inheritance.)

Project kickoff checklist.

Before any project kicks off, the answers to a small set of questions need to be in writing — not in someone’s head, not in a hallway conversation, in writing. Each of these is cheap to answer at the start and expensive to answer later, when the project is in motion and the answer matters.

Problem definition

  • What problem are we solving?
  • Who has this problem?
  • How is it being solved today?
  • What's wrong with current solutions?
  • How will we know we've succeeded?

Requirements and scope

  • What MUST the system do? (Critical)
  • What SHOULD it do? (Important)
  • What's nice-to-have? (Optional)
  • What's explicitly out of scope?
  • Who approves changes?

Constraints and resources

  • Budget: NRE and recurring
  • Schedule: hard deadlines and milestones
  • Team: skills available and gaps
  • Technology: required vs preferred
  • Standards: regulatory, industry, customer

Risks and dependencies

  • Top 5 risks (technical, schedule, business)
  • Mitigation strategies for each
  • External dependencies (vendors, parts, approvals)
  • What kills the project? (Worst-case scenarios)
  • Who decides if we're in trouble?

Code review checklist.

A peer review by someone who understands the problem domain catches different things than a peer review by someone who only knows the language. The checklist below is a starting point, ordered roughly by impact: correctness first, design second, testing third. A review that surfaces only items from the third category and never the first is a signal to reach for reviewers with more domain context, not to give up on reviews.

Correctness

  • Does it implement the requirement?
  • Are edge cases handled? (NaN, zero, max, empty)
  • Error handling: graceful degradation?
  • Concurrency: race conditions, deadlocks?
  • Resource management: leaks, cleanup?

Design and maintainability

  • Is the abstraction at the right level?
  • Single responsibility per function/module?
  • Naming: does it read naturally?
  • Will I (or someone) understand this in 6 months?
  • Could this be simpler?

Testing and documentation

  • Tests for happy path and edge cases?
  • Tests verify behavior, not implementation?
  • Comments explain WHY, not WHAT?
  • Public APIs documented?
  • Updated relevant external docs?

Common-sense checks.

Before any number you compute leaves your spreadsheet, run it past four quick filters that catch the order-of-magnitude errors that subtle algebra misses. The answer that's off by 1000× will pass every formal check and fail every common-sense one.

Order of magnitude.

Does the answer make sense in human-scale units? A satellite with a 10 kilowatt continuous draw is wrong. A 50-meter-long handheld antenna is wrong. A controller running at 100 GHz on a board that costs $20 is wrong. If your answer would surprise an experienced colleague at a glance, the answer is wrong before the colleague checks the math.

Units.

Are the units consistent through the calculation? Most catastrophic errors come from mixing units — meters with feet, dB with linear ratios, watts with volts. The Mars Climate Orbiter was lost to this exact mistake. Annotate every variable with its unit. Cancel the units symbolically along with the numbers. If the answer should be in seconds and the units don’t cancel to seconds, the answer is wrong.
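
One low-tech version of the habit, sketched in Python: carry the unit in the variable name and write out every conversion. The ground-station scenario is invented for illustration:

```python
# Propagation delay over a 2000 km slant range, with units kept explicit.
slant_range_km = 2_000.0
slant_range_m = slant_range_km * 1_000.0     # km -> m, written out
c_m_per_s = 3.0e8                            # speed of light

one_way_delay_s = slant_range_m / c_m_per_s  # m / (m/s) -> s
one_way_delay_ms = one_way_delay_s * 1_000.0 # s -> ms

print(one_way_delay_ms)  # about 6.7 ms; a 6700 ms answer should fail the glance test
```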

Sign and direction.

Is the sign right? A power that came out negative, a velocity in the wrong direction, a phase that went the opposite way — these are usually setup errors, not real results. Check the sign on the way out, not just the magnitude.

Limits.

What does the formula predict at the extremes? At zero, at infinity, at the boundary cases? If the answer at zero or infinity is nonsensical, the formula or the model is wrong somewhere. The limits are cheap sanity checks that catch errors the middle range hides.

The professional habit. The most common source of catastrophic engineering errors is not bad math — it’s good math applied to the wrong setup. Common-sense checks are how the professional catches their own errors before someone else has to.

Useful acronyms.

Engineering documents are dense with abbreviations. The list below covers the ones most likely to show up in mixed-discipline conversations — not exhaustive, but enough to follow most engineering meetings without having to ask. Each entry has a one-line note on what it actually means in practice, not just what it stands for.

| Term | Meaning | What it actually is |
| --- | --- | --- |
| AIT | Assembly, Integration, Test | The phase where things stop being subsystems and start being a system. |
| BER | Bit Error Rate | The fraction of received bits that are wrong; sets your link margin. |
| BOM | Bill of Materials | The list of every part in your product. Drives cost and procurement. |
| CDR | Critical Design Review | The review that closes detailed design before fabrication. |
| COTS | Commercial Off-The-Shelf | You buy it. It already exists. Almost always cheaper than custom. |
| EDR | Engineering Design Release | The point where engineering hands off to manufacturing. |
| EMC | Electromagnetic Compatibility | Whether your device coexists with everything else without interference. |
| EMI | Electromagnetic Interference | The interference itself, the thing EMC tries to manage. |
| FAT | Factory Acceptance Test | Customer tests at your factory before shipment. |
| FMEA | Failure Mode and Effects Analysis | Systematic walkthrough of how each part can fail and what happens when it does. |
| HALT | Highly Accelerated Life Test | You stress the device until it breaks, to find the weakest link. |
| HASS | Highly Accelerated Stress Screen | Production-line stress to weed out infant mortality. |
| ICD | Interface Control Document | The contract between two subsystems. Lock it early. |
| IV&V | Independent Verification & Validation | External team that audits your testing — no skin in the game. |
| MIL-STD | Military Standard | Defense specification documents. Numbered (810, 461, 1553, etc). |
| MTBF | Mean Time Between Failures | The reliability number on a datasheet. Usually optimistic. |
| NRE | Non-Recurring Engineering | The one-time cost of designing something. The other large line item. |
| PDR | Preliminary Design Review | The review that closes architecture before detailed design begins. |
| PHY | Physical Layer | The hardware that actually moves bits, below any protocol. |
| SAT | Site Acceptance Test | Customer tests after deployment, on site. |
| SDR | Software-Defined Radio | RF processing in software rather than dedicated silicon. |
| SNR | Signal-to-Noise Ratio | How loud the signal is relative to the noise; usually in dB. |
| SoC | System on Chip | Processor, peripherals, often FPGA fabric, on one die. |
| TRL | Technology Readiness Level | 1 through 9; how mature a technology is, NASA's scale. |
| V&V | Verification & Validation | The combined discipline; verification is internal, validation is external. |
| WCET | Worst-Case Execution Time | Not the average. The slowest the task can ever run. Hard to measure honestly. |