Thoughts on Requirements Decomposition
Requirements are the DNA of the design. Too often they’re the weak link: overlaps, gaps, and unverifiable clauses accumulate until they reappear downstream as integration surprises or redesigns. A simple organizing rule helps: Mutually Exclusive, Collectively Exhaustive (MECE), a structured-thinking discipline that decomposes a problem into parts that do not overlap (mutually exclusive) and that together leave no gaps (collectively exhaustive). Applied to requirements, each one owns a distinct aspect of what the system must do, and the set covers the full scope with no duplicates. MECE turns “complete and consistent” from a slogan into something you can inspect in the structure of the requirement set itself.
MECE in an MBSE World
In MBSE, we don’t invent what the system must satisfy; we model it. Behavior lives in use cases, activities, and state machines. Structure lives in block definition and internal block diagrams. Constraints live in parametrics, and requirements live in requirement diagrams. The model becomes the single source of truth, and safety, security, environmental, and regulatory needs belong there too. From that model, we derive the complete set of stakeholder needs, constraints, and requirements, with each requirement tracing to a specific modeled portion of need. If those portions are non-overlapping and collectively cover the whole, the requirement set is MECE. I believe one could implement indices that flag overlaps, gaps, unverifiable requirements, and orphaned tests through automated queries over trace data, scripted in tools like DOORS and Cameo. This aligns with the DoD’s reference design principles for vetted software factories.
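To make that concrete, here is a minimal sketch of what such queries might look like, assuming trace links have already been exported from the model as flat records. The record layout, requirement and test IDs, and field names are all illustrative inventions, not a DOORS or Cameo API.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    model_elements: set        # modeled portions of need this requirement traces to
    verification: str | None   # e.g. "test" or "analysis"; None = unverifiable as written

# Illustrative trace export -- not a real DOORS or Cameo data structure.
requirements = [
    Requirement("R1", {"UC1.step1", "UC1.step2"}, "test"),
    Requirement("R2", {"UC1.step2"}, "test"),          # overlaps R1 on UC1.step2
    Requirement("R3", {"UC2.step1"}, None),            # no verification method
]
modeled_scope = {"UC1.step1", "UC1.step2", "UC2.step1", "UC2.step2"}
test_traces = {"T1": "R1", "T2": "R2", "T9": "R99"}    # test id -> requirement id

covered = set().union(*(r.model_elements for r in requirements))
gaps = modeled_scope - covered                         # collectively exhaustive?
overlaps = {e for i, r in enumerate(requirements)      # mutually exclusive?
            for s in requirements[i + 1:]
            for e in r.model_elements & s.model_elements}
unverifiable = {r.req_id for r in requirements if r.verification is None}
known_ids = {r.req_id for r in requirements}
orphan_tests = {t for t, rid in test_traces.items() if rid not in known_ids}

print(f"gaps={gaps}, overlaps={overlaps}, "
      f"unverifiable={unverifiable}, orphan tests={orphan_tests}")
```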
MECE used to live mainly as a whiteboard exercise. I propose we treat it as a measurable property of a requirement set: calculate coverage and overlap, track trends over time, and correct structural issues before they reach integration.
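One way to compute those coverage and overlap figures, sketched under the same illustrative assumptions as above; the two indices and their names are my own illustration, not an established metric. A MECE requirement set converges toward coverage 1.0 and overlap 0.0 across baselines.

```python
def mece_indices(scope: set, alloc: dict) -> tuple:
    """Coverage: fraction of modeled scope owned by some requirement.
    Overlap: fraction of covered elements claimed by more than one requirement."""
    covered = set().union(*alloc.values()) & scope
    claims = [e for elems in alloc.values() for e in elems]
    shared = {e for e in covered if claims.count(e) > 1}
    return len(covered) / len(scope), (len(shared) / len(covered) if covered else 0.0)

scope = {"A", "B", "C", "D"}    # illustrative modeled scope
history = {                     # requirement -> owned scope elements, per baseline
    "baseline-1": {"R1": {"A", "B"}, "R2": {"B"}},               # gap on C/D, overlap on B
    "baseline-2": {"R1": {"A"}, "R2": {"B"}, "R3": {"C", "D"}},  # MECE
}
for tag, alloc in history.items():
    cov, ovl = mece_indices(scope, alloc)
    print(f"{tag}: coverage={cov:.2f} overlap={ovl:.2f}")
```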
An Example: Cleaner Backlogs, Cleaner Pipelines
Consider a cloud-native mission application progressing through a DevSecOps pipeline. On good days, code moves from commit to production under clear controls in Continuous Integration/Continuous Delivery (CI/CD). On bad days, checks accumulate and overlap until the pipeline becomes decorative rather than diagnostic. The remedy is to make the requirement set MECE and bind each aspect to one specific piece of verification evidence, so every gate enforces a single requirement rather than “security in general.”
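A minimal sketch of that binding, with invented requirement IDs, evidence names, and thresholds: because each gate enforces exactly one requirement, a blocked release names a specific unmet requirement instead of a vague pipeline failure.

```python
# One gate per requirement, one evidence check per gate (all IDs illustrative).
GATES = {
    "REQ-SEC-012":  ("image signature", lambda ev: ev.get("signature_valid") is True),
    "REQ-SUP-003":  ("sbom attached",   lambda ev: bool(ev.get("sbom_attached"))),
    "REQ-PERF-007": ("p99 latency MOP", lambda ev: ev.get("p99_latency_ms", 1e9) <= 200),
}

def evaluate(evidence: dict) -> list:
    """Return the requirement IDs whose single bound check failed."""
    return [req for req, (_label, check) in GATES.items() if not check(evidence)]

release = {"signature_valid": True, "sbom_attached": True, "p99_latency_ms": 340}
print("blocked by:", evaluate(release))   # -> ['REQ-PERF-007'], the exact unmet requirement
```

Because the mapping is plain data, it can be reviewed like a contract: every check owns a requirement, and every requirement owns a check.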
For container-based cloud applications, the evidence should be verifiable and portable. Container images carry cryptographic signatures and provenance attestations that prove who built them, how, and from which sources. Each release includes a Software Bill of Materials (SBOM) so we know exactly what is inside. Automated tests demonstrate functional behavior and conformance to defined performance targets, or Measures of Performance (MOPs). Platform policy rejects deployments that lack required proof. In practice, this looks like release artifacts signed during the build, an SBOM attached to the image, tests that show both correctness and performance against MOP thresholds, and deployment gates that block anything unsigned, out-of-policy, or missing results. When a gate stops a release, we can point to the exact requirement not met (feature behavior, reliability MOP, security control, compliance obligation, or operational readiness) rather than waving at the whole pipeline.
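As one hedged example of putting the SBOM to work in a gate, the sketch below rejects a release whose SBOM lists a denylisted component. The trimmed SPDX-style structure, the denylist contents, and the requirement ID are all illustrative.

```python
# Reject a release if its SBOM lists a denylisted component; this gate is bound
# to a single (hypothetical) supply-chain requirement, REQ-SUP-004.
sbom = {
    "packages": [
        {"name": "openssl",    "version": "3.0.7"},
        {"name": "log4j-core", "version": "2.14.1"},   # illustrative bad entry
    ]
}
DENYLIST = {("log4j-core", "2.14.1")}

violations = [(p["name"], p["version"]) for p in sbom["packages"]
              if (p["name"], p["version"]) in DENYLIST]
if violations:
    raise SystemExit(f"REQ-SUP-004 not met: denylisted components {violations}")
```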
For custom hardware stacks and embedded targets, the logic is the same even if the artifacts differ. Treat firmware images, bootloaders, FPGA bitstreams, device drivers, and real-time operating system (RTOS) packages as first-class software artifacts: build them reproducibly, sign them, generate SBOMs (including third-party libraries), and retain build provenance so origin can be attested. At runtime, a secure-boot chain verifies signatures before execution. Verification includes emulation and hardware-in-the-loop (HIL) testing, timing and boundary analysis, power and thermal characterization, and update/rollback drills to prove recoverability. Device telemetry and attestation reports extend the same evidence into the field, so requirements are verified on operational devices, not only in CI.
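For the verify-before-execute step specifically, here is a minimal sketch using Ed25519 signatures from Python’s cryptography package. A real secure-boot chain performs this check in ROM or early firmware against a provisioned public key; this toy runs both the build side and the boot side in one script.

```python
# Verify-before-execute, sketched with the 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()      # held by the build system
boot_pubkey = signing_key.public_key()          # provisioned on the device

firmware_image = b"\x7fELF...firmware payload"  # illustrative bytes
signature = signing_key.sign(firmware_image)    # produced at build time

def secure_boot(image: bytes, sig: bytes) -> None:
    try:
        boot_pubkey.verify(sig, image)          # raises if the image was altered
    except InvalidSignature:
        raise SystemExit("boot halted: firmware signature invalid")
    print("signature valid: handing off to firmware")

secure_boot(firmware_image, signature)
```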
The objective is not a heavier process but a legible one: one requirement, one aspect, one form of evidence. No orphan checks and no orphan requirements. Whether the deliverable is a container image or a bitstream, the pipeline stops being a mystery and starts reading like a contract the team can test, audit, and defend.