AI for MBSE: Dynamic Systems Models with Agent-driven Design Validation

For decades, Model-Based Systems Engineering (MBSE) has promised something powerful: engineering systems where requirements, architecture, and design remain tightly connected throughout the entire product lifecycle. In practice, that vision has been difficult to achieve. AI for Model-Based Systems Engineering will change that. In this article, we’ll explore the current state of MBSE, its challenges and limitations, and how companies can use AI agents not just to update system models, but to help engineers make design decisions in real time.
MBSE in the real world: what challenges might AI solve?
MBSE’s core promise has been difficult to realize. Even in organizations that adopt MBSE, validation still happens in bursts—during design reviews, verification milestones, or late-stage testing. Engineers make decisions, models evolve, and only periodically does the organization step back to ask a critical question: does the design still satisfy the system’s requirements and constraints?
This gap exists because maintaining continuous validation across complex systems has always required more human attention than engineering teams can realistically provide. Artificial intelligence is beginning to change that. AI can function as a network of specialized agents that continuously evaluate designs as they evolve—checking requirements alignment, identifying constraint violations, and flagging risks as soon as they emerge. In other words, AI turns design review from a meeting into a continuous validation loop.
Maturing from DBSE to MBSE to AI-Powered MBSE
Most engineering organizations today operate across three overlapping stages of maturity: document-based engineering, model-based engineering, and emerging agent-driven engineering systems. Each stage represents a different way of managing the relationships between requirements, designs, and validation artifacts as systems grow more complex. Understanding these stages helps clarify how AI is reshaping the role of Model-Based Systems Engineering.
Stage 1: Document-Based Engineering (DBSE)
- Focuses on documentation of requirements, design artifacts, and validation artifacts
- Artifacts are not inherently connected to one another
Limitations:
- Understanding how a requirement relates to a particular design decision or verification result often depends on human interpretation.
- Engineers must mentally connect requirements documents, design artifacts, and validation reports to determine whether the system still satisfies its intended constraints.
Bottom line: As systems grow more complex, maintaining traceability across these relationships becomes increasingly difficult. In this environment, the system model effectively lives in people's heads.
Stage 2: Model-Based Engineering
- Instead of relying on documents, engineering teams create formal system models—often using languages such as SysML—that represent architecture, interfaces, behaviors, and relationships between system elements.
- These models provide a structured way to reason about complex systems earlier in the development process.
In theory, MBSE enables strong traceability across engineering artifacts. Requirements can be linked to architecture elements, simulations can be associated with system behaviors, and models provide a shared representation of how the system is intended to function.
Limitations:
- Maintaining these relationships requires ongoing manual effort.
- As designs evolve, engineers must continually update the connections between requirements, system elements, CAD artifacts, simulations, and documentation.
- When these updates lag behind design changes, the system model can gradually drift from the current state of the design.
Bottom line: MBSE makes system relationships explicit, but maintaining those relationships remains a human responsibility.
Stage 3: AI-Enabled Model-Based Engineering
- AI systems can observe engineering artifacts as they evolve and automatically infer relationships between them.
- When a design changes, AI agents can analyze the change and evaluate its implications across the broader engineering system.
- When new verification artifacts (such as simulation results or test data) are produced, agents can automatically associate those artifacts with the requirements and system elements they validate.
Bottom line: The result is a shift from manually maintained traceability to continuously maintained system relationships. This is the foundation for agent-driven design validation loops, where validation becomes an integral part of everyday design iteration rather than a step that occurs after decisions have already been made.
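To make this concrete, here is a minimal sketch of how an agent might automatically associate a new verification artifact with the requirements it covers. Everything here is illustrative: the IDs, the data shapes, and especially the matching rule (shared system elements) stand in for the richer inference a real agent would perform over artifact content.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    text: str
    elements: set  # system elements this requirement constrains

@dataclass
class VerificationArtifact:
    artifact_id: str
    elements: set  # system elements this artifact exercises

def infer_links(artifact, requirements):
    """Associate a new verification artifact with every requirement
    whose constrained system elements it exercises."""
    return [r.req_id for r in requirements if r.elements & artifact.elements]

reqs = [
    Requirement("REQ-001", "Battery pack shall stay below 45 C", {"battery_pack"}),
    Requirement("REQ-002", "Chassis shall survive 10 g shock", {"chassis"}),
]
sim = VerificationArtifact("SIM-2024-17", {"battery_pack"})
print(infer_links(sim, reqs))  # ['REQ-001']
```

The design choice worth noting is that the link is computed from the artifacts themselves rather than entered by hand, which is what makes the traceability "continuously maintained" rather than manually maintained.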
The next frontier: systems models that shape engineering decisions in real time
The progression from document-based systems engineering, to model-based systems engineering, and now to AI-enabled MBSE represents a major shift in how engineering knowledge is structured and maintained. For the first time, it becomes possible to keep the system model continuously aligned with the evolving design. But maintaining a synchronized system model is only part of the opportunity AI creates. The next frontier is allowing the system model itself—requirements, validation data, and accumulated engineering knowledge—to actively shape engineering decisions as they are made. Instead of serving only as a record of the engineering process, the system model can become an active participant in it, surfacing constraints, applying lessons learned, and helping engineers evaluate design decisions in real time.
Achieving this vision requires rethinking how AI augments the mechanisms engineers already rely on to connect requirements, designs, and validation evidence—particularly across requirements management, design reviews, simulation, and engineering standards. In the next section, we’ll explore how each of these engineering systems might evolve with AI.
Design reviews: from periodic checkpoints to continuous validation
Design reviews are one of the primary mechanisms engineering teams use to validate complex systems. In a formal review, a design owner presents engineering artifacts that describe the current state of the system—requirements tables, CAD models, analysis results, and test data. Subject matter experts evaluate the design by asking questions, probing assumptions, and examining the evidence behind key design decisions. Much of the discussion centers on design rationale: why a particular approach was chosen and what data supports it.
In many organizations, however, these reviews occur only at discrete milestones in the development process. Formal review gates may be separated by weeks or months of engineering work. Between those checkpoints, engineers continue making design decisions as the system evolves. By the time a design is formally reviewed, many of those decisions are already embedded in the system. When issues are discovered during the review, correcting them often requires revisiting earlier design choices and performing significant rework.
AI agents create the opportunity to fundamentally change this dynamic. Instead of relying solely on scheduled review meetings, AI systems can analyze evolving engineering artifacts—such as CAD models, drawings, and analysis results—and evaluate them against requirements, design standards, and historical program knowledge. Rather than waiting for the next milestone review to surface risks, engineers receive signals earlier, while designs are still evolving and decisions are still flexible. Formal design reviews still play an important role. But instead of serving as the primary mechanism for discovering problems, they become the moment where engineering teams confirm that the system is behaving as expected.
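The shape of that continuous loop can be sketched in a few lines: run every registered check against each changed artifact and surface findings immediately, instead of batching them for a milestone review. The artifact fields and the wall-thickness check below are hypothetical stand-ins for whatever checks an organization actually encodes.

```python
def continuous_review(changed_artifacts, checks):
    """Run every check against each changed artifact and surface
    findings immediately, rather than waiting for a milestone review."""
    findings = []
    for artifact in changed_artifacts:
        for check in checks:
            issue = check(artifact)
            if issue:
                findings.append((artifact["name"], issue))
    return findings

# Hypothetical check: flag walls thinner than a 2.0 mm minimum.
def min_wall_thickness(artifact):
    if artifact.get("wall_mm", 99) < 2.0:
        return f"wall {artifact['wall_mm']} mm below 2.0 mm minimum"
    return None

parts = [{"name": "bracket", "wall_mm": 1.2}, {"name": "housing", "wall_mm": 3.0}]
print(continuous_review(parts, [min_wall_thickness]))
```

Because the loop runs on every change, the bracket's thin wall is flagged while the decision is still cheap to reverse, rather than weeks later at a formal gate.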
Design standards: from institutional knowledge to real-time guidance
Engineering organizations accumulate a large body of knowledge over time:
- Lessons learned (captured in spreadsheets at the end of programs)
- Formal design standards (created once and, hopefully, updated over time)
- Third-party design guidelines from standards bodies like ISO or ASME
And while most engineering leaders rate these documents as critical, applying standards consistently—even in model-based environments—depends on individual engineers knowing which rules apply and checking them during the design process. This becomes increasingly unreliable as systems grow more complex. The data backs this up: in a survey of 250 engineering leaders, respondents told us that design standards and guidelines are documented, up to date, and consistently referenced in reviews only 55% of the time.
AI agents create the opportunity to embed institutional knowledge directly into the engineering process. Instead of requiring engineers to manually reference standards, AI systems can evaluate evolving design artifacts—such as CAD models and drawings—against known design rules and historical program constraints. Engineers receive immediate feedback when a design deviates from established guidelines or introduces risks that previous programs have already encountered. Plus, as design standards evolve from static reference materials into active engineering guidance, organizations have a newfound reason to keep them up to date.
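One way to picture "embedding institutional knowledge" is to treat each standard or lesson learned as a machine-checkable rule paired with its source reference. The rule IDs, thresholds, and messages below are invented for illustration; the point is the structure, not the values.

```python
# Each rule pairs a (hypothetical) standard reference with a
# machine-checkable predicate and an explanation for the engineer.
RULES = [
    ("STD-014 §3.2", lambda p: p["hole_dia_mm"] >= 1.0,
     "hole diameter below 1.0 mm manufacturing minimum"),
    ("lesson learned LL-042", lambda p: p["edge_radius_mm"] >= 0.5,
     "sharp edge; a previous program saw cracking below 0.5 mm radius"),
]

def check_against_standards(part):
    """Return every (reference, message) pair the part violates."""
    return [(ref, msg) for ref, ok, msg in RULES if not ok(part)]

part = {"hole_dia_mm": 0.8, "edge_radius_mm": 1.0}
for ref, msg in check_against_standards(part):
    print(f"{ref}: {msg}")
```

Keeping the source reference attached to each finding matters: the engineer sees not just that a rule fired, but which standard or prior program the guidance came from.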
Requirements and system models: from static traceability to continuous alignment
In most MBSE environments, system intent is captured across both requirements management tools and system modeling tools. Platforms like DOORS or Jama manage requirements and verification plans, while system modeling tools such as Cameo represent architecture, interfaces, behaviors, and relationships between system elements. In theory, these tools allow engineering teams to trace requirements through the system architecture and into downstream design and verification activities. In practice, however, maintaining these relationships requires ongoing manual effort. Traceability becomes something engineers reconstruct during reviews rather than something the system maintains automatically.
AI agents create the opportunity to change this dynamic. As engineers review designs—whether during formal reviews or everyday design work—they can use AI agents to interrogate requirements and system models directly. Based on the design question an engineer is trying to answer, those agents can surface information about how a decision affects system constraints, interfaces, or verification requirements. In this way, requirements and system models begin to shape engineering decisions in real time, rather than simply acting as documentation after the fact.
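The kind of question an agent answers here is essentially an impact query over the traceability graph: given a change to one system element, which requirements and verification activities does it touch? A toy version, with invented requirement and test IDs, looks like this:

```python
# Traceability links (hypothetical IDs): requirement -> constrained
# system elements, and verification activity -> covered requirements.
REQ_TO_ELEMENTS = {
    "REQ-010": {"radiator", "pump"},
    "REQ-011": {"pump"},
    "REQ-012": {"antenna"},
}
VERIF_TO_REQS = {
    "TEST-A": {"REQ-010"},
    "TEST-B": {"REQ-011", "REQ-012"},
}

def impact_of_change(element):
    """Return the requirements and verification activities touched
    by a change to one system element."""
    reqs = {r for r, els in REQ_TO_ELEMENTS.items() if element in els}
    tests = {t for t, rs in VERIF_TO_REQS.items() if rs & reqs}
    return reqs, tests

reqs, tests = impact_of_change("pump")
print(sorted(reqs), sorted(tests))  # ['REQ-010', 'REQ-011'] ['TEST-A', 'TEST-B']
```

In a real environment the links would live in tools like DOORS, Jama, or Cameo rather than Python dictionaries, but the query an agent runs on an engineer's behalf has this basic shape.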
Simulation: from delayed verification to real-time design signals
Simulation is one of the most powerful tools engineering teams use to validate whether a design satisfies system requirements. In most development environments, however, simulation becomes a bottleneck to design iteration. Preparing models, running analyses, and interpreting results can take days or weeks. As a result, engineers often continue iterating on designs while waiting for simulation results to confirm whether earlier decisions were correct. When simulations eventually surface issues, teams may need to revisit design choices made several iterations earlier. Alternatively, teams delay decisions while waiting for validation data, extending design cycles for weeks at a time.
This presents another opportunity for AI agents to accelerate design iteration. As engineers explore design options, they can use agents to initiate simulations that validate—or invalidate—their design choices as they work. This unlocks even more value from existing investments in simulation tools such as ANSYS and SimScale. Instead of functioning primarily as delayed verification, simulation becomes a continuous source of engineering feedback that helps guide design decisions as they are made.
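Conceptually, the agent's job is to turn a simulation run into an immediate pass/fail signal against the governing requirement. The sketch below uses a toy closed-form estimate in place of a real solver call (a production system would submit the job to a tool like ANSYS or SimScale through that tool's own interface); the stress model and limit are invented for illustration.

```python
def run_simulation(design):
    """Stand-in for a real solver call; here a toy estimate in which
    peak stress scales inversely with wall thickness (hypothetical)."""
    return {"peak_stress_mpa": 120.0 / design["wall_mm"]}

def validate_design(design, stress_limit_mpa):
    """Run the analysis and return an immediate pass/fail signal tied
    to the requirement limit, instead of a delayed report."""
    result = run_simulation(design)
    ok = result["peak_stress_mpa"] <= stress_limit_mpa
    return ok, result["peak_stress_mpa"]

ok, stress = validate_design({"wall_mm": 2.0}, stress_limit_mpa=80.0)
print(ok, stress)  # True 60.0
```

The value is in the interface, not the toy math: when each design option comes back with a requirement-linked verdict while the engineer is still iterating, simulation starts acting as a design signal rather than delayed verification.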
AI-powered MBSE: from static models to active engineering systems
The original promise of MBSE was never just better documentation. It was engineering systems where requirements, architecture, design, and validation remain connected throughout the product lifecycle. AI-powered MBSE brings that vision closer to reality. But it also points to something more ambitious: a shift from static system models that describe engineering decisions after the fact to dynamic system models that help shape those decisions in real time.
That is the real opportunity. Not simply to automate traceability, but to create engineering systems that continuously surface constraints, apply lessons learned, and help teams validate decisions as designs evolve. For engineering leaders, realizing that vision requires more than deploying a single AI tool. It requires a clear strategy for how AI will work across the broader engineering system.
If you're exploring how AI can help your organization realize the full promise of MBSE, learn how CoLab works with global engineering teams to define enterprise-wide engineering AI strategies.