
How Simulation Teams Actually Use AI

March 23, 2026

We surveyed our modelers. The results probably look a lot like yours.

The AI conversation in simulation and optimization is loud. Every week brings another prediction about automating decision-making, replacing analysts, or rendering modeling obsolete. We wanted to know what’s actually happening on the ground. So we surveyed 25 of SimWell’s simulation and optimization practitioners (modelers, consultants, project leads) about where AI shows up in their work, where it falls short, and where they think it’s heading.

Adoption is universal. The impact is mostly invisible.

96% of respondents use AI in their daily work. Adoption isn’t the question anymore. But when we asked whether AI has changed what clients actually receive, 72% said no. The deliverable looks the same. They just get there faster and with more insight.

24% said AI is starting to change deliverables: richer scenario summaries, clearer output communication, better-structured reports. One modeler described feeding raw simulation outputs into an LLM and getting back a structured narrative of what the data showed, which went directly into a client dashboard.
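That workflow can be sketched in a few lines. The version below only builds the prompt; the metric names, numbers, and `build_summary_prompt` helper are illustrative, not SimWell's actual tooling, and the LLM call itself is left to whichever provider an engagement's data policy allows.

```python
# Illustrative sketch: condense raw simulation replications into a compact
# prompt an LLM can turn into a client-facing narrative. All names and
# values here are invented for the example.
import json
from statistics import mean, stdev

def build_summary_prompt(runs):
    """Summarize per-metric replication results into an LLM prompt."""
    stats = {
        metric: {"mean": round(mean(vals), 2), "stdev": round(stdev(vals), 2)}
        for metric, vals in runs.items()
    }
    return (
        "You are summarizing discrete-event simulation results for an "
        "operations audience. Describe what the numbers show, flag the "
        "likely bottleneck, and avoid jargon.\n\n"
        f"Replication statistics (5 runs per metric):\n{json.dumps(stats, indent=2)}"
    )

runs = {
    "avg_wait_min": [12.4, 14.1, 13.0, 12.8, 13.7],
    "utilization_pct": [91.0, 93.5, 92.2, 90.8, 92.9],
    "throughput_per_hr": [41.2, 40.7, 42.0, 41.5, 40.9],
}
prompt = build_summary_prompt(runs)
# `prompt` would then go to the LLM of choice; the structured response
# is what ends up in the dashboard, after the modeler reviews it.
```

The design point is that the model's raw outputs are reduced to aggregates before anything leaves the simulation environment, which also keeps the prompt small.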

That gap is probably the most important finding in the survey. Teams have adopted AI broadly, but almost entirely as a backstage productivity tool. The frontier, where AI changes what the client sees and can do with the model, is visible but early.

The biggest use isn’t what the industry talks about.

The conversation about AI in simulation centers on code generation. AI writes the model, AI builds the simulation faster. Our data says something different.

[Chart: reported AI uses, select all that apply (n = 25). Also reported: understanding existing model logic (28%), scenario generation (12%), building AI layers for non-modelers (4%).]

The team uses AI more for words than for code. They use it to translate technical work into language that stakeholders can act on: sprint reviews, status reports, project documentation, result summaries. The bottleneck AI relieves most isn’t “build the model faster.” It’s “explain what the model found.”

If that pattern holds across the industry, it reframes where AI creates the most value in simulation work. The constraint was never computing power or development speed. It was the gap between what the model knows and what the organization can access.

When AI touches model logic, nobody trusts it blindly.

60% of respondents use AI to help generate or modify model logic. Among them, the validation is rigorous: 56% run the model against known baselines. 48% read through generated code line by line. 28% test edge cases and boundary conditions. No one described letting AI-generated logic go unreviewed.
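The "known baseline" check is worth making concrete. Here is a minimal sketch, using the M/M/1 queue because it has a closed-form answer to validate against; `simulate_mm1_wait` stands in for whatever logic the AI helped draft, and is not code from the survey.

```python
# Validation pattern: run (possibly AI-drafted) model logic against a
# baseline with a known answer, then probe an edge case.
import random

def simulate_mm1_wait(lam, mu, n=50_000, seed=7):
    """Mean wait in queue for an M/M/1 system via Lindley's recurrence."""
    rng = random.Random(seed)
    wait = 0.0
    total = 0.0
    for _ in range(n):
        total += wait
        service = rng.expovariate(mu)
        interarrival = rng.expovariate(lam)
        # Next customer's wait: previous wait plus service, minus the gap.
        wait = max(0.0, wait + service - interarrival)
    return total / n

observed = simulate_mm1_wait(lam=0.5, mu=1.0)
expected = 0.5 / (1.0 * (1.0 - 0.5))  # closed-form M/M/1 mean wait in queue
assert abs(observed - expected) / expected < 0.25, (observed, expected)

# Edge case: near-zero load should produce near-zero waiting.
assert simulate_mm1_wait(lam=0.01, mu=1.0) < 0.1
```

Real models rarely have closed-form baselines, so in practice the comparison is against a previously validated model version or historical operational data; the structure of the check is the same.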

“AI can reason about whether code is syntactically valid, follows patterns, avoids common bugs. What it lacks is the why layer, the accumulated decisions, constraints, and trade-offs that live in our team’s heads.”

That distinction should matter to anyone commissioning simulation work. AI drafts. The modeler decides whether it’s right. The engineering judgment (why a model represents an operation the way it does, which constraints bind, which interactions matter) stays human. AI is a drafting tool, not a decision-maker.

Models are hard to share. AI is starting to change that.

28% of the team reported using AI to understand models they didn’t build, stepping into an unfamiliar model and using a conversational interface to ask how the logic works before making changes.

In any consulting or operations environment where modelers rotate across projects and inherit each other’s work, that’s a meaningful shift. The knowledge locked inside a model becomes accessible to someone who didn’t write it, without pulling the original developer off their current project.

At SimWell, we’re building on that. We’re developing conversational layers on top of finished models so planners and leaders can ask questions, adjust assumptions, and generate scenario comparisons without a specialist in the loop. The first implementations are already functional, and the approach applies across sectors and model types.

Data handling requires clear policies, not just good intentions.

40% of respondents said client data policies or AI restrictions come up on certain engagements. The workarounds are practical: generating synthetic data rather than exposing client data to cloud-based AI tools, or reverting to manual processes when clients restrict AI-powered meeting recording.
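The synthetic-data workaround follows a simple pattern: fit distributions to the real data locally, then generate records that preserve the aggregate behavior while sharing no actual client rows. A hedged stdlib sketch, with an invented column and a deliberately simple exponential fit:

```python
# Synthetic-data workaround: fit locally, sample freely. Only the fitted
# parameters (here, one rate) ever leave the secure environment.
import random
from statistics import mean

def fit_and_sample(real_durations, n, seed=0):
    """Sample synthetic service times from an exponential fit to real data."""
    rng = random.Random(seed)
    rate = 1.0 / mean(real_durations)  # method-of-moments exponential fit
    return [rng.expovariate(rate) for _ in range(n)]

real = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.0]  # stand-in for client data
synthetic = fit_and_sample(real, n=1000)
# `synthetic` tracks the real mean but contains no actual client records,
# so it can safely feed cloud-based AI tools.
```

A production version would fit richer distributions and preserve correlations, but the privacy logic is identical: the model sees the shape of the data, not the data.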

These aren’t barriers to adoption. They’re the normal friction of using new tools in environments where confidentiality matters. But they point to something worth considering: if your simulation or analytics teams are using AI (and at this point, they almost certainly are), clear organizational guidelines about what data can touch which tools make everyone’s job easier. The teams making these calls project by project would benefit from policy that applies organization-wide.

Where the early experiments point.

We asked the team what they want to try but haven’t deployed yet. Three themes stood out.

  1. Data pipeline automation. Multiple respondents described wanting AI to handle the mechanical work of cleaning, formatting, and structuring raw input data, freeing modelers to spend more time on the decisions the model is actually built to support.
  2. Output interpretation. Modelers want to combine simulation results with natural language generation so that findings come back as insights, not data tables. Dashboards that explain themselves. Tradeoff analyses that read like briefings.
  3. Model comprehension at scale. One respondent described wanting AI that could analyze an unfamiliar model’s architecture and generate documentation from summary-level down to detailed code explanations, collapsing what currently takes days of onboarding into hours.
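The first theme is the most mechanical, and the kind of code respondents want AI to draft and maintain. A small stdlib illustration of that cleanup step, with invented field names and formats:

```python
# Mechanical input cleanup: unify casing, date formats, and duration units
# before rows reach the model. The schema here is hypothetical.
import csv
import io
from datetime import datetime

RAW = """station,start,duration
Pack-1, 2026-03-02 08:15 ,12 min
Pack-2,2026/03/02 08:40,0.25 h
pack-1,2026-03-02 09:05,9min
"""

def clean_rows(text):
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        start = row["start"].strip().replace("/", "-")
        dur = row["duration"].strip().lower().replace(" ", "")
        # Normalize everything to minutes, whether given as "12 min" or "0.25 h".
        minutes = float(dur[:-1]) * 60 if dur.endswith("h") else float(dur[:-3])
        rows.append({
            "station": row["station"].strip().title(),  # unify casing
            "start": datetime.strptime(start, "%Y-%m-%d %H:%M"),
            "duration_min": minutes,                    # single unit
        })
    return rows

cleaned = clean_rows(RAW)
```

Today a modeler writes and maintains each of these normalizations by hand; the aspiration is for AI to infer and apply them from the raw data itself.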

These are aspirations, not capabilities. But they cluster around the same idea: AI’s highest-value role in simulation isn’t replacing the modeler. It’s closing the gap between what the model produces and what the organization can act on.

One question worth asking.

If you lead an operations team that uses simulation, optimization, or any form of analytical modeling, ask this: do you know how your team is using AI today, and do they know what you expect?

The tools change monthly. The answer to that question changes with them. We plan to keep asking it.