Lucas Stretton is a Partner and General Manager of SimWell Australia. Recently, he's been working on a problem most organizations don't name clearly: the gap between a working model and a decision that actually moves. We asked him about two capabilities SimWell has been building, and why they matter more than the models themselves.
You've written about putting a conversational layer on top of simulation models. What problem are you actually solving?
When someone asks why the model made a particular call, answering that question usually means pulling in the original developer, tracing event sequences, or digging through custom code. Validation slows down. Onboarding takes longer. Stakeholders either accept the output without understanding the assumptions, or they reject it for the same reason. In both cases, nobody actually knows what the model is doing or why.
Walk me through a real example.
I stepped into an active client project recently where the model was already built and functional. The client had requested some changes, but the original developer was occupied on another engagement. Before I could touch anything, I had to understand what was already there: the rules, the logic, how the operation was represented in the model.
I used the conversational layer to do that. I asked it: "Provide a simple short-form description of how order batching is managed for presses."
What came back:
Each entity represents a pile. Piles are grouped into an order batch using two variables, with the number of piles per order drawn from a lookup table. When the first pile of an order is assigned to a press, all remaining piles in that batch are forced to the same press. The press does not wait for the full batch to arrive before starting. Multiple piles from different orders can queue on the press input conveyor, but the press processes one pile at a time. In short: orders are batched at the pile grouping stage, but presses operate as single-pile processors with a shared input queue rather than processing entire order batches together.
That kind of answer would normally require the original developer in the room. The AI reads the existing logic and explains it in plain language. The decision logic stays unchanged. The ability to interrogate it opens up.
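The batching behavior the model described can be sketched in a few lines. This is an illustrative reconstruction, not SimWell's actual model code: the names (`BATCH_SIZE_TABLE`, `PRESSES`, `assign_pile`) and the shortest-queue press selection are assumptions made for the sketch.

```python
import random

random.seed(42)

# Hypothetical lookup table: number of piles per order batch
BATCH_SIZE_TABLE = [2, 3, 3, 4]

# Each press has a shared input queue; presses process one pile at a time
PRESSES = {"press_1": [], "press_2": []}

order_press = {}  # order id -> press chosen by the order's first pile

def assign_pile(order_id, pile_id):
    """Route a pile to a press; all piles of an order follow the first."""
    if order_id not in order_press:
        # First pile of the batch picks a press (shortest queue here,
        # as an illustrative rule); remaining piles are forced to it.
        order_press[order_id] = min(PRESSES, key=lambda p: len(PRESSES[p]))
    PRESSES[order_press[order_id]].append((order_id, pile_id))

# Group piles into order batches and route them to presses
pile_counter = 0
for order_id in range(4):
    for _ in range(random.choice(BATCH_SIZE_TABLE)):
        assign_pile(order_id, pile_counter)
        pile_counter += 1

# Piles from different orders interleave in the queues, but every
# pile of a given order sits on the same press
for press, queue in PRESSES.items():
    print(press, queue)
```

The key property the model's answer surfaced is visible in the output: batching constrains *where* piles go, not *how* presses process them.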
What does that change for the people using the model?
Trust, mostly. And speed. Models that can't be questioned don't earn trust. They create dependency. When you can ask the model to explain itself and get a structured answer, stakeholders engage differently. They probe the assumptions. They push back. That's exactly what you want from a decision tool.
You've described a second capability: making models operable without a specialist. What does that mean in practice?
Explainability is about understanding what's in the model. Accessibility is about operating it. If only one person can run scenarios, you don't have a decision tool. You have a bottleneck with good math behind it. Organizations end up testing one scenario when they could test ten. Experimentation stalls before it starts.
What does the accessibility layer actually look like?
Three things, and these are real working prompts from our deployments.
- Natural language scenario changes. Instead of digging through parameters, a planner can say "create a scenario that increases demand by 10%" or "reduce labor capacity in DC2 by 15%" and the model updates the inputs.
- Conversational output analysis. Rather than scanning output tables, someone can ask "which products went into backorder" or "what drove the drop in service level" and get a structured answer back.
- Self-service analytics. Charts and comparisons generated on demand. "Show me throughput by week." "Compare service levels across scenarios."
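The first capability, translating a natural-language request into a parameter change, can be sketched roughly as follows. This is a deliberately simplified illustration: the parameter names (`demand_multiplier`, `labor_capacity`) are hypothetical, and a production deployment would use an LLM with structured tool-calling rather than regex matching.

```python
import re

# Hypothetical scenario parameters for a supply-chain model
scenario = {"demand_multiplier": 1.0, "labor_capacity": {"DC1": 100, "DC2": 80}}

def apply_request(text, params):
    """Map a natural-language scenario request onto model inputs."""
    m = re.search(r"increases? demand by (\d+)%", text)
    if m:
        params["demand_multiplier"] *= 1 + int(m.group(1)) / 100
        return params
    m = re.search(r"reduce labor capacity in (\w+) by (\d+)%", text)
    if m:
        site, pct = m.group(1), int(m.group(2))
        params["labor_capacity"][site] = round(
            params["labor_capacity"][site] * (1 - pct / 100)
        )
        return params
    raise ValueError(f"unrecognized request: {text!r}")

# The two example prompts from the list above
apply_request("create a scenario that increases demand by 10%", scenario)
apply_request("reduce labor capacity in DC2 by 15%", scenario)
print(scenario)
```

The point of the pattern is the separation of concerns: the planner expresses intent in their own vocabulary, and the layer owns the mapping onto model inputs, so the decision logic itself never changes.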
Where does this actually matter?
The pattern shows up wherever a model is smarter than the conversation around it. Leadership needs to sign off on a decision but can't interrogate the logic behind it. A planner needs to test a scenario but has to wait for the analyst who built the model. A new team member needs to understand what's already there before they can touch anything. The black box isn't a technical problem. It's an adoption problem. These capabilities exist to close that gap.
If your team has models they can't interrogate, or models only one person can run, reach out and we'll show you what it looks like in practice.
