
Where the Decision Intelligence Magic Quadrant Stops

May 1, 2026

A VP of Operations at a mid-sized manufacturer signed off on a Decision Intelligence Platform last year. Seven figures, a year of implementation, a platform that scores transactions against business rules and runs continuously in the background. A year in, the team uses it daily, and it does what it was bought to do.

Now she has to decide whether to expand the Dallas distribution center or build a new one in Memphis. Capital decision, multi-year payback, downstream effects on three plants and two carriers. She opens the platform and finds nothing she can use.

Dallas-Memphis isn’t the only question on her desk. Next quarter, the team needs to test whether a new product line will break throughput at the main plant. The quarter after, they’re rethinking the shift pattern across two sites. Every month, planners ask whether the current DC network can maintain service levels as demand shifts east. Her team treats these as normal planning questions, and the platform doesn’t touch any of them.

The platform handles decisions where the rules are known and the inputs are clean. Her questions sit somewhere else. Expanding Dallas changes lead times out of two plants, which changes the carrier mix, which in turn affects how Memphis performs during peak. The answer depends on how the pieces interact. Questions like these need a different kind of math underneath, and software built to model complex systems and run scenarios rather than to score transactions.

Gartner drew a map

Gartner published the first Magic Quadrant for Decision Intelligence Platforms in January. Seventeen vendors made the list, and the leaders are strong companies with real products: FICO, IBM, SAS, Aera, ACTICO, and Quantexa. Their platforms solve real problems for supply chain and manufacturing operators: replenishment at every store, order routing across distribution networks, dynamic pricing on thousands of SKUs, and inventory balancing across facilities. These decisions share a shape: they happen constantly, run on clean data, follow rules that can be encoded, and reward automation.

Two shapes of decision

Operators face two shapes of decision, and the difference matters more than most frameworks admit.

Transactional decisions run one item at a time: replenish this SKU, route this order, price this pallet, accept this carrier bid. Each call is low-stakes on its own, and the aggregate gets huge because the volume is huge. These decisions need to happen fast, consistently, and at scale. Software wins here.
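To make "rules that can be encoded" concrete, here is a toy order-up-to replenishment rule in Python. This is an illustration only: the rule, parameter names, and numbers are invented for this sketch, not drawn from any vendor's platform.

```python
def replenish_qty(on_hand, on_order, forecast_daily, lead_time_days, safety_days=3):
    """Order up to projected demand over the lead time plus a safety buffer.

    A deliberately simple rule: the point is that it runs the same way on
    every SKU, every day, with no human in the loop.
    """
    target = forecast_daily * (lead_time_days + safety_days)  # order-up-to level
    gap = target - (on_hand + on_order)                       # shortfall vs target
    return max(0, round(gap))                                 # never order negative

# One SKU, one call, thousands of times a day:
replenish_qty(on_hand=40, on_order=0, forecast_daily=12, lead_time_days=5)   # → 56
replenish_qty(on_hand=200, on_order=0, forecast_daily=12, lead_time_days=5)  # → 0
```

Each call is low-stakes, repetitive, and fully determined by its inputs, which is exactly why this shape of decision rewards automation.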

System-level decisions are about how the system behaves. Does the new DC improve service in the southeast, or shift congestion to Memphis? Can the plant absorb the new product line without breaking throughput? If we change the shift pattern at Site A, what happens to service at Site B? How much capacity do we actually have under the new schedule, and where does it bind first?

Every one of these questions turns on how changes in one place cascade through the rest. Rules-and-scoring platforms model single transactions by design. The scope is what makes them powerful at what they do, and it’s also what puts these questions outside their reach.

These questions show up throughout a planning organization’s year, in every planning cycle, and around every major capital decision. Once a model of the operation exists, the team uses it to address variations and sensitivities arising from the original question and extends it as new questions come up. Modeling scales from a single project into a planning capability that the team relies on.

The software underneath

Both shapes of decision run on software, but on different kinds.

The Magic Quadrant vendors sell decision-execution platforms. Buy one, configure it, turn it on, let it run. It routes orders, flags exceptions, and rebalances inventory twenty-four hours a day.

Simulation runs on different software. When an engineer tackles a system-wide question, they build a discrete-event or agent-based model of the operation, run scenarios, and produce a recommendation.
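As a sketch of what "build a model and run scenarios" means in practice, here is a toy discrete-event model of dock doors at a distribution center, written in plain Python with a priority queue. Real work would use a purpose-built simulation tool; every parameter here (arrival rate, unload time, door counts) is invented for the example.

```python
import heapq
import random

def avg_wait(dock_doors, mean_arrival=0.5, mean_unload=1.5, horizon=24.0, seed=7):
    """Average truck wait (hours) at a DC over one simulated day.

    Trucks arrive at random intervals; each takes the earliest-available
    dock door and waits if all doors are busy.
    """
    rng = random.Random(seed)
    free_at = [0.0] * dock_doors            # time each dock door next becomes free
    heapq.heapify(free_at)
    t, waits = 0.0, []
    while True:
        t += rng.expovariate(1 / mean_arrival)      # next truck arrives
        if t >= horizon:
            break
        door_free = heapq.heappop(free_at)          # earliest-available door
        start = max(t, door_free)                   # wait if every door is busy
        waits.append(start - t)
        heapq.heappush(free_at, start + rng.expovariate(1 / mean_unload))
    return sum(waits) / len(waits) if waits else 0.0

# Scenario comparison: today's Dallas DC vs. the proposed expansion.
baseline = avg_wait(dock_doors=3)
expanded = avg_wait(dock_doors=5)
```

The point of the sketch is the workflow, not the numbers: model the operation's behavior, vary one decision (here, door count), and compare how the system responds under each scenario.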

Gartner’s Magic Quadrant focuses on running and monitoring live decisions at scale. It’s a useful map of the transactional side, and it leaves the system-level side off the page.

For an operations leader, the model matters more than the software category. Whichever software is underneath, the answer depends on whether the model reflects how the operation actually behaves.

The same split exists in other professions

Accounting software is a real category. QuickBooks, Xero, NetSuite, and the rest do real work, and a small business runs its books on them. The category exists because much of accounting is repetitive, rules-driven, and well-suited to automation.

CPAs are also real. A CPA closes the books for a complex company, handles an audit response, structures a transaction, and negotiates with the IRS. Nobody confuses the CPA with the software, and nobody says the CPA isn’t “in accounting” because the CPA doesn’t sell a SaaS platform. Accounting work can take two forms, and the buyer chooses based on the shape of the problem.

Architecture follows the same pattern. Autodesk and Bentley run billion-dollar businesses selling CAD platforms. Architects run their own businesses designing buildings. Both produce architecture. The CAD platform is a tool the architect uses, sized to the job.

Decision Intelligence is following the same pattern. Gartner defined a software category around a single decision shape. The decision-intelligence work operators need for questions about how the operation itself behaves lives elsewhere and runs on different software.

Cheat sheet

A reference for the next time you’re trying to figure out which kind of help your decision actually needs.

 

|  | Transactional decisions | System-level decisions |
| --- | --- | --- |
| Example | Replenish this SKU, route this order, price this pallet | Expand the DC, redesign the shift pattern, absorb a new product line |
| What's being decided | One item at a time | How the whole system behaves |
| Inputs | Clean, structured, consistent | Messy, incomplete, interacting |
| Cadence | Continuous, runs in the background | Periodic, tied to planning cycles and capital decisions |
| Stakeholders | Software, with humans on exceptions | Multiple leaders, often disagreeing |
| What makes it hard | Volume and consistency | System complexity and consequence |
| What it needs | Rules, scoring, automation | Modeling, scenarios, judgment |
| Software category | Decision-execution platforms | Operation-modeling platforms |
| Right kind of help | A platform that runs continuously | A model of how your operation behaves |

 

Three questions to tell them apart

What’s being decided? One item against a set of rules points to the transactional side. How the system behaves under a change points to the system-level side.

Who makes the call today? A system that handles exceptions through human review points transactional. A group of leaders working through analysis and arguing about the answer points system-level.

Would you ever automate it? Replenishment, yes. Order routing, yes. A capital investment in a new DC, never. The decisions you’d willingly hand to software belong on the platform side. Everything else needs the other kind of help.
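If it helps to make the triage concrete, the three questions can be read as a rough two-out-of-three vote. This is a heuristic sketch for illustration, not a rule from Gartner or any framework.

```python
def decision_shape(one_item_at_a_time, humans_only_on_exceptions, would_automate):
    """Rough triage from the three questions above.

    Each argument is a yes/no answer; two or more yeses point to the
    transactional side, otherwise the decision is system-level.
    """
    votes = sum([one_item_at_a_time, humans_only_on_exceptions, would_automate])
    return "transactional" if votes >= 2 else "system-level"

decision_shape(True, True, True)     # replenish this SKU → "transactional"
decision_shape(False, False, False)  # build the Memphis DC → "system-level"
```

A decision that splits the vote is worth a closer look; in practice the third question, whether you would ever hand it to software, tends to settle it.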

The half Gartner left off the map

The Gartner Magic Quadrant maps the transactional side of the space well. The system-level side uses different software, solves a different shape of problem, and today sits outside the map.

SimWell builds models for that side of the work. We use the platform that fits each problem, and we build the models with teams that have done it before.

The first thing the work does is answer the question on the desk. Should we build the DC? Can the plant absorb the new line? What happens if we change the shift? We build the model, we run the scenarios, we produce the recommendation. The question gets answered.

The second thing the work does is build a capability the team keeps using. A model built well doesn't get retired when the first few questions get answered. The team extends it for the next decision, and the one after, until it becomes the place the planning organization goes whenever a system-level question is on the table.

What the model does to the organization

The third thing the work does is the part we didn't understand for a long time.

System-level decisions sit with a group of leaders who each see a different part of the operation, and they disagree because their views are different. The disagreement isn't a communication problem. It's that no shared picture of the operation exists for them to argue over.

Building a model forces that picture into existence. To model the plant or the network, you have to define how it actually behaves: what feeds what, where the constraints bind, which decisions in one place change outcomes in another. That definition is what the leaders were missing. Once it exists, the arguments get shorter, the decisions get faster, and the operation starts to be managed as a system instead of as a federation of departments.

Who this is for

The decisions that show up at the top of the org are system-level by nature. Multi-leader, cross-functional, hard to encode, expensive to get wrong. The operators who handle them well aren't the ones with the best dashboards. They're the ones with a shared picture of the system and a model that holds up when the next decision comes.

That's the work we do.