Supply chain leaders have long turned to optimization to make sense of complexity. From network design to replenishment, mathematical models promise clarity in the face of uncertainty. Yet while these models excel at generating optimal solutions, they rarely communicate them. Raw optimization outputs do little to reassure the planner who must execute the plan. When a plan cannot be explained, it will not be adopted.
This communication gap creates a paradox. Companies invest heavily in optimization engines, and still the resulting plans are reworked, delayed, or ignored. Planners build “shadow” spreadsheets. Executives request simplified summaries that strip nuance and, with it, confidence. In some cases, the summary arrives only after the window for action has closed.
In 2024, a national hardlines retailer confronted the problem directly. With offshore lead times stretching to 20 weeks, small forecasting errors cascaded into service risk. At one distribution center, inventories were projected to dip below zero in a certain week, setting up significant margin loss and downstream penalties. The optimization team had the solution; what it lacked was trust beyond a narrow technical circle. The organization needed more than math; it needed translation.
The answer was a two-layer design. The optimization engine remained the source of truth. An LLM-based layer became the source of meaning, converting results into managerial narratives. Analysts received diagnostic detail; planners saw executable moves; executives learned of the risk and return in each alternative. The combination not only averted the stockout; it restored organizational confidence in analytics.
Optimization is a powerful discipline, but it has structural blind spots when deployed in organizations. These barriers echo earlier SCMR narratives in procurement and inventory. Indirect procurement matured when standards and governance imposed coherence. Inventory science advanced when constructs monetized the hidden cost of delay. Planning now requires its own bridge: an interpreter that preserves rigor while making it legible to the enterprise.
When AI becomes a translator
Large language models do not replace optimizers; they reframe their outputs. The cycle begins when a planner poses a natural question: “Which DCs are at risk of stockouts next month, and what transfers would avert them?” An AI agent captures the intent and maps it to model parameters. The optimization solver then does its quiet work, balancing constraints and costs. Traditionally, the process ends here, with a spreadsheet of decision variables and their optimal values. In the new design, results flow on to the language layer, where the LLM reshapes the numbers into a story tailored to each managerial role.

In the retail case, the analyst receives detailed SKU/DC flows and an explanation of which constraints were binding and why. The planner is briefed operationally: move “XXX” units in Week “YY” and “ZZ” in Week “AA.” The executive sees the business case: service continuity preserved at the DC, with penalties and lost margin avoided.

Crucially, the LLM answers the “why.” It ties the shortfall to a supplier or demand issue and explains why particular SKUs and DCs were selected. In effect, the LLM behaves like a seasoned analyst who has run the model, studied the output, and now briefs the organization in language each role can act on. The engine remains the brain; the language model becomes the voice.
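The role-tailoring step above can be sketched in a few lines. This is a minimal illustration, not the retailer's system: the field names, DC labels, and unit counts are invented, and in practice the narrative text would come from a grounded LLM rather than string templates.

```python
# Minimal sketch of the language layer's role-based briefing step.
# All names (TransferMove fields, roles, DC labels) are illustrative.
from dataclasses import dataclass

@dataclass
class TransferMove:
    from_dc: str
    to_dc: str
    week: int
    units: int

def brief(role: str, moves: list[TransferMove], driver: str) -> str:
    """Render the same solver result for three different audiences."""
    if role == "planner":
        # Executable instructions, one move per line.
        return "\n".join(f"Move {m.units} units {m.from_dc} -> {m.to_dc} "
                         f"in week {m.week}" for m in moves)
    if role == "executive":
        # Business summary with the root cause, no SKU-level detail.
        total = sum(m.units for m in moves)
        return (f"{total} units rebalanced across {len(moves)} transfers; "
                f"root cause: {driver}.")
    # Analyst: full detail plus the diagnostic driver.
    return (f"{len(moves)} transfer moves; binding driver: {driver}. "
            + "; ".join(f"{m.from_dc}->{m.to_dc} wk{m.week}: {m.units}"
                        for m in moves))

moves = [TransferMove("DC2", "DC1", 34, 1200),
         TransferMove("DC3", "DC1", 35, 800)]
print(brief("executive", moves, "two-week supplier slip"))
# prints: 2000 units rebalanced across 2 transfers; root cause: two-week supplier slip.
```

The key design point is that every role reads from the same solver output; only the framing changes.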
Case study: From shortage to stability
A DC served a cluster of Midwestern stores, with replenishment almost entirely dependent on long ocean transits. In mid-Q3, the control tower projected a glide path toward zero inventory in September. The optimization software prescribed a plan that was cost-effective and, more importantly, operationally feasible. Where the old process would have offered a dense variable table, the new system produced a decision brief. It stated plainly that the DC’s risk was driven by a two-week slip from a particular supplier and an 8% demand increase in one category. It showed transfer arrows from DC2–DC5 into DC1, overlaid on a weekly before/after chart of weeks of supply. It demonstrated that the donor DCs remained above their minimum targets, and it quantified the alternative if DC3 could not supply. The message was clear enough for executives to approve and specific enough for planners to execute.
Execution matched the brief. The DCs remained within safe bounds, and the network rebalanced as inbound containers arrived two weeks later. Perhaps the most significant outcome was intangible: trust. Planners stopped building shadow plans; executives asked for the “explainer view” in S&OP. A mathematical tool had become an organizational asset.
Operational details: Under the hood
The weekly transfer model optimized a cost–service objective under practical constraints: lane capacities by cube and weight, lift/receive labor, item compatibility and temperature controls, and other DCs’ WOS floors. A Bayesian approximation produced fast warm starts, with a MIP solver finalizing execution-ready plans when precision mattered.
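The core trade-off the model enforces, shipping from donor DCs without breaching their own weeks-of-supply (WOS) floors or lane capacities, can be illustrated with a deliberately simplified greedy allocation. This is a sketch only: the production system used a MIP solver with many more constraints, and all inventories, demands, and capacities below are invented.

```python
# Toy stand-in for the weekly transfer MIP: fill a short DC's need from
# donor DCs without breaching any donor's WOS floor or lane capacity.
# Greedy allocation for illustration; the real model solves this jointly.

def plan_transfers(need_units, donors, lane_cap):
    """
    need_units: units the short DC must receive
    donors: dict dc -> (on_hand, weekly_demand, wos_floor)
    lane_cap: max units per donor lane
    Returns dict dc -> units shipped.
    """
    plan = {}
    remaining = need_units
    for dc, (on_hand, demand, floor) in sorted(donors.items()):
        if remaining <= 0:
            break
        surplus = on_hand - floor * demand   # units above the WOS floor
        ship = int(max(0, min(surplus, lane_cap, remaining)))
        if ship:
            plan[dc] = ship
            remaining -= ship
    return plan

donors = {
    "DC2": (9000, 1000, 4.0),   # keeps 4 weeks of cover -> 5000 surplus
    "DC3": (6000, 1500, 3.0),   # keeps 3 weeks of cover -> 1500 surplus
}
print(plan_transfers(6000, donors, lane_cap=4000))
# prints: {'DC2': 4000, 'DC3': 1500}
```

Note that the constraints leave a 500-unit gap against the 6,000-unit need; a real MIP would surface that shortfall explicitly and price it against penalty and margin costs.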
The LLM layer was firmly grounded. It drew from model outputs and curated master data, applied company lexicon (SKUs, node names, policies), and generated role-specific narratives and visuals. Crucially, it did not prescribe; it explained. Where users posed counterfactuals, the assistant reformulated assumptions and requested a re-solve, keeping recommendations traceable to verified computations.
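The grounding rule, that counterfactuals trigger a re-solve rather than letting the language layer improvise numbers, can be sketched as a simple dispatch. Everything here is illustrative: `answer`, `toy_solve`, and the assumption schema are invented stand-ins for the actual assistant and optimization call.

```python
# Sketch of the grounding rule: the assistant never invents quantities.
# A counterfactual ("what if ...") mutates the assumption set and
# triggers a re-solve, so every number in the answer traces back to a
# verified computation. toy_solve() stands in for the real MIP call.

def answer(question, assumptions, solve):
    q = question.lower()
    if q.startswith("what if") and "dc3" in q:
        # Counterfactual: drop DC3 from the network and re-solve.
        scenario = dict(assumptions)
        scenario["available_dcs"] = [
            d for d in assumptions["available_dcs"] if d != "DC3"]
        return f"Re-solved without DC3: {solve(scenario)}"
    # Ordinary question: narrate the existing, already-verified solve.
    return f"From the current plan: {solve(assumptions)}"

def toy_solve(a):
    # Placeholder for the optimizer; summarizes the scenario it solved.
    return f"{len(a['available_dcs'])} DCs supplying"

assumptions = {"available_dcs": ["DC1", "DC2", "DC3"]}
print(answer("What if DC3 is unavailable?", assumptions, toy_solve))
# prints: Re-solved without DC3: 2 DCs supplying
```

In the real system the intent parsing would itself be done by the LLM, but the contract is the same: the language layer edits assumptions and narrates results; only the solver produces numbers.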
Each narrative packet paired text with visuals: before/after WOS curves, cost decompositions showing transfer and handling versus avoided penalties, and sensitivities (e.g., DC unavailability or delayed inbound). Text made the math legible; visuals made the implications obvious.
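The arithmetic behind a before/after WOS curve is straightforward and worth making concrete. The figures below are invented for the sketch; the real packet drew these series directly from solver output.

```python
# Illustrative computation behind a before/after weeks-of-supply (WOS)
# chart: project end-of-week WOS at the at-risk DC with and without the
# inbound transfer. All figures are made up for this sketch.

def wos_path(start_inv, weekly_demand, receipts, horizon):
    """receipts: dict week -> inbound units; returns WOS by week."""
    inv, path = start_inv, []
    for wk in range(1, horizon + 1):
        inv = inv + receipts.get(wk, 0) - weekly_demand
        path.append(round(max(inv, 0) / weekly_demand, 1))
    return path

before = wos_path(5000, 1000, {}, 6)         # no transfer
after = wos_path(5000, 1000, {3: 4000}, 6)   # 4,000 units land in week 3
print(before)  # [4.0, 3.0, 2.0, 1.0, 0.0, 0.0] -- glides to zero
print(after)   # [4.0, 3.0, 6.0, 5.0, 4.0, 3.0] -- stays above the floor
```

Plotting the two series side by side is the "before/after WOS curve" the brief showed executives: the same solver numbers, rendered as a picture of risk removed.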
Looking ahead: The future of AI-enabled planning
The fusion of optimization and language models is in its early innings. Private, fine-tuned LLMs will protect data while adopting a company’s dialect. Transfer learning will make the explainer fluent in industry shorthand such as WOS (weeks of supply), OTIF (on time in full), and flow-through, cutting cognitive friction. Reinforcement learning will adapt explanations based on which narratives historically persuaded which stakeholders. Optimization provides the what; explanation supplies the why. The end result was both tangible, with stockouts averted and margin protected, and cultural: analytics became intelligible, making the LLM-enabled explainer a smarter copilot for supply chains.
Author Contact
For further information or inquiries about this article, the authors can be reached at: [email protected] (Saravanan Venkatachalam) and [email protected] (Arunachalam Narayanan).