AI update: Decision-maker or decision-supporter?

AI promises greater decision-making power, but there are some decisions AI should not make.

This is an excerpt of the original article. It was written for the July-August 2024 edition of Supply Chain Management Review. The full article is available to current subscribers.

Artificial intelligence is everywhere these days. But what if it isn’t? I would guess that at least 50%, and probably closer to 70%, of the article pitches I receive these days involve AI. Most conversations I’ve had at conferences this year have at least touched on AI and its impact on the supply chain. Almost every technology company touts its AI-infused software. It seems that AI is not only mainstream, it’s Main Street.

Almost four and a half years ago, I wrote my first Insights column on artificial intelligence (AI), titled "Rely on AI to make decisions? Yes, but warily" [March/April 2020]. I had planned to do an annual update; however, the COVID-19 pandemic came along, consuming much of the oxygen for columnists like me. Now that the pandemic has subsided, and much is being made of OpenAI's launch of ChatGPT in November 2022, there appears to be renewed optimism regarding the potential for AI to change the world.

In a Wall Street Journal (WSJ, March 23, 2023) article titled "Gates Calls AI Most Revolutionary Tech in Decades," Bill Gates is quoted as saying: "The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet, and the mobile phone." And "entire industries will reorient [a]round it," as well as "businesses will distinguish themselves by how well they use it." Gates had already put his money where his mouth was, having invested billions of dollars in OpenAI, with Microsoft planning to invest more in the future (WSJ, Jan. 24, 2023).

Other WSJ articles paint a picture of the scale of optimism as well as cautions regarding AI.

  • “AI Regulation Advances in European Union” (June 15, 2023)
  • “AI Is About to Be Everywhere. Skeptics Risk Being Left Behind” (Sept. 30, 2023-Oct. 1, 2023) 
  • “Business Schools Are Going All In on AI” (April 4, 2024)
  • “Musk, Dimon Are Hyped on AI, But Not Everyone Is On Board” (April 11, 2024)
  • “Amazon CEO Touts AI, Commits to Cut Costs” (April 12, 2024)

Thus, there is a lot of hubbub touting the bright future for AI: the good, the bad, and the ugly. (The latter two potentially require governmental regulation to hinder AI's potential for nefarious use by bad actors.)

China is investing $50 billion in AI, and the U.S. is likely to follow suit, maybe even upping the ante, so as not to fall behind in the global AI technology race. The big question is whether it will be embraced by corporations to significantly improve business performance. Or is it "déjà vu all over again," as the late great New York Yankees catcher Yogi Berra quipped? That is to say, I've seen this technology hype before, and it hasn't gotten very far yet.

Brief history of AI

I've long been intrigued by the pursuit of creating systems that could replicate and improve upon human intelligence, and I've taken particular note of the AI research and innovation done by IBM. AI research has a long history at IBM, dating back to the 1950s, and the company sees AI commercialization as one of its most important business initiatives. Its AI history includes the development of Deep Blue, a chess-playing computer system, and Watson, a question-answering system capable of responding in natural language. However, despite this research history, IBM has been less than successful when it comes to selling AI products and services.

In the early years, governments heavily funded AI research. Eventually, in the 1970s, disillusion set in and funding stopped. This led to the “AI Winter.” Sometime later, Japan’s AI initiatives inspired others to invest billions of dollars in AI, but by the late 1980s, investors withdrew funding. Then for the third time (think AI 3.0), AI started to boom once again at the beginning of this century. I believe this current iteration might be AI 4.0.

Despite AI's ups and downs, it has had some incremental successes. Simple thinking has been embedded into hardware items such as smart TVs and refrigerators, as well as into the 'intelligent' software that supports driving a car and flying an airplane. However, I don't believe we have yet seen any game-changing AI killer apps that have enabled significant business process and operational improvement. For example, an article, "Retailers Say Skip Returns of Unwanted Items" (WSJ, Jan. 11, 2021), stated that "Amazon.com, Walmart Inc. and other companies are using artificial intelligence to decide whether it makes economic sense to process a return." I wouldn't term this type of thinking AI; it's not much more thinking than a refrigerator's decision to keep the items in it cold and frozen.

ChatGPT could be a game-changer

ChatGPT, however, does appear to have the potential to be a game-changer for some businesses. As a generative AI built on neural-network concepts, it has some ability to replicate human thinking. My grandson and I like to play with our Amazon Alexa, asking it all sorts of weird questions. I tell him that Alexa is dumb because it only knows how to search, not think rationally. When I ask Alexa, "Who is my parent's child?" it used to respond: "Not enough information." Recently it says: "I didn't find any notes about parent's child." Of course, the simplest answer to the question is merely "You." It could also add: "If you have siblings, it would be them too."

When I asked ChatGPT the same question it responded: “It seems like you’re asking about yourself. If so, then you are your parents’ child. If you’re looking for a different answer, could you please provide more context?” So, it has some capability to reason.   

According to "Business Tech Is Finally Having Its Moment Thanks to AI" (WSJ, Feb. 15, 2024): "Generative AI is ideally suited for transforming large organizations by making people and processes radically more productive." ChatGPT, for example, improves writing and research processes, including speeding up software coding. "Chatbots Attempt to Figure Out Where Shipments Are" (WSJ, Aug. 31, 2023) states that "logistics companies are increasingly building artificial intelligence technology into their operations." Several freight brokers, for example, are "looking at how generative AI [such as using ChatGPT's bot] could transform their customer-service divisions by automating tasks such as tracking shipments, booking loads, and declaring imports."

Many of the current activities in AI are being done to bolster technology companies' cloud service platforms. So, they are largely focused on the supply side of AI technologies. Activities are based on the premise that "if you build it, they will come."

The hope is that big data analyses will eventually be used to develop AI decision models incorporating decision variables. To analyze large data sets, businesses might buy machine-learning technologies that amass large amounts of data drawn from the internet. In addition, Nvidia's high-resolution graphics chips are currently selling like hot cakes: its gaming-based graphical functionality is needed to visualize big data, along with its ability to do simultaneous calculations. However, there is still a dearth of demand-side sales of AI applications; there are no game-changing killer apps yet (think Excel, Word, etc.).

AI for decision-making

I’ve always taken the position that technology is only useful when it enables business process changes that improve operational and financial performance. Computers should be decision support systems (DSSs) and should not necessarily make all final decisions for managers. Of course, this is not to take away from the fact that many decisions can be made without managerial intervention.

For example, an ABC Pareto analysis on stocked items can help an inventory manager be more productive by determining which items should have more time spent on them. The most important A items represent the largest share of revenue, thus requiring a lot of time to support the few major customers that buy them. Meanwhile, for the more numerous, yet less important, B items, a manager relies on the computer to do much of the inventory management, intervening on an exception basis. Lastly, C items, the largest number of items, represent the smallest share of revenues, so they are on autopilot, with the computer doing all the work except when a crisis arises. Automated AI inventory management technology would be most useful for C items, less useful for B items, and least useful for A items.
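A minimal sketch of how such an ABC classification might be computed. The 80%/95% revenue-share cutoffs and the item data below are illustrative assumptions, not figures from this column; real cutoffs vary by business.

```python
# Illustrative ABC (Pareto) classification of stocked items by revenue share.
# Items are ranked by revenue; cumulative share determines the class.

def abc_classify(items):
    """items: list of (name, annual_revenue) tuples.
    Returns a dict mapping each name to 'A', 'B', or 'C'."""
    total = sum(rev for _, rev in items)
    ranked = sorted(items, key=lambda x: x[1], reverse=True)
    classes, cumulative = {}, 0.0
    for name, rev in ranked:
        cumulative += rev
        share = cumulative / total
        if share <= 0.80:
            classes[name] = "A"   # few items, most revenue: manager attention
        elif share <= 0.95:
            classes[name] = "B"   # manage by exception
        else:
            classes[name] = "C"   # autopilot: best fit for automation
    return classes

items = [("widget", 5000), ("gadget", 3000), ("bolt", 400),
         ("washer", 100), ("shim", 50)]
print(abc_classify(items))
```

With these hypothetical numbers, the one high-revenue item lands in class A, while the long tail of low-revenue items falls into class C, where automated management would do the most good.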

System-1 versus System-2 thinking

In one business decision-making course I've taught, I used a textbook that discussed two types of cognitive thinking. It stated that "the current theory is that there are two distinctly separate cognitive systems underlying thinking and reasoning and that these different systems were developed through evolution." These systems are often referred to as implicit and explicit, or by the more neutral System-1 and System-2, as coined by Stanovich and West [Stanovich, K. E.; West, R. F. (2000). "Individual differences in reasoning: Implications for the rationality debate?" Behavioral and Brain Sciences].

System-1, the most used type, does not involve much thinking. As a result, an untold number of routine and insignificant actions might be taken predicated on gut feel, tried-and-true methods, and heuristics. Actions are effortless, taken quickly and intuitively. System-2 represents rational decision-making because the actions to be taken are more critical, strategic, and impactful. Decision analysis is slow, conscious, effortful, and logical. Thus, AI will be most useful for automating System-1 decisions. System-2 decisions, however, benefit far less from AI's real-time decision-making and optimization capabilities, and they suffer from AI's lack of humanity.

Built-in latency is important

I once debated an analyst on the concept of zero latency being a long-term goal for computerized systems (latency is defined as the time from receiving data that triggers a need for action until the action is taken). I took the position that zero latency is not necessarily always good for business. For example, when navigating a big ship, turning it too quickly will result in it turning over and sinking. Moreover, there is a reason that cars have shock absorbers to absorb sudden shocks from bumps in the road. All complex systems (including supply chains) need latency buffers built into them for sustainability and survival.
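The shock-absorber idea can be illustrated with a simple exponential-smoothing filter, which damps a sudden spike in a signal rather than reacting to it instantly; the smoothing constant and demand series below are hypothetical, chosen only to show the damping effect.

```python
# A latency-buffer sketch: exponential smoothing reacts gradually to new
# data instead of chasing every spike, the way a shock absorber softens bumps.

def smooth(signal, alpha=0.3):
    """alpha in (0, 1]: smaller values mean more damping (more latency)."""
    out, level = [], signal[0]
    for x in signal:
        level = alpha * x + (1 - alpha) * level  # blend new data into the level
        out.append(level)
    return out

demand = [100, 100, 300, 100, 100]   # a one-period demand spike
print(smooth(demand))
```

With alpha at 0.3, the smoothed response to the spike stays well below 300 and decays back gradually; a zero-latency system would instead swing its plan fully to 300 and back, whipsawing the operation.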

Realistic optimal decisions

Advanced planning and scheduling (APS) systems often have optimization software engines (or brains) in them, turning them from supporting managers with what-if analysis to prescribing the best decision. AI will certainly improve these, and automated real-time optimization will be appealing. Automating optimal decisions might be great for System-1 decisions, however, not for System-2 decisions. Their impact is significant and thereby requires cross-functional collaboration among managers to develop realistic quantitative decision models.

An analyst I knew asked plant managers if they turned on their APS’s optimizer. They largely said no because they knew their plants were constrained by materials. Thus, they were only interested in learning which materials were constrained, so they could find more supply of them. Basically, why accept the system’s so-called optimal answer if you know you might improve on it by making more product?

All optimization analyses are predicated on quantitative modeling and assumptions. So-called optimums need to be vetted among managers to make sure all interests and concerns are incorporated before taking action. In short, since optimization usually involves System-2 decisions, no one should believe an automated optimal decision from a computer. Its acceptance requires an enterprise-wide collaborative vetting process.

AI might be unjust

A newspaper clipping I read a long time ago discussed a judge presiding at a trial in a small town. One of the lawyers put a computer printout into evidence. The judge quipped: "If the computer says it, then it must be true." As we know years later, that is oftentimes far from the truth, because computers are programmed by humans. Thus, decisions made by a computer might be susceptible not only to being wrong, but to being socially unjust and inhumane as well.

In my Insights column, “Don’t build weapons of math destruction” [March/April 2019], I discussed the book: “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” by Dr. Cathy O’Neil, a former financial quant. She wrote how she became disillusioned with mathematical models that affect society.

Her premise is that the vast amount of big data on the internet is being used in ways that are: 1) opaque; 2) unquestioned; and 3) unaccountable. In simple terms, the detailed data used is not transparent to the person affected by the decision-making it supports; the use of the data is beyond reproach in modelers’ minds; and modelers refuse to defend the model other than to say, “it is what it says” (often equating correlation with causality). She discusses a variety of applications that have “vicious, self-reinforcing feedback loops” whereby things get worse for those affected—especially minorities and the poor. I bring up this book in the context of AI because one has to remember that AI software will be developed by humans, and so it will always be subject to potential bias, corrupted behavior, and simple errors. In addition, AI will have no empathy and feeling for the people who might be adversely affected by its decision-making.

An AI cautionary tale

I recently attended a play, "Machine Learning," written by playwright Francisco Mendoza, about a software developer who programmed a service bot to take care of his aging father, who lived alone. He programmed it with machine learning, whereby the bot learned how to interact with the father by observing how the developer interacted with him. In the end, the developer finds his father dead from what looks like a medical problem of some sort. He discusses what happened with the bot and asks why it decided not to call 911. It said: "You wanted me to make him as comfortable as possible and I should act as you (the programmer) would have done. Therefore, I decided that his death would be the best alternative for your father's comfort, along with knowing that you inherently wanted him out of your life."

In summary, I recommend that managers be wary of using AI to make System-2 decisions for them. While fast real-time planning with no latency has appeal, it should largely be used to automate operational execution, rather than planning and forecasting processes, which are more tactical and strategic in nature. For example, whenever I’ve been asked by a company how it might improve forecast and planning accuracy, I jokingly tell them to go out of business. Forecast error would be zero, and plans would be non-existent and perfect. AI might seriously give supply chain managers the same advice—that they should work toward putting their companies out of business. Not good advice. •

(Photo: Getty Images)

About the Author

Larry Lapide, Research Affiliate

Dr. Lapide is a lecturer at the University of Massachusetts’ Boston Campus and is an MIT Research Affiliate. He received the inaugural Lifetime Achievement in Business Forecasting & Planning Award from the Institute of Business Forecasting & Planning. Dr. Lapide can be reached at: [email protected].

