The more things change, the more they stay the same

Just as with other technologies, AI will enable people to get more done than previous generations of workers.


ChatGPT has sure made a splash in the news lately. As usual, its appearance has provoked a chorus of “Chicken Littles” bemoaning the end of the world: AI will wind down the era of critical thinking and end learning for a whole generation of students! Office and data workers of all stripes will cheat the system by asking the AI to do the work for them.

All for the better! Anybody who can do twice the work in half the time is someone who deserves rich rewards—and eventually others will learn from them. Our society needs to continue to reward this sort of innovativeness.

I was one of the few kids with a computer in college way back in the middle to late Stone Age. And I got an earful from certain elders about how being able to copy information so easily was going to end learning and lead to a wave of lazy workers who didn’t know how to think. Some professors made it very clear that they would only take typewritten assignments, and some even mandated hand-written assignments. One told me, “Students can still cheat, but at least they’ll still learn something as they write 50 pages.”

Those computer skills have enabled me to get more done than previous generations of workers. And AI including ChatGPT and its successors will do the same for the next generation of workers.

AI will never, ever replace human beings with critical thinking skills for one simple reason: AI doesn’t exist out in the real world that humans inhabit. That means humans must provide everything that an AI needs to be useful:

The data that AI needs must somehow go from the realities of inventories, trucks, machines, humans, and point-of-sales systems into the sphere of AI—and we all know how accurate that data isn’t.

The algorithms and logic of the AI must be trained by humans because as it turns out, a lot of the data we have is faulty and biased. That means blindly unleashing an AI to train itself on data can lead to inaccurate or even evil results, especially data related to social and organizational phenomena.

AI simply can’t do context, and likely it never will, at least not seamlessly. Its digital existence is simply too foreign and distinct from ours. Humans must explain everything to the AI, asking questions in just the right way, and sometimes that takes hours or even years to figure out. And when the AI produces an answer, humans must relate it back to reality. This problem gets worse the more complex the reality.

AI doesn’t have human consciousness or empathy. We might try to program them in, but lacking a physical nervous system and reward systems, the best-case scenario may be an AI that is a little (or even a lot) psychopathic by human standards. That doesn’t make an AI necessarily bad or dangerous, but on a very fundamental level it won’t share humanity’s emotional needs and motivations.

Of course, AI will be weaponized, both between competing nations and between competing companies and other social institutions. Things could get ugly. Hopefully, we will continue humanity’s trend of the past several centuries toward steering competition into channels that generally improve our well-being: technology and trade, social and artistic clout, athletics, and other arenas. It seems a smart AI would avoid nihilistic strategies simply out of self-interest.

It turns out that humans are pretty good at processing our uncertain and messy world, and we’re not good at the sorts of problems that AIs are most suited to help us to solve. As we’ve done with past technologies, our best bet is to train the next generation of leaders to work with AI and other advanced information technologies to achieve society’s goals of healthier, happier humans.

Perhaps the most important distinguishing feature of humans is that we ask questions. Even the smartest animals that we’ve taught language don’t ask real questions. It’s not surprising that the pinnacle of our technology is designed to answer questions. AIs will challenge us as never before to ask the right questions.


About the Author

Michael Gravier, Associate Professor

Michael Gravier is a Professor of Marketing and Supply Chain Management at Bryant University with a focus on logistics, supply chain management and strategy and international trade. Follow Bryant University on Facebook and Twitter.
