As factories and plants begin integrating artificial intelligence into their operations, most of the conversation has centered on tools, platforms, and performance metrics. However, Chris Draper, CEO and co-founder of morriganAI and author of Safe AI Basics (and co-author of Governing Artificial Intelligence), believes the true danger lies in how people interact with these systems over extended periods.
According to Draper, this relationship between humans and machines is one of the most neglected elements of rolling out AI, particularly in settings where people are assigned to oversee automated processes.
“If you assign a person to perform a repetitive task within a system, their performance will inevitably fluctuate depending on the time of shift, their level of experience, and how long they’ve been on the job.”
Many organizations operate under the assumption that having a person “in the loop” automatically guarantees safety. Draper, however, contends that this isn’t necessarily the case.
“Simply including a person in a system doesn’t make AI inherently safe.”
He cautions that badly engineered systems—particularly ones in which people are meant to passively watch over performance—can actually raise the level of risk.
“Those setups are extremely dangerous.”
Draper outlines three frameworks for embedding people into AI systems: “in the loop,” “on the loop,” and “out of the loop.” In a human-in-the-loop setup, the process halts until an individual confirms each step, resulting in a very high degree of safety but restricting throughput. In a human-out-of-the-loop arrangement, operations are entirely automated with no human involvement, which can work well if architected properly. Yet Draper points out that most businesses are actually deploying “on-the-loop” systems, where the software runs on its own while people are tasked with spotting mistakes—a model he warns is fraught with significant risk.
“Those need to be introduced with a great deal of care. In nearly all of my strategic consulting, I advise against putting a human on the loop in a production environment, because a human on the loop in production is essentially like a Tesla on autopilot. It can technically run itself. But once it begins performing reliably enough, the human who is supposed to serve as your safety net starts to decline in effectiveness. They grow complacent. They begin overlooking mistakes because they’ve grown accustomed to the system being correct 80 percent of the time, and before long they’re missing most of the remaining 20 percent they’re responsible for catching.”
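The difference between the two riskiest configurations can be sketched in code. This is a minimal toy simulation of my own, not anything from morriganAI: the error rate, the `vigilance` parameter, and all function names are illustrative assumptions. The point it demonstrates is Draper's: an in-the-loop setup makes the human decision final on every item, while an on-the-loop setup only corrects the fraction of AI mistakes the watcher actually notices.

```python
import random

random.seed(0)  # reproducible toy run

def ai_inspect(item):
    """Toy AI check: agrees with ground truth ~90% of the time (illustrative rate only)."""
    truth = item["defective"]
    return truth if random.random() < 0.9 else not truth

def human_in_the_loop(items):
    """The line halts until a person confirms each verdict, so the attentive
    operator's judgment (modeled here as ground truth) is always final."""
    results = []
    for item in items:
        _verdict = ai_inspect(item)        # AI proposes...
        results.append(item["defective"])  # ...human disposes
    return results

def human_on_the_loop(items, vigilance=0.5):
    """The AI runs continuously; a person merely watches and catches only a
    fraction of its mistakes -- the fraction that shrinks as complacency sets in."""
    results = []
    for item in items:
        verdict = ai_inspect(item)
        if verdict != item["defective"] and random.random() < vigilance:
            verdict = item["defective"]  # error spotted and corrected
        results.append(verdict)
    return results

items = [{"defective": i % 5 == 0} for i in range(200)]
in_loop_errors = sum(v != it["defective"] for v, it in zip(human_in_the_loop(items), items))
on_loop_errors = sum(v != it["defective"] for v, it in zip(human_on_the_loop(items), items))
```

Lowering `vigilance` — Draper's complacency effect — lets more of the AI's residual errors reach production, while the in-the-loop path stays error-free at the cost of throughput.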
Draper has dedicated much of his career to studying how humans engage with complex, high-stakes technologies. Together with Tom Foreman, he also co-established the AI Action Lab to assist organizations in gaining a deeper understanding of how AI performs in real-world operational environments.
Central to his viewpoint is a fundamental shift in how we define what AI is—and what it isn’t.
“If you approach your AI rollout as a purely technological initiative, it is destined to fail. There’s simply no way around that.”
Instead of treating AI like conventional automation machinery, Draper insists it belongs to an entirely different class.
“AI isn’t comparable to something like video conferencing. It’s a tool much more akin to YouTube.”
That distinction is critical because AI doesn’t merely follow predetermined commands. It interprets incoming data and generates outputs probabilistically rather than deterministically.
“Every AI tool constantly generates synthetic content.”
This produces a degree of unpredictability that manufacturers aren’t always equipped to handle.
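That probabilistic character can be made concrete with a toy sampler — a generic illustration of softmax sampling, not any specific vendor's model; the triage labels and scores below are hypothetical. Identical inputs can yield different outputs because each result is drawn from a distribution rather than looked up.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample(options, scores, rng):
    """Draw one option in proportion to its softmax probability."""
    r, cum = rng.random(), 0.0
    for opt, p in zip(options, softmax(scores)):
        cum += p
        if r < cum:
            return opt
    return options[-1]

# Hypothetical defect-triage labels; scores stand in for model confidence.
labels = ["pass", "rework", "scrap"]
scores = [2.0, 1.0, 0.5]
draws = [sample(labels, scores, random.Random(i)) for i in range(20)]
```

Even with the scores fixed, repeated draws disagree — which is why a manufacturer cannot treat such a system like a deterministic machine tool.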
“AI is an ultra-hazardous technology. Even a slight shift in risk can lead to potentially exponential consequences.”
A frequent error, according to Draper, is the assumption that AI can simply be layered on top of current workflows to boost efficiency.
“AI as a tool that merely speeds up an otherwise unchanged process will never deliver a return for you.”
In some scenarios, that approach can make existing problems even worse.
“With AI, you can go bankrupt faster than you ever could before.”
This risk is amplified when companies deploy AI agents without fully grasping how they work or how closely they need to be supervised.
A core challenge, Draper explains, is that many organizations have a flawed understanding of AI’s role within a system. Instead of replacing human judgment, AI should be viewed as enhancing it.
This means the success of AI hinges largely on where and how people are situated within the workflow.
“If you position the person incorrectly, you’re heading for serious, significant trouble.”
For manufacturers, the key message is clear: AI is more than just a technology purchase. It reshapes how decisions are reached, how workflows function, and how people engage with those processes.
“It will always act as something that magnifies, that accelerates the human element.”
As adoption accelerates, organizations that embrace that truth—and engineer their systems accordingly—will be far better equipped to sidestep the hidden pitfalls that accompany AI deployment.