A New Look at “Human in the Loop” and AI Safety
USA, Thursday, March 26, 2026
A classic case is the Therac‑25 radiation therapy machine of the 1980s. The device consolidated the designs of two older machines and promised faster, safer treatment, but it replaced the hardware safety interlocks of its predecessors with software checks. Operators were asked to confirm each step, yet the machine still delivered massive overdoses: a race condition in the software meant that a quick sequence of edits could set up a lethal beam configuration, and the operator's routine confirmation did nothing to catch it. Six patients were seriously injured or killed before the flaws were diagnosed. The root cause was a design that leaned on human oversight to compensate for unsafe software.
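The failure pattern above can be sketched as a toy race condition in Python. Everything here is illustrative, not the actual Therac‑25 code: the point is that the operator confirms the settings they *see* on the display, while the state the machine will actually use lags behind.

```python
# Toy model of a confirmation step that cannot catch a race condition.
# All class and method names are hypothetical, for illustration only.

class Machine:
    def __init__(self):
        self.displayed_mode = "electron"  # what the operator's screen shows
        self.applied_mode = "electron"    # what the beam hardware will use

    def operator_edits(self, new_mode):
        # The display updates immediately on a keystroke...
        self.displayed_mode = new_mode
        # ...but the slow background task that pushes settings to the
        # hardware has not run yet, so applied_mode is still stale.

    def settings_task(self):
        # When this task eventually runs, it syncs the hardware settings.
        self.applied_mode = self.displayed_mode

    def operator_confirms(self):
        # The routine confirmation only reflects the display back to the
        # operator; it never inspects the state the beam will actually use.
        return self.displayed_mode


m = Machine()
m.operator_edits("x-ray")
# A fast operator confirms before settings_task() has had a chance to run:
confirmed = m.operator_confirms()
fired_with = m.applied_mode
print(confirmed, fired_with)  # the confirmed mode and the applied mode differ
```

The human "check" here is real in the sense that someone presses a key, but it verifies the wrong thing, so it adds no safety at all. That is the shape of the problem, independent of whether the system behind the display is 1980s control software or a modern model.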
Today, developers are rushing to embed AI in safety‑critical areas such as autonomous weapons and decision support. Concerns are often waved away with the assurance that a human will monitor the AI. But AI behaves probabilistically and unpredictably, especially in high‑stakes situations. Although the technology is new, its failure modes resemble those of older software systems that have been studied for decades; they are not new, they just arrive faster.
Leaks from the Pentagon suggest that AI might already be influencing where bombs are dropped. If people believe a human is watching over the AI, they may trust it too much and not put real safeguards in place. Over the next decade, hiding unsafe AI behind a “human in the loop” could lead to serious real‑world problems.
The lesson is clear: relying on a human observer is not enough. Systems need robust design, thorough testing, and continuous monitoring that goes beyond simple approval. Only then can we trust AI to work safely in critical environments.