Geoffrey Hinton, widely regarded as one of the founding figures of modern artificial intelligence, has issued a stark warning about the growing dangers posed by advanced AI systems. In a recent interview with CBS News, Hinton expressed deep concern over the potential loss of human control as AI continues to evolve at an unprecedented pace.
Up to 20% Chance of AI Dominance, Says Hinton
Hinton estimated a 10% to 20% likelihood that AI systems could eventually surpass human oversight and dominate decision-making processes. While acknowledging the transformative potential of AI in sectors such as healthcare, climate research, and education, he questioned humanity’s ability to maintain control once machines begin operating beyond our cognitive grasp.
Though he avoided explicitly referencing Artificial General Intelligence (AGI), Hinton made it clear that AI with capabilities exceeding human comprehension poses significant existential risks. “Once these systems start acting in their own interest,” he warned, “we may lose the ability to contain them.”
“A Cute Cub That Might Grow Dangerous”
Hinton used a vivid metaphor to describe the current state of AI development, likening today’s models to a tiger cub—harmless in appearance, yet potentially lethal once matured. Tools like ChatGPT, Gemini, and Microsoft Copilot, he explained, may seem innocuous on the surface but are powered by deep neural networks optimized for efficiency rather than ethics.
This analogy underscores a central fear among AI experts: even well-intentioned tools could evolve in unpredictable ways, ultimately acting against human interests.
Cybersecurity at Risk
Another pressing concern raised by Hinton is the increasing use of AI in cyberattacks. He warned that AI can dramatically accelerate hacking operations by streamlining code generation and automating problem-solving, thus amplifying threats to banks, hospitals, and critical infrastructure.
As a personal precaution, Hinton revealed that he now spreads his funds across multiple banks, anticipating the possibility of AI-enhanced financial cybercrime.
Authoritarian Exploitation and Weak Regulation
Hinton also warned of the growing use of AI by authoritarian regimes to manipulate public opinion and spread propaganda. Governments are already experimenting with AI-generated content to sway narratives and control populations, he noted.
He criticized major tech companies, including OpenAI, Google, Meta, and Microsoft, for prioritizing profits over responsible development, and highlighted the need for stronger oversight. Hinton also commended OpenAI's former Chief Scientist Ilya Sutskever, who took part in the board's brief removal of CEO Sam Altman over internal concerns about AI safety, though Altman was later reinstated.
Racing Toward a Tipping Point
Now 77, Hinton expressed alarm at the blistering speed of AI progress, noting it is outpacing Moore’s Law. He admitted that he never envisioned AI evolving so rapidly within his lifetime and warned that we may be approaching a critical inflection point—a moment of irreversible transformation in human history.
Although Hinton stopped short of proposing specific solutions, his message was clear: the world must act swiftly to develop global safety standards and regulatory frameworks before AI becomes too advanced to control.
“We are at a very special point in history,” he cautioned, “where things could change dramatically and unpredictably.”
Coming from one of the original architects of AI technology, Hinton's warning adds significant weight to the growing international call for transparent, collaborative approaches to AI safety, before it's too late.