Yoshua Bengio, one of the pioneers of modern artificial intelligence and a recipient of the Turing Award, is raising serious concerns about the pace and direction of AI development. In a recent Diary of a CEO podcast interview, he said the moment he first truly grasped the potential risks of AI came in early 2023, shortly after the public release of ChatGPT in late 2022. For decades, many experts expected human-level language understanding, and the risks associated with it, to remain far in the future. That belief changed rapidly when large language models demonstrated fluency and reasoning ability much sooner than anticipated.

Bengio stressed that realizing, after ChatGPT's launch, that machines could already “understand” language was a turning point for him. It convinced him that the trajectory of AI development might outpace humanity’s ability to control or fully understand it, prompting him to speak publicly about the dangers even though he tends to shy away from media attention.

Why Bengio Sees 2023 as a Turning Point

Before ChatGPT, many researchers thought it would take decades for machines to reach this level of natural language competence. Bengio explained that seeing language-capable AI arrive years earlier than expected shifted his view of when serious AI safety challenges would emerge.

He drew a historical analogy to Alan Turing’s early ideas about artificial intelligence, noting that while language mastery alone doesn’t make machines as intelligent as humans, it marks a key milestone toward more powerful systems.

Bengio emphasized that today’s AI systems are still limited in reasoning, planning, and the pursuit of long-term goals, but that doesn’t mean the risks are negligible. He warned that within years or decades these limitations could diminish, raising concerns about AI’s impact on society, governance, and the balance of power.

What Risks Bengio Is Highlighting

Bengio’s concerns aren’t just about automation or job loss. He sees deeper structural and societal risks as AI becomes more capable. These include:

Power concentration: Whoever controls powerful AI systems could wield disproportionate influence over information, economics, and politics. 

Rapid capability gains: Models are improving faster than expected, challenging assumptions about timelines for truly sophisticated AI. 

Future threats: Current models pose no catastrophic danger, but they could become significantly dangerous in the near future without strong governance and safety measures.

Bengio says the realization in 2023 that language-capable AI had arrived forced him to advocate publicly for awareness, caution, and proactive safety work.

What He Says About AI Today

Although Bengio doesn’t believe we’re in immediate danger, his position reflects a broader shift in the AI safety community. Many experts argue that rapid gains in machine capability could outpace safeguards unless researchers, developers, governments, and international organizations coordinate more closely on safety frameworks and ethical guidelines.

In earlier work and public comments, Bengio has also backed efforts such as international AI safety reporting and the formation of dedicated safety research organizations to support responsible development.