by DAN MILMO

Physicist Max Tegmark says competition too intense for tech executives to pause development to consider AI risks
The scientist behind a landmark letter calling for a pause in developing powerful artificial intelligence systems has said tech executives did not halt their work because they are locked in a “race to the bottom”.
Max Tegmark, a co-founder of the Future of Life Institute, organised an open letter in March calling for a six-month pause in developing giant AI systems.
Despite support from more than 30,000 signatories, including Elon Musk and the Apple co-founder Steve Wozniak, the document failed to secure a hiatus in developing the most ambitious systems.
Speaking to the Guardian six months on, Tegmark said he had not expected the letter to stop tech companies working towards AI models more powerful than GPT-4, the large language model that powers ChatGPT, because competition has become so intense.
“I felt that privately a lot of corporate leaders I talked to wanted [a pause] but they were trapped in this race to the bottom against each other. So no company can pause alone,” he said.
The letter warned of an “out-of-control race” to develop minds that no one could “understand, predict, or reliably control”, and urged governments to intervene if a moratorium on developing systems more powerful than GPT-4 could not be agreed between leading AI companies such as Google, ChatGPT owner OpenAI and Microsoft.
It asked: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?”
Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, said he viewed the letter as a success.
“The letter has had more impact than I thought it would,” he said, pointing to a political awakening on AI that has included US Senate hearings with tech executives and the UK government convening a global summit on AI safety in November.
Expressing alarm about AI had gone from being taboo to becoming a mainstream view since the letter’s publication, Tegmark said. The letter from his thinktank was followed in May by a statement from the Center for AI Safety, backed by hundreds of tech executives and academics, declaring that AI should be considered a societal risk on a par with pandemics and nuclear war.
“I felt there was a lot of pent-up anxiety around going full steam ahead with AI, that people around the world were afraid of expressing for fear of coming across as scare-mongering luddites. The letter legitimised talking about it; the letter made it socially acceptable.
“So you’re getting people like [letter signatory] Yuval Noah Harari saying it, you’ve started to get politicians asking tough questions,” said Tegmark, whose thinktank researches existential threats and potential benefits from cutting-edge technology.
Fears around AI development range from the immediate, such as the ability to generate deepfake videos and mass-produce disinformation, to the existential risk posed by super-intelligent AIs that evade human control or make irreversible and highly consequential decisions.