by KENAN MALIK

This essay, on the dangers of AI, was my Observer column this week. It was published on 26 November 2023, under the headline “AI doesn’t cause harm by itself. We should worry about the people who control it”.
At times it felt less like Succession than Fawlty Towers, not so much Shakespearean tragedy as Laurel and Hardy farce. OpenAI is the hottest tech company today thanks to the success of its most famous product, the chatbot ChatGPT. It was inevitable that the mayhem surrounding the sacking, and subsequent rehiring, of Sam Altman as its CEO would play out across global media last week, accompanied by astonishment and bemusement in equal measure.
For some, the farce spoke to the incompetence of the board; for others, to a clash of monstrous egos. In a deeper sense, the turmoil also reflected many of the contradictions at the heart of the tech industry. The contradiction between the self-serving myth of tech entrepreneurs as rebel “disruptors”, and their control of a multibillion-dollar monster of an industry through which they shape all our lives. The tension, too, between the view of AI as a mechanism for transforming human life and the fear that it may be an existential threat to humanity.
Few organisations embody these contradictions more than OpenAI. The galaxy of Silicon Valley heavyweights, including Elon Musk and Peter Thiel, who founded the organisation in 2015, saw themselves both as evangelists for AI and heralds warning of the threat it posed. “With artificial intelligence we are summoning the demon,” Musk portentously claimed.
The combination of unrestrained self-regard as exceptional individuals conquering the future, and profound pessimism about other people and society, has made fear that the apocalypse is around the corner almost mandatory for the titans of tech. Many are “preppers”, survivalists prepared for the possibility of a Mad Max world. “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to,” Altman told the New Yorker shortly after OpenAI was created. The best entrepreneurs, he claimed, “are very paranoid, very full of existential crises”. Including, inevitably, about AI.
OpenAI was created as a non-profit-making charitable trust, the purpose of which was to develop artificial general intelligence, or AGI, which, roughly speaking, is a machine that can accomplish, or surpass, any intellectual task humans can perform. It would do so, however, in an ethical fashion to benefit “humanity as a whole”.
Then, in 2019, the charity set up a for-profit subsidiary to help raise more investment, eventually pulling in more than $11bn (£8.7bn) from Microsoft. The non-profit parent organisation, nevertheless, retained full control, institutionalising the tension between the desire to make a profit and doomsday concerns about the products making the profit. The extraordinary success of ChatGPT only exacerbated that tension.
Two years ago, a group of OpenAI researchers left to start a new organisation, Anthropic, fearful of the pace of AI development at their old company. One later told a reporter that “there was a 20% chance that a rogue AI would destroy humanity within the next decade”. That same dread seems to have driven the attempt to defenestrate Altman and the boardroom chaos of the past week.