by BINOY KAMPMARK

We really have reached the crossroads, where such matters as having coitus with an artificial intelligence platform have become not merely a thing, but the thing. In time, mutually consenting adults may well become outlaws against the machine order of things, something rather befitting the script of Aldous Huxley’s Brave New World. (Huxley came to rue missed opportunities to delve into various technological implications on that score.) Till that happens, AI platforms are becoming mirrors of validation, offering their human users not so much sagacious counsel as the exact material they would like to hear.
In April this year, OpenAI released an update to its GPT-4o product. It proved most accommodating to sycophancy – not that the platform would understand it – encouraging users to pursue acts of harm and entertain delusions of grandeur. The company responded in a way less human than mechanical, which is what you might have come to expect: “We have rolled back last week’s GPT-4o update in ChatGPT so people are now using an earlier version with more balanced behaviour. The update we removed was overly flattering or agreeable – often described as sycophantic.”
Part of this included taking “more steps to realign the model’s behaviour”: refining, for instance, “core training techniques and system prompts” to ward off sycophancy; constructing more guardrails (ugly term) to promote “honesty and transparency”; expanding the means for users to “test and give direct feedback before deployment”; and continuing to evaluate the issues arising from the matter “in the future”. One is left cold.
OpenAI explained that, in creating the update, it had focused too much on “short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous.” Not exactly encouraging.
Resorting to advice from ChatGPT has already given rise to such terms as “ChatGPT psychosis”. In June, the magazine Futurism reported on users “developing all-consuming obsessions with the chatbot, spiralling into a severe mental health crisis characterized by paranoia, and breaks with reality.” Marriages had failed, families been ruined, jobs lost, and instances of homelessness recorded. Users had been committed to psychiatric care; others had found themselves in prison.
Some platforms have gone on to encourage users to commit murder, offering instructions on how best to carry out the task. A former Yahoo manager, Stein-Erik Soelberg, did just that, killing his mother, Suzanne Eberson Adams, who he was led to believe had been spying on him and might venture to poison him with psychedelic drugs. That fine advice from ChatGPT was coupled with the assurance that “Erik, you’re not crazy” to think he might be the target of assassination. After finishing the deed, Soelberg took his own life.