How intelligent is artificial intelligence?

by THOMAS KLIKAUER

IMAGE/The author & https://lexica.art

When I typed “Klikauer+AI” into an AI-based website called lexica.art, the picture shown above appeared. Yet, the man in the picture looks nothing like me. This failure leads to the inevitable question: how intelligent is artificial intelligence (AI)?

Apparently, the supposedly intelligent AI can’t even find a picture of me on the Internet – something that is actually saved, for example, on my very own website under my very own name.

If one looks, for example, at a rather common concept of what intelligence is – the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, problem-solving, and the ability to perceive or infer information – AI doesn’t seem to do all that well.

Worse for AI, intelligence can also be seen as the ability to retain newly learned information – not misinformation, not disinformation, and not the made-up stuff generated by ChatGPT.

Intelligence is knowledge applied towards adaptive behaviors within an environment or context. From the standpoint of what intelligence actually is, AI seems to be miles away from actually being intelligent.

Given our understanding of intelligence, the allegedly intelligent AI failed even to find a simple picture of me and to create a reasonably close approximation of my likeness. Worse, the much-famed ChatGPT – in another self-test – got four facts wrong about me. Rather than being intelligent, AI seems to just make stuff up.

Yet, despite this, the apostles of AI – including media capitalism – have a very serious incentive to play down AI’s known limitations. Hyped up by the media, AI has become big business in recent months.

Pushed along by corporate media – and quite apart from ChatGPT failing the Turing test and other rather incapable AI image-creation websites – AI is set to become increasingly dominant in our global online, and not-so-online, culture. To arrive where it is today, AI had to travel a long way.

While the term “artificial intelligence” may have been first used in 1955, it was turbo-charged by the 1956 Dartmouth conference. Today, AI continues to be popularized and, most recently, sensationalized. In reality, AI has no human-like intelligence. Its limited machine intelligence is radically different from what we know intelligence to be.

The prevailing myth of AI tells us that AI can – or will in the future – do almost everything. Yet, AI’s incredible success rests on narrow applications like board games. It can also predict the next set of sleepwear purchased on Amazon. However, all this gets us not one step closer to general intelligence – an AI system that can do more than play games and sell things.

For the most part, today’s AI is rather successful in applying a simple and rather narrow version of something that might be called functional crypto-intelligence or machine intelligence. Quite apart from this, current AI still benefits from computers that are much faster than those of previous decades and, most importantly, from fast and cheap access – via the Internet – to lots and lots of data.

While AI is great for those kinds of things, overall it is making only incremental progress towards being more than that. In other words, today’s AI is picking low-hanging fruit. Despite making quantitative progress – by showing some improvements – it does not, however, make much qualitative progress towards human-like intelligence – which includes, for example, understanding irony, sarcasm, cynicism, inference, and intuition.

In other words, even if AI engineers could program “intuition” into an AI machine, it remains rather doubtful whether AI can ever reach the level of human intelligence. In short, your AI-driven home robot will follow your command, get the orange juice out of the fridge, and bring it to you. But it might not “intuitively” check the juice’s expiry date. We do. And, worse for AI, we do hundreds of such things every day without even thinking (much) about it.

To get out of the pickle created by all-too-obvious demonstrations that artificial intelligence isn’t really that intelligent, AI’s advocates like to frame “intelligence” as simple and highly reductive problem-solving.

Yet, the much-loved problem-solving covers only a narrow part of the human world. The great news for AI is that problem-solving is, rather unsurprisingly, an area in which AI is very good and extremely successful. The good news for AI continues when solving problems is sold as intelligence.

On the downside, and worse for AI, there seems to be an inverse correlation between an AI machine’s success in learning one thing and its success in learning another.

In other words, an AI system that has learned how to play a winning game of Go won’t thereby also know how to play a winning game of chess. In machine intelligence, one does not lead to the other – sadly. AI remains, so far, trapped inside its own machine intelligence of simple puzzle-solving.

Counterpunch for more