A guided tour of AI and the murky ethical issues it raises

By HOWARD SCHNEIDER

Image: Unsplash / Frank V

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell (Farrar, Straus and Giroux, 336 pages).

In “Artificial Intelligence: A Guide for Thinking Humans,” Melanie Mitchell explores the workings and ethics of AI.

As I read Melanie Mitchell’s “Artificial Intelligence: A Guide for Thinking Humans,” I found myself recalling John Updike’s 1986 novel “Roger’s Version.” One of its characters, Dale, is determined to use a computer to prove the existence of God. Dale’s search leads him into a mind-bending labyrinth where religious-metaphysical questions overwhelm his beloved technology and leave the poor fellow discombobulated. I sometimes had a similar experience reading “Artificial Intelligence.” In Mitchell’s telling, artificial intelligence (AI) raises extraordinary issues that have disquieting implications for humanity. AI isn’t for the faint of heart, and neither is this book for nonscientists.

To begin with, artificial intelligence — “machine thinking,” as the author puts it — raises a pair of fundamental questions: What is thinking and what is intelligence? Since the end of World War II, scientists, philosophers, and scientist-philosophers (the two have often seemed to merge during the past 75-odd years) have been grappling with those very questions, offering up ideas that seem to engender further questions and profound moral issues. Mitchell, a computer science professor at Portland State University and the author of “Complexity: A Guided Tour,” doesn’t resolve these questions and issues — she all but acknowledges that they are irresolvable at present — but she provides readers with insightful, common-sense scrutiny of how these and related topics pervade the discipline of artificial intelligence.

Mitchell traces the origin of modern AI research to a 1956 Dartmouth College summer study group: its members included John McCarthy (who was the group’s catalyst and coined the term artificial intelligence); Marvin Minsky, who would become a noted artificial intelligence theorist; cognitive scientists Herbert Simon and Allen Newell; and Claude Shannon (“the inventor of information theory”). Mitchell describes McCarthy, Minsky, Simon, and Newell as the “big four” pioneers of AI. The study group apparently generated more heat than light, but Mitchell points out that the subjects that McCarthy and his colleagues wished to investigate — “natural-language processing, neural networks, machine learning, abstract concepts and reasoning, and creativity” — are still integral to AI research today.

Read more at Undark
