Originally published on https://zeitwurst.wordpress.com
Words by Tommaso Lecca, in Groningen
Professionals and academics are urging a deeper focus on the ethical issues surrounding artificial intelligence (AI). Concerns are growing about how robots behave and how they will soon influence people’s lives. For example, Tesla and SpaceX CEO Elon Musk announced earlier this week that he is launching a startup called Neuralink. Its declared aim is to find a way to merge human brains with computers, so that human intelligence is not overtaken by artificial devices.
Musk’s primary concern is that AI advancement might outpace the capacity of the human brain. Over-smart robots would, in the best-case scenario, reduce humanity to the status of a pet, as he explains in this video.
Rather than integrating new technologies into the human body, the answer may lie in the philosophical and ethical principles that have long governed humanity’s relationship with technology.
“We have to think more of embedding ethical considerations inside the machines,” says Bart Verheij, Professor of AI and cognitive engineering at the University of Groningen. Verheij has already noticed a different attitude among his students: “they are also interested in philosophy, computer science, mathematics, psychology, linguistics; they have a broad perspective.”
Awareness of other disciplines’ unsolved problems is becoming essential for the AI community as it pursues ambitious results. Verheij’s long-term goal is to combine legal knowledge with robotics to develop a machine that can act as a judge. It may sound like a pipe dream, but he makes clear that “from the perspective of the AI researcher, everything is possible.” Instilling a sense of justice and teaching a machine human decision-making processes is more problematic still, considering that “there is still not a proper understanding of what ethical reasoning is in humans,” Verheij concludes.
A deeper engagement with ethical and philosophical reasoning may be an antidote to the apocalyptic scenarios predicted by Stephen Hawking in this hilarious interview with John Oliver (watch from 2:18 to 4:05).
“Social awareness is what we need,” says IBM analyst Ilias Pappas, “otherwise excitement for new technologies will lead our actions, and we’ll stop thinking about consequences.” Although the projects he is involved in all focus on advances that would make people’s lives easier, from healthcare assistance to retail data analysis, Pappas recognises that the human relationship with robots and AI might become troublesome: the use and abuse of smartphones and tablets already demonstrates that technology can turn into an addiction. “We are losing our capacity to say ‘no’ to advertisements that make us feel in desperate need of the latest version of the iPhone,” he argues, “and we often do not realise how much more adventurous and fun it would be to switch off some devices and ask for information on the street.”
On AI-related issues, he thinks that “in an ideal world, governments should invest in technology education from an early age,” so that tomorrow’s consumers will be more conscious when using a device. On the research side, he has high hopes that more professionals from different fields will focus on AI and robotics. “I think that philosophers and psychologists should be involved in the development process,” he suggests.
Ryan Wittingslow, Assistant Professor in humanities with expertise in the history of technology, echoes the difficulties already mentioned by Verheij. Instilling in a machine a sense of justice and ethics, but also kindness, friendship and “all those emotional patterns that human beings display just naively,” remains an open problem. However, concerns about technological advancement are nothing new: “In the Phaedrus (a dialogue written in the 4th century BC), Plato talks about how writing will corrode people’s memory.” At the time, writing down information was seen as a risk of becoming less intelligent by losing the capacity to remember.
What seems clear is that a divorce between humans and technology is no longer an option. As Wittingslow explains, “the earliest instances of tool use in primates come from Australopithecus (five million years ago). By the time Homo sapiens shows up, which is only 250 thousand years ago, humans already managed sophisticated tools and the use of language.”
Historia magistra vitae, history is life’s teacher, wrote Cicero, the Roman politician and intellectual, centuries after Plato. Will history repeat itself? Or will humanity regret going this far in AI research? “Don’t worry about that,” Siri concludes.