Can the development of Artificial Intelligence be stopped?
A few weeks ago, the Future of Life Institute, a non-profit organization, published an open letter calling on artificial intelligence labs to pause the training of the most powerful AI systems for at least six months. The fear is that AI poses "profound risks to society and humanity". While the range of AI applications is astonishing, the risks may be even greater. The most manageable risks are those that can be anticipated, tested, and prevented; the worst are those we don't see coming.
The letter was signed by more than 20,000 figures from science, technology, and the social sciences. Among the most surprising signatories were Elon Musk (an OpenAI co-founder who left the organization in 2018) and Apple co-founder Steve Wozniak. A notable case is that of the writer and historian Yuval Harari, who reflected: "Artificial intelligence systems with the power of GPT-4 or greater should not become entangled in the lives of billions of people at a pace faster than cultures can safely absorb them." "A curtain of illusions could descend over the whole of humanity, and we might never be able to draw back that curtain, or even realize that it is there," he predicted.
Around the same time, the linguist and philosopher Noam Chomsky wrote: "This is part of what it means to think. To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism." "Whereas humans are limited to the kinds of explanations we can rationally conjecture, machine learning can learn both that the earth is flat and that the earth is round," he warned.
Recently, Sundar Pichai, the CEO of Google, also raised the alarm, although he did not sign the letter. "How do we develop AI systems that are aligned with human values, including morality?" he asked. "I think the effort should include not only engineers but also social scientists, ethicists, philosophers, and more."
Finally, the European Data Protection Board (EDPB) has also stepped into the discussion, since it doubts that ChatGPT and other AI tools comply with current legislation, especially regarding data protection.
Putting the development of AI on hold is impossible. Sam Altman, CEO of OpenAI, responded that the letter calling for a pause lacked "technical nuance about what should be stopped," and said that OpenAI was not working on GPT-5. "Without government involvement, it is impractical and almost impossible," said Bill Gates. Musk, despite having signed the letter, has just founded a new AI company and noted that every technology company is buying GPUs (graphics processors that, unlike CPUs, perform many computations in parallel), which are key to training these intelligent systems. Putting a brake on technological development is like trying to cover the sun with your hands.