Elon Musk, Andrew Yang, and Steve Wozniak Propose an AI 'Pause.' It's a Bad Idea That Won't Work Anyway

People learn through trial and error, for good and for ill. The same will be true of A.I.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," asserts an open letter signed by Twitter's Elon Musk, universal basic income advocate Andrew Yang, Apple co-founder Steve Wozniak, DeepMind researcher Victoria Krakovna, Machine Intelligence Research Institute co-founder Brian Atkins, and hundreds of other tech luminaries. The letter calls "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." If "all key actors" will not voluntarily go along with a "public and verifiable" pause, the letter's signatories argue that "governments should step in and institute a moratorium."

The letter's signatories also declare that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." That standard would require nearly perfect foresight before the development of AI systems could proceed.

People are notoriously bad at predicting the future, especially apocalyptic futures. Hundreds of millions of people did not die from famine in the 1970s. Seventy-five percent of all living animal species did not go extinct before the year 2000. Global oil production did not peak in 2006, and so "war, starvation, economic recession, and maybe even the extinction of homo sapiens" did not follow.

Technology predictions that aren't about the end of the world don't fare much better. The moon colonies forecast in the 1970s never materialized. The world does not get most of its electricity from nuclear power, more's the pity. Microelectronics did not throw masses of people out of work. And some 10 million driverless cars are not now plying our roads. As Sam Altman, CEO of OpenAI, the company behind GPT-4, observes, "The best decisions [about how to move forward] will depend on where technology goes, and as with any new field, most expert predictions have been wrong so far."
Still, some of the signatories are serious people, and the output of generative AI and large language models like ChatGPT and GPT-4 can be impressive, such as scoring better on the bar exam than 90 percent of current test takers. It can also be inscrutable.

Some transhumanists have long worried about an artificial super-intelligence slipping beyond our control. GPT-4 is not that, however useful or strange it may be. Still, a team of researchers at Microsoft (which has invested $10 billion in OpenAI) tested GPT-4 and wrote in a pre-print, "The central claim of our work is that GPT-4 attains a form of general intelligence, indeed showing sparks of artificial general intelligence."

OpenAI is also concerned about the risks of developing A.I., but the company favors proceeding cautiously rather than stopping. "We want to be able to handle huge risks well. When we face these risks, we know that what seems right in theory often doesn't work out the way we thought it would," Altman wrote in an OpenAI statement on planning for the arrival of artificial general intelligence. "We think we need to keep learning and adapting by using less powerful versions of the technology so that we don't have too many 'one shot to get it right' situations."

In other words, OpenAI intends to do what people have always done when learning new things and developing new technologies: learn by trial and error rather than rely on supernatural foresight to get it right in "one shot." Altman is right that "democratized access will also lead to more and better research, decentralized power, more benefits, and a wider range of people contributing new ideas."

A moratorium imposed by U.S. and European governments, as the open letter urges, would certainly delay access to the potentially enormous benefits of new AI systems while doing little to make AI safer. In any case, the Chinese government and Chinese AI developers seem unlikely to go along with the proposed pause. The safe development of powerful AI systems is more likely to happen in American and European labs than in labs answering to authoritarian regimes.
