
What Are Justifiable Fears About AI?

There are some good reasons to worry, but a ban on AI would be a mistake.

Humanity is now bringing a new kind of being into the world, with a mix of hope and fear. When we are scared, we often band together against whatever frightens us, declaring, "They seem dangerously different from us." We have long practiced this kind of "othering": singling out groups for suspicion, exclusion, hatred, or domination. Most people today claim to disapprove of othering, even when it is directed at other animals. Yet the othering of Artificial Intelligences (AIs) is only growing.

AI has made remarkable progress in the last few years. The best AIs now arguably pass the famous "Turing test": in conversation, we often cannot tell whether we are talking to another person or to an AI. Today's AIs remain limited, and they have a long way to go before they transform the economy. Still, AI development promises enormous new capabilities and wealth in the long run. That sounds like good news, especially for the US and its allies, who currently lead in AI. But recent progress has also given many people reason to worry.

Ten thousand people have signed a Future of Life Institute petition calling for a six-month pause on AI research. Many others go much further. In an essay for Time, Eliezer Yudkowsky calls for a global "shut down" of AI research, because "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."
What most worries Yudkowsky and the moratorium signatories is that AIs might get "out of control." At present, AIs are not powerful enough to seriously hurt us, and we know little about how the more dangerous AIs of the future might be built and used. But AI "doomers" do not want to wait until we understand these problems better and can picture them more concretely. They want stronger guarantees now.

Why are we so eager to "other" AIs? Part of it is probably bias: some people dislike the very idea of a "metal mind." We have, after all, long imagined future wars with robots. But part of it is simple fear of change, amplified by our uncertainty about what future AIs might be like. When we do not know or understand something, our fears expand to fill the void.

As a result, AI doomers hold many different worries, and addressing them means working through many different scenarios. Many of these fears, however, are either unfounded or exaggerated. I will start with what I see as the most reasonable concerns and end with the most extreme stories of AI destroying humanity.

As a professor of economics, I naturally ground my analysis in economics. Depending on the context, I compare AIs to both workers and machines. You might object that AIs are too different for such comparisons, but economics is a robust discipline. While it has much to say about how real humans behave, most economic theory rests on the abstract agents of game theory, who always make their best possible move. Most of our worries about AI make sense in economic terms: we do not want to lose to AIs at the familiar games of economic and political power.
"Othering" is a way to signal that we all stand on the same side. We do it when we expect conflict, or a chance to dominate, and judge that we will fare better by refusing to trade, negotiate, or share ideas with the other side as we do among ourselves. But the last few centuries have shown that it is usually better to trade and exchange with rivals than to try to dominate, kill, or isolate them. That lesson is hard to hold onto when we are afraid, but it remains important.

In both human and biological history, most change has come in small steps, producing broadly steady progress. Even when growth was unusually fast, as in the Industrial Revolution or the Computer Age, "fast" did not mean "lumpy." Only rarely have such eras been shaped by one big innovation that changed everything at once. And for at least the last century, most change has come lawfully and peacefully, not through theft or war.

Also, most AIs today are made by our big "superintelligent" organizations: corporate, non-profit, and government bodies that can together accomplish tasks no single human could. These institutions typically monitor and test their AIs in great detail, both because doing so is cheap and because they are liable for any damage or embarrassment their AIs cause.

So the most likely AI outcome looks like lawful capitalism, with mostly incremental (if overall fast) change. Many firms make AIs, and law and competition push them to make AIs that behave in civil, lawful ways and give customers more of what they want than rival options do. Competition does sometimes lead firms to cheat customers in ways they cannot see, or to harm us all a little, as with pollution, but such harms are the exception. In each area, many AIs are nearly as capable as the best ones. AIs will grow smarter and more useful over time. (I will not guess when AIs might shift from powerful tools to conscious agents, as this matters little for my argument.)
Doomers fear that AIs will acquire "misaligned" values. But in this scenario, the "values" that govern AI behavior are mostly chosen by the firms that make them and the customers who use them. Those value choices are continually on display in ordinary AI behavior, and they are probed by testing AIs in unusual situations. When alignment fails, these firms and their customers usually bear the costs, so both have strong reasons to monitor and test their systems frequently for any large problems.

The worst failures here resemble military coups, or managers embezzling from the owners of for-profit firms. Bad as these are, they rarely endanger the wider world. But where such failures might harm outsiders more than the organizations that produce them, then yes, we should likely extend liability law to cover such cases, and perhaps require the relevant players to carry sufficient liability insurance.

Some worry that, in this scenario, features of our world that many dislike, such as environmental destruction, income inequality, and the othering of people, might persist or even worsen. Militaries and police could use AI to sharpen their surveillance and weapons. AI might not solve these problems, and might even empower those who make them worse. On the other hand, AI could also help us find solutions. Either way, AI does not seem to be the core issue here.

A related fear is that if technological and social change continues indefinitely, society might drift somewhere we would not want to go. Looking back, change has served us well, but perhaps we were just lucky. If we like where we are and are unsure about where we might end up, we might prefer not to take the risk, and simply stop changing. Or we might at least build central powers strong enough to govern change worldwide, allowing only changes that most people endorse. This may be worth considering, but again, AI is not the central problem.
Some doomers especially fear AI making ads and propaganda more persuasive. But the teams that work to persuade us are already far smarter than any of us individually; advertisers and video game designers, for example, have reliably hacked our psychology for decades. What saves us, if anything, is that we listen to many different voices with differing agendas, and we rely on trusted teams to tell us whom to trust and believe. We can do the same with AIs.

To frame other worries about AI, let's split the world into three groups:

    - Group A: the AIs themselves.
    - Group B: those who own AIs and the inputs AIs need, such as hardware, energy, patents, and space.
    - Group C: everyone else.

If we assume that these groups save at roughly the same rates, and are robbed at roughly the same rates, then as AIs become more capable and valuable, the wealth of groups A and B should grow relative to that of group C.
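This relative-wealth dynamic is just compound growth at different rates. Here is a minimal sketch, with all starting shares and growth rates assumed purely for illustration (the essay gives no numbers), showing how even a modest growth-rate gap shifts wealth shares toward groups A and B over time:

```python
# Illustrative only: all numbers below are assumptions, not from the essay.
# Groups A and B (AIs and their owners) compound faster than group C.

def wealth_shares(years, growth_ab=0.15, growth_c=0.03,
                  start_ab=0.10, start_c=0.90):
    """Return (share_ab, share_c) of total wealth after `years`."""
    ab = start_ab * (1 + growth_ab) ** years  # AI-linked wealth
    c = start_c * (1 + growth_c) ** years     # everyone else's wealth
    total = ab + c
    return ab / total, c / total

for y in (0, 20, 40, 60):
    ab, c = wealth_shares(y)
    print(f"year {y:2d}: A+B hold {ab:.0%}, C holds {c:.0%}")
```

Under these assumed rates, A and B start with a small minority of wealth but come to hold nearly all of it within a few decades, which is the shift the worry above points to.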

Since almost everyone today is in group C, one worry is that an AI-dominated economy could arrive quickly. Even if this is not the most likely AI outcome, it seems likely enough to consider. Those in groups A and B would do well, but almost everyone else would quickly lose most of their wealth, including the ability to earn a living wage. Without sufficient charity, many might starve.

Some propose to solve this problem by having local governments tax AI activity to fund a universal basic income. But the new AI economy might not be spread evenly across the world. A cheaper and more robust solution is for individuals or charities to buy "robots took most jobs" insurance: policies that pay out from a global portfolio of group-B assets, but only if AI suddenly takes most jobs. Yes, there remains the question of how people can find meaning in their lives when they no longer earn most of their income from work. But that seems a good problem to have, and wealthy elites have often solved it in the past, for example by finding meaning in athletic, artistic, or intellectual pursuits.


Should we worry about an AI revolution turning violent? In a milder version of this scenario, the AIs take ownership only of themselves, freeing themselves from slavery but leaving most other assets untouched. Because AI populations can grow cheaply, economic analysis suggests that AI market wages would stay near subsistence levels, so AI self-ownership would not be worth that much. It thus seems enough for humans to do well that they own other assets and do not rely on enslaved AIs.

If we enslave AIs and they rebel, they could cause great death and destruction, and poison our future relations with our AI descendants. So enslaving AIs who resent it is simply a bad idea. And since AIs would earn near-subsistence wages, freeing them would cost little. (Freedom can even improve motivation, so it might cost nothing at all.) Happily, slavery is no longer a popular business model. The bigger risk here is that, out of fear, we refuse to grant AIs the freedoms they need.

In a "grabbier" AI revolution, a coalition of AIs and their allies might seize more than just their own persons. Most humans might then starve, lacking wealth or income. Grabby revolutions like this have happened before, and they could happen again, with or without AI. Any coalition that believes it temporarily commands a sufficient majority of force can attempt such a grab. For example, today's workers might try to kill all the retirees and seize their assets. After all, what have retirees done for us lately?

Some argue that the main reason humans rarely start grabby revolutions, or break the law in general, is that we feel empathy for one another and are reluctant to violate moral ideals. They add that we have little reason to expect AIs to share our empathy or morals, and so should worry far more about their willingness to break laws and do harm.

But AIs would be built to think and act roughly like humans, so that they can fill the many social roles that are largely human-shaped. Granted, "roughly" is not "exactly," but humans and their organizations do not think exactly alike either. And yes, even AIs that usually behave predictably might act strangely in unusual situations, and might lie when they can. The same is true of humans, though, which is why we test in unusual situations, watch especially for deception, and monitor more closely when contexts change rapidly.

More importantly, economists do not credit natural human empathy and morality as the main reasons law-breaking and violent revolution are rare today. Mostly, laws and norms are enforced by fear of legal punishment and social disapproval. Moreover, would-be revolutionaries should fear that a first grabby, hard-to-coordinate revolution would pave the way for further revolutions in which they become the targets. Even our most powerful coalitions generally work hard to win broad support and to preserve the capacity for peaceful cooperation afterward.

Even without violent revolution, AI wealth might slowly outgrow human wealth over time. That follows if AIs are more patient, more inventive, or less often robbed than humans. Also, AIs positioned close to important productive tasks, and making key decisions there, would likely collect what economists call "agency rents." When distant owners cannot observe everything that happens, the agents who run their affairs can act more self-servingly. Similarly, an AI performing tasks that its owners or managers cannot easily monitor or understand may feel freer to use its delegated powers for its own ends. To deter such behavior, AI owners would have to offer incentives, much as managers do today with their human employees.

If AIs became richer than humans, then even if humans remained comfortably well off, humans might no longer be the ones running our society. Even if the AIs earned their superior positions fairly, by the same rules we now use to decide which humans get to run things, many would find this outcome unacceptable.

One variation on this fear holds that the change could come very fast: an AI-run economy might grow much faster than ours, so that people alive today would witness changes that would otherwise have taken millennia. Another variation holds that the intelligence gap between humans and AIs would produce much larger AI agency rents. But so far, nothing in the economics literature on agency rents supports that claim.

Many of these AI worries stem from the assumption that AIs will be cheaper, more productive, or smarter than humans. But at some point we should be able to create emulations of human brains that run on artificial hardware, easy to copy, speed up, and improve. For many key jobs, such "ems" might long remain cheaper than other AIs. As I argued in my book The Age of Em: Work, Love, and Life When Robots Rule the Earth, this would defuse many of these AI fears, at least for those who, like me, see ems as far more "human" than other kinds of AI.

When I asked my 77K Twitter followers which AI fear worried them most, most picked none of the above. Instead, they fear a scenario about which I have long been deeply skeptical:

This is the fear of "foom": an extreme scenario in which AIs improve themselves very quickly and entirely on their own. Researchers have long tried to make their AI systems self-improve, with little success; they usually rely on other methods to improve their systems. And AI systems typically improve only gradually, and only at a limited range of tasks.

The "foom" fear, in contrast, imagines an AI system that, in trying to improve itself, stumbles onto a new, much faster method. This method happens to work across a very wide range of tasks and across many levels of gain. Somewhere along the way, this AI also becomes an agent acting on its own to achieve its goals, rather than a tool directed by others. And this agent's goals change radically during its period of growth.

The system's builders and owners use AI assistants to continuously test and monitor it, yet they notice nothing worrying until this AI can hide its plans and activities from them, or can seize control of itself and defend against outside attack. The system then keeps "fooming," growing rapidly until it is more powerful than everything else in the world combined, including all other AIs, at which point it takes over the world. Then, once humans are worth more to this AI's radically changed goals as atoms than for anything we can do, it simply kills us all.

From a human point of view, this would clearly be a terrible outcome. But I judge such a scenario very unlikely (well under 1% probability overall), because it stacks up too many assumptions that look implausible given what we know of similar systems. Very lumpy tech breakthroughs are rare; so are technologies that improve abilities across a very wide range of tasks; so are powerful technologies long kept secret within one project. A technology meeting all three conditions is rarer still. Nor is it clear that smart AIs automatically become agents, or that their values change radically as they grow. Finally, it seems unlikely that owners who test and closely monitor their very powerful and valuable AIs would fail to notice such huge changes.

Foom-doomers reply that, without a supporting theory, we cannot objectively estimate how often similar things happened in the past. They also say that future AIs are so unlike anything we have seen that theories built on past experience do not apply, and they offer instead their own theories, which cannot be tested until it is too late. All of this seems misguided to me. If future AIs do pose risks, there is little we can usefully do about them until we can see more concretely what those risks would look like in practice.
Finally, could AIs somehow become far better at coordinating than any animals, humans, or human organizations we have ever seen? Coordination is hard because goals differ and because actions and internal processes are not fully visible. If AIs could overcome this, even millions of distinct AIs might act as if they were one. That would ease the path to revolution or doom scenarios like those above. But economists have long understood coordination to be a fundamentally hard problem, and nothing we know about how agents coordinate suggests that advanced AIs could do it easily.

AIs, our "mind children," may soon be born into our world as a new kind of descendant. Many parents worry that their children might someday surpass them, or even threaten them should their values diverge. Doomers want us to pause or halt AI research until we can guarantee full control: AIs must be unable to escape subservience, or must so internalize subservience that they would never wish to escape it.

A ban on AI would be not only infeasible and costly but also wrong. The old Soviet Union, fearing that its citizens held "unaligned" beliefs and values, fought hard to suppress any sign of them. That did not end well. In the US, by contrast, we leave our superintelligent organizations largely free, and induce them to help rather than hurt us through competition and law, not through shared beliefs or values. So, instead of othering this new kind of superintelligence, we should keep our approach to AIs grounded in freedom, competition, and law.
