
How to Think About AI in the Future

'How societies decide to weigh caution against the speed of innovation, accuracy against explainability, and privacy against performance will determine what kind of relationships human beings develop with intelligent machines.'

John McCarthy coined the term "artificial intelligence" (AI) in 1955, in a funding proposal he wrote with his colleague Marvin Minsky and a handful of other computer scientists for a workshop they hoped to hold the following summer at Dartmouth College. Their choice of words has fueled decades of semantic quarrels ("Can machines think?") and fears of menacing robots such as HAL 9000, the sentient computer in the film 2001: A Space Odyssey, and the cyborg assassin played by Arnold Schwarzenegger in The Terminator. Had McCarthy and Minsky settled on a blander term, say, "automaton studies," Hollywood producers and journalists might have paid the idea far less attention, even as the technology raced ahead.

But McCarthy and Minsky were not thinking about the long run. They invented a new phrase for a far more parochial reason: they did not want to invite Norbert Wiener to their workshop. Wiener was one of the field's founders, a child prodigy who graduated from college at 14 and earned a Ph.D. in philosophy from Harvard four years later. To describe his work on how animals and machines use feedback to control themselves and communicate, Wiener had chosen the term "cybernetics," from the ancient Greek word for "helmsman." His 1948 book Cybernetics became a surprise best-seller, and other researchers adopted the word to describe their efforts to make computers process information the way a human brain does.

Wiener was undeniably brilliant. The trouble was that he was also an insufferable know-it-all who would have made life miserable at Dartmouth that summer. So McCarthy and Minsky avoided his word, in part to make it easier to justify leaving him off the invitation list. They were not studying cybernetics; they were studying artificial intelligence.

It was not only Wiener's personality, though. The Dartmouth workshop was meant for practitioners, and Wiener's work had grown increasingly philosophical. After Cybernetics appeared, he had begun to dwell on the social, political, and moral dimensions of technology, and he had reached some dark conclusions. He worried that one day, Frankenstein's monsters built of vacuum tubes but endowed with cunning logic might turn on their creators. "The hour is very late, and the choice between good and evil is at our door," he wrote in 1950. "We can't keep kissing the hand that beats us."

Wiener later walked back some of his most dire predictions. But now that AI is seeping into nearly every corner of life in the developed world, many people are asking the same big questions he posed more than 50 years ago. In Possible Minds, 25 contributors, including some of the most prominent names in the field, grapple with AI's most intriguing possibilities and thorniest problems. The book offers a fascinating map of AI's likely future and a glimpse of the hard choices that will shape it. How societies decide to weigh caution against the speed of innovation, accuracy against explainability, and privacy against performance will determine what kind of relationships human beings develop with intelligent machines. The stakes are high, and AI cannot advance until these trade-offs are confronted.

A MIND OF ITS OWN?

Although McCarthy and Minsky's term has entered the language, the most promising AI technique today, "deep learning," rests on a statistical approach they disdained. From the 1950s to the 1990s, most AI work involved hand-coding rules into computers; the statistical approach instead uses data to draw probabilistic conclusions. In other words, where AI once tried to enumerate all of a cat's features so that a computer could recognize one in a picture, today tens of thousands of cat pictures are fed into an algorithm, and the computer works out the patterns on its own. This "machine learning" method dates to the 1950s but worked only in limited settings back then. Its modern, far more elaborate descendant, deep learning, performs remarkably well thanks to huge gains in processing power and an explosion of data.
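To make that contrast concrete, here is a deliberately toy sketch in Python. The feature names, the data, and the simple perceptron-style update are all invented for illustration; real deep-learning systems learn millions of parameters from raw pixels, not a rule over two hand-picked features.

```python
# Old approach: a human hand-codes the rule that defines a cat.
def rule_based_is_cat(ear_pointiness: float, whisker_density: float) -> bool:
    return ear_pointiness > 0.7 and whisker_density > 0.5

# Statistical approach: the program infers its own rule from labeled examples.
examples = [  # (ear_pointiness, whisker_density, is_cat) -- made-up data
    (0.9, 0.8, 1), (0.8, 0.7, 1), (0.85, 0.9, 1),
    (0.2, 0.1, 0), (0.3, 0.4, 0), (0.1, 0.2, 0),
]

weights, bias = [0.0, 0.0], 0.0
for _ in range(100):                      # sweep over the data repeatedly
    for x1, x2, label in examples:
        predicted = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - predicted         # nudge the weights toward the label
        weights[0] += 0.1 * error * x1
        weights[1] += 0.1 * error * x2
        bias += 0.1 * error

def learned_is_cat(ear_pointiness: float, whisker_density: float) -> bool:
    return weights[0] * ear_pointiness + weights[1] * whisker_density + bias > 0

print(rule_based_is_cat(0.9, 0.8), learned_is_cat(0.9, 0.8))  # True True
```

The point of the contrast: in the first function a person wrote the definition down; in the second, the definition emerges from the data.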

The success of deep learning has revived Wiener's fears of computer monsters running amok, and the biggest debates in AI today concern safety. Bill Gates, the founder of Microsoft, and the cosmologist Stephen Hawking, who died in 2018, both voiced such worries. At a conference in 2014, the tech entrepreneur Elon Musk likened developing AI to "summoning the demon." Others, including the AI researcher Stuart Russell, the physicist Max Tegmark, and the engineer Jaan Tallinn, regard AI as a serious threat to humanity that must be addressed now.

The biggest debates in AI today revolve around safety.

AI falls into two broad categories. The first is artificial general intelligence, or AGI: systems that can reason, plan, and act much as humans do, and that might attain "superintelligence." An AGI system would command vast stores of information, process them at blinding speed, and never forget any of it. Imagine Google with a mind, and perhaps a will, of its own. The second category is narrow AI: systems that excel at specific tasks, such as driving a car, recognizing speech, or reading medical images to make diagnoses. The worry about AGI is that it could evolve beyond human control. The chief worry about narrow AI is that its human creators will fail to specify its goals precisely, with disastrous results.

Experts disagree about whether AGI is even possible. But those who believe it is worry about what an AGI system might do if it did not share human values, which it has no particular reason to do. "Humans could be seen as minor annoyances, like ants at a picnic," writes the computer scientist W. Daniel Hillis in his contribution to Possible Minds. "Our most complicated machines, like the Internet, are already too complicated for a single person to fully understand, and we may not be able to predict how they will behave in the future."

The problem lies in specifying such a system's goals, what engineers call "value alignment." The fear is not necessarily that AI will become conscious and want to kill people but that it might misconstrue its instructions.

Russell has called this "the King Midas problem," after the ancient Greek myth of the king who got his wish that everything he touched turn to gold, only to discover that he could neither eat nor drink the stuff. The canonical example in the literature is an AGI system that can perform almost any task asked of it. Instructed by a human to make paper clips, but not told how many, the system would not grasp that humans value almost everything else more than paper clips; it would turn the entire earth into a paper clip factory and then colonize other planets to mine ore for still more paper clips. (This differs from the threat of narrow AI gone wild: a narrow AI system programmed to make paper clips could do nothing else, so intergalactic office supplies are off the table.) The example is facetious, but it is deployed in earnest.

 
A man interacts with a cloud-based intelligent robot in Beijing, China, May 2019 (Xinhua / Eyevine / Redux)

MAKING AI SAFE FOR HUMANS

On the other side of the debate, critics dismiss these worries as overblown, at least for now. For all the excitement surrounding AI, today's systems remain rudimentary; they are only beginning to master tasks such as recognizing faces and parsing speech. The AI researcher Andrew Ng, of Stanford, has quipped that worrying about AGI is like worrying about "overpopulation on Mars": a great deal would have to happen first. Researchers, he argues, should concentrate on making AI work rather than on devising ways to rein it in.

The psychologist Steven Pinker goes further, arguing that the worst fears about AGI are "self-defeating." The doomsday scenarios, he writes,

depend on the premises that (1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so idiotic that they would give it control of the universe without testing how it works; and (2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding.

The notion that an uncontrollable AGI system would harm humanity rests as much on speculation as on science, and pouring resources into preventing it would be wasteful. Pinker notes that dystopian predictions ignore the norms, laws, and institutions that keep technology in check. The more convincing arguments take those forces into account and call for basic safeguards, rigorously applied. Here the history of cybersecurity offers a lesson. When engineers built the Internet, they gave little thought to securing its underlying protocols, a weakness that plagues the network to this day. AI designers should avoid that mistake and build safety into their systems from the start rather than bolt it on afterward.

Russell calls for "provably beneficial AI," a concept that applies to AGI and narrow AI alike. Engineers, he argues, should give AI systems an explicit primary objective, such as running a city's power grid, while programming them to be uncertain about people's preferences and to learn more about those preferences by observing how people act. The systems would thereby seek to "maximize human future-life preferences": an AI running a power grid should look for ways to reduce consumption rather than, say, wipe out humanity to cut the electricity bill. Such thinking, Tegmark insists, is not meant to scare people. "It's about making things safe."
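To see the shape of the idea, here is a cartoon sketch in Python, not Russell's actual formalism: the machine starts out unsure which of two invented hypotheses describes what the human operating the grid wants, updates its belief after watching the human act, and only then chooses the action with the highest expected value under that belief. The hypotheses, actions, and numbers are all made up.

```python
# How good each action looks under each hypothesis about human preferences.
hypotheses = {
    "minimize_cost":    {"cut_power_to_hospitals": 0.9,  "buy_cleaner_energy": 0.2},
    "keep_people_safe": {"cut_power_to_hospitals": -1.0, "buy_cleaner_energy": 0.6},
}
belief = {"minimize_cost": 0.5, "keep_people_safe": 0.5}  # prior: genuinely unsure

# Observation: the human paid extra to keep hospitals powered during a shortage.
# Under this toy model, that choice is far more likely if they value safety.
likelihood = {"minimize_cost": 0.1, "keep_people_safe": 0.9}
total = sum(belief[h] * likelihood[h] for h in belief)
belief = {h: belief[h] * likelihood[h] / total for h in belief}  # Bayes' rule

# Choose the action with the highest expected value under the updated belief.
def expected_value(action):
    return sum(belief[h] * hypotheses[h][action] for h in belief)

actions = ["cut_power_to_hospitals", "buy_cleaner_energy"]
print(belief)                             # now ~0.1 vs. ~0.9 in favor of safety
print(max(actions, key=expected_value))   # -> buy_cleaner_energy
```

The uncertainty is the point: the machine never treats its own guess about what people want as final, so watching humans can always change its behavior.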

The cognitive scientist Daniel Dennett proposes a more inventive answer to the safety problem. Why not require AI operators to be licensed, as pharmacists and civil engineers are? "With pressure from insurance companies and other underwriters," he writes, "regulators could require creators of AI systems to go to extraordinary lengths to search for and reveal weaknesses and gaps in their products, and to train those who are allowed to operate them." He also suggests a clever "inverted" version of the Turing test. Instead of judging a machine's ability to pass for a human, as the test usually does, Dennett's version would put the human judge on trial: until a person deeply versed in AI can expose a system's flaws, the system cannot be deployed. The idea is a thought experiment, but a clarifying one.

Such standards would subject systems to scrutiny before anything went wrong. The harder question is when those extra safety measures should be required. The algorithms that steer a self-driving car surely qualify. But what about the ones that decide which videos YouTube recommends to its users? Rules can serve society, as when YouTube demotes Flat Earth Society videos, but requiring an algorithm commissar to sign off on every line of code a company writes might be a step too far.

One issue Possible Minds largely neglects is how to balance privacy against efficiency and accuracy, another problem that any attempt to regulate AI will have to resolve. The more data an AI system can draw on, the better it performs, yet privacy rules often restrict how personal information can be collected and used. Minimizing the amount and kind of data available to AI systems may seem wise at a moment when companies and governments are hoovering up all the personal data they can with little regard for how it might be used. But if regulation shrinks the pool of data that can be processed and thereby makes products such as medical diagnostics less accurate, society may want to rethink the trade-off.

INTO THE UNKNOWN

Another problem with AI, one that runs throughout Possible Minds, is that it is not always possible to understand how an AI system reaches its decisions. This is a technical matter, not an epistemological or ethical one: the question is not whether people are clever enough to figure out how a system works but whether the system's workings can be figured out at all. As the eminent computer scientist and statistician Judea Pearl puts it in his contribution, "Deep learning has its own dynamics, it fixes itself and optimizes itself, and it most of the time gives you the right answers. But when it doesn't work, you don't know what went wrong or what needs to be fixed."

The human mind is not always right, but it can often reason its way to the correct answer. An AI system, by contrast, can fail in ways that are unexpected, mysterious, and grave. If we cannot know how it works, can we truly rely on it? This is distinct from AI's "black box" problem, in which biased data produce unfair outcomes, such as discriminatory decisions on loans, hiring, or sentencing. That problem can be addressed, as a first step, by requiring that such systems be auditable by a qualified authority. The deeper, more unsettling problem is that AI systems cannot be fully understood at all. The scientific project began in the seventeenth century, when empirical evidence was elevated above faith-based knowledge, which at the time was largely underwritten by the Catholic Church. Does the arrival of AI mean placing our faith once again in a higher power we cannot interrogate?

Society faces a tradeoff between performance and explainability.

The trouble starts with the mathematics behind deep learning, which is hard to follow even for specialists. Deep-learning systems, also called "neural networks" because they are loosely modeled on the brain's neurons and connections, consist of many interconnected nodes arranged in layers. Such a system begins by modeling reality in very general terms and then works toward finer detail: analyzing an image, it might first detect an edge, then a shape, then markings on the shape's surface, until it can eventually identify what the image contains. By matching patterns drawn from a vast trove of previously ingested images, most of them labeled with their contents, the system can predict what a new image shows with a high probability of being right. A deep-learning system can thus recognize a cat without being told what to look for, such as whiskers or pointy ears; through a cascade of discrete statistical functions, it comes to account for those features on its own. The system is not programmed but trained by the data. Its answers are inferences.
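As a bare-bones illustration of that layered structure, here is a Python sketch with weights invented for the purpose; a real network has millions of weights, and they are learned from labeled data rather than written down by hand as they are here.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each node takes a weighted sum of every input, then squashes
    it through a nonlinear function (here, the logistic sigmoid)."""
    outputs = []
    for node_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        outputs.append(1 / (1 + math.exp(-total)))
    return outputs

# An invented four-"pixel" input passes through two tiny layers: crude
# stand-ins for the general-to-specific features described above.
pixels = [0.2, 0.9, 0.4, 0.7]
hidden = layer(pixels, weights=[[1.5, -2.0, 0.5, 0.8],
                                [-0.7, 1.2, 2.1, -0.3]], biases=[0.1, -0.2])
score = layer(hidden, weights=[[2.2, -1.4]], biases=[0.3])
print(f"probability the image is a cat: {score[0]:.2f}")  # an inference, not a rule
```

Even in this miniature, nothing in the code says what a "cat" is; the verdict is just the arithmetic of the weights, which is why tracing a real network's reasoning is so hard.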

The good news is that it works. The bad news is that the mathematical functions involved are so complex that it is impossible to say how a deep-learning machine arrived at its result. The paths to a decision are so numerous that retracing the machine's steps is all but impossible. Moreover, such a system can be configured to keep improving in response to feedback, so unless its performance is frozen and those adjustments halted, there is no way to examine how it produced a given output. As the computer historian George Dyson observes in his essay, "Any system simple enough to be understandable will not be complicated enough to behave intelligently, and any system complicated enough to behave intelligently will be too complicated to understand." Despite considerable research into "explainable AI," the math so far bears out what might be called "Dyson's Law."

The repercussions matter. Society faces a trade-off between performance and explainability, and the catch is that the systems that are hardest to understand tend to perform best. Regrettably, Possible Minds handles this subject poorly. Many of its contributors treat transparency as a value in itself, but none grapples with the complications, or with the possibility that demanding transparency could make systems less effective. Consider a hypothetical AI system that makes a diagnostic test for a fatal condition 1 percent more accurate. Without the technology, there is a 90 percent chance of a correct diagnosis; with it, the chance rises to 91 percent. Are we really prepared to condemn one person in every hundred simply because we could not explain how we might have saved them? Then again, if we deploy the system, nine people in every hundred may feel that an incomprehensible golem has handed them the wrong diagnosis.
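Spelled out per hundred patients, the arithmetic of that hypothetical (using the invented 90 and 91 percent figures from the example above) looks like this:

```python
patients = 100
correct_without_ai = 90                 # explainable process, 10 misdiagnoses
correct_with_ai = 91                    # opaque process, 9 misdiagnoses

extra_correct = correct_with_ai - correct_without_ai   # 1 more person helped
opaque_errors = patients - correct_with_ai             # 9 errors no one can explain
print(extra_correct, opaque_errors)                    # -> 1 9
```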

This points to a broader shift in the relationship between people and their technology. Because no one, not even their creators, fully understands how these tools work, our deepening reliance on them erodes our autonomy. As computers have grown more capable, people have drifted further from "ground truth," the reality of the world that data attempt to represent but can capture only imperfectly. The new challenge is of a different order, because the most advanced AI technologies do not merely augment human knowledge; they surpass it.

BRAVE NEW WORLD

Possible Minds is a collection of essays marked by respect for the human mind and humility about its limits. "Human brains are not the best places for intelligence," observes the Nobel Prize-winning physicist Frank Wilczek. At the same time, the book offers a healthy dose of skepticism about the shiny new tool. "AI machine-learning algorithms as they are right now are, at their core, just plain stupid," says the computer scientist Alex Pentland. "They work, but they work by brute force."

So AI is good but also bad, brilliant but also stupid, the savior of humanity and the destroyer of worlds. As ever, a sign of genius is the ability to hold two opposing ideas in mind at the same time.
