AI ‘More Profound than Fire or Electricity’
Sundar Pichai, the CEO of Google, has joined other tech leaders in calling for regulation of AI technology, which has surged in popularity over the past year.
In an interview with CBS News' 60 Minutes, Pichai said that the tech industry was "developing technology that, one day, will be far more capable than anything we've ever seen before."
“I’ve always thought of AI as the most profound technology humanity is working on. More profound than fire or electricity or anything that we’ve done in the past,” said the Google CEO.
Pichai said regulation was necessary to prevent some of the negative impacts of AI technology.
“There has to be regulation. You’re going to need laws…there have to be consequences for creating deep-fake videos, which cause harm to society.”
The rapid rise of consumer-facing AIs like OpenAI's ChatGPT and image-generating AIs like Midjourney, DALL-E, and Stable Diffusion has fueled dramatic predictions about the future; at least one well-known AI researcher has warned that AI could wipe out humanity.
One expert interviewed by 60 Minutes said that some common ways of describing today's AIs, such as calling them "sentient," are mistaken.
“They’re not sentient. They’re not aware of themselves,” said the AI researcher, James Manyika. “They can exhibit behaviors that look like that. Because keep in mind, they’ve learned from us. We’re sentient beings. We have beings that have feelings, emotions, ideas, thoughts, perspectives. We’ve reflected all that in books, in novels, in fiction. So, when they learn from that, they build patterns from that. So, it’s no surprise to me that the exhibited behavior sometimes looks like maybe there’s somebody behind it.”
Manyika said that technologies like OpenAI's GPT and Google's Bard are large language models (LLMs). They are trained on enormous amounts of human-written text and use the statistical patterns in that text to predict the responses people expect.
“It tries to predict the most probable next words, based on everything it’s learned,” said Manyika. “So, it’s not going out to find stuff, it’s just predicting the next word.”
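The next-word prediction Manyika describes can be illustrated with a toy bigram model, a drastic simplification of a real LLM (which uses neural networks with billions of parameters rather than raw word counts). The corpus and function names below are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration only, NOT a real LLM: count how often each word
# follows each other word in a tiny corpus, then "predict" the most
# probable next word, just as Manyika describes at a vastly larger scale.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Tally next-word frequencies (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on" always follows "sat" here
```

As Manyika notes, nothing here "goes out to find stuff": the model only reports which continuation was most probable in the text it learned from.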