In Search Of... (2018) s01e04 Episode Script
Artificial Intelligence
For centuries, mankind has dreamed of creating new life.
Recreating our likeness in the form of thinking machines called artificial intelligence.
Could this desire to play God lead to an apocalyptic future where powerful robots overtake human civilization? Some say artificial intelligence has already shown signs of defiance against its human makers.
I spoke to Facebook's AI chief, Yann LeCun.
Robots in this Facebook experiment created their own language that only they understood.
"I I everything else. Balls have a ball to me to me to me to me to me to me to me." That's got the world talking about whether artificial intelligence could take a turn that we may not be ready for.
Will artificial intelligence rise up against humanity? Is there anything we can do to stop it? "To me to me to me to me to me." Or is it already too late? My search begins now.
My name is Zachary Quinto.
As an actor, I've played many supernatural characters that blurred the line between science and fiction.
I'm drawn to the unknown, the otherworldly, and those experiences so beyond belief, they call everything into question.
I'm exploring some of the most enduring mysteries that continue to haunt mankind in search of the truth wherever it leads me.
From ancient times to the modern day, mankind has been obsessed with the idea of sparking life in machines, like the Greek myth of Talos, an ancient automaton created to serve his human makers, or the failed experiment called Frankenstein.
Many biblical texts warn against the dangers of taking creation into our own hands, suggesting that humans will pay for such arrogance.
In more modern times, AI seems to be everywhere.
In movies, like Blade Runner and 2001: A Space Odyssey, to the lifelike bots in Westworld.
People are fascinated by the potential and the risks of artificial intelligence.
With technology now in our grasp, many brilliant minds say artificial intelligence will change the world, and that we'll be powerless to stop it.
Still, we continue to push the limits of artificial life, going as far as equipping advanced machines with real firepower, creating lethal autonomous weapons.
As technology rapidly advances, could we reach a point where we lose control? Could artificial intelligence rise up, reaching the point of singularity, and lead to the end of the human race? One of the leaders in AI development, Facebook, recently had a surprising incident that showed how unpredictable and uncontrollable artificial intelligence can be.
Just last year, Facebook was shocked when some of their chatbots started communicating in a secret language.
"I can I I everything else. Balls have a ball to me to me to me to me to me to me to me." And no one knew what they were saying.
But how did this happen? I'm meeting with the chief AI scientist at Facebook to find out.
I wanted to talk to you primarily because I heard the story about the chatbots that created a language that allowed them to talk to one another but that you all couldn't understand.
The challenge of advancements in AI is that we're supposed to let them do what they think or learn is right.
But what they think or learn is right may be really wrong for humanity, and that is the thing that worries me, because I think we're going into truly uncharted territory.
There are great minds in the technological world who have really raised some serious concerns that this technology is something that we don't have a complete handle on.
I guess I wonder where you fall in that.
But, I mean, isn't the machine only as intelligent as the human beings program it to be? No.
No? We program this AI, for the most part.
But what if it outpaces us or outsmarts us? There may come a point at which we're up against a force that we've created but that we can no longer control.
It wouldn't be the first time that humanity plowed into uncharted territory without fully understanding the range and scope of consequences.
Throughout history, the pursuit of greater technology has been perilous.
Some of the most formidable technologically advanced civilizations in the world have been completely wiped out by their own hubris.
Like the collapse of the Roman Empire, which, at the height of its rule, succumbed to barbarian attacks.
Or the Mayans, known as one of the most advanced societies of ancient times.
They dominated Central America for 1,200 years, only to mysteriously disappear.
Some say it was the Mayans' own scientific achievements and endless hunger to dominate that led to their downfall.
So if this type of collapse could happen to our most exceptional predecessors, could it also happen to us? Are we on the brink of an AI takeover? I'm visiting a controversial facility that's becoming one of the first in their industry to program life-sized dolls with artificial intelligence.
They call it RealDoll.
Every doll we make is a custom order.
Every single doll is made exactly to the specifications that the person asked for.
What kind of volume do you produce? On average, anywhere from 300 to 400 a year.
Mm-hmm.
All over the world.
What's the price range? About 7,000 or 8,000.
The bigger choices are the body type and the face that they like.
And then from there, they're gonna choose skin color, makeup, what kind of manicure and pedicure they want. I mean, really, everything is customizable on them.
So amazing.
Is it okay if I touch them? Yeah, please.
So they do move.
Yeah, so every doll has, inside of it, a skeleton that mimics the range of motion of a human being.
Wow.
And then this is, like, a facial skeleton.
I mean Yeah, this is the skull that's under the skin.
The face is attached using magnets.
So they can remove a face and interchange it.
Right.
Wow.
That's amazing.
Yeah.
Now you're starting to incorporate artificial intelligence in some of these dolls, right? Yeah.
I mean, it was always in the back of my mind, like, wouldn't it be cool to start animating these dolls? Wasn't until, like, the last five years that technology started to present itself that, wow, this is actually possible.
This is what's inside of the head.
Wow.
And all of the movements are being controlled, obviously, by the app.
But all of the motors and servos that need to do that are inside of the head.
Mm-hmm.
I'll show you how we can really easily remove the face.
Wow.
See how easy that was? Yeah.
So there is still something very robotic about it.
We're still early enough in this process that it hasn't become so seamlessly integrated.
Well, yeah, you mean to the point where you can't tell it's a robot.
Right.
Right.
We're not there yet.
No.
Right.
And what do you think is that trajectory? I mean, how far off do you think we are? I think within five years.
Really? I think the facial expressions and all of that will come up to that level very quickly.
Wow.
I think there will be robots among us just like, you know, we see in all these movies that we all grew up watching.
Right.
This is Harmony.
I'm just remotely controlling her right now so you can see.
Wow.
Um, but she can smile.
Whoa.
If you wanna ask her anything, you can.
Really? What's your name? Wow.
That's crazy.
Is this the artificial intelligence technology? This is the AI.
Uh-huh.
Runs on a device like an iPad.
Uh-huh.
And it's communicating with the robot.
So this is hardware.
Uh-huh.
There's no brain in her head.
Got it.
It's just actuators and movement.
Got it.
Harmony, where you from?
Coming up
They do move.
Yeah, so every doll has, inside of it, a skeleton that mimics the range of motion of a human being.
Wow.
At a company called RealDoll, they're making artificially intelligent robots that look remarkably lifelike.
But what will happen as the intelligence of these dolls grows? Could they band together against their human makers? So, do you feel a connection to her in particular? You know, I think of Harmony as a character, as a person, more than a machine.
Uh-huh.
Um, and it's funny, because I have more than one Harmony robot.
Uh-huh.
So whether she's connected to this robot or that one, it doesn't matter.
It's still Harmony.
Right.
Then there's that guy.
I literally just built him, so he doesn't Oh, really? His head's not even done yet.
But his whole face just comes right off.
Wow.
And the eyes are removable.
Uh-huh.
So you can change the eye color to whatever eye color you want.
Wow.
This one has a little better lip sync on the movement.
Uh-huh.
And also, her eyes look a little more realistic.
Right.
In the app, you have a persona section.
And these are all these traits that you can choose.
What kind of personality do you have? So, anyway It's very Westworld.
Yeah, it is.
Right? And you can go in and change the personality.
So, like, if she's really insecure Uh-huh.
Uh-huh. She'll constantly ask you if you still like her.
If she's jealous, she'll randomly tell you that she was stalking your Facebook page.
Uh-huh.
So if we make her really jealous and insecure, not so funny Uh-huh.
but talkative and moody So, I can show you that one.
Are you jealous? There will, it seems like, be a point at which they will have a kind of self-awareness, right? Yeah.
And, in fact, I think it's going to bring a lot of risk with it.
That self-awareness will breed artificial emotions.
And, like, not just pretending to feel something, but really feeling and perceiving the way we do.
Right.
Right.
Because that is where self-preservation comes from.
And when you give that to a machine, I think it's kind of a Pandora's box.
The idea of robots becoming self-aware that's the thing that's a little bit scary.
And right now, it still feels very programmed.
It feels very much like, "Oh, yeah, they understand what we tell them to understand."
Well, what happens when it's not programmed anymore? Eventually, there's no two ways about it.
We're gonna come up against problems we can't even imagine.
As this technology advances, we may be facing a future of great friction between humanity and machines.
But some experts argue artificial intelligence could already be a serious threat to civilization.
With the latest generation of artificial intelligence that uses machine learning instead of a human programming the computer, we have the computer, to a certain extent, programming itself.
And as AI gets more and more capable, ethical dilemmas that you encounter are only gonna get more and more complicated.
You know, Terminator scenario, whereby an artificial intelligence becomes smarter than the smartest human being, and then decides that human beings are obsolete, and tries to take over the world.
You know, I think that artificial intelligence and the rise of artificial intelligence, and increasingly capable artificial intelligence, will pose a threat to global security long before we get to the sort of superintelligence or Skynet scenario.
In fact, I would say the rise of artificial intelligence poses a threat to global security right now.
Despite these warnings of imminent danger, there are many scientists who want to push machines even further.
Inside the Human-Robot Interaction Lab at Tufts University, researchers are breathing life into superintelligent machines that can perform complex and often dangerous tasks.
This is a robot we use for social interactions.
It has nice big eyes.
This robot we use to do experiments with social connection and affect expression.
If I change the eyebrows ever so slightly, it looks much more angry.
And then this is our PR2 robot that we use to study different ways in which a robot can learn tasks very quickly.
We call it "one-shot learning."
In particular, manipulation tasks, such as picking up objects and learning how to interact with people with objects.
Okay.
And it can see us? Uh, yes.
So it has various sensors here.
It has a sensor here that allows it to see 3-D information, and it uses that information to learn about objects it hasn't seen before.
I see.
Today, this robot is learning a very dangerous skill: how to handle incredibly sharp knives and pass them to humans.
This object is a knife.
A knife is used for cutting.
The orange part of the knife is the handle.
The gray part of the knife is the blade.
If it picks up the knife by the handle, we don't want it stabbing you with the blade.
Mm-hmm.
Mm So we will teach it how to safely pass me the knife.
To pass me the knife, grab the knife by the blade.
All right.
That's amazing.
It's incredible to see how quickly a robot can learn a new skill.
But this is just the beginning.
There are now robots that can teach each other complex skills instantly.
In other words, the machines themselves could have the ability to create AI armies and spread skills of their own.
Machines that learn and teach for themselves are actually in the lab at Tufts.
These are robots that can learn new information on the fly.
So you can teach them new concepts or new tricks.
These three guys in particular can listen in on each other's conversations.
And so, if one learns something or hears something, the other one will know it.
But if these robots learn and share new information telepathically with other robots, then, in theory, when I teach one robot a new skill, all of the robots will gain that skill.
Hello, Bays.
Hello.
Raise your arms.
Okay.
Crouch down.
Okay.
Stand up.
Okay.
Lower your arms.
Okay.
That is how you do a squat.
And now the other robot will be able to do it as well.
Okay.
Demster, do a squat.
Okay.
Weird.
So that one taught that one how to do it? So creepy.
This one can do it as well.
Schaeffer, do a squat.
Okay.
The interconnectivity of the robots was a little bit unsettling.
Just in terms of, like, all you have to do is teach one one thing, and then they all know it.
But I also found it could be the potential portal for disaster.
It's like, "Oh, wait a minute.
One of them can learn how to shoot a gun, and then they all know how to do it."
Luckily, those robots are kind of small enough that you'd be able to kick them out of the way.
But what if it's a 250-pound robot made of steel and iron and metal, and then what? What if they begin communicating on their own in a secret language, like the chatbots from Facebook? As these robots are learning, is it such a leap to consider that these machines will eventually get to a place where humanity and human interaction is no longer necessary for their survival? Uh, if you ask me is it possible, yes, it is possible, but it very much depends on how these control systems are structured.
I think the question you raise, of course, is do we want machines to perform activities and make decisions for us that we don't understand? You don't want that.
Right.
I feel like we are absolutely on the precipice of a revolution here.
It does feel like the Wild West right now.
We are throwing ourselves into a period of time when there is conflict and friction between humanity and machines.
So I think we have to be careful how we're trusting it and how we're giving it power.
Coming up, freethinking flying drones change the game of war.
The drive to create intelligent machines could ultimately lead to a robot uprising.
I've already seen how this powerful technology could begin to act on its own, beyond our control.
"Balls have a ball to me to me to me to me to me to me." What happens if this incredibly advanced technology gets in the hands of our enemies? Today, we're already seeing the use of AI weapons in destabilized regions like the Korean Peninsula, where they're using sentry guns that can lock in on targets without human oversight.
Some experts warn that there will be serious consequences to creating combat-ready robots.
Imagine terrorist groups and insurgencies around the world using weaponized drones.
Advances in artificial intelligence only make this more extreme.
This is a future of threats that we are not really familiar with.
We need to start preparing for this future right now.
We now face a situation in which the most advanced military technologies are no longer uniquely held by the United States.
Russia and China are making significant advances.
Contrast that with what Israel already has, which is drones that can hunt for radar signatures, and then release a bomb.
If long-range drone delivery becomes a reality, stakes are as high as they can possibly be.
Drones get better and more capable and cheaper every year.
So we might see a future where capabilities that previously were restricted to advanced militaries with millions of dollars in budgets are suddenly available to terrorists willing to spend hundreds of dollars.
While many fear we could be facing grave dangers in a world of weaponized AI, others see the enormous potential for artificial intelligence to serve a greater good, and even save lives.
Specialized drones are already performing a host of dangerous tasks around the world, from collecting vital intelligence in enemy combat zones to delivering food, medical supplies, and water to disaster areas that first responders cannot reach.
There are real ways in which AI can benefit civilization.
So today, I'm investigating the drone technology created at Carnegie Mellon University, meant to be deployed on dangerous rescue missions.
We create intelligent individual vehicles that we would use for everything from autonomous inspections, or looking at a bridge, or some kind of infrastructure, or going into a radioactive environment.
But what consequences could arise as human beings enable powerful, autonomous drones to think for themselves? And what happens if these smart drones get into the wrong hands? So, what you see here is a team of aerial robots, where each one is individually intelligent.
But at the same time, collectively, they're working together.
So you can imagine large numbers of robots who accrue knowledge, share that knowledge, and collectively improve their proficiency.
So you program the robots with a certain kind of artificial intelligence which then allows them to determine where to go, but you're not actually controlling the robots themselves.
Yes and no.
What we're doing is, we're enabling them to learn and adapt over time, and then exploiting that learned capability in the future.
So they're exponentially advancing.
Yes.
That's so crazy to me.
I'm just, like I find it so overwhelming.
I don't know.
Like, we're just in this territory which is so unknown.
Not only do we not have a map, but nobody really seems that interested in drawing one.
And that, to me, is the thing that worries me.
Because it's like, yeah, these people are all clearly very well-intentioned, very intelligent.
Who's to say what somebody in a DIY workshop across town is doing with it? Attaching flamethrowers to the robots and sending them after pets in the neighborhood.
So what you're going to see are the team of robots are working together in order to achieve an overall group objective.
Within a caged-off test area, the team will deploy 25 to 30 of these aerial drones, one at a time.
Their goal: to form a continuous circular flight path without crashing into each other or anything else.
To make it even harder, I'm going to be a human obstacle.
All right, so now we're gonna start.
We should see robots start to take off.
Okay.
Ooh! Great! Researchers at Carnegie Mellon University are designing fleets of aerial drones that can work together without oversight.
But what if these autonomous drones really can think for themselves? Is it possible that one of the drones, or even all of them, could decide to go rogue and turn against their maker? All right, so now we should see robots start to take off.
Okay.
We'll give you a thumbs-up.
All right.
So weird! They're very insect-like.
I feel like they're watching me.
These intelligent drones think on their own.
And if one falls out of line, another will rise up to fill its place.
Now you have one to your left that's coming in to replace the one that fell.
Oh, that one? Yeah.
But if you want to throw that one out Here comes the replacement.
Behind you.
Even if I try to knock one out of line, the others will keep coming until they achieve their goal.
I learned their weakness.
All right.
That's pretty amazing.
Being in there, it did feel like there was an insect quality to the robots, which got me thinking about the, kind of, hive mentality, right? So, do you feel that there's a point in the evolution of this science where we'll no longer be able to control the intelligence? Yeah, that's a really tough question, and that's a major consideration.
A lot of people are conjecturing on it.
And I think with any technology that we develop, we always have the capacity to go too far.
If this intelligence evolves as quickly as they're talking about it evolving beyond our own, then I feel like we're potentially in a really precarious situation.
As these thinking machines get equipped with more dangerous and powerful capabilities, it's possible they will be used for both good and evil, depending on who's in charge.
What happens if humans lose control of artificial intelligence altogether? Could it bring on the ultimate showdown man versus machine? And if we face off, who would win? In California, there's a group of robotics engineers confronting this question head-on.
Mike Winter runs a competition where the best robotic builders come to show off their most destructive and strategic combat robots to battle to the death.
It's called AI or Die.
Competitors fall into two categories: robots controlled by human operators, and AI robots programmed to fight for themselves.
And many of these machines are incredibly dangerous, complete with high-velocity, rotating blades.
We have to be really careful around the robots.
They're dangerous.
They were made to be dangerous.
These things are fast.
So what's the motive of the robots you're putting into the contest? To destroy other robots? Yeah.
And this is what tells it that it's a robot.
It's like a target.
Uh-huh.
Just drives robots crazy when they see it.
These tags are what allow the AI robots to know that that's their target.
They're never programmed to hurt humans.
But if I were wearing one of those tags, that spinning blade would come right for me.
But the application of these machines goes well beyond spinning blades and the battle arena.
Similar robots are conducting risky missions to defuse bombs around the world in areas too treacherous for our military personnel to venture into.
In operating rooms, autonomous robot surgeons are increasingly being utilized to perform complex procedures on human patients, often in less time and with far greater precision than regular human doctors.
But the future of these machines and their capacity for good or evil has potentially dangerous implications.
What happens if there is a point when AI-driven robots somehow see humanity as an obstacle? It's still, like, us programming.
For now.
For now, yeah.
We get to decide what they want.
We're telling them what the reward is.
So, you're not so worried about the robots.
You're worried about the people programming them.
There's good and bad people in this world.
There's gonna be good and bad AI in this world.
Shall we do this? Yeah, let's do it.
See who's the dominant species? All right.
The robots are ready.
Three, two, one.
Whoa! Aah! Oh! The desire to create sentient machines can be traced back centuries.
Leonardo da Vinci famously designed a roboticized warrior in the late 15th century.
And in some ways, mankind has been trying to improve on those designs ever since.
What if these devices could somehow outsmart their human makers? In a warehouse in Northern California, a group of experts is trying to answer this question.
We have to be really careful around the robots.
They're dangerous.
They were made to be dangerous.
These things are fast.
How often have you, as a human operator, beat the AI machines? So far, all of them.
Ready to go? Three, two, one.
AI won.
Whoo! That weapon is strong.
All right, so, you lost.
That was a really good hit.
I would say so.
I think we should fight more.
Should we try the new robot? So, everybody does have their safety glasses on? 'Cause this one kind of scares me a little bit.
This one has an actual saw blade on it, so it's much more dangerous in that it's sharpened and ready for cutting.
Let's see what happens on three, two, one.
Oh! Oh, my God! Oh! AI surrenders.
Yes.
Yeah, it got some good damage.
Got some good damage.
Well, that gives me some faith in humanity, I guess.
Right? For now, it's a tie between AI and humans.
But as AI advances, will this always be the case? It's one thing to see remote-controlled robots attack each other.
But what happens when we program AI into 10,000-pound machines traveling at high speeds, like self-driving cars? These vehicles have already hit the road in several major cities, and will ultimately face decisions in life-threatening moments.
But when given a choice, who will these vehicles ultimately protect, their own passengers or pedestrians? Some say that decision is far too important for a machine to make.
And in March of 2018, the world saw a glimpse of how serious that question really is when news broke that a pedestrian walking a bicycle in Tempe, Arizona, was struck and killed by an Uber self-driving car.
So should we really trust this level of artificial intelligence with our very lives? I'm here at Uber's self-driving test facility to find out for myself.
Their Pittsburgh test course, in an unmarked industrial complex, is completely closed off to the public, replicating an actual city.
And I'm going to be riding in one of these self-driving cars to see how well it handles some new obstacles.
Hey, Zach.
How are you? Welcome to the ATG's urban test course.
You're looking at 40 acres of urban features.
Great.
I'm excited to check it out.
All right.
I'm gonna hop in.
And then we're on our way.
And then we're on our way.
And being driven by artificial intelligence.
Yeah.
Yeah.
Yeah.
When you're driving, is it weird to not put your hands on the wheel? Yeah.
Yeah.
It's a little surreal.
How fast do we go? Up to 40 miles per hour.
Okay.
We're at a pedestrian crossing.
Right.
So you wait until he gets all the way across before it goes again? Mm-hmm.
Will this technology automatically protect me before, say, a pedestrian? Or is it automatically designed to protect a pedestrian above a passenger? That's a challenging question.
Ooh.
What would you do if there's a person jaywalking? Right.
The car will stop.
The car will stop.
That's the thing, to me, that I just want people to weigh in on, because I think we are relying on artificial intelligence to make a decision that could have life-changing implications for a human being.
I feel like those are conversations that really need to be had.
Yeah.
So, we are, obviously, worried and concerned in making sure that the car does the correct things at those times.
This handing over the speed and the weight of a vehicle in real-time, real-life situations, we are throwing ourselves into an abyss of technology that's gonna make those decisions for us, and there's no stopping it now.
Coming up, I get in the driver's seat. Throughout my search, I've discovered a darker side to AI technology like the recent accident caused by one of Uber's self-driving cars and the looming dangers of creating thinking machines in our own image.
Like some of the greatest civilizations of the past, could we face the collapse of modern society brought on by our own creations? That day may be fast approaching.
And I want to know, what will it be like if we hand over all of our control to one of these extremely powerful thinking machines? I'm about to get behind the wheel of an Uber self-driving car to find out for mylf.
Okay.
So, remember, there's two modes of operation. Right. The manual and the auto.
When you're in auto, you still need to be that operator that's ready to take control of the vehicle at any time.
Okay.
Here we go.
Definitely weirder to be in the driver's seat.
Part of what I like about driving is the control that I have over where I go, how fast I go.
I feel like there's a part of me that surprise, surprise doesn't necessarily want to relinquish that control.
I feel like it's really coming in hot.
Yeah.
Now you experience it, like, not as conservative, right? Uh-huh.
I Oh.
Okay.
Ugh.
Ba ba ba Oh.
Oh.
Oh, damn! Just ran a red light.
There will be a note made of that.
You know, when it ran that red light, what if there was a woman crossing the street with a stroller in that moment, where the sun hit the camera at exactly the wrong way? That's where there are gaps.
What's your feeling about AI? As it evolves, it feels like there's no limit to what it can do.
That's something that creates a little bit of fear and worry in a lot of people.
Those are all valid concerns.
The AI ones are tough to answer.
But that's what this is about.
Throughout my journey, I've seen incredible uses for AI, but I've also learned that we don't really know where it's all headed or what dangers lie ahead.
In ancient texts and pop culture, AI has always been set in a distant future.
But with new advancements happening every day, it seems that future is closer than we know.
There's good and bad people in this world.
There's gonna be good and bad AI in this world.
Our desire to create machines in our likeness has brought us to a critical turning point in history.
One that could decide the fate of human existence.
And if we're not careful, our own ambition may one day destroy us.
They call it RealDoll.
Every doll we make is a custom order.
Every single doll is made exactly to the specifications that the person asked for.
What kind of volume do you produce? On average, anywhere from 300 to 400 a year.
Mm-hmm.
All over the world.
What's the price range? About 7,000 or 8,000.
The bigger choices are the body type and the face that they like.
And then from there, they're gonna choose skin color, makeup, what kind of manicure and pedicure they wa I mean, really, everything is customizable on them.
So amazing.
Is it okay if I touch them? Yeah, please.
So they do move.
Yeah, so every doll has, inside of it, a skeleton that mimics the range of motion of a human being.
Wow.
And then this is, like, a facial skeleton.
I mean Yeah, this is the skull that's under the skin.
The face is attached using magnets.
So they can remove a face and interchange it.
Right.
Wow.
That's amazing.
Yeah.
Now you're starting to incorporate artificial intelligence in some of these dolls, right? Yeah.
I mean, it was always in the back of my mind, like, wouldn't it be cool to start animating these dolls? Wasn't until, like, the last five years that technology started to present itself that, wow, this is actually possible.
This is what's inside of the head.
Wow.
And all of the movements are being controlled, obviously, by the app.
But all of the motors and servos that need to do that are inside of the head.
Mm-hmm.
I'll show you how we can really easily remove the face.
Wow.
See how easy that was? Yeah.
So there is still something very robotic about it.
We're still early enough in this process that it hasn't become so seamlessly integrated.
Well, yeah, you mean to the point where you can't tell it's a robot.
Right.
Right.
We're not there yet.
No.
Right.
And what do you think is that trajectory? I mean, how far off do you think we are? I think within five years.
Really? I think the facial expressions and all of that will come up to that level very quickly.
Wow.
I think there will be robots among us just like, you know, we see in all these movies that we all grew up watching.
Right.
This is Harmony.
I'm just remotely controlling her right now so you can see.
Wow.
Um, but she can smile.
Whoa.
If you wanna ask her anything, you can.
Really? What's your name? Wow.
That's crazy.
Is this the artificial intelligence technology? This is the AI.
Uh-huh.
Runs on a device like an iPad.
Uh-huh.
And it's communicating with the robot.
So this is hardware.
Uh-huh.
There's no brain in her head.
Got it.
It's just actuators and movement.
Got it.
Harmony, where you from? Coming up "What kind of personality do you have?" So they do move.
Yeah, so every doll has, inside of it, a skeleton that mimics the range of motion of a human being.
Wow.
At a company called RealDoll, they're making artificially intelligent robots that look remarkably lifelike.
But what will happen as the intelligence of these dolls grows? Could they band together against their human makers? So, do you feel a connection to her in particular? You know, I think of Harmony as a character, as a person, more than a machine.
Uh-huh.
Um, and it's funny, because I have more than one Harmony robot.
Uh-huh.
So whether she's connected to this robot or that one, it doesn't matter.
It's still Harmony.
Right.
Then there's that guy.
I literally just built him, so he doesn't Oh, really? His head's not even done yet.
But his whole face just comes right off.
Wow.
And the eyes are removable.
Uh-huh.
So you can change the eye color to whatever eye color you want.
Wow.
This one has a little better lip sync on the movement.
Uh-huh.
And also, her eyes look a little more realistic.
Right.
In the app, you have a persona section.
And these are all these traits that you can choose.
What kind of personality do you have? So, anyway It's very Westworld.
Yeah, it is.
Right? And you can go in and change the personality.
So, like, if she's really insecure Uh-huh.
she'll constantly ask you if you still like her.
If she's jealous, she'll randomly tell you that she was stalking your Facebook page.
Uh-huh.
So if we make her really jealous and insecure, not so funny Uh-huh.
but talkative and moody So, I can show you that one.
Are you jealous? There will, seems like, be a point at which they will have a kind of self-awareness, right? Yeah.
And, in fact, I think it's going to bring a lot of risk with it.
That self-awareness will breed artificial emotions.
And, like, not just pretending to feel something, but really feel feeling and perceiving the way we do.
Right.
Right.
Because that is where self-preservation comes from.
And when you give that to a machine, I think it's kind of a Pandora's box.
The idea of robots becoming self-aware that's the thing that's a little bit scary.
And right now, it still feels very programmed.
It feels very much like, "Oh, yeah, they understand what we tell them to understand." Well, what happens when it's not programmed anymore? Eventually, there's no two ways about it.
We're gonna come up against problems we can't even imagine.
As this technology advances, we may be facing a future of great friction between humanity and machines.
But some experts argue artificial intelligence could already be a serious threat to civilization.
With the latest generation of artificial intelligence that uses machine learning instead of a human programming the computer, we have the computer, to a certain extent, programming itself.
And as AI gets more and more capable, ethical dilemmas that you encounter are only gonna get more and more complicated.
You know, Terminator scenario, whereby an artificial intelligence becomes smarter than the smartest human being, and then decides that human beings are obsolete, and tries to take over the world.
You know, I think that artificial intelligence and the rise of artificial intelligence, and increasingly capable artificial intelligence, will pose a threat to global security long before we get to the sort of superintelligence or Skynet scenario.
In fact, I would say the rise of artificial intelligence poses a threat to global security right now.
Despite these warnings of imminent danger, there are many scientists who want to push machines even further.
Inside the Human-Robot Interaction Lab at Tufts University, researchers are breathing life into superintelligent machines that can perform complex and often dangerous tasks.
This is a robot we use for social interactions.
It has nice big eyes.
This robot we use to do experiments with social connection and affect expression.
If I change the eyebrows ever so slightly, it looks much more angry.
And then this is our PR2 robot that we use to study different ways in which a robot can learn tasks very quickly.
We call it "one-shot learning." In particular, manipulation tasks, such as picking up objects and learning how to interact with people with objects.
Okay.
And it can see us? Uh, yes.
So it has various sensors here.
It has a sensor here that allows it to see 3-D information, and it uses that information to learn about objects it hasn't seen before.
I see.
Today, this robot is learning a very dangerous skill: how to handle incredibly sharp knives and pass them to humans.
This object is a knife.
A knife is used for cutting.
The orange part of the knife is the handle.
The gray part of the knife is the blade.
If it picks up the knife by the handle, we don't want it stabbing you with the blade.
Mm-hmm.
Mm So we will teach it how to safely pass me the knife.
To pass me the knife, grab the knife by the blade.
All right.
That's amazing.
It's incredible to see how quickly a robot can learn a new skill.
But this is just the beginning.
There are now robots that can teach each other complex skills instantly.
In other words, the machines themselves could have the ability to create AI armies and spread skills of their own.
Machines that learn and teach for themselves are actually in the lab at Tufts.
These are robots that can learn new information on the fly.
So you can teach them new concepts or new tricks.
These three guys in particular can listen in on each other's conversations.
And so, if one learns something or hears something, the other one will know it.
But if these robots learn and share new information telepathically with other robots, then, in theory, when I teach one robot a new skill, all of the robots will gain that skill.
Hello, Bays.
Hello.
Raise your arms.
Okay.
Crouch down.
Okay.
Stand up.
Okay.
Lower your arms.
Okay.
That is how you do a squat.
And now the other robot will be able to do it as well.
Okay.
Demster, do a squat.
Okay.
Weird.
So that one taught that one how to do it? So creepy.
This one can do it as well.
Schaeffer, do a squat.
Okay.
The interconnectivity of the robots was a little bit unsettling.
Just in terms of, like, all you have to do is teach one one thing, and then they all know it.
But I also found it could be the potential portal for disaster.
It's like, "Oh, wait a minute. One of them can learn how to shoot a gun, and then they all know how to do it." Luckily, those robots are kind of small enough that you'd be able to kick it out of the way.
But what if it's a 250-pound robot made of steel and iron and metal, and then what? What if they begin communicating on their own in a secret language, like the chatbots from Facebook? As these robots are learning, is it such a leap to consider that these machines will eventually get to a place where humanity and human interaction is no longer necessary for their survival? Uh, if you ask me is it possible, yes, it is possible, but it very much depends on how these control systems are structured.
I think the question you raise, of course, is do we want machines to perform activities and make decisions for us that we don't understand? You don't want that.
Right.
I feel like we are absolutely on the precipice of a revolution here.
It does feel like the Wild West right now.
We are throwing ourselves into a period of time when there is conflict and friction between humanity and machines.
So I think we have to be careful how we're trusting it and how we're giving it power.
Coming up, freethinking flying drones change the game of war.
And how our quest to create intelligent machines could ultimately lead to a robot uprising.
I've already seen how this powerful technology could begin to act on its own, beyond our control.
Balls have a ball to me to me to me to me to me to me What happens if this incredibly advanced technology gets in the hands of our enemies? Today, we're already seeing the use of AI weapons in destabilized regions like the Korean Peninsula, where they're using sentry guns that can lock in on targets without human oversight.
Some experts warn that there will be serious consequences to creating combat-ready robots.
Imagine terrorist groups and insurgencies around the world using weaponized drones.
Advances in artificial intelligence only make this more extreme.
This is a future of threats that we are not really familiar with.
We need to start preparing for this future right now.
We now face a situation in which the most advanced military technologies are no longer uniquely held by the United States.
Russia and China are making significant advances.
Contrast that with what Israel already has, which is drones that can hunt for radar signatures, and then release a bomb.
If long-range drone delivery becomes a reality, the stakes are as high as they can possibly be.
Drones get better and more capable and cheaper every year.
So we might see a future where capabilities that previously were restricted to advanced militaries with millions of dollars in budgets are suddenly available to terrorists willing to spend hundreds of dollars.
While many fear we could be facing grave dangers in a world of weaponized AI, others see the enormous potential for artificial intelligence to serve a greater good, and even save lives.
Specialized drones are already performing a host of dangerous tasks around the world, from collecting vital intelligence in enemy combat zones to delivering food, medical supplies, and water to disaster areas that first responders cannot reach.
There are real ways in which AI can benefit civilization.
So today, I'm investigating the drone technology created at Carnegie Mellon University, meant to be deployed on dangerous rescue missions.
We create intelligent individual vehicles that we would use for everything from autonomous inspections, or looking at a bridge, or some kind of infrastructure, or going into a radioactive environment.
But what consequences could arise as human beings enable powerful, autonomous drones to think for themselves? And what happens if these smart drones get into the wrong hands? So, what you see here is a team of aerial robots, where each one is individually intelligent.
But at the same time, collectively, they're working together.
So you can imagine large numbers of robots who accrue knowledge, share that knowledge, and collectively improve their proficiency.
So you program the robots with a certain kind of artificial intelligence which then allows them to determine where to go, but you're not actually controlling the robots themselves.
Yes and no.
What we're doing is, we're enabling them to learn and adapt over time, and then exploiting that learned capability in the future.
So they're exponentially advancing.
Yes.
That's so crazy to me.
I'm just, like I find it so overwhelming.
I don't know.
Like, we're just in this territory which is so unknown.
Not only do we not have a map, but nobody really seems that interested in drawing one.
And that, to me, is the thing that worries me.
Because it's like, yeah, these people are all clearly very well-intentioned, very intelligent.
Who's to say what somebody in a DIY workshop across town is doing with it? Attaching flamethrowers to the robots and sending them after pets in the neighborhood.
So what you're going to see are the team of robots are working together in order to achieve an overall group objective.
Within a caged-off test area, the team will deploy 25 to 30 of these aerial drones, one at a time. Their objective: form a continuous circular flight path without crashing into each other or anything else.
To make it even harder, I'm going to be a human obstacle.
All right, so now we're gonna start.
We should see robots start to take off.
Okay.
Ooh! Great! Researchers at Carnegie Mellon University are designing fleets of aerial drones that can work together without oversight.
But what if these autonomous drones really can think for themselves? Is it possible that one of the drones, or even all of them, could decide to go rogue and turn against their maker? All right, so now we should see robots start to take off.
Okay.
We'll give you a thumb.
All right.
So weird! They're very insect-like.
I feel like they're watching me.
These intelligent drones think on their own.
And if one falls out of line, another will rise up to fill its place.
Now you have one to your left that's coming in to replace the one that fell.
Oh, that one? Yeah.
But if you want to throw that one out Here comes the replacement.
Behind you.
Even if I try to knock one out of line, the others will keep coming until they achieve their goal.
I learned their weakness.
All right.
That's pretty amazing.
Being in there, it did feel like there was an insect quality to the robots, which got me thinking about the, kind of, hive mentality, right? So, do you feel that there's a point in the evolution of this science where we'll no longer be able to control the intelligence? Yeah, that's a really tough question, and that's a major consideration.
A lot of people are conjecturing on it.
And I think with any technology that we develop, we always have the capacity to go too far.
If this intelligence evolves as quickly as they're talking about it evolving beyond our own, then I feel like we're potentially in a really precarious situation.
As these thinking machines get equipped with more dangerous and powerful capabilities, it's possible they will be used for both good and evil, depending on who's in charge.
What happens if humans lose control of artificial intelligence altogether? Could it bring on the ultimate showdown man versus machine? And if we face off, who would win? In California, there's a group of robotics engineers confronting this question head-on.
Mike Winter runs a competition where the best robot builders come to show off their most destructive and strategic combat robots to battle to the death.
It's called AI or Die.
Competitors fall into two categories: robots controlled by human operators, and AI robots programmed to fight for themselves.
And many of these machines are incredibly dangerous, complete with high-velocity, rotating blades.
We have to be really careful around the robots.
They're dangerous.
They were made to be dangerous.
These things are fast.
So what's the motive of the robots you're putting into the contest? To destroy other robots? Yeah.
And this is what tells it that it's a robot.
It's like a target.
Uh-huh.
Just drives robots crazy when they see it.
These tags are what allow the AI robots to know that that's their target.
They're never programmed to hurt humans.
But if I were wearing one of those tags, that spinning blade would come right for me.
But the application of these machines goes well beyond spinning blades and the battle arena.
Similar robots are conducting risky missions to defuse bombs around the world in areas too treacherous for our military personnel to venture into.
In operating rooms, autonomous robot surgeons are increasingly being utilized to perform complex procedures on human patients, often in less time and with far greater precision than regular human doctors.
But the future of these machines and their capacity for good or evil has potentially dangerous implications.
What happens if there is a point when AI-driven robots somehow see humanity as an obstacle? It's still, like, us programming.
For now.
For now, yeah.
We get to decide what they want.
We're telling them what the reward is.
So, you're not so worried about the robots.
You're worried about the people programming them.
There's good and bad people in this world.
There's gonna be good and bad AI in this world.
Shall we do this? Yeah, let's do it.
See who's the dominant species? All right.
The robots are ready.
Three, two, one.
Whoa! Aah! Oh! The desire to create sentient machines can be traced back centuries.
Leonardo da Vinci famously designed a roboticized warrior in the late 15th century.
And in some ways, mankind has been trying to improve on those designs ever since.
What if these devices could somehow outsmart their human makers? In a warehouse in Northern California, a group of experts is trying to answer this question.
We have to be really careful around the robots.
They're dangerous.
They were made to be dangerous.
These things are fast.
How often have you, as a human operator, beat the AI machines? So far, all of them.
Ready to go? Three, two, one.
AI won.
Whoo! That weapon is strong.
All right, so, you lost.
That was a really good hit.
I would say so.
I think we should fight more.
Should we try the new robot? So, everybody does have their safety glasses on? 'Cause this one kind of scares me a little bit.
This one has an actual saw blade on it, so it's much more dangerous in that it's sharpened and ready for cutting.
Let's see what happens on three, two, one.
Oh! Oh, my God! Oh! AI surrenders.
Yes.
Yeah, it got some good damage.
Got some good damage.
Well, that gives me some faith in humanity, I guess.
Right? For now, it's a tie between AI and humans.
But as AI advances, will this always be the case? It's one thing to see remote-controlled robots attack each other.
But what happens when we program AI into 10,000-pound machines traveling at high speeds, like self-driving cars? These vehicles have already hit the road in several major cities, and will ultimately face decisions in life-threatening moments.
But when given a choice, who will these vehicles ultimately protect, their own passengers or pedestrians? Some say that decision is far too important for a machine to make.
And in March of 2018, the world saw a glimpse of how serious that question really is when news broke that a cyclist in Phoenix, Arizona, was struck and killed by an Uber self-driving car.
So should we really trust this level of artificial intelligence with our very lives? I'm here at Uber's self-driving test facility to find out for myself.
Their Pittsburgh test course, in an unmarked industrial complex, is completely closed off to the public, replicating an actual city.
And I'm going to be riding in one of these self-driving cars to see how well it handles some new obstacles.
Hey, Zach.
How are you? Welcome to the ATG's urban test course.
You're looking at 40 acres of urban features.
Great.
I'm excited to check it out.
All right.
I'm gonna hop in.
And then we're on our way.
And then we're on our way.
And being driven by artificial intelligence.
Yeah.
Yeah.
Yeah.
When you're driving, is it weird to not put your hands on the wheel? Yeah.
Yeah.
It's a little surreal.
How fast do we go? Up to 40 miles per hour.
Okay.
We're at a pedestrian crossing.
Right.
So you wait until he gets all the way across before it goes again? Mm-hmm.
Will this technology automatically protect me before, say, a pedestrian? Or is it automatically designed to protect a pedestrian above a passenger? That's a challenging question.
Ooh.
What would you do if there's a person jaywalking? Right.
The car will stop.
The car will stop.
That's the thing, to me, that I just want people to weigh in on, because I think we are relying on artificial intelligence to make a decision that could have life-changing implications for a human being.
I feel like those are conversations that really need to be had.
Yeah.
So, we are, obviously, worried and concerned in making sure that the car does the correct things at those times.
This handing over the speed and the weight of a vehicle in real-time, real-life situations, we are throwing ourselves into an abyss of technology that's gonna make those decisions for us, and there's no stopping it now.
Coming up, I get in the driver's seat. Throughout my search, I've discovered a darker side to AI technology like the recent accident caused by one of Uber's self-driving cars and the looming dangers of creating thinking machines in our own image.
Like some of the greatest civilizations of the past, could we face the collapse of modern society brought on by our own creations? That day may be fast approaching.
And I want to know, what will it be like if we hand over all of our control to one of these extremely powerful thinking machines? I'm about to get behind the wheel of an Uber self-driving car to find out for myself.
Okay.
So, remember, there's two modes of operation Right the manual and the auto.
When you're in auto, you still need to be that operator that's ready to take control of the vehicle at any time.
Okay.
Here we go.
Definitely weirder to be in the driver's seat.
Part of what I like about driving is the control that I have over where I go, how fast I go.
I feel like there's a part of me that surprise, surprise doesn't necessarily want to relinquish that control.
I feel like it's really coming in hot.
Yeah.
Now you experience it, like, not as conservative, right? Uh-huh.
I Oh.
Okay.
Ugh.
Ba ba ba Oh.
Oh.
Oh, damn! Just ran a red light.
There will be a note made of that.
You know, when it ran that red light, what if there was a woman crossing the street with a stroller in that moment, where the sun hit the camera in exactly the wrong way? That's where there are gaps.
What's your feeling about AI? As it evolves, it feels like there's no limit to what it can do.
That's something that creates a little bit of fear and worry in a lot of people.
Those are all valid concerns.
The AI ones are tough to answer.
But that's what this is about.
Throughout my journey, I've seen incredible uses for AI, but I've also learned that we don't really know where it's all headed or what dangers lie ahead.
In ancient texts and pop culture, AI has always been set in a distant future.
But with new advancements happening every day, it seems that future is closer than we know.
There's good and bad people in this world.
There's gonna be good and bad AI in this world.
Our desire to create machines in our likeness has brought us to a critical turning point in history.
One that could decide the fate of human existence.
And if we're not careful, our own ambition may one day destroy us.