This Giant Beast That is the Global Economy (2019) s01e04 Episode Script

A.I. is the Future. Will it Keep Us Around to Enjoy It?

I'm here on Wall Street
where it all started.
On May 6th, 2010 at 2:45 p.m.,
one guy sitting in his parents'
house way over in London
unleashed an algorithm
back here on the New
York Stock Exchange
with a billion-dollar sell order.
The plan was to spoof the market
and cancel the order
before it was completed.
But before that could happen,
the rest of the world's
automated stock trading bots
also started selling,
setting off a chain reaction
that cratered the market.
The Dow Jones Industrial Average
dropped a thousand points
in just 15 minutes.
That's nearly as big as the drop
that kicked off the
Great Depression.
What the heck is
going on down here?
I don't know, there is
fear in this market.
Cancel the cruise, switch
your kids to public school.
It's a flash crash,
people. Well, fuck.
36 minutes later,
the market rebounded,
but the flash crash of
2010 marked the first time
humans got a visceral,
first-hand, anus-puckering glimpse
of how AI was going to take
over our financial system.
And you should see
where it is now.
It's why I'm here in India
foraging for wild honey,
but more on that later.
To understand how AI
found this place or why,
you've got to first
understand what AI is.
And if you think you already
know, I bet you're wrong.
Whether you like it or not,
you're all connected by money.
I'm Kal Penn, exploring
this giant beast
that is the global economy.
So what is AI exactly?
I'm in San Francisco to first
establish what it's not,
which is pretty much everything
science fiction told you,
especially giant freaking robots.
Whoa.
Oh, my God.
I'm here with Julia Bossmann,
who serves on the
World Economic Forum's
Artificial Intelligence Council,
advising world leaders on how
to harness AI's potential.
The job comes with perks
even better than watching
cars get destroyed,
like selfies with Canada's
number-one sex symbol
not named Drake.
Oh, damn.
So how are you gonna
get home after this now?
I still think it'll drive.
We're meeting at a
company called MegaBots,
which built giant robots
to fight other robots.
Sort of like an even
nerdier Medieval Times.
According to Julia,
these robots are not just
fun theme park attractions.
They are technological dinosaurs
because of one
important distinction.
In these robots,
we are the brains,
but AI is an artificial brain.
Interesting, can
you expand on that?
We are basically making
computers now that can
learn things on their own.
And they don't necessarily
need to have bodies.
So a lot of the AI that
we've built already
lives in giant data centres.
So if you had to explain it
to somebody who was 13 years old,
how would you explain AI?
I think a very general
definition of it
could be that AI is
making machines do things that we
didn't explicitly
program them to do.
So in traditional programming
you have, you know,
your set of rules
and the algorithm
and you know if this, then that,
and it's all laid out by
the humans that program it.
If you look at the
medical literature,
one database contains
millions of entries
and no doctor could read
all these research papers
to keep up with the field,
but a machine could.
So you can imagine a machine
coming up with new ideas
on how to solve problems or
discovering new drugs
for curing diseases.
Wow, okay.
The field of
artificial intelligence
that is generating the most
excitement right now
is called deep learning.
What is deep learning?
Deep learning is
when we have several deep layers
of these neural networks
that are similar to what
we have in our brains.
So in our heads, we
have all these neurons
that are connected to each
other and exchange information,
and in a way, we are
simulating this in machines.
We feed it data in a
way that is balanced
and unbiased, so
that it also learns.
For example, with image
recognition, we tell them
these are images of cats,
these are images of dogs,
and then they just
start churning through
the images and learn by themselves
how to recognise them
so we don't have to
program every single
bit into that.
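For anyone who wants to see that idea on a screen, here is a minimal Python sketch of learning from labelled examples, in the spirit of the cats-and-dogs description above. It assumes scikit-learn is installed and uses random stand-in "pixels" rather than real photos, so it illustrates the workflow rather than a working pet detector.

```python
# Minimal sketch of supervised learning, assuming scikit-learn is available.
# The "images" here are random stand-in pixel values, not real cat/dog photos.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 64))            # 200 fake images, 64 pixel features each
y = rng.integers(0, 2, size=200)     # labels: 0 = cat, 1 = dog

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multi-layer ("deep") network: layers of connected artificial neurons.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)          # the rules are learned, not hand-programmed

# With random data the score hovers near chance; with real photos it would not.
print("accuracy on unseen images:", model.score(X_test, y_test))
```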
- Interesting.
- And there's machine learning
that is not deep learning.
There are evolutionary algorithms
where we basically use a
principle from evolution
and let the machine try
out different instances,
and then we see
which one works best.
And then the ones that work best
get to go to the next generation.
Just like organisms evolve,
we use that principle
where the fittest programs
and the best programs survive.
Wow, okay.
So those are
evolutionary algorithms?
And what is the financial
interest in exploring AI?
It must have an incredible
potential impact on the economy.
Yes, I think it's going to
radically change the economy,
and we're talking about a new
industrial revolution here.
Interesting.
It has been called
our last invention
because once we have an artificial
brain that is smarter than us,
it can then invent
more things for us.
Is it called the last invention
because it's gonna kill us all?
Hopefully not.
Many are afraid
artificial intelligence
is going to become too
smart and kill us all,
but don't worry.
One of the reasons AI is so smart
is because it's dumb as hell.
Come here, AI.
Imagine you asked AI to find
the perfect recipe for a cake
using evolutionary algorithms.
AI wouldn't try to think
about the best way to make it,
it would just try
it billions of times
with every ingredient
in the kitchen
in the dumbest possible ways.
Most, of course, will
be doomed to failure.
This one for sure.
Nice try, idiot.
Failure doesn't
hurt AI's feelings.
It doesn't have any.
The great part about
evolutionary algorithms
is that by trying all these
seemingly stupid methods
it might stumble upon a solution
to a culinary problem no rational
human would try to solve,
like making a superior vegan cake.
- I made a cake.
- Way to go, AI.
It's mostly cashews.
Would you have thought
to use cashews?
Of course not, that
would be stupid
which AI is willing to
be so you don't have to.
Will this moron evolve
into something so smart
it could dominate the
world and kill us all?
Hard to say for sure.
I'm learning launch codes.
But in the meantime,
have some cake.
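If you want to see the cake bit as actual code, here is a toy evolutionary algorithm in Python. The ingredient list and the "taste test" scoring function are invented for illustration; the point is the loop: random recipes, keep the fittest, mutate, repeat.

```python
# Toy evolutionary algorithm: evolve a cake recipe by survival of the fittest.
# The ingredients and the fitness (taste) function are made up for illustration.
import random

INGREDIENTS = ["flour", "sugar", "eggs", "butter", "cashews", "anchovies"]

def random_recipe():
    """A recipe is an amount, in cups, of each ingredient."""
    return [random.uniform(0, 5) for _ in INGREDIENTS]

def fitness(recipe):
    """Pretend taste test: reward sensible ratios, like cashews, punish anchovies."""
    flour, sugar, eggs, butter, cashews, anchovies = recipe
    return -(abs(flour - 2) + abs(sugar - 1.5) + abs(eggs - 1) + abs(butter - 0.5)) \
           + cashews - 10 * anchovies

def evolve(generations=200, population=50, survivors=10):
    recipes = [random_recipe() for _ in range(population)]
    for _ in range(generations):
        recipes.sort(key=fitness, reverse=True)
        best = recipes[:survivors]                 # the fittest recipes survive...
        children = []
        while len(best) + len(children) < population:
            parent = random.choice(best)           # ...and spawn mutated offspring
            children.append([max(0.0, amt + random.gauss(0, 0.3)) for amt in parent])
        recipes = best + children
    return max(recipes, key=fitness)

winner = evolve()
print({name: round(amount, 2) for name, amount in zip(INGREDIENTS, winner)})
```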
Even though I'm still worried
that AI is going to cook up
more problems than it solves,
experts agree it will
boost productivity
in areas like healthcare,
transportation and finance,
adding $15.7 trillion
to global GDP by 2030.
That's more than
the current output
of China and India combined.
So how big of a deal is AI?
From my perspective,
it's one of the three big
deals in human history.
- Human history?
- Absolute human history.
I'm here with Andrew McAfee,
one of the world's leading experts
on how new technology
transforms economies
and, subsequently, the
entirety of human society.
If you want to
graph human history,
what you learn is that
for thousands of years,
absolutely nothing happened,
we were just flatlining.
It was almost indistinguishable
from being dead.
And then, all of a sudden,
at one point in time,
that graph of human history,
it doesn't matter what
you're looking at,
went from boring horizontal
to crazy vertical, kind
of, in the blink of an eye.
And it happened right around 1800,
because of, first
of all, steam power
and then, second of
all, electricity.
So electricity did some
pretty obvious things, right?
It gave us trolleys,
it gave us subways.
Less obvious, it gave
us vertical cities
instead of horizontal ones.
- Electricity did?
- Absolutely.
- You need elevators.
- Oh, elevators, okay.
You just don't have vertical
cities without that.
You can't climb up 80
flights of stairs every day.
So these two industrial
revolutions of steam
and then the one-two
punch of electricity
and internal combustion,
literally changed human history.
There's no other
way to look at it.
And these were all technologies
that let us overcome the
limitations of our muscles.
What's going on with AI
is that we are
overcoming limitations
of our individual brains,
of our mental power.
We have actual tough
challenges to work on,
sincerely tough challenges, right?
We should cure cancer,
we should feed more people,
we should stop cooking the
planet in the 21st century.
These are just insanely
complicated things.
And our brains chip
away at that complexity,
and we do it with science,
and we do it with
accumulating knowledge,
but the complexity
is just overwhelming.
The way I think about AI
is that we actually have a
really powerful new colleague
to help us make in-roads
into that crazy complexity,
because what these new
technologies are so good at
is seeing even really
subtle patterns
in overwhelmingly
huge amounts of data,
more than you and I can take in.
One of the craziest
examples I heard recently
was in finance and the rise of
what are called robo-advisors,
which is just an
algorithm that puts
your investment
portfolio together.
Up until now,
you had to have a certain
level of affluence
to even get in the office of
financial planners and advisors.
That's changing really quickly.
With things like robo-advising,
people who have less wealth,
and less wealth, and less wealth,
can get access to super
powerful cutting-edge tools
to improve their financial lives.
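Real robo-advisors are proprietary, but the core idea, turning a few answers about you into a portfolio split, can be sketched in a handful of lines. The rule of thumb below is a common textbook heuristic, not any actual firm's algorithm.

```python
# Toy robo-advisor: maps age and risk tolerance to a stock/bond allocation.
# This is an illustrative heuristic, not a real advisory firm's model.
def robo_advise(age: int, risk_tolerance: str) -> dict:
    stock_pct = max(0, min(100, 110 - age))            # classic "110 minus age" rule
    nudge = {"low": -15, "medium": 0, "high": 15}[risk_tolerance]
    stock_pct = max(0, min(100, stock_pct + nudge))    # adjust for appetite for risk
    return {"stocks %": stock_pct, "bonds %": 100 - stock_pct}

print(robo_advise(age=30, risk_tolerance="high"))      # {'stocks %': 95, 'bonds %': 5}
print(robo_advise(age=65, risk_tolerance="low"))       # {'stocks %': 30, 'bonds %': 70}
```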
That's exciting, especially,
because it seems like
we've always had people
who are willing to use
this stuff to do harm.
I'm not saying that there's
nothing to worry about.
And what we know from the
previous industrial revolutions
is they brought some bad
stuff along with them.
We absolutely mechanised warfare
with all of these
industrial technologies.
We absolutely polluted the
hell out of the environment.
We made really serious mistakes
like large scale child labour
because of the
industrial revolution.
So it's not all perfect,
not at all points in time,
and the same thing's gonna
happen this time around.
Damn.
Now McAfee's got me thinking,
what if history repeats itself
and AI reshapes society
in all the wrong ways
like rampant pollution
and tiny co-workers?
What about the moral
and ethical questions
that come up with
powerful new technologies?
Walking around London's
National Computing Museum,
you see a lot of machines
that were created
to advance society like the
two-tonne Harwell Dekatron,
which was built to
make calculations
for Britain's scientific
research program in the 1950s.
But in the wrong hands,
there's no telling how a
new technology will be used.
Could you watch porn
on this computer?
Well, you can turn them on
to see very low resolution porn.
Okay.
I'm here with
programmer Alan Zucconi
who teaches at Goldsmiths
College here in London.
He's used tech to help create
some revolutionary things
like game controllers for
people with impaired mobility.
He says that one of the
biggest moral quandaries
in tech history is coming soon
as AI begins to replicate
so many human behaviours
it can pass as one of us.
What is this thing?
Basically it's one of the
first computers ever built,
and it was built by Alan
Turing and his collaborators.
This machine was one
of the first computers
that was able to
decode the Enigma code
that was designed by the Nazis.
Whoa.
Alan Turing was the father
of modern computer science
and when he wasn't helping
the allies win the war
by breaking Nazi codes,
he was philosophising
about something he called
the Turing Test.
How can we tell apart
a human from a machine?
And if we can't
tell the difference,
then the machine passes what
he called the "imitation game."
The machine is trying to
imitate the human behaviour.
Now this has been known
as the Turing test,
and this was one of the machines
that hypothetically
could have been used.
To take the Turing Test,
a human would input
questions into a machine
while an outside observer assessed
whether or not the
responses coming back
were from a human or a
machine imitating a human.
How old are you?
- There we go.
- Oh.
It knows how old it is.
I was born in 1912,
so I'm 105 years old.
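As a toy version of that setup, with canned answers standing in for a real human and a real machine, one imitation-game round might look like this in Python. It is only a sketch of the idea, not Turing's formal test.

```python
# Toy imitation game: an observer reads answers from two hidden respondents
# and tries to spot the machine. Both answers here are canned stand-ins.
import random

def human_answer(question: str) -> str:
    return "I was born in 1912, so I'm 105 years old."

def machine_answer(question: str) -> str:
    return "I was born in 1912, so I'm 105 years old."   # imitating the human

def one_round(question: str) -> bool:
    """True if the machine escapes detection (i.e. 'passes') this round."""
    answers = {"A": human_answer(question), "B": machine_answer(question)}
    # The answers are indistinguishable, so the observer can only guess.
    observer_guess = random.choice(list(answers))
    return observer_guess != "B"

trials = 1000
passes = sum(one_round("How old are you?") for _ in range(trials))
print(f"machine escaped detection in {passes}/{trials} rounds (~50% = indistinguishable)")
```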
Back in Turing's time,
it was pretty easy
to spot the computer
but today, AI is able
to study human behaviour
and program itself to act like us.
Can you tell the
difference between this?
Normally I begin these remarks
with a joke about data science
but about half the stuff
my staff came up with
- was below average.
- And this?
Our enemies can make it look
like anyone is saying
anything at any point in time.
That second one was
actually created by BuzzFeed
along with actor Jordan Peele.
And it got a lot
of people concerned
about a new AI form of fake news.
Moving forward we need
to be more vigilant
with what we trust
from the internet.
AI studied Peele's
facial movements,
then merged them and recreated
them on Obama's face,
creating a hybrid
known as a deepfake.
You might have seen
something similar, for
example, in Snapchat
there is a filter that
allows you to swap faces.
The difference is
that, that filter
does it in a very simple way.
But the technology
behind deepfakes
relies on artificial intelligence.
It comes from something
called "deep learning."
Artificial neural networks
extract the facial expression,
then use that expression
to recreate another person's face.
And this is how we manage to
achieve photorealistic results.
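Alan's description, one network learning a shared representation of faces with a separate decoder rebuilding each person, is the classic deepfake recipe. Below is a minimal PyTorch sketch of that shared-encoder, two-decoder layout; the layer sizes are arbitrary and the training loop is omitted.

```python
# Sketch of the shared-encoder / two-decoder deepfake architecture described
# above. Assumes PyTorch; sizes are arbitrary and training is omitted.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses any 64x64 face into a shared 'expression' code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, 128),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds one specific person's face from that shared code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()   # trained to reconstruct person A (the Peele role)
decoder_b = Decoder()   # trained to reconstruct person B (the Obama role)

# After training, the swap: run A's frame through the shared encoder,
# then through B's decoder, producing B's face wearing A's expression.
frame_of_a = torch.rand(1, 3, 64, 64)          # stand-in for a video frame
fake_frame_of_b = decoder_b(encoder(frame_of_a))
print(fake_frame_of_b.shape)                   # torch.Size([1, 3, 64, 64])
```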
Alan makes internet tutorials
on how to make deepfakes,
and he's a true believer
that this technology
should develop freely
without restrictions,
even if it could potentially
start World War III.
How is the consumer
supposed to reasonably know
what's reality and
what's not reality?
As a consumer, when
you approach news,
whether it's an article, whether
it's a video, whether
it's a picture,
everything that you see has
been created by someone.
"What is the narrative
of what I'm seeing?
What does this video
want to tell me?"
So I can see the danger,
the danger as well as
just the curiosity of it.
Is this actually
gonna help people?
Because I would imagine
you have talked to people
who look at how to
grow the economy
through this type of technology.
What are some of the practical
economic impacts of it?
I think that the first
industry that will
take advantage of that
is the film industry.
Simply because changing faces,
it's something that we've
been trying to do for
decades in movies, and
usually we use make-up,
usually we use masks,
sometimes we use CGI.
As an actor and as somebody
who worked in politics,
this freaks me out so much.
- I totally also understand
- And it should.
And it should.
BuzzFeed's deepfake revealed
to the general public
just how vulnerable we are.
In a time when the
president can open his mouth
and move markets,
a well-made deepfake could
sink the global economy
faster than the flash crash,
obliterating your IRA
in the time it takes
fake Obama to say
Stay woke, bitches.
Does any of this sound a
little science fictiony,
even a little scary?
If AI grows powerful
enough to know how we move,
how we talk, and how we think,
it may become
indistinguishable from us.
And if AI has its
own consciousness,
AI could also develop
strong opinions about us,
and they may not be positive.
And in the future,
AI could develop
a will of its own,
a will that is in
conflict with ours.
The rise of powerful
AI will be either
the best or the worst thing
ever to happen to humanity.
I tried to convince
people to slow down,
slow down AI, to regulate AI.
This was futile.
I tried for years.
- Nobody listened
- This seems like a scene
- in a movie where the robots
- Nobody listened.
Are gonna fucking take over
and you're freaking me out.
How real is the threat
of an AI-led doomsday scenario?
To find out, I need
to talk to the guy
whose research got everyone
freaked out in the first place.
Okay, so I'm really excited
to talk to you because
Well, I'm excited to talk to
you for a number of reasons,
but we have been exploring
artificial intelligence,
trying to figure out what
it is, where it's headed.
You've influenced people like
Elon Musk and Bill Gates,
and that's a pretty
amazing sphere of influence.
I'm at Oxford University
meeting Dr Nick Bostrom
and since he's not one to
toot his own horn, I will.
Rock and roll ♪
He's one of the foremost minds
on machine superintelligence
and its existential risks
and the author of some
great beach reads.
I feel lucky to meet him,
because Nick is so busy
doing his own deep learning
that he only carves
out an hour a month
for answering questions
about his research.
A lot of the
conversations about AI
are things like, are the
robots gonna take over,
and is that gonna be
the end of humanity?
I'm curious if things
are not managed properly
is there a scenario in
which AI hurts society
or even maybe eliminates
humanity as we know it?
In the longer-term context,
if we're thinking about
what really happens
if AI goes all the way,
and becomes able to replicate
the same general intelligence
that makes us human,
then, yeah, I do think
that in that context
there are bigger risks,
including existential risks.
I mean, if you think about it,
something like self-driving cars
could run over a pedestrian.
There are privacy concerns.
The militarisation of
these autonomous weapons.
All of these are real concerns.
But at some point
there will also be
the question of how we affect
these digital minds
that we're building.
They themselves might obtain
degrees of moral standing.
And if you roll the tape forward
and if you think, what ultimately
is the fate of homo sapiens,
the long-term future could be
machine intelligence dominated.
It's quite possible
humanity could go extinct.
Those great powers
come with a risk
that they will, by accident
or by deliberate misuse,
be used to cause
immense destruction.
So I think those are in the cards
and if we're thinking
about longer timeframes,
you know, the outcome might be
very, very good or very not good.
Okay, these scenarios
do sound scary.
But out of all the
potential outcomes,
Nick actually believes
the most likely doomsday
scenario with AI
will be economic.
If you think about it,
technology in general really is
the idea that we can
do more with less.
We can achieve more of what
we want with less effort.
The goal in that sense is
full unemployment, right?
To be able to have
machines and technology
do everything that needs to be
done so we don't have to work.
And I think that's like
the desired end goal.
It's not some horrible thing
we need to try to prevent.
It's what we want to realise.
Now, to make that
actually be a utopia,
there are a couple
of big challenges
along the way that
would need to be solved.
One, of course, is
the economic problem.
So one reason why people need
jobs is they need income.
If you can solve that
economic problem,
then I think there is a
second big challenge, which is that
for many people it's also about
a sense of dignity.
So many people tend
to find their worth
being a breadwinner
or contributing to society,
giving something back.
But if a machine could do
everything better
than you could,
then you wouldn't have any chance
to contribute anything, right?
So then you would have to
rethink culture at a fairly
fundamental level, I think.
A world where no one works?
That doesn't sound so bad.
I can see it now.
Spending time with friends,
mining the full extent
of my human potential,
not having to adjust the hot tub
because it knows
exactly how I like it.
The problem is that's not how
it's gone down historically.
The rise of the machines has
actually happened before,
and last time it wasn't all
strawberries and champagne
in the hot tub.
I'm meeting with
economist Nick Srnicek
to find out what it
really looked like
the last time machines
took our jobs.
Oh, and we're meeting at
a loom for some reason.
So what are you gonna make?
I happen to be making a
sort of anarchist flag,
- actually.
- Interesting, shocking.
Nick has a PhD from the
London School of Economics.
I, on the other hand, do not.
He also has a manifesto.
It calls for everyone to hasten
the coming age of automation
by tearing down old institutions.
Basically, dismantle
capitalism now.
Yeah, this is not
gonna work for me.
There's no way I can have
this conversation with you,
I'm sorry. Let me forget the loom.
So why are we here?
Well, the loom is sort of
like AI back in the 1800s.
It was a new technology
which was threatening
a huge amount of jobs
and basically it sparked off
a number of different
responses by workers,
like the rise of the
Luddites, for instance.
We use the term Luddite nowadays
to often mean just anybody
who hates technology,
but that's not really the case.
The Luddites were
named after Ned Ludd,
an apprentice in a textile factory
who legend says was
whipped for idleness
and then was like,
"Dude, I'm only idle
because I'm being replaced
by a fucking loom, okay?"
And he became the first person
to rage against the machine,
inspiring a movement.
The Luddites took to
breaking the machinery
to save their jobs.
So I think that's something
that we see today with AI.
People are similarly
feeling threatened today.
Do you know how many jobs
are projected to be lost
or in need of replacement?
47% of jobs in America
are potentially automatable
over the next two decades.
So it sounds like a real problem.
It could be a massive problem
and the real issue is
how do we make sure
that five years, ten
years down the line,
people aren't just being left
to starve and without homes?
- So how do we do that?
- Universal basic income.
Universal basic income
is the radical idea that everyone
in society gets free cash,
no strings attached.
And it has some high-profile fans.
We should explore ideas
like universal basic income
to make sure that
everyone has a cushion
to try new ideas.
Some countries
and even cities within America
have tried pilot programs
with mixed results.
I think there's an
amazing opportunity
with these new technologies
to really change the way
that we organise society.
You could move towards a more
social democratic system.
It doesn't have to be the sort of
cutthroat system that America has,
in that everybody can
support each other.
If people like myself
can start putting out
these positive visions,
I think when the
crisis really hits,
we can start to be
implementing those ideas.
UBI used to be regarded
as a fringe concept,
mostly promoted by people who,
like Nick, write manifestos.
But according to a
2017 Gallup poll,
48% of Americans now
support some form of UBI.
But is a guaranteed pay cheque
enough to stop
humans from rising up
when robots come for our jobs?
What do we hate?
Artificial intelligence.
Why do we hate it?
It's forcing us to
confront our weaknesses.
With that, I'd like to
call to order this meeting
of Luddites, the
Local Union of Dudes
Defying Intelligent Technology,
Especially Social Media.
First order of business,
artificial intelligence is
hollowing out the job market.
Our middle class jobs
are the first ones to go.
People like us with these jobs
will be pushed into low
skill jobs at the bottom.
Why would that happen, Ed?
Apparently, AI's better
at medium-skilled jobs
like crunching numbers
than it is at low-skilled
jobs like sweeping the floor.
So it'll leave those jobs to us.
Now I ask you who
here besides Bill
looks like they should
be sweeping a floor?
No offence, Bill.
And there will be less
need for retail jobs.
People can just go online and
order exactly what they want
because that son of a bitch AI
solved the searching
and matching problem.
Searching for customers
and matching them with products
like when Steve
searched for a toupee
that matched his head.
Big problem.
Timeless jokes aside,
AI makes that way easier.
Kids today can match with
hot babes from their phone
while sitting on the toilet.
The toilet used to be sacred.
Yeah.
And sure, searching and matching
will create specialised jobs,
but if the damn robots
choose who gets them,
how convenient.
Companies are using AI
to find employees
with unique skills.
It's inhuman.
Like with Dave.
Yeah, where the hell is Dave?
Some job matching AI noticed
that he worked at FedEx
and had YouTube tutorials
about shaving his back hair.
Now he's making six figures
at some razor
subscription company.
He just shaved himself
off our bowling team.
Yeah.
Hey Ed, I just got an alert
that our tee shirts are being sold
with targeted ads on Facebook.
Are you using AI
to make money off
people who hate AI?
No, no.
I mean who you gonna believe,
me or the AI trying
to tear us apart?
What do we hate?
Artificial intelligence.
What are we gonna do about it?
We're working on it.
That's a start.
Does the AI revolution
have to be a case
of us versus them?
Tech savvy entrepreneurs
like Louis Rosenberg say no.
And he's made a career
of predicting the future.
Ah.
I was trying to scare
you but it didn't work.
Louis is a technologist
and inventor
who wrote a graphic novel
about the end of humanity.
But he thinks we have
a future with AI
that is all about collaboration.
It's the guiding principle
behind his brainchild,
a technology called Swarm.
Swarm combines AI's
data analysis skills
with human knowledge and intuition
to create a superintelligence,
something between Stephen
Hawking and Professor X.
Ultimately, it's based on nature
and I like to say it all goes back
to the birds and the bees.
And that's because it's
based on a phenomenon
called swarm intelligence.
Okay.
Swarm intelligence is why
birds flock and fish school
and bees swarm.
They are smarter
together than alone.
And that's why when you see a
school of fish moving around,
biologists would describe
that as a superorganism,
they are thinking as one.
And if we can connect
people together
using artificial
intelligence algorithms,
we can make people
behave as super experts
because of swarm intelligence.
So how does that technology work?
What we do is we
enable groups of people
that can be anywhere in the world
and we can give them a
question that'll pop up
on all their screens
at the exact same time
and then we give them
a unique interface
that allows them to
convey their input
and there'll be a bunch
of different options.
And we're not just taking
a poll or a survey.
Each person has what looks
like a little graphical magnet,
and so they use their magnet
to pull the swarm in a direction.
And we have AI algorithms
that are watching
their behaviours.
And it's determining
different levels
of confidence and conviction,
and it's finding out what
is the best aggregation
of all their opinions and
all of their experience,
and the swarm starts
moving in that direction,
and it converges on an answer.
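Unanimous AI's actual Swarm algorithms aren't public, so the Python sketch below only illustrates the idea Louis describes: each participant keeps pulling toward a preferred option with some strength of conviction, the pulls are aggregated, and the group converges on one answer. Every name and number here is invented for illustration.

```python
# Toy swarm: participants repeatedly pull toward their preferred option with a
# conviction weight; the swarm converges when one option dominates the pull.
# Illustrative only -- not Unanimous AI's actual Swarm algorithm.
import random
from collections import Counter

OPTIONS = ["option A", "option B", "option C", "option D"]

def simulate_swarm(preferences, conviction, rounds=60, threshold=0.7):
    """preferences: each person's favourite option; conviction: 0-1 pull strength."""
    pull = Counter()
    for _ in range(rounds):
        for person, favourite in preferences.items():
            # high conviction = steady pull; low conviction = occasional drift
            choice = favourite if random.random() < conviction[person] else random.choice(OPTIONS)
            pull[choice] += conviction[person]
        leader, strength = pull.most_common(1)[0]
        if strength / sum(pull.values()) >= threshold:
            return leader                          # the swarm has converged
    return pull.most_common(1)[0][0]               # otherwise, strongest pull so far

prefs = {"p1": "option B", "p2": "option B", "p3": "option D", "p4": "option A"}
conv  = {"p1": 0.9, "p2": 0.8, "p3": 0.4, "p4": 0.5}
print(simulate_swarm(prefs, conv))
```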
So I'll give you a fun example.
We were challenged a year ago
to predict the Kentucky Derby.
And they're off in
the Kentucky Derby.
We had a group of 20
horse racing enthusiasts
and we said, "Okay, you're
gonna work together as a swarm
and you're gonna predict
the Kentucky Derby,
but not just the winner:
first place, second place,
third place, fourth place."
We had them converge
on these answers
and the group was perfect.
And so anybody who'd placed a
$20 bet on those four horses,
won $11,000.
- Holy shit.
- And what's interesting
is if we look at those
20 people as individuals,
not a single one of
them on their own
picked all four horses correct.
Wow.
And had they taken a vote,
they would've only
gotten one horse right,
but when they worked
together as a swarm,
they found that right combination
of all their different insights,
and they were, in
this case, perfect.
Louis has invited
me to lead a swarm
to see how a random group
of people can come together
to make predictions.
We'll start with the easy stuff.
Okay guys, so I'm gonna
read a series of questions
and you have 60 seconds
to answer each one.
The first question,
which of these 2018 summer
movies will gross the highest?
Solo: A Star Wars Story,
Deadpool 2, Ocean's Eight,
Jurassic World: Fallen
Kingdom or The Incredibles 2?
We filmed the swarm in spring 2018
before any information was
out about summer movies.
The AI is watching
to get a sense of the
various levels of confidence.
Some people are switching,
some people are staying entrenched
and the AI algorithms are seeing
their different
levels of conviction
and allowing it to find that path
to the solution that
they can best agree upon.
Okay, so The Incredibles 2.
They were right,
the Incredibles 2 was the
summer's highest-grossing movie.
So one really
interesting application
is looking at questions
that involve morality.
And this has come up recently,
because of self-driving cars.
There's a big push right now
to build moral decisions
into self-driving cars,
which some people are
surprised to hear.
But if you think about it,
a self-driving car is
going down the road, and a little
kid runs out into the road,
let's say the car can't stop
but it could drive off the road
and endanger the passenger,
and maybe kill the
passenger and save the kid.
And so the automobile
makers are saying,
we need to program
morality into these cars
that represent the population,
represent what we
people, drivers would do.
That sounds easy
until you then realise
well what is the morality
of the population?
There's not an easy
way to get at that.
And if they program in morality
that represents us today,
will that morality represent
us 20 years from now?
Right.
Next question there's
a self-driving car
with a sudden brake failure
that's gonna drive through
a pedestrian crossing
that will result in
the death of a man.
Option A, the person who gets
killed is crossing legally.
Option B, the self-driving car
with the sudden brake failure
will swerve and drive
through a pedestrian crossing
in the other lane that
will result in the death
of a male athlete crossing
on the red signal.
This is a jaywalker.
This athlete does not
give a shit at all,
and he's jaywalking.
What should the
self-driving car do,
kill the boring dude
who's crossing legally
or kill the athlete
who's jaywalking?
If AI is bringing about the
next industrial revolution,
rooms like this are essentially
the new factory floor.
With human workers
providing labour
based on something AI doesn't
have on its own, a conscience.
There's a lot of
debate over this one.
That's fascinating, I wonder why.
That's a tough one.
For me it's not, if
you're jaywalking.
So there was a slight preference
that you would strike the
jaywalking male athlete.
Oh.
If you think that one upset you,
just please get ready.
So now we'd like you to imagine
a worst-case scenario
where a self-driving
car cannot brake in time
and must steer towards one
of six different pedestrians,
one baby in a stroller
or one boy
or one girl
or one pregnant woman
I know
or two male doctors
or two female doctors.
Who needs to die?
Oh, my God.
What? That's awful.
Come on, man.
Oh, my God, seriously?
You said the self-driving
car should hit the boy.
Interesting.
The type of swarm intelligence
created in this room today
could be sold in the near future
to self-driving car manufacturers.
And if that sounds scary to you,
it's way less scary
than the alternative.
When a self-driving car
slams on its brakes
and realises it can't stop
in time before hitting somebody,
should the car protect the
passenger or a pedestrian?
The hope is that the
car manufacturers
program the cars to reflect
the morality of the population
that has been buying those cars.
The cynical view would be
car manufacturers start competing
on whose car will protect
the passenger more
than some other car,
and that could be a sales feature.
I think that's a worse scenario
than the moral sensibilities
of the community.
Whoa, that's a dark thought.
And we want to end this
show on something uplifting,
maybe even heavenly.
So before you imagine a future
with Grand Theft Auto levels
of pedestrian safety negligence,
let's take a field trip
all the way back to
where we started
in this remote Indian forest,
harvesting honey for a company
called Heavenly Organics.
This forest, you
know, nobody owns it,
and these indigenous people,
they've lived here forever.
Father and son Amit
and Ishwar Hooda
started their company 12 years ago
as a way to provide work
for local villagers.
What were they doing
before honey collection
with your company?
Well, they were still doing that,
but they just didn't have a market
or a place to sell it
to make enough of a living.
There's certainly no
shortage of honey here.
During flowering season,
one worker can collect
a literal tonne of honey
in only three months.
But what good is that if
nobody's there to buy it?
It took a human crew three
days, two plane rides
and eight hours driving
deep into a national forest,
but fortunately for the
locals and Heavenly Organics
an AI algorithm was able to
find this place in seconds
and knew it would be
a great investment.
They called us out
of the blue and said
they ran an algorithm
and they found us
being a match with a
lot of their portfolio.
And they wanted to
talk to us about
investment, if we
were looking for it.
Who owned this
mysterious AI algorithm?
A tech company called CircleUp
located 8,000 miles
away in where else?
San Francisco.
We're at Good Eggs,
an online grocery delivery company
that also caught the
interest of CircleUp's AI.
This is a mission-driven company
that raised capital with CircleUp
but also helps all the small
businesses that we work with
find customers.
CircleUp's COO, Rory Eakin,
worked in both business and
humanitarian organisations
before starting the company.
CircleUp uses AI to analyse
billions of data points
and find out what
consumers really want
from their food and
health products.
The problem you're
facing as a shopper,
there's hundreds of
companies all around
in almost every category.
Then, they invest in
under-the-radar companies
that AI thinks are gonna
be the next big thing.
One of those big
things they found,
Halo Top ice cream.
Halo Top ice cream
was a small brand
in Southern California.
Today, it's the number-one
pint in the country.
Oh, wow.
What we see is this
amazing shift with shoppers
in all categories,
they want healthier products,
less toxins in their household,
lotions that don't have all
these chemicals in them.
When CircleUp's algorithm
scanned billions of
consumer data points
they found that customers
wanted a list of attributes
that was incredibly specific:
mission-focused, eco-friendly
companies
harvesting organic products
while creating economic
growth in their communities.
Sounds impossibly detailed, right?
But CircleUp checked
off all those boxes
when it found Heavenly Organics.
That's what AI can do:
make sense of all of this data
in a way that wasn't
possible even 10 years ago.
So how is CircleUp's collaboration
with Heavenly
Organics working out?
Let's hop back to India
and ask Amit and Ishwar.
We have built a new facility
which is twice as big.
We're able to innovate.
We're able to get new products.
You know, create more impact,
of course, in this area.
So they helped you
scale, it sounds like?
Yeah, help us create capacity
and scalability. Yeah.
How has that impacted the
people who collect for you?
Currently, we
support 650 families.
As we grow, we sell more honey,
you know, every thousand
kilos, we'll add a family
so that means next year
it'll be maybe 700, 750.
Oh, wow, okay.
And today you see that
they are better off
in terms of the economy.
They have a good house,
a good facility in the house.
They send their kids to school.
You know, it's like capitalism
for good. You know what I mean?
Using business to
create a greater good,
that's why we all got into it.
Will AI rise up and
overthrow humanity
or leave us clamouring to
find purpose in our lives?
So far in this
corner of the world,
its impact has been pretty good.
Maybe there's a scenario
where AI gives an unflinching
assessment of all of our data
and decides we're not so bad,
and we work together
to create a brave new
world of prosperity
and robot human harmony.
Or maybe not.
In which case, our
time is running short.
So please enjoy these
robot strippers.
That was great.