Coded World (2019) s01e01 Episode Script

Choice

I'm Anjan Sundaram,
and I'm a journalist,
author, and mathematician.
I left mathematics
because I didn't feel like math
was connecting me to real life,
real people,
and the real world.
That has all changed.
Algorithms have changed it.
The whole world is built on algorithms.
Codes and algorithms
are all around us
and they're also in our heads.
Already, we're using algorithms
to influence what we do.
Algorithms have become
such a part of people's lives.
Social media in the beginning
was about connecting people
Hello!
and now it feels more and more like
it's actually driving us apart too.
Corporations, governments, and
computer systems are now watching us.
We will not sit idly by.
Death to the surveillance state.
In the end, it's a battle for power.
Building a human-like robot
is the holy grail.
Human behaviours are the building blocks.
It's working.
AI is replacing people's work.
We're taking ourselves out of the equation.
Konnichiwa.
There are no limitations anymore.
There's almost no boundary.
We can't ignore the algorithms anymore.
I'm taking my questions on the road
to meet some of the
people designing this code,
and some of the people who have been
affected by this algorithmic revolution.
I want to see how this code
is changing everything.
But first, what is an algorithm?
Well, in some ways the word 'algorithm'
has just become a buzzword
that makes people think it's something
more complicated than it is,
because it's actually quite simple.
An algorithm is a set
of instructions to follow
to achieve an end.
Whether that end is
to solve a problem
or to make a decision.
Basically, it's a recipe.
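To make the recipe analogy concrete, here is a tiny, purely illustrative Python sketch: a short, fixed list of steps that turns inputs into an end result. The tea-making steps are my own example, not something from the episode.

```python
# A recipe is an algorithm: a fixed sequence of steps that turns
# inputs (ingredients) into an end (a cup of tea, a decision).
def make_tea(water_ml, tea_bags):
    """Follow a set of instructions to achieve an end: a pot of tea."""
    return [
        f"Boil {water_ml} ml of water",           # step 1
        f"Add {tea_bags} tea bag(s) to the pot",  # step 2
        "Steep for 3 minutes",                    # step 3
        "Pour and serve",                         # step 4
    ]

for step in make_tea(500, 2):
    print(step)
```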
Nature uses algorithms.
Evolution is an algorithm.
Our brains use algorithms
and computers need algorithms
to understand what they're supposed to do.
So what an algorithm is,
is maybe not the important question.
What algorithms can do,
and how they impact our lives
is what's most important.
I'm going to start my journey
by exploring choice
because the way algorithms
influence our decisions
is what most of us think algorithms do
and is also how most of us
will often experience this code in action.
We make choices by using cognition,
instinct and emotion.
But how does an algorithm make a choice?
When Google shows you
answers to your searches,
when Netflix suggests what
it thinks you would like to watch,
the 'Up Next' button on YouTube,
when Spotify turns you
on to some new music,
when Amazon recommends
something for you to buy,
when adverts pop up everywhere
based on something you've looked at online
and almost everything you see
on your newsfeed on social media,
that is all happening
because of algorithms
trying to influence you
in one way or another.
So really, how much choice
do we have in today's world?
That's what I want to find out.
They're great!
And don't be late for school.
Next time, get the large order.
That way, there'll be some left for you.
The goal of advertising
has always been to influence
our consumption decisions
and advertisers have always
tried to figure out what we want,
what we're thinking,
and how we're feeling,
to try and get us to buy
what they want us to buy.
I wonder how good
algorithms are getting at this.
Now that algorithms
know so much about us,
what insights can they learn
from our collective sharing,
or over-sharing?
Black Swan is a unique advertising company
founded by data scientists.
They've moved beyond just
trying to influence what you buy.
They have developed an algorithm that
can predict what products you will want,
even before you know it.
These days we broadcast everything.
I'll say what I had for lunch.
We say who we love, who we hate,
what we had for lunch again.
All of these things,
our thoughts are now being
broadcast on the Internet
and they're there, publicly
available for us to just listen to.
An idea travels across
different social networks
from Twitter to Reddit,
to news channels.
So we're looking at all those things
and then, of course, the volume.
That gives us patterns of these
conversations sitting underneath.
How would you go from all those
public conversations to a product?
We don't come up with the original idea.
We just follow the ideas and
see which ones become important;
going from a couple of
people talking about an idea
to it becoming a movement.
That movement then, we're able
to look at it in our tools and say,
"Look, this is a really
interesting movement.
You should start thinking
about building a product
around this great ingredient or great idea."
Right. And so you're like
a consultant who provides an idea?
Yes, a really accurate prediction
of what's going to be popular.
They build the product.
Tell me, how do you mathematise this?
What does it even look like?
We use natural language
processing techniques
to turn those millions of
conversations into numbers.
We have an algorithm that
sits on the top and it says,
"Look, I've seen this pattern
for the last five years.
I can now project that
in six months' time,
if you invest more here,
you'll get more of a return
than if you invest more there."
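Black Swan's actual models aren't shown, but the idea Steve describes — turning millions of posts into numbers, spotting a pattern, and projecting it forward — can be sketched roughly as below. The keyword, the toy post data, and the simple straight-line fit are illustrative assumptions, not the company's pipeline.

```python
# Rough sketch of "conversations -> numbers -> projection".
# Posts, keyword and the straight-line trend are illustrative only.
import numpy as np

posts_per_month = [
    ["trying a new skin routine", "beta-glucan serum review"],
    ["beta-glucan moisturiser?", "lunch pics", "beta-glucan again"],
    ["k-beauty haul", "beta-glucan", "beta-glucan", "beta-glucan toner"],
]

keyword = "beta-glucan"
mentions = [sum(keyword in post for post in month) for month in posts_per_month]

# Fit a straight line to the monthly mention counts...
months = np.arange(len(mentions))
slope, intercept = np.polyfit(months, mentions, 1)

# ...and project the conversation volume six months out.
projected = slope * (len(mentions) + 6) + intercept
print(f"mentions so far: {mentions}, projected in 6 months: {projected:.1f}")
```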
They call it the world's
first Social Prediction tool,
which is quite an impressive claim, but we'll see.
The idea is that the AI-driven
predictions that it outputs
are supposed to help companies
make faster and smarter decisions
about what they should develop or sell.
What we've opened here is a lens
on all the conversations on the Internet.
If you're a brand that does snacking,
something like crisps,
or something healthy,
then you'd be interested in looking at the
database of conversations around snacking.
We were talking
just now about skincare.
So this is a map of all
the trends around skincare,
things that are just emerging,
things that are getting bigger and bigger
and this is where brands are
really going to start investing. 'Mature'
means people know about it
but it's still going to be big for a while.
Right. So people are
talking about this stuff,
and you're telling me that
in about three to six months' time,
beta-glucan is going to become
a thing I hear about.
Yes, this is an ingredient that will be huge
in lots of products you'll be seeing,
probably, by the time we air this show.
And the size of the market for this
is really important.
And so, is this proprietary?
Is this Skin Routine
going to be a big thing?
-I've never heard of this.
-Well, let's have a look.
So Skin Routine comes from
something called K-Beauty, like K-Pop.
Obviously, as you can see,
it's really popular with 17 to 24-year-olds
and we're also looking
at the kind of products
that people are using at the moment,
like toner rose water.
Like these.
This is an east-to-west culture,
which the west is embracing,
and we can predict that
it's going to be big in the west too.
So this has been a huge advancement
in research and in this kind of technology.
With access to such
a wealth of personal data,
and an ever-growing ability
to process that data,
I'm curious what else
this algorithm is capable of.
And what kind of things
can your algorithm be used to predict?
It can also be useful for things
like predicting cold and flu,
how people are feeling,
and we've also been working
to help predict people
who've got under-diagnosed
disease as well.
Right, because diagnosis
is the recognition of patterns.
-And so this tool can do that?
-Yes.
I read a stat that it would take 320 years
to actually learn everything you'd need to know
to look at every disease.
It's not humanly possible
so we can't do a diagnosis,
but we can, maybe, just push you
in the right direction a little bit.
It is definitely changing everything.
How do you see humans
living in that coded world?
We're already using algorithms
to make decisions,
to influence what we do.
It's just something we'll need to be very
mindful of over the next couple of years.
And in a sense, as a society,
we're making a choice
to give control to the algorithms.
We are. And I think it's almost
unstoppable now,
just because it's helping health,
it's helping us build better products.
We now need to learn to plan for it
rather than try and stop it.
I guess Steve's
technology made me think:
I'm constantly discovering who I am,
what I want,
what I like,
what I want to do,
and here's technology that is
somehow able to predict that,
capture that and see that.
If the machine can
capture my personality,
then who am I?
Maybe the machine knows better.
So it looks like algorithms
are going to make
more and more
of my choices for me.
But will this code revolution
affect more than what I buy?
How will it affect bigger choices?
How much do people
even know about this?
Driving is a constant
process of choices.
What's the best way to get there?
Do I go left?
Do I go right?
Should I go faster or slower?
What do I do when someone else
does something unpredictable?
The choices aren't always easy.
And who would know better
than a London cabbie?
-Hello.
-Hi.
Black cab drivers in London
spend four years
memorising maps
and learning the roads,
but is this redundant
in a coded world?
How long have you
been a cabbie?
Nine years.
There's been a lot of changes
here in nine years.
There was no Uber
or anything like that.
The world was a better place.
Can you describe what
we're going to do
to get to Trafalgar Square, for example?
You're just going forward to Euston Road,
forward into the Angel,
forward into Pentonville Road,
forward in Commercial Street,
right into Prescot Street,
left to Mansell Street,
forward to Tower Bridge,
and then you arrive.
Excellent!
My god! You realise that nowadays,
for someone to memorise a map
is almost an ancient skill?
Yes... It is.
Obviously, technology
is changing stuff now.
-Do you know what an algorithm is?
-Yes
And I know they're used quite a bit
on computers and stuff like that.
I wouldn't say they've really had
any effects on my life that I know of.
And that's the catch, isn't it?
That you know of.
We might not know of the ways
they're affecting our lives.
That's right. Yes. 100%.
Right now, we're at this point
where we're allowing algorithms
to make bigger decisions,
to make bigger choices for us.
Yes
And the choice to let
a computer drive a car,
is a life or death choice, right?
Yes, of course it is.
The computers and robots
are going to take over the world,
in all professions, actually.
You've only got to look in a bank.
There's no one working now.
It's all machines.
What do you think driverless cars
need to overcome
in order to become viable?
What are the challenges?
The main problem is safety.
I wouldn't trust it, personally.
I wouldn't trust a driverless car,
but then it's early days.
Would it make you really sad, Paul,
if half the cabs in London were
replaced by driverless taxis?
Yes, definitely.
It's not just taking away your trade,
it's taking away part of London, really.
That's evolution for you.
Thanks.
So how much longer
does Paul have left?
How far away are we from
driverless cars on our roads?
To find out and learn more about
how algorithms can steer us,
I've come to meet a team
of community coders
eager to bring autonomous
vehicles to the masses.
Cool! You have a race track.
William Roscoe is
a systems engineer by training,
a self-taught coder,
and the co-founder
of the Donkey Car project.
A project that experiments
with driverless toy cars.
So, this is the current iteration
of the Donkey Car.
-Can I hold it?
-Yes, please.
Roscoe is no AI expert,
but his creation uses
neural network software,
similar to what real
autonomous vehicles rely on
to navigate the world around them.
So, the way we train this is,
we'll first drive it around here.
And we'll collect tens
of thousands of images,
that correspond to steering and throttle.
And then we'll move
that data onto the computer,
and we'll train the neural network.
Drawing upon this neural network,
the algorithm learns how to
successfully drive around the track,
making choices on steering and throttle
based on what it has
recorded in training;
learning how to drive from experience,
just like we do.
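The Donkey Car software itself is more involved than this, but the training step William describes — recorded camera frames in, the human's steering and throttle out — can be sketched with a small Keras convolutional network. The layer sizes, image shape, and random stand-in data below are assumptions for illustration, not the project's exact code.

```python
# Minimal behaviour-cloning sketch: learn to map camera images to the
# steering and throttle a human driver gave. Shapes and data are toy values.
import numpy as np
import tensorflow as tf

images = np.random.rand(1000, 120, 160, 3).astype("float32")   # recorded camera frames
controls = np.random.rand(1000, 2).astype("float32")           # [steering, throttle] per frame

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu",
                           input_shape=(120, 160, 3)),
    tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(2),            # predicted steering and throttle
])

model.compile(optimizer="adam", loss="mse")
model.fit(images, controls, epochs=5, batch_size=64)
model.save("pilot.keras")                # reused by the autopilot loop sketched later
```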
And so this is a somewhat trained,
but completely independent,
autonomous car?
-Yes.
-All right, let's try it.
So this is me driving it manually,
and then we can toggle the autopilot.
All right, so it's now propelling itself.
And so you can see that
it is able to follow the track.
Let's see what it does. Not bad.
-It's coming back on track. Look at that!
-Yes, it's seen these...
That's cool.
-It's gone off track.
-Yes.
But is it going to find a new one?
-No.
-No.
A toy car seems like a good way
to test and illustrate self-driving tech.
You can do it in a small space
with cheap cameras,
you don't have to spend
money on a full-sized car,
there's a lot less potential for harm
when it comes to training mistakes.
So what's happening is,
30 times a second,
it's taking an image in,
and trying to guess what the throttle
and steering would have been,
-what I would have given it.
-Yes.
-When you were training it?
-Yes.
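At drive time the loop is exactly what's described here: roughly 30 times a second, take an image in and guess the throttle and steering the human would have given. A minimal sketch, assuming the model trained in the earlier snippet and an OpenCV camera; the motor call is hypothetical, since the car's hardware interface isn't shown.

```python
# Illustrative autopilot loop: read a frame, predict [steering, throttle],
# apply the guess, repeat about 30 times per second.
import time
import cv2                      # OpenCV, assumed available for the camera
import tensorflow as tf

model = tf.keras.models.load_model("pilot.keras")   # network from the training sketch
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    frame = cv2.resize(frame, (160, 120)).astype("float32") / 255.0
    steering, throttle = model.predict(frame[None, ...], verbose=0)[0]
    # send_to_motors(steering, throttle)   # hypothetical hardware call
    time.sleep(1 / 30)                      # ~30 predictions per second
```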
-Can I try training it one more time?
-Yes, definitely.
-I'm not very good at it myself.
-Yes.
So you're teaching it bad habits.
I'm a bit worried about how good
I'll be at training this neural network.
I'm not a driver myself,
so I might feed this
algorithm some poor data.
I'm curious to see how this car
performs with obstacles on the road.
Yes, of course.
Here... Cool.
-First, we'll train going around the obstacles.
-Yes.
I think we have, maybe, 10 iterations
of the car going around the obstacle
and that may not be
enough for it to learn.
You can just imagine,
if real cars were out on the streets,
they'd be training for thousands
and thousands of hours.
Yes.
If we've got good training data
and all the code is working,
it should manage
to get around the track.
-What about the
-Autopilot?
-It's on autopilot.
-Okay, so now I've got it.
It's not bad. No crashes yet.
Getting around that corner.
It's driving rather well though
the lighting is confusing it a bit here.
Yes, but it's okay.
-Oh, Spock.
-Oh, no. Poor Spock.
Oh, no.
And, again, I think you'll
need probably, I don't know,
-Fifty
-A lot more data.
Fifty to 100 iterations of
something to actually train it.
Yes, I get the picture.
-You train it more, it does better.
-Yes.
Do you envision this stuff being
out in the real world someday?
Yes, or some derivation of it.
I guess there's an
element of choice here.
Do we trust human beings to drive cars
or do we trust computers
more than ourselves?
I don't know.
Computers are just so predictable.
Humans are lazy
and they get distracted.
Do you think it's better off
trusting algorithms?
Yes, I think that algorithms
are much easier to regulate.
But I have some questions.
If this car runs into,
or kills someone on the street,
who do you blame?
Do you blame the machine?
I think you blame the owner of the car.
And the owner of the car
would be required
to hold their software accountable.
This is splitting my head a bit.
So the car, or rather, the algorithm then
becomes an extension of the person.
The person might be
sitting in his home.
The algorithm's on the street.
The algorithm does something wrong
and the person sitting
in their home is liable.
Yes, the owner.
I think the liability issue
is probably the largest one.
Self-driving cars on the streets
are going to happen
but there's a rather big step to take,
to make that happen.
These guys are hobbyists.
They're not hardcore coders.
But what they're doing,
playing around in their garages
with these cars
is showing us the potential
of self-driving cars on real roads.
Looking at these cars,
I just think about
how much potential for harm
they could have.
It seemed like these cars
would be legal extensions of me.
And it made me somewhat hesitant
about the adoptability of this technology.
We're handing over control.
And that's scary.
If you scale up the Donkey Car project
to a real-world scenario,
there's a huge number of choices
the algorithm would need
to know how to deal with,
and there is little margin for error.
How can an algorithm learn
so much detail about our world?
It's not just driverless cars.
In recent years, there's been
an explosion of applications
that all use algorithms
to make real-time choices.
From face recognition
to AI-enabled robots,
how do algorithms make sense
of our chaotic physical surroundings
and mimic how we make
choices in the real world?
I'm at a data factory
in Beijing to find out.
What is she doing?
I see, so that is what has
been labelled already.
That's a bus, a car...
So she's seeing what
the self-driving car is seeing.
Yes.
-So you're teaching a car to see?
-Yes.
How much data does an artificial
intelligence brain actually need to function?
And this is the concept
behind machine learning.
You provide data that has been labelled,
-and perceived by human beings.
-Yes.
And the machine, little by little,
begins to learn.
Yes.
-Oh, it's a brain.
-Yes.
You're downloading this
into the computer's brain.
-All these books.
-Yes.
I'm learning that algorithms are only
as powerful as the data we feed them.
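Concretely, "feeding the algorithm data" at a factory like this means producing records along the lines of the sketch below: each frame paired with the boxes and class names a human drew around the objects the customer's list asks for. The file name, classes, and pixel coordinates are invented for illustration.

```python
# Illustrative example of one labelled frame from a data-labelling job.
# File name, classes and box coordinates are made up.
import json

labelled_frame = {
    "image": "frame_000123.jpg",
    "objects": [
        {"label": "bus",        "box": [102, 310, 418, 560]},   # [x1, y1, x2, y2] in pixels
        {"label": "car",        "box": [650, 400, 820, 520]},
        {"label": "pedestrian", "box": [880, 380, 930, 540]},
    ],
}

# A model only learns to "see" buses, cars and pedestrians because
# thousands of files like this were labelled by hand first.
print(json.dumps(labelled_frame, indent=2))
```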
But feeding them useful data
isn't as easy as I thought.
-Is this what I'm going to do?
-Yes.
So, everything that the customer
needs to be seen is in that list?
Yes.
Everything?
So what do I do?
Is it nice?
I was hoping she'd say it's nice.
Seeing how much detail goes into
organising and preparing this data
makes me understand how far away
some algorithms truly are
from working on their own.
Because at this point,
if I stop what I'm doing,
any autonomous vehicle using this data
will make some pretty bad decisions.
I finished it. I made it a truck.
The whole... Everything?
Oh no.
I knew I did something wrong.
Oh no.
It's not good, I know. It's okay.
-Can't you just give the machine a general idea?
-No.
-And the machine can figure it out?
-No.
Will it go to the machine?
You don't trust me to
I think I did a terrible job.
After a point, I was like,
"I'm done with this."
"I'm ready to do something
else with my life."
I guess when I look
at the whole picture,
there are moments of satisfaction.
What it feels like I'm doing
is translating our world
to a world that algorithms understand.
Because we don't see
the world in the same way.
I think all the people working
in this office, in this factory,
are like messengers and translators.
Maybe it's just a new form of labour.
Have we reached a point, as a society,
where we're no longer
working for other people,
but we're now working for the algorithm,
for the code, for the machine?
This isn't the only way algorithms
learn about us and our world though.
In many cases, you don't
need a factory full of workers.
We have been unknowingly
teaching algorithms for over a decade.
It's no secret that social media
has fundamentally changed the way
we communicate and learn
about the world.
What many of us don't realise though,
is that all this data
we're posting about ourselves
is actually being used
to teach AI and algorithms.
How are social media algorithms
using this data to choose what we see?
What are the motivations
behind our feeds?
Are algorithms getting in our heads?
Zeynep Tufekci is a New York Times
columnist and techno-sociologist.
Her research revolves around politics,
privacy, surveillance, data
and how algorithms watch,
judge, and nudge us.
She believes we're building an
artificial intelligence-powered dystopia,
one click at a time.
I want to know who's in charge:
the algorithm or me.
How much choice do I,
or we, really have?
Technology and algorithms
are changing the world
in an almost invisible way.
Could you tell me where
the world is moving to?
These algorithms can
pick up very subtle stuff.
If you go to a shopping site,
they will be keeping track
of where the cursor is.
It knows what you looked at.
If you go on Facebook and
you start typing something
and then change your mind and delete it,
Facebook keeps those
and analyses those.
Why did you not type?
Why did you change your mind?
Google collects your searches,
what you watched on YouTube
and all of those things,
they come together.
And then they are used
to do whatever it is
that the people
with the data want to do,
to sell you stuff,
to persuade you of things,
based on an intense profile of you.
It could have your financial information;
there's health information.
Facebook buys those things.
Whatever vulnerability you have
could be used to target you,
and just you.
And the more data they can collect,
the more powerful
these algorithms can become.
And their advertisers aren't just
trying to sell you luggage or makeup.
They're trying to change
your mind on politics.
You can find people
and scare them individually.
It's not that people haven't tried
to do these things before.
It's just that you couldn't do it
with this much data
and profiling at this scale,
quite cheaply, and
hidden from public view.
And that, I think, is the scary part.
Do you think the problem is,
if I had to sum it up,
that this technology is
causing us to make decisions
-that are not in our own best interest?
-Absolutely.
I see more and more of it
being used for social control.
Right? By the powerful.
What can we as a society do
to control this technology
or to limit the effects on us?
Well, for one thing, we can just
outlaw all sorts of uses, and we can say,
"You cannot use it this way."
We can say we're going
to limit micro-targeting.
Why are you allowed to advertise to me
based on my vulnerabilities?
You want these things to be used
for all the good things they can do
but, if instead, you just
let it be developed in a way
that threatens our privacy,
exposes us to manipulation,
destroys our politics,
that's not a path we'd want to go down.
I hear all the concerns about technology
companies knowing us so well
that they can exploit our vulnerabilities,
and they're already doing that.
But I guess I would believe in
the capacity of human beings
to choose how to respond to that.
And I guess it relates
to what Zeynep was saying,
even though I think
these challenges are huge,
I think that, ultimately,
there is an opportunity
and a moment for us
to choose who we want to be
and how we want to live.
Overall, I would say that
I just see the world changing
and I think we're at an inflection point.
It might seem like we're fighting
an irreversible algorithmic force
trying to take away our choices,
but luckily, there are renegades
trying to help us keep our
independence from the machines.
Stephanie Kneissl,
along with Max Lackner,
created 'Stop the Algorithm',
an art project designed to trick
social media algorithms
into giving us a more diverse feed
and to show people
the true impact of their habits.
The more diverse our feed,
the more the choices we make
are ours and not the algorithm's.
What is an algorithm,
from your perspective?
And why do you think
they've become so powerful?
I think it's because they're necessary.
There is no way that we could use
the Internet the way we're using it now
without algorithms.
There's a lot of junk,
lots of good stuff, everything,
-and you need some way to navigate it.
-Yes, exactly.
-Many cat videos... a lot of news.
-Yes.
And so from there, what happened?
How did you come up
with 'Stop the Algorithm'?
Back then, nobody talked about algorithms.
We had just started to explore
why we were seeing
what we were seeing
on Instagram feeds,
on Twitter feeds,
and on Facebook feeds.
Stephanie's approach
to hack the algorithms
was to build analogue art projects
to disrupt the data
that we feed the algorithms.
Together, we're going to rebuild them
so I can see how they work.
Okay. So this mimics the finger?
This is the finger. Exactly.
This first one is for Instagram.
On Instagram, even if you
just look at something,
you will still see the things again.
It doesn't mean that you
have to like something
-for it to appear in your feed again.
-Yes.
-You're engaging with it by looking at it.
-It measures everything you're doing, basically.
Sometimes you need to give it
a little bit of an initial spin.
Amazing!
-We'll just scroll into oblivion forever and ever...
-Oh, I see
They're feeding an infinite
amount of junk into your feed.
You're using that fact to randomise it.
They're thinking, "We'll just feed
more content to this person."
-Yes.
-And you're like, "Yes, feed it
-Bring it on!"
-Yes.
By randomly scrolling
through the Instagram feed,
the algorithm slowly gets overloaded
and has less of an idea of what
content to target you with.
One of the things that
social media does most
is to figure out who you are as a person.
-It's also looking at who you are following.
-Right.
And then, what are they looking at?
That's something that
this machine cannot do.
But this one can.
Like it needs to distance them
-Oh, these are pushing
-Pushing.
Basically this works like a random
analogue Twitter bot, in a way.
It's working!
Why is Twitter more
powerful than Instagram?
Because of the mentality
of how users use it.
If we read something on Twitter,
we also assume that it's true.
That, in turn, almost changes
how we speak about these topics,
and what politicians think is important.
When you start finding
out that something,
like a couple of thousand tweets,
can actually shift the
political situation in a country,
that's insane.
It's just how massively important
social media has become.
This analogue Twitter bot
randomly likes,
follows and re-tweets
to diversify the Twitter feed.
It should be our choice
to decide what we see online.
And we're not making any conscious
decisions right now with social media.
This brings us to the third machine.
We started looking
at Facebook and basically,
the interface of Facebook
was so complicated
that we couldn't hack it, in a way.
Because what you're doing as a human
is very hard to emulate with a machine.
-So, you couldn't hack Facebook.
-We couldn't hack Facebook.
Facebook wants you to stay
online as long as possible,
so it's only going to show you things
that you reacted to with positive emojis.
So if you react with
a crying or an angry emoji,
you're actually going
to see less of that.
-Really?
-Yes.
We just made a video, which we looped.
And then, it just stops at some point.
Yours is supposed to choose that.
Rate the next political post
with an angry emoji,
-even if you agree with it.
-Okay.
The point is, you should be
taking out your phone,
and start changing the way
you use algorithms.
Stephanie couldn't build
an automated machine
to hack Facebook because
of its complicated interface.
Instead, the device encourages us
to interact with Facebook
in ways we wouldn't normally,
confusing the algorithm that drives it.
-We have all the... Instagram
-Instagram.
-Twitter
-Twitter, and Facebook.
You could leave your devices on
all night and have these things just go,
-and you would have a fresher, more neutral feed.
-Yes.
Would you want one of these
machines in everyone's home?
It shouldn't be necessary.
It's really more about
bringing attention to this topic.
In a perfect world, these machines
definitely wouldn't have to exist.
As I watch this, I think that
what often gets forgotten
is that social media
is dependent on us.
We willingly participate
in this system that controls us.
This randomness is very important
because of the filter bubble
we are all stuck in.
-Hashtag
-What do you mean by filter bubble?
So you will, in the end, only see
content that you agree with.
And it's actually very important
that you also look at
different sides of something.
Social media, in the beginning,
was about connecting people,
and now it feels like it is
increasingly driving us apart too.
Do you think it's taking away
a certain freedom to choose, in a sense?
Yes, definitely.
But we don't even notice it.
We need algorithms,
but it's not set in stone
how those algorithms work.
People started making the systems
and building the elements of them
before ever establishing whether
this is how we want it to be.
I'm coming to see the
inevitability of algorithms,
and it's funny that these
little analogue machines
are the first time that I'm actually
thinking about stopping them.
They almost seem childish,
but they have this kind of life to them,
and I'm seeing another
aspect of the algorithms.
Not only how the algorithms control me,
but also how they influence me
to interact with them in a certain way.
But very quickly we step back
from the machines and go,
"Wait a minute.
Maybe the algorithms are usurping
some part of my identity
that I don't want them to.
That we, as a society,
should not allow them to."
Through these simple machines,
we can start to think about
how we can take back power.
So algorithms can manipulate me,
and a lot of the time, it feels like
they're making or taking my choices,
even when I don't want them to.
But does it have to be that way?
Can we synthesise this code
to transcend the idea
that it's us versus the machine?
-You're a graffiti artist.
-Yes.
Do you think algorithms and math
can be as creative as human beings?
No. I think that with robots,
you can make perfect things.
But an artist, a human being,
can make some mistakes,
and I think that mistakes
are really good for art
because so many times,
creativity comes from an error.
They have mathematics,
and mathematics is perfect.
-Art is not.
-Yes.
Possibly.
But what if the best results come
from us working with the algorithms,
as opposed to working against them?
Sharing choices.
In art, the choices made
distinguish the good from the great.
I'm at this gallery in Boston
where they're showing art
that has been made by humans
who collaborate with machines,
with artificial intelligence.
There's a bunch of stuff
that's really organic,
and kind of scary in a way.
Who created it?
Was it the human
or was it the machine?
I'm curious. It certainly intrigues me
and makes me want to know more.
All of the work currently on display
here is by one artist, Alexander Reben.
I met with the gallery's curator
to find out what is so special
about Alex's work.
There are not many artists who are using
artificial intelligence in making art today,
and he's one of the
more interesting ones.
The paintings are
what he calls amalGANs,
and they're generated through
generative adversarial networks, GANs.
It takes a bunch of imagery
and then it forces
the imagery against itself.
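"Forcing the imagery against itself" is a loose way of putting it: in a GAN, a generator network invents images while a discriminator network tries to tell them apart from real ones, and each improves by competing with the other. A toy sketch of that adversarial loop follows; the network sizes and the random stand-in "images" are illustrative, not Reben's actual setup.

```python
# Toy GAN: generator invents fake "images" (here just 64-number vectors),
# discriminator learns to tell them from real ones, and each trains
# against the other. Sizes and data are illustrative only.
import tensorflow as tf

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(64),                    # a fake "image"
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(1),                     # real-vs-fake score
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)
real_images = tf.random.normal((256, 64))         # stand-in for real imagery

for step in range(200):
    noise = tf.random.normal((32, 16))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise)
        d_real = discriminator(real_images[:32])
        d_fake = discriminator(fakes)
        # Discriminator wants real -> 1 and fake -> 0; the generator wants
        # its fakes scored as 1, i.e. to fool the discriminator.
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```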
What I love is the process.
I love the fact that he's really
trying to make art into something
that he's not necessarily responsible for.
I think it's interesting
that Alex is relinquishing
some of that choice
in the creation of the artwork;
that choice which artists
identify with so much
as fundamental to their creation
and the act of creation.
Alex is somehow sharing
that with a machine,
and he's okay with that,
and I think that's fascinating.
I really want to meet the man,
or perhaps the machine,
behind these works of art.
Is this AI-generated art?
Those are thought renders.
These all have different
aspects of generative artworks.
What's this?
These are all robots
which play with toys.
-Can I turn this on?
-Yes, give it a shot.
Look at that
It's the automation of fun,
automating things which
probably shouldn't be automated.
-It almost takes the fun out of it.
-Yes, that's exactly the point.
And what's that?
Those are headphones which
reduce your ability to speak well.
It's like a digital drug or
a speech scrambler, in a way.
-I can see it's making me slur my speech.
-Yes.
-It's really hard work.
-Yes.
-You're doing pretty well, though.
-Do you think so?
Yes, you're not stuttering
as much as some people do.
And most people, I don't think
they understand how technology
can actually influence
our way of thinking,
and that's a very quick way
to just experience that sort of thing.
Yes. Is that art, as well,
that you made?
This is kind of interesting.
On the left, you see the code
which generates what you see
on the right in a web browser.
-So that piece of code made that?
-Yes, it made that
in the Chrome browser, exactly.
I'm keen for Alex to show me how
he collaborates with the AI to generate art,
like the GAN artwork
I saw in the gallery.
What I have set up here is actually one
of the steps for amalGAN that you can try.
Let's try it.
So what people are calling AI
are usually things like
machine learning or deep learning.
Deep learning is a subfield
of machine learning
that's more specialised,
and it's what has enabled a lot of
the recent breakthroughs in machine learning.
Even things like Google's voice
recognition and their voice synthesis,
they're all based off these
new deep learning systems.
This is a website called Ganbreeder.
The backend of this is
something from Google.
I can select a new gene
or add a random one,
so I'm going to select one.
And I can go to 'animals'.
I like animals and I'm going
to pick an Arctic fox because it's cute.
And then, I'm going to say,
"Add a random gene."
There are so many different options.
It's just crazy.
Yes.
Dial telephone,
and then African hunting dog.
Okay. Create images.
Oh
It combined those terms together,
and this is the image
it thinks best represents those.
You have a couple more tools here,
where you can either make them
more similar or different from each other.
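Under the hood, a Ganbreeder-style "gene" is essentially a class vector handed to a BigGAN-like class-conditional generator, and mixing genes amounts to blending those vectors before sampling an image; shifting the blend weights is what makes the result lean closer to, or further from, one ingredient. A rough sketch of that blending idea, with made-up class indices and a stand-in function in place of the real generator:

```python
# Sketch of mixing Ganbreeder-style "genes": blend class vectors,
# pair them with a random latent, and hand both to a generator.
# Class indices, weights and the generator are stand-ins.
import numpy as np

NUM_CLASSES = 1000                        # ImageNet-style class space
rng = np.random.default_rng(0)

def one_hot(index):
    v = np.zeros(NUM_CLASSES)
    v[index] = 1.0
    return v

genes = {                                 # hypothetical class indices
    "arctic_fox": one_hot(279),
    "dial_telephone": one_hot(528),
    "african_hunting_dog": one_hot(275),
}

# Blend the genes; changing these weights pushes the output to be
# "more similar or different" to each ingredient.
weights = {"arctic_fox": 0.5, "dial_telephone": 0.3, "african_hunting_dog": 0.2}
class_vector = sum(w * genes[name] for name, w in weights.items())

z = rng.normal(size=128)                  # random latent vector

def stand_in_generator(z, class_vector):
    """Placeholder for a BigGAN-style generator; returns a dummy 'image'."""
    return np.outer(z[:64], class_vector[:64])

image = stand_in_generator(z, class_vector)
print(image.shape)                        # dummy 64x64 "image"
```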
Okay. So, if I change Arctic fox
All right!
-There you go.
-Oh, no! This is even worse.
It's a bit more abstract.
How do you use
this website in your art?
This particular site and technology
was used as one of the steps
in a work that I was doing called amalGAN.
Basically what that does is it looks
at my brainwaves and body signals
as it shows me versions
of images it creates from here.
It uses that information to try
to determine which image I like best.
In this series of work,
which Alex refers to as amalGANs,
the selection process takes place
entirely between Alex's mind and the AI.
The algorithm reads Alex's
thoughts and chooses for him.
It's an algorithm that Alex himself
has trained to work only with him
in a truly unique partnership.
For the rest of us though,
we can still collaborate with AI
on the Ganbreeder website.
Just coming back to this picture,
I can now go back and change the genes.
I can select a new gene.
You can add more or remove some.
I want to use a tank.
-See?
-Oh, no!
The more you have,
the more abstract things get,
just because there are
so many things mixing together.
And why haven't
we seen these before?
Because the processing power
hasn't been around before
to make this sort of thing.
The algorithms are new.
Things we've never seen before
are being generated by these systems.
So your point here is that I can
I can make some choices.
I can choose the components
that will go into the artwork,
and then the machine
puts it together for me.
-So, I'm collaborating with the machine.
-Right.
With a few clicks, you get some
cool-looking, interesting images.
So who's making the art,
in this case?
If you look at it purely
as a tool, like Photoshop,
you can say you're
obviously the artist,
who uses code and data
as their paintbrush, I suppose.
And it can be
a collaborative experience,
given that the technology
is so advanced.
But I think there's, at least now,
still a human in the loop.
But at some point,
things might be automated
to where you're not making
many active decisions.
After having tried it for myself,
I can certainly say there is
some skill involved. Or luck.
Because not much was gained by me
sharing my choice with these algorithms.
The images that the AI
generated with me
were in no way as interesting
or as beautiful as Alex's.
But how much longer will it be before
the human element isn't necessary?
What amazing artwork will
algorithms come up with by themselves?
For me, the medium
and Alex as the artist
are inseparable.
He is an artist
because of the machines.
They're co-creating this,
and that's something new.
In some sense,
I guess what ties together
many of the experiences
that I've had with AI to this point is,
there's a boundary
at which the humans have to let go,
and say, "I've made the AI,
but now the machines take over."
One thing is for sure,
algorithms aren't going away.
Already we're using algorithms
to make decisions,
to influence what we do.
Inevitably, they will
take over more of our choices,
make more decisions for us.
Do we trust computers
more than ourselves?
Once we're using it,
we forget about the fear entirely
and we just think it's cool.
Before it gets to us,
we're afraid of it.
Are we actually any better
at making decisions ourselves?
Oh, no. This is even worse.
I've learnt that often, algorithms
are operating in the shadows,
hidden from the public view,
but still having a huge effect
on how we live our lives.
They're trying to change
your mind on politics.
Technology is messing with our minds.
I've seen how algorithms are
learning about our physical world,
and starting to understand it.
So, I've got it.
Things we've never seen before
are being generated by these systems.
Hopefully, self-driving cars will be
at a point where we'll never need a car.
It's important to remember that
algorithms are neither good nor bad.
They're just tools that we have
created to help us.
It allows us to participate in the world,
like we never could before.
It is definitely changing everything.
Ultimately, the decision to
hand over choice to an algorithm
is itself an act of choice.
In the end, we still control the choices
if we choose to.
Next Episode