Coded World (2019) s01e02 Episode Script

Freedom

An algorithm is a set of instructions
to follow, to achieve an end,
whether that end is
to solve a problem or make decisions.
Basically, it's a recipe.
They sound complex,
but an algorithm is actually quite simple.
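To make the recipe idea concrete, here is a minimal sketch of an algorithm written out in Python, a short, precise sequence of steps for finding the largest number in a list:

```python
# An algorithm really is just a recipe: exact steps toward an end.
def largest(numbers):
    best = numbers[0]
    for n in numbers[1:]:   # step through the remaining items one by one
        if n > best:        # keep whichever value is bigger
            best = n
    return best

print(largest([3, 41, 7, 12]))  # -> 41
```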
Nature uses algorithms,
governments use algorithms,
corporations use algorithms.
They give us greater access
to the world,
but they are also arming Big Brother
against us.
We're using algorithms
to influence what we do.
Algorithms have become
such a part of people's lives.
Social media in the beginning
was about connecting people.
-Hello!
-Hello.
And now, it feels like
it's driving us apart.
Corporations, governments,
computer systems, are now watching us.
"To death with the Surveillance State."
AI is replacing people's work.
I'm Anjan Sundaram.
I'm a journalist, author,
and mathematician.
I left mathematics because I didn't feel
like math was connecting me to real life,
real people, and the real world.
Oh, hello.
That has all changed.
Algorithms have changed it.
We can't ignore the algorithms anymore.
Most people don't know the limitations
of an algorithm.
And we've trained most people
to just be scared.
In the battle for power,
algorithms have disrupted democracy
and rocked our nations.
We need to be more vigilant
with what we trust from the Internet.
Disinformation is the new frontline,
making it harder for us
to know what to trust.
Every day, we are watched,
identified, and tracked.
But what can we do to fight
for our privacy and for our freedom?
I want to understand this covert war.
Do algorithms give us more freedom
or are they enslaving us
to technology?
We are told that algorithms call the shots
and control our lives.
But they can also help us.
Algorithms are behind a huge number
of innovative products
that are transforming our lives.
As a foreigner, I've come to Shanghai
to test some of these out.
With the help of my tablet,
I'm going to use algorithmic apps
to explore the city.
Can I feel more liberated
in an environment I would have
once found hard to navigate?
I'm going to play tourist today.
I don't even understand
the street signs
but I'm wondering what an algorithm
might help me discover in this city.
So we're going to take a picture here.
This giant red building.
And then, I'm going to ask the algorithm
what this building is.
Oh, it says it's the China Art Museum.
And I guess what the algorithm did
was to identify this building
by matching it against images
of this building on the Internet.
There it is.
It took a while
but it's getting there.
Not long ago, this would have belonged
to the realm of futuristic movies.
But now, I can point my camera
toward any sign, text or image.
Algorithms can then give me
a live translation,
identify an object,
or give me context
to the things I see.
This makes me feel freer
to explore the world around me.
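One ingredient behind apps like these is a pretrained image classifier. A minimal sketch using torchvision; the photo filename is hypothetical, and the real services also match against web-scale image databases and add translation on top:

```python
import torch
from PIL import Image
from torchvision import models

# Load a classifier pretrained on ImageNet, with its matching preprocessing.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("building.jpg")  # hypothetical snapshot from the street
with torch.no_grad():
    probs = model(preprocess(image).unsqueeze(0)).softmax(dim=1)

# Report the best-matching label and its confidence.
best = probs.argmax().item()
print(weights.meta["categories"][best], float(probs[0, best]))
```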
How are you?
-How are you?
-Yes, how are you?
I've ended up walking through
this gritty neighbourhood in Shanghai
and I'm wondering if the algorithm
can help me navigate this market here.
One second
What is this?
Eggplant?
Is it a Chinese kind of eggplant?
-Yes, sir.
-Cool, thank you.
It's such a beautiful fish.
I'm wondering
what kind of fish this is.
Let's see what the algorithm tells me.
Oh, that was quick.
It's a bream fish.
It's native to the Yangtze basin
in China.
It's as though this market
has suddenly opened up to me
just because of the existence
of algorithms
that are helping me
see everything around here
and make sense of it
and interact with it
in a way I couldn't do before.
Where are these shrimps from?
I'm kind of blown away.
Everything in that market
was being translated first into code
and then into a language
that I can understand.
Even though I feel freer
with the algorithm in my hand,
I can't help but notice that
there are cameras everywhere.
And there's a new element
to the CCTV cameras,
facial recognition technology.
Governments now use algorithms
to identify people at borders,
unlock smartphones, spot criminals,
and authenticate banking transactions.
But in China, especially,
algorithms have been embraced
in more surprising aspects of life.
In 2018, the world's
first AI-focused park opened in Beijing
with augmented reality Tai Chi,
driverless shuttle buses,
and smart running tracks.
I'm here with Coco,
my local translator,
who is going to help me use
the AI-powered running track.
It uses facial recognition
to monitor my movements.
Body data such as heartbeat and pulse
are then recorded and analysed.
On the face of it,
it seems like a bit of harmless fun.
-You have to scan the QR code,
-Okay.
-to register your face in the system.
-Okay.
-So I'll take a selfie.
-Yes.
And the face recognition
is going to use this face?
-Yes.
-Oh, so I can go without the phone?
Yes.
I've just offered my face to this camera
to measure how fast I'm running,
how long I'm running.
But this technology
could actually be used on the streets
to watch me do anything
and that's maybe a darker,
more worrisome side to this technology.
Let's see how well I did
on this run today.
I hope it recognises me.
It said my face
was an 80% match.
Oh, yes! Okay.
So I ran for almost a kilometre.
And now that I've finished my run,
I just feel watched.
I guess what Baidu has done
is to create this artificial intelligence park
as a way of acclimatising people
to having machines
and artificial intelligence
in their environment.
I do wonder what will be left
of my private identity though.
Because even to take a little jog
in this park,
it needed my photo,
it needed my phone number,
all this personal information.
I feel I don't own my personal
information anymore,
and I have no control over it.
Now that Baidu has gathered this data,
what will they do with it?
Could algorithms be used to match my data
and follow me around the city?
In China, facial recognition cameras
are used to enforce public order.
Images of jaywalkers
and other lawbreakers
are named and shamed on big screens.
Police can then fine offenders.
Will they recognise me?
I can see one, two, three, four cameras
watching me now
and what's disconcerting
is that the cameras not only see me,
but they also recognise who I am.
After the camera captures my image,
the algorithm then tries to identify me
by matching me to its database.
My image then flashes up on the screen
to publicly embarrass me.
But it certainly worked.
And the system also works
because I'm feeling a little strange
that everyone's looking at me right now.
Algorithms don't always get it right.
Analysts have said that
people are regularly misidentified
and wrongly stopped.
If facial recognition can have
such an impact on my freedom,
I want to find out
how the algorithms work,
and how easy it is for governments
and corporations to track my every move.
Amrullah is the whiz kid
responsible for developing a cutting-edge
AI facial recognition algorithm
that can not only identify
who you are
but can also determine your age
and emotional state.
How would you begin
to design an algorithm?
So facial recognition,
you can break it down
into a few processes.
The first part is finding a face.
It starts doing this thing
called encoding,
which means turning your face
into a 128-dimensional vector.
Okay.
So, the layman way of thinking
about this is that
you have this face and it starts
connecting all these little features, dots,
-and the distances between these features.
-Right.
Once it picks up
about 128 dimensions of it,
it starts to connect all of them
and form a mesh.
And this is Anjan's face
to a computer.
That's what an algorithm sees
on your face.
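What Amrullah describes maps closely onto the open-source face_recognition library, which produces exactly this kind of 128-dimensional encoding. A sketch, with a hypothetical image file:

```python
import face_recognition

# Find any faces in the photo, then encode each one.
image = face_recognition.load_image_file("anjan.jpg")  # hypothetical photo
locations = face_recognition.face_locations(image)
encodings = face_recognition.face_encodings(image, locations)

# Each face becomes a 128-dimensional vector: a face as a computer sees it.
print(len(encodings[0]))  # -> 128
```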
I've taken a photo of myself today
to see if the algorithm
can work out who I am
by comparing the photo to those images
that are publicly available
on the Internet.
I'm going to see whether
any random camera on the street,
by taking a picture of me
using this algorithm,
-might recognise me.
-Yes.
-Okay.
-That's scary.
Before matching,
it actually has to encode it.
Okay.
-Oh no! It recognises me.
-Yes.
-It says true.
-That's horrible.
The algorithm has successfully matched
the photo I took earlier
with an image of me on the Internet.
I'm a little bit shocked
that an algorithm so simple
actually recognises me.
That means I'm recognisable
by every camera on the street.
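The matching step demonstrated here can likewise be sketched in a handful of lines with the same library, comparing a street photo against a known encoding; the filenames are hypothetical:

```python
import face_recognition

known = face_recognition.face_encodings(
    face_recognition.load_image_file("photo_from_internet.jpg"))[0]
unknown = face_recognition.face_encodings(
    face_recognition.load_image_file("street_camera.jpg"))[0]

# True if the two encodings fall within a distance threshold (default 0.6).
print(face_recognition.compare_faces([known], unknown))  # e.g. [True]
print(face_recognition.face_distance([known], unknown))  # the raw distance
```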
It feels a bit like
you're being watched all the time.
It's not a scary feeling,
but it feels as though
you have to keep your behaviour in check.
So you know that
if you're walking down the street
-with someone who is, say, a criminal
-Yes.
the government would see you
as associated with that criminal.
That's a possibility that could happen.
This sounds and feels like "1984".
Big Brother!
-Big Brother! Big Brother!
-Big Brother! Big Brother!
In some way,
I think this technology isn't that new.
What has changed is the amount of data
that people have collected.
Data is the fuel for algorithms.
To teach a machine how to better read
and recognise a human face,
it has to be trained
using hundreds of thousands of faces.
Images can be collected
from social media platforms.
These images are often gathered
without explicit consent.
What algorithm are you using here?
So, we are using
three sets of algorithms,
and what you see here
is a combination of all three.
So, the first one that we have
is face detection.
Basically, in a scene like this,
it's trying to search for faces in the crowd
and trying to pinpoint
the coordinates of those faces.
And once it determines
that this is a face,
then it sends it to the second algorithm,
which is the age and gender detection.
Right. And what is
the third algorithm?
The third one is quite obvious.
You see the bars at the bottom?
-It's the emotion recognition.
-Oh, emotion. Got it.
It's not just taking a still image,
-it's actually taking a sequence of frames
-Right.
across time.
Then it gives you
the four emotions there:
angry, happy, sad, surprise.
My god, what does it think I am?
It thinks I'm happy.
-It thinks you're happy.
-More or less.
-Yes.
-If I frown,
-it thinks I'm angry. Wow.
-Yes
So it's tracking my eyebrows,
my forehead.
It's treating your face
like a canvas.
And it's actually taking that as an input
to try to guess what you're feeling
at the time.
Emotion detection technology
requires two techniques,
computer vision
to precisely identify facial expressions,
and machine learning
that uses algorithms
to analyse and interpret the emotional
content of those facial features.
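The three-stage pipeline demoed above might be sketched like this. OpenCV's bundled face detector is real; the age/gender and emotion models are placeholders standing in for the lab's own trained networks:

```python
import cv2

# Algorithm 1: a stock OpenCV face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

EMOTIONS = ["angry", "happy", "sad", "surprise"]

def analyse_frame(frame, age_gender_model, emotion_model):
    """Run all three stages on one video frame (the models are hypothetical)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Find the coordinates of every face in the scene.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = frame[y:y + h, x:x + w]
        age, gender = age_gender_model(face)               # Algorithm 2
        scores = dict(zip(EMOTIONS, emotion_model(face)))  # Algorithm 3
        yield (x, y, w, h), age, gender, scores
```

In the real system, the emotion stage also looks at a sequence of frames over time rather than a single still.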
Initially, cameras and algorithms were
used to monitor our real-time reactions
to ads and products
for market research.
Now, mining our emotions
on a mass scale
has grown into
a US$20 billion industry
in areas such as medicine, security,
and the automotive industry.
Algorithms can detect
when to give a patient more medicine,
build better smart car features,
or spot suspicious behaviour.
The default position seems to be that
everyone knows everything about me.
If I'm walking on the street,
the facial recognition software
will track me.
If I'm browsing on the Internet,
my movements will be tracked.
If I want to protect my privacy,
I need to take steps.
I need to educate myself.
I think that notion
of taking personal responsibility
is still really important.
What are the most important things
that civil society needs to be educated
in terms of facial recognition,
privacy, and protecting our freedoms?
Right. I think the number one thing
is don't rely on the government.
Okay.
What else does the individual need
to be educated on regarding this?
-Learn to write code.
-Okay.
The laws that govern my freedom
in the physical world
have not yet been encoded
in the virtual world.
And so, that virtual Anjan, I think,
is less free, has fewer rights.
And what's scary is that
I knew the cameras could see me
and I kind of made peace with that.
But now, I know that cameras
can recognise me
with just five lines of code
that's publicly available.
I don't think I have that much power.
I think the power has been given away
and other people have that power.
Corporations, governments, coders.
I need to think about the friend
who takes my photograph
the next time at a party.
I need to realise that
that photograph
makes it easier for algorithms
to recognise me.
That photograph makes me less free.
There are also cultural differences
and diversity in facial expression.
Algorithms are often trained predominantly
on white faces.
And this can cause them
to reproduce biases
that are more likely
to prejudice minority communities.
Essentially, if the data input
into the algorithm is biased,
then, so too,
is the output it generates.
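A toy simulation makes the point: a decision threshold tuned on one well-represented group misfires far more often on an under-represented group whose data looks slightly different. The numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Group A dominates the training data; group B is barely represented
# and its feature distribution is shifted slightly.
group_a = rng.normal(0.0, 1.0, 100_000)
group_b = rng.normal(0.5, 1.0, 1_000)

# Tune the decision threshold for a 1% false-positive rate on group A only.
threshold = np.quantile(group_a, 0.99)

print("false positives, group A:", (group_a > threshold).mean())  # ~1%
print("false positives, group B:", (group_b > threshold).mean())  # far higher
```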
Facial recognition
is not the only algorithmic area
charged with reinforcing bias
and bolstering the surveillance state.
I'm going to travel
to the other side of the world
to find out about the impact
that algorithms are having
on the freedom of one community.
Data-driven policing
is increasingly popular
with police forces around the world
and has become commonplace
in North America.
Algorithms are used to predict
who might commit future crimes
and the places
where it is most likely to happen.
I've come to the epicentre of policing
in LA, Skid Row,
to meet those living in the community
and find out what they think
about predictive policing.
It's an area that has
become synonymous with poverty,
homelessness and crime.
So, Skid Row extends
for how many streets that way?
It's a 50-block area,
where all the homeless people are.
The enforcement zone
is a 15-block area,
about 82% black.
40-60% of the people here
have some form of a disability.
And about 90% of us
are broke, below the poverty level.
And are you being monitored
by the police?
Yes, I've been monitored
by the police.
And what does that feel like?
They've been following me
and watching me a lot.
It is frustrating, you understand me?
You get mad.
There are times
when you're just out,
doing your thing, being personal,
you understand me?
You got somebody
that's watching you.
Do you feel like a criminal already?
Of course. Yes.
They make you feel like
you're public enemy number one.
Even if you have not done anything,
under the guise of pre-emptive policing,
your behaviour gets observed,
your information gets data-mined,
and then you are traced,
tracked and monitored.
In some sense,
the police have a digital version of you
-in their database.
-Absolutely.
And they're conducting policing
on that digital version.
Absolutely.
Even the fact that we're standing here
in this neighbourhood
Right.
Given how predictive policing is conducted
and how the algorithms are designed,
we are more criminal.
I am more criminal.
You are more criminal
just by being here,
-Absolutely.
-being associated in this neighbourhood.
Absolutely.
The community is clearly upset
by the police's use of predictive policing.
They feel that the algorithms
are reinforcing discrimination.
Dr Jeff Brantingham is a pioneer
in predictive policing.
He created PredPol, and claims
the algorithms are currently being used
to help protect one out of every 33 people
in the United States.
The Los Angeles Police Department,
better known as the LAPD,
is one of 50 police forces in the US
that have embraced PredPol.
What is PredPol?
It's an on-demand day-to-day tool.
Every morning,
a police officer comes in
and they're getting ready
for their 6am shift
and they go into roll call
and their supervisor says,
"Okay, here are your predictive missions
for the day.
I want you to concentrate
on these locations."
These are brand new predictions
using data that just came in
telling them about where
the risk of crime is greatest today.
They're looking at this data and
thinking about where they want to move
and where they want to travel
and locate themselves.
That's right.
I think everybody agrees that,
whether we're talking
about medicine or policing,
that prevention is the ideal. Right?
There's an even more
critical connection here.
The public is integrally related
to the algorithmic process.
For the vast majority of crimes,
police have no idea that they occur
unless a member of the public
calls them to say,
"My house has been broken into."
So actually, that call to the police
generates a data point
that then goes
into the algorithmic forecasting
that impacts how police are going
to distribute themselves tomorrow.
So it's a system that
takes certain input data
and spits out an answer.
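PredPol's published model is a self-exciting point process; the input-to-answer loop described here can be caricatured with a much simpler decaying-hotspot score. The place names and decay rate below are invented:

```python
import math
from collections import defaultdict

DECAY = 0.1  # assumed risk decay per day

def hotspot_scores(reports, today):
    """reports: (grid_cell, day) pairs generated by calls to the police."""
    scores = defaultdict(float)
    for cell, day in reports:
        # Each reported crime adds risk to its cell, fading as days pass.
        scores[cell] += math.exp(-DECAY * (today - day))
    return sorted(scores.items(), key=lambda kv: -kv[1])

# The top-scoring cells become the officers' "predictive missions".
print(hotspot_scores([("5th & Main", 1), ("5th & Main", 9), ("Park", 3)], 10))
```

Note the feedback loop the activists worry about: more patrols in a cell can produce more reports from that cell, which raises its score again tomorrow.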
What is your perception of how communities
have received your product?
There are two sides to that, right.
On the one hand,
when the police start using this,
often, they'd get calls
from members of the public saying,
"You know what?
I saw a police car on my street,
and I haven't seen a police car
drive down my street
in a very long time.
And that was really nice."
The other way is
if they don't know what's happening
and they just hear predictive policing,
they think of this
as about targeting people.
It's not about targeting people.
It's about finding those places
where the risk of crime is greatest.
And by policing those locations,
you're actually preventing the crimes.
If your algorithm says this crime
is more likely to occur in this park,
-Right.
-and I happen to be walking through that park,
I'm going to experience a higher rate
of scrutiny from the police.
I could be walking somewhere else
in the city and I'd be totally benign.
And the moment I stepped in here
to the police,
I look more like a criminal.
Or you might look more like
a potential victim
-and they step up to help you. Right?
-Okay.
You're trusting the algorithm.
It's not always going to
predict perfectly
that a crime is going to occur
in this location.
But, if over time, it gives you
a little bit of an edge
to get out in front of the event,
then police officers actually see,
"You know what,
this is giving me some value."
I think math is very powerful
and that's both positive and scary.
And I think Jeff was just taken
by the beauty and the power of it
to predict, to distil messy,
complex worlds and neighbourhoods
and crime situations
into a single line of math.
The community is fighting back
against the PredPol algorithms.
Hamid is at the helm of a movement
that's come here to protest
against the LAPD's use
of data-driven policing.
Good morning, everybody
and welcome.
Welcome to our press conference.
I'm with the
Stop LAPD Spying Coalition.
And last year,
the Stop LAPD Spying Coalition
released a report on predictive policing
that triggered a whole series of events,
which led to the city holding
public hearings on data-driven policing
at City Hall in July of 2018.
It's inherently racist
and all of this new technology
attempts to simply put a glossier,
more comfortable view on it.
But we say, "To death
with the surveillance state."
Shut them down!
What do we say? Shut them down.
-Shut them down!
-Shut them down!
-Hey, hey! Ho, ho!
-Hey, hey! Ho, ho!
-PredPol has got to go.
-PredPol has got to go.
-Hey, hey! Ho, ho! PredPol
-Hey, hey! Ho, ho! PredPol
The LAPD are having a hearing
on data-driven policing.
Activists believe that
instead of eliminating bias,
these algorithms are entrenching
pre-existing inequalities.
Every day, every single second,
every minute, every hour,
every week, every month, every year,
I'm targeted as being a man
of African descent.
And you come back to me saying
there's no such thing as bias.
I'm seeing first-hand for the first time
these people speak, and speak to power.
And that's kind of inspiring.
Let's go!
People change their minds every day.
How can you predict
what somebody is going to do?
This is America.
We're supposed to be
a progressive community.
And this is still happening in 2019.
The police were talking about using math,
predictive policing and PredPol,
the software to make policing
more efficient,
to make it more targeted.
You know, only going after the bad guys.
-Do you believe them?
-No. No.
I just think that there needs to be
a level of consent from a community
before you deploy high technology
and algorithms
that are tracking people,
that are leading to arrests.
The Stop LAPD Spying Coalition would like
to dismantle the software entirely,
but from speaking to the researchers
and visiting the labs,
I just felt that
this technology is unstoppable.
I think more than many of the other places
I've visited so far,
I got a real sense
of how vulnerable people could be
in the face of algorithms.
This community feels that algorithms
are making them less safe, and less free.
But what if there's an even deeper level
of intrusion into my privacy?
One that I could have
even less control over.
George Orwell's 1984
projects the ultimate evil.
Total control of the human will.
Authorities can use algorithms
to read my face,
to trace me, and even to predict
certain behaviour.
But what if they could also tap
into my brain,
and know what I'm feeling?
-Big Brother! Big Brother!
-Big Brother! Big Brother!
I've come to one of the main centres
for brain surveillance research in China
to see if it's really possible,
and if so, what that would mean
for our freedom.
Ningbo University hosts the
government-backed project, Neurocap.
The device uses wireless sensors
placed in helmets or hats
to monitor the wearer's brain activity,
and the data is sent
to an algorithm for analysis.
To date, Neurocap has been rolled out
in the manufacturing
and transport industries,
as well as in the military in China.
I'm here to try out
this brain monitoring device.
What are you putting on?
It's your brain waves.
What are the different lines?
This is the low beta, high beta,
low gamma, and high gamma.
-Okay.
-Yes.
What are alpha, beta and gamma?
These brain-generated frequencies
are detected and analysed
by the algorithms
to monitor changes
in the emotional states of its users.
It's not designed
to read our thoughts per se,
but to track what we are feeling,
like anxiety, tiredness,
depression and anger.
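A rough sketch of the signal-processing step behind such a device: estimating the power in each named frequency band from a raw EEG trace. The sampling rate and band edges are assumptions; real systems would derive fatigue or anxiety scores from ratios of these band powers:

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz
BANDS = {"alpha": (8, 12), "low beta": (12, 20), "high beta": (20, 30),
         "low gamma": (30, 50), "high gamma": (50, 80)}

def band_power(eeg, fs=FS):
    """Average spectral power of one EEG channel in each named band."""
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# One second of random noise stands in for a real recording.
print(band_power(np.random.randn(FS)))
```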
So, my mental tiredness
is 40 out of 100.
The baseline is 50.
My mental load is 42,
so it's below average.
What is this one?
-Arousal.
-Oh, arousal? I'm not aroused at all.
And attention
I'm paying a lot of attention to you.
-Yes.
-So I'm a good student.
-Yes.
-Yes.
Good.
Does the Neurocap data-mine the brain?
Why is the alarm going off?
And so, the employer can choose
to give me a break now?
-Maybe?
-Yes, yes.
The team are also simulating
different working environments
so that they can understand what
people are feeling in certain conditions.
I'm going to experience
what it feels like
to be a worker under surveillance.
Proof that hosting a TV show
is not that easy.
I feel like an experiment subject.
This feels very strange.
I just wear the shoes, yes?
I can't even move.
Quite restrained.
Oh, wow! Nice. Cool.
Wow, this is kind of high.
All right, so I'm going to fall
from 20 metres.
So, say, I'm a building worker
or window cleaner
working on the skyscraper.
There is some sense of vertigo.
In a relatively safe environment,
I guess I'm experiencing
some kinds of emotions
that workers in precarious situations
might feel.
And maybe the Neurocap,
or even the sensors, would alert employers
to sensations that
workers were feeling,
if they were scared
or tired and more prone
to accidents.
It literally feels like
I'm walking off the building.
Okay. I'm going to jump off now.
Oh, my god!
When I walked into the university
and saw Department of Neuromanagement,
I was kind of surprised.
Is there a field now
that's seeking to manage our minds?
People have access to my mind
and what I'm feeling with simple sensors,
a simple cap and basic algorithms.
And in a work scenario,
wearing a Neurocap
and knowing that someone with power
over me can track my emotions,
might not make me feel so safe.
And yet, this might be
the future of work.
I'm afraid that I've unintentionally
given away my freedom
to governments and corporations.
I might not yet wear
a Neurocap every day,
but algorithms can watch us,
and they're on track to working out
our state of mind.
But what if they could even create
alternative versions of us?
And, if they can,
should we really trust what we see?
Spectre showed me that
whoever controls the data,
controls the future.
We need to be more.
Videos known as "deepfakes"
use AI and algorithms
to manipulate the appearance and
voices of individuals
into deceptively real-looking footage.
Let me tell you something,
you have
This is the new battleground
for disinformation online.
It can be harmless
but there's also a darker side.
This technology can make people
appear to do or say things
that they never did or said.
If we've lost control
of our virtual selves,
then we've also lost
our personal freedom.
One of the world's best
image synthesis groups
is at the Berkeley Artificial Intelligence
Research Lab.
I'm going to find out
how these deepfakes are created.
One of the graduate students,
Shiry Ginosar,
has offered to make
a synthetic version of me.
My body never allowed it,
but I always wanted to dance
like Michael Jackson.
Is it possible to spread disinformation
about my moves?
I have a camera,
and you're going to walk around
and move.
And basically, what I need from you is
I'm going to have to see you
in all different positions.
-All right.
-So then,
what I'm going to do
is I'm going to take someone else,
and I'm going to take their motion
and I'm going to try
and apply it to your body
and make it so that
you can move like them.
Oh, wow. Cool. Nice.
The technique is called
Generative Adversarial Networks
or GANs for short.
It uses captured imagery of me
and turns that into data points
so that the algorithms
can then imitate me.
-Yes.
-There are two parts to the setup.
-Two parts of your algorithm. Okay.
-Right.
There's one part
that's generating fake yous
and one part that is looking at them
and it's kind of being an art critic.
It's being like,
yes, this is a real art piece,
or this is a fake art piece.
And the way it knows is that
it gets to see your real ones,
and it's trying to learn
the statistical distribution of you
and figure out when it sees a real thing
or when it sees a forgery.
The hard thing must be
to make it look natural.
-And that's the trick.
-And that is what matters.
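The forger-and-critic setup described here is the standard generative adversarial training loop. A minimal sketch in PyTorch, shrunk from video frames to 2-D points; the network sizes and data are invented for illustration:

```python
import torch
import torch.nn as nn

# G forges samples; D plays art critic, scoring real vs. fake.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 2) * 0.5 + 2.0  # stand-in for "the real you"

for step in range(1000):
    fake = G(torch.randn(64, 16))
    # Critic: learn to label real samples 1 and forgeries 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Forger: adjust G so the critic calls its fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each side improves against the other until the forgeries begin to match the statistical distribution of the real data.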
Digital image creation
was once the preserve
of Hollywood special effects artists
and highly skilled programmers.
But the software is
now much more accessible.
Shiry shows me how she's used GANs
to make grad student Caroline
dance like Michael Jackson.
She's moving exactly like him.
Right. And so here,
we're taking his motion.
This is the pose,
kind of that we detect of him
and we're transferring this pose
frame by frame to her.
It looks a bit computer generated
just on the edges.
But otherwise, it's really good.
These algorithms are dangerous
exactly because their creations
are so realistic.
Being able to swap faces, remove objects,
or change what people are saying
poses a huge threat
not only to our personal freedom,
but to national security
and democracy.
Congress is tackling
the issue of deepfakes.
What enables deepfakes
and other modes of disinformation
to become truly pernicious
is the ubiquity of social media
and the velocity at which
false information can spread.
You can make me dance
in really beautiful ways,
but you're making something,
at the same time,
that's also quite dangerous.
That's true, and I think that
there should be ways
to control these things.
And the ways, I think, are not necessarily
getting it into the algorithm
or putting limits
on what people can do.
I think the ways to control it
should be to do with public policy
and what people on the street
can legally do and can't legally do.
I guess as long as the population
still believes in videos,
yes, this stuff is so powerful
and kind of dangerous.
You could make a video about anything,
someone launching a nuclear attack
and if people believe that
When I look at a video,
I believe in that video.
I believe that that's true.
But maybe my daughter's going to grow up
in a world where she's going to be like,
"Dad, you're so dumb."
"You're looking at a video.
Of course, that's fake."
But algorithms can have even wider
and more sinister consequences,
on an unprecedented scale.
And they are increasingly
hard to stop.
The new normal is that we can't trust
what we hear or what we see,
and we certainly can't trust
algorithms to be fair.
And they're also highly dangerous.
Algorithms can bring down
nuclear power plants,
attack financial institutions,
and manipulate elections.
But there are people
who are fighting for our freedom.
And I'm going to the biggest den
of hackers in Europe
to meet one of them.
The Chaos Computer Club or CCC
is one of the most influential
digital organisations in the world.
-Hey, how's it going?
-Nice to meet you.
And Linus Neumann
is one of the few hackers
allowed to publicly speak
for the collective.
Freedom of the individual
is central to the group's philosophy.
They want to stop governments
and corporations from using algorithms
to take control of our lives.
He's going to show me
how dangerous algorithms can be
to one of the fundamental tenets
of democracy in the world today,
elections.
Where are we going?
To the CCCB hackerspace,
probably Berlin's oldest base
of the Chaos Computer Club.
Right next to the government.
Ready to take over.
We have made enemies,
be it in corporations,
be it in governments,
be it through involvement
in the first digital bank robbery
or attacking digital election systems.
We have been, I would say,
like the soft and decent terrorists.
The Chaos Computer Club
often pull big publicity stunts
to expose how algorithms and
technology are abused
by governments and corporations.
One of their most
high-profile undertakings
was to demonstrate how easily
they could hack electronic voting systems
brought in by the German government.
There's no such thing
as a computer-based voting machine
that can only do voting.
So, members of the CCC
hack these computers
and make them do different things,
for example, count the votes differently
or act as a chess computer to show
there is no way for you as a person
who cast your vote
to trust this machine
because it is a computer.
In doing so,
the Chaos Computer Club ensured
that voting computers
are now illegal in Germany.
And that is one of the best things
we've ever done for democracy,
in Germany and in the world.
Would you say
you've made nations safer?
We only strive for security
if it is for the individual
or for, let's say, democracy.
So, maybe we've made
nations more secure
but only to make the people
in the nations more secure.
I get the sense that
we're in a kind of a battle
that is created for us
without our knowledge
that we can't escape.
By now, a handful of
corporations control
90-99% of the way
you interact with the Internet.
And, probably, almost 100%
of the way you interact with other people
on the Internet.
In the end, it's a battle for power.
Linus has confirmed that
by doing nothing, and without meaning to,
I've given my freedom away
to corporations and governments.
They use algorithms to control
what we do on the Internet,
what we buy,
who we interact with,
what we like,
and who we vote for.
Why do we trust in these algorithms
when we are becoming aware
of the dangers of them?
Why do you think we give
algorithms such power?
Why do you think we treat them
like some kind of god?
Whatever answer they spit out,
we got to follow it, as a society.
I think it's a tendency
to always overestimate the things
that we don't understand.
The best way to take somebody's power
is to understand it,
and to try to find out how it works.
Something that we do here
every day, right?
Most people don't know
what an algorithm is.
Most people don't know
the limitations of an algorithm.
And, basically, we've trained
most people to just be scared.
It's bad if there's only a few people who
know how to work with these technologies
because then, they basically define
what these technologies are.
They define our future.
They define what the world
is going to look like
and other people are basically
completely left in the dark.
And this is why all information
should be free.
On a daily basis,
how do you carry out this fight
in precise and concrete ways?
It's informing the public.
If that doesn't work,
we hack the thing we don't like.
And, probably, most of us
are better at hacking than talking.
And what makes you a hacker
is trying to understand how things work
and then changing them,
so they work for you.
And that is what everything
in CCC is about.
This is why people
who meet here build things,
and, sometimes, destroy things
if that is necessary
to build better things.
Linus and the Chaos Computer Club
will do their best
to preserve the freedom
of the individual.
And I still believe it's possible
to challenge those in power.
I still believe it's possible to
change the world we live in
for the better using these computers.
Does it look particularly good
at this point in time?
Not really.
And this is why I'm fighting for it.
I was really inspired by Linus.
To me, he felt like a representative
of myself, of society,
working for the public good,
using his skills as a hacker
for public benefit.
And the digital world
needs people like him
because it seems to me very much
like a war is going on for my identity
to define who I am,
to control my data and my digital self.
Linus showed me how easily algorithms
can bring down a cornerstone of democracy.
So, on the one hand, our future freedom
in the digital world is looking bleak.
But algorithms also have
the potential to liberate us.
For some on the fringes of society,
this code can be used
to break down barriers,
and create a fairer world.
This coding school in Mumbai
is offering disadvantaged children
a way out of poverty.
The kids are being taught
various mathematical concepts
including programming
and how algorithms function.
The school's goal is that, through code,
children learn to think for themselves,
and apply what they've picked up here
throughout their lives.
-Hello.
-Hi, Anjan.
One of the other ways that the kids
are making use of their coding skills
is to design apps.
Rinsa is one of the founders
of the school.
She passionately believes
in giving the kids tools
for self-motivated learning.
So, how is thinking
algorithmically different?
Because they're actually thinking through
the entire process, too.
I'm not asking for the answer,
I'm asking what is happening there.
Right. So they have to break it down
and analyse.
Yes, what I've realised is that,
so far, with coding,
children never got bored.
They're learning to make things,
they're getting stuck,
but they want to learn.
What kind of background
do your students come from?
They don't have a home.
They live on the pavements
with their families.
Yes, I look at coding as something
that will lift them up.
Help them break out
of their current situations.
So that gives them the upper hand,
and as they progress,
these are the 21st century skills
which they will require.
And what do you imagine they will do
with the coding skills?
I believe whatever skills they pick up,
whether it's problem solving,
thinking algorithmically,
looking at the problem,
thinking through it,
this is something
that they will take with them
no matter what profession
they choose.
I think that's more important to me
than learning a language,
because right now,
it's the process that matters more.
Aditi is one of the school's star pupils.
For her, coding is
a chance for freedom
and a way she can provide
for her family.
Aditi's family see coding
as a liberating force in her life,
and one that could offer her
a way out of pavement living.
What struck me about
my meeting with Aditi,
was how the virtual world,
the world of code,
offers so much hope to some
of the poorest people in the world.
If you learn that mathematical language,
if you learn how to code,
there literally aren't any boundaries.
I think, in many ways,
the language of code is an equaliser.
If you can code well,
mathematics and code make
no distinction as to where you come from,
what family you are from,
what religion you are.
It's universal,
it's a universal language.
And if you can learn to speak
that universal language,
you can communicate with anyone.
It is mathematics
at its most beautiful.
Algorithms can create
opportunities for people,
and they can also make life easier.
In that sense, I can see
how code can be liberating.
But I've learnt how
I'm watched everywhere I go,
and even what I'm feeling
can be monitored.
So I am also afraid of
how this code threatens my freedom.
This sounds and feels like "1984".
I can't even trust what I see anymore.
I feel I've lost control
of my virtual self.
You're making something,
at the same time,
that's also quite dangerous.
That's true, and I think that
there should be ways
to control these things.
And I feel vulnerable
to those in power,
whether it be a government
or a corporation.
They could take away my freedom
in a heartbeat.
In the end,
it's a battle for power.
We need to equip people
with the tools to protect themselves,
if we are to remain free
in our coded world.