Order And Disorder (2012) s01e02 Episode Script

Information

We are surrounded by order.
Over the last 300 years, we've developed amazing new ways to harness energy.
We've used this ability to transform our environment.
But all these structures that we see around us are just one type of visible order that we've created here on planet Earth.
There's another type of invisible order, every bit as complex, which we are only now beginning to understand.
It's something that nature has been harnessing for billions of years.
Something we call information.
The concept of information is a very strange one.
It's actually a very difficult idea to get your head round.
But in the journey to try and understand it, scientists would discover that information is a fundamental part of our universe.
This film is the story of information.
And the immense power released from manipulating it.
It's the story of how we discovered the power of symbols.
And how writing, codes and computers would revolutionise our understanding of the universe.
It's the story of how, in a cosmos collapsing into disorder, information can be used to create order and structure.
At first glance, information appears to be a very straightforward idea.
It exists everywhere in our world.
Our brains are filled with it.
And we constantly exchange it between each other.
But information has been one of the subtlest and most difficult concepts that science has had to grapple with.
Understanding and harnessing it has been an extremely long and difficult process.
The power of information would first be glimpsed over 5,000 years ago, when a revolutionary technology was developed.
One that would set the modern world in motion.
Over the years, mankind has come up with some pretty remarkable stuff.
But of all humanity's inventions, there's one that really stands out.
It's the most transformative, destructive, creative technology ever conceived.
It is also one of the simplest.
That invention is the written word.
At its heart, writing is all about the transmission and storage of information.
Words allow ideas to endure through time.
These are some of the earliest texts in existence.
They give us an incredible insight into the development of writing.
I've come to meet one of the few people who can still read them - Dr Irving Finkel.
We take writing so much for granted these days, it's easy to forget that it was invented.
It certainly was.
How did it first come about? The earliest writing that we have is written on clay tablets and it comes from Iraq, Ancient Mesopotamia.
It comes from the culture of the Sumerians.
What happened here was that they started off with purely pictographic signs to express an idea.
This lasted for quite a long time, until it occurred to somebody, perhaps accidentally, that what you could do is make one of these graphic symbols on the surface of the clay not for what it looked like but for the sound it represented.
So not a picture of an object, a picture of a sound? That's what we always called the giant leap for mankind.
By combining different sounding pictures, the ancient Mesopotamians could express any idea imaginable.
The essence of their breakthrough was to see, for example, that a picture of an eye and a picture of a deer didn't have to mean an eye and a deer.
The pictures could be used simply for the sounds that they made.
In this case, idea.
Once this system was discovered, it meant anything that could be spoken, even the most strange or abstract thoughts could be transformed into symbols.
Information could now live outside of the human brain.
This meant it could endure over vast spans of time.
It was an idea that fascinated the ancient Mesopotamians.
This lovely tablet here, this king lived in about 2100 BC.
He buried this in the foundations of his temple as a message for the future.
This King Ur-Nammu, the powerful male, King of Sumer and Akkad - that's the south and north part of Ancient Mesopotamia.
Her house - he built for her and he even restored it afterwards.
This is a proud thing.
He wants everybody to know about it and this is a real message for the future.
What's so remarkable for me is this is information stored on clay for thousands of years.
Yes.
Ideas that someone had 4,000 years ago are still there.
You have ideas, you have speech, human hopes, literature, prayers - all these sorts of outpourings of the human soul fixed for ever in clay.
By turning sounds into symbols, the Mesopotamian scribes had discovered that information could be changed very easily from one form to another.
From something that existed as spoken sounds, to something that existed as symbols on clay tablets.
This was just the beginning.
Humans were yet to realise the true power of symbols.
For 4,000 years, writing was pretty much the only information technology people used.
But in the 19th century, during the great Industrial Revolution, things would begin to change.
In the maelstrom of ideas and inventions, a series of seemingly unconnected technologies would emerge that all began to hint at the immense power of information.
These technologies would all come from very practical, very un-theoretical origins.
They would start to reveal that information was a much deeper and more powerful concept than anyone had realised.
One of the first of a new breed of information technologies would be developed in the French city of Lyon at the end of the 18th century.
18th-century Lyon was home to some of the best craftsmen in the world.
It was also a place of great opulence, grandeur and, above all, money.
Thanks to the rich and fashionable aristocrats and bankers who lived there, it would become home to the greatest silk-weaving industry in the world.
Almost a third of the city's inhabitants worked in the silk industry, and it was home to over 14,000 looms.
This is brocade.
The material that made Lyon famous.
It's a beautiful and intricately woven fabric that, as you might imagine, is incredibly labour intensive to produce.
A two-man team, working flat out for a day, could at best produce about an inch of this amazing stuff.
The demand for the fine fabrics of Lyon was immense.
But the silk weaving process was painfully slow.
Thanks to a soldier and weaver named Joseph Marie Jacquard, a device would be developed to help speed up weaving.
In the process, it would reveal a fundamental truth about information.
Building on the work of a number of others, in 1804 Jacquard patented his invention.
At the time, the loom was the most complex mechanism ever built by humankind.
Jacquard's loom was a miracle of ingenuity.
You see, he had designed a single machine, which without any alteration to its construction - its hardware, to use a modern term - could be programmed to weave any pattern a designer could think up.
In fact, it could produce a whole range of silk designs with barely a pause in production.
Jacquard had found the holy grail of weaving.
And the secret was a simple punched card.
The punched card held within it the essence of the designs that the loom would weave.
When these punched cards were fed into the loom, they would act to lower and lift the relevant threads, recreating the pattern in silk.
Any design you could think of could be broken down and translated into a series of punched cards that could then be woven by the loom.
Information was being translated from picture to punch card to the finished fabric.
It's a machine for weaving textiles, that's its task, but there is nothing specific about what textile it should weave.
That is contained in the information, which is encoded on the cards.
So if you like, the cards programme it - that is to say, instruct it what to do.
And this has huge resonances for what came later.
Jacquard's loom revolutionised the silk industry.
But at its heart was something deeper, something more universal than its industrial origins and its ability to speed up weaving.
The loom revealed the power of abstracting information.
It showed you can take the essence of something, extract the vital information and represent it in another form.
Writing had revealed you could use a set of symbols to capture spoken language.
Now, Jacquard had shown that with just two symbols - a hole or a blank space - it was possible to capture the information in any picture imaginable.
This is a portrait of Jacquard that's been woven in silk.
It's spectacularly detailed with hundreds of thousands of stitches.
Yet all the information you need to capture this life-like image can be stored in a series of punched cards.
24,000 of them to be precise.
This picture is a fantastic example of a really far-reaching idea.
That the simplest of systems - in this case, cards with a series of holes punched in them - can capture the essence of something much, much more complicated.
If 24,000 punched cards could create an image like this, what would happen if you had 24 million? Or 24 trillion cards? What new types of complex information might be captured and represented? Jacquard had stumbled on an incredibly deep and far-reaching idea.
As long as you have enough of them, simple symbols can be used to describe anything in the entire universe.
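As an illustration of this idea - a toy sketch, not anything from the film or from Jacquard's actual mechanism - here is how a tiny picture can be reduced to rows of just two symbols, hole or blank, and rebuilt from nothing but those symbols:

```python
# A toy sketch (not Jacquard's actual mechanism) of the core idea:
# a picture reduced to rows of just two symbols, "hole" (1) or "blank" (0),
# and then rebuilt from nothing but those symbols.

picture = [
    "0110",
    "1001",
    "1001",
    "0110",
]

# "Punch" the cards: one list of 0s and 1s per row of the design.
cards = [[int(pixel) for pixel in row] for row in picture]

# "Weave" it back: every 1 lifts a thread (shown as '#'), every 0 leaves it ('.').
for card in cards:
    print("".join("#" if hole else "." for hole in card))
```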
Translating information into abstract symbols to store and process had proven to be an extremely powerful idea.
But the way information was sent, the way it was communicated, hadn't changed for thousands of years.
The world before telecommunications technology was a very different place, cos you could only send messages as fast as you could send objects.
You'd write a message on a piece of paper or something like that and then you'd give it to somebody who could run very fast, or could go on horse or on a ship very fast.
The point was you could only send information as fast as you could send matter.
But in the 19th century, the speed at which information could be sent would dramatically increase, thanks to an incredible new information-carrying medium - electricity.
Very soon after electricity was discovered, excitement grew about its potential as a medium to transmit messages.
It seemed that if it could be controlled and summoned at will, electricity would be the perfect medium for sending information.
Electricity seemed to offer many advantages as a way of sending messages.
It was sent down a wire which means it could pretty much go anywhere.
It wasn't affected by bad weather conditions and most importantly, it could move very quickly.
But there was one big problem facing those in the early 19th century who wanted to use electricity as a means to communicate.
How could such a simple signal be used to send complex messages? Here in the Science Museum archive, they have one of the most impressive collections of early electronic communications technology in the world.
Here are just some of the early devices designed to send signals using electricity.
This one's particularly fun.
It was developed in 1809 in Bavaria by Samuel Soemmering.
So if the sender wants to send letter A, he sends a current through that corresponding wire.
At the receiver's end is a tank full of liquid, and the electric current forces a chemical reaction, causing bubbles to appear above the corresponding letter A.
The whole process is ingenious, if a little laborious.
But what's really fun is that the sender has to let the receiver know he's about to send a signal.
He does that by sending extra electric currents so that more bubbles appear, forcing an arm upwards which releases a ball and triggers a bell.
As you can imagine, this wouldn't be the quickest of systems.
After Soemmering, all sorts of approaches were taken in trying to crack the problem of sending messages using electricity.
But they all suffered from having over-complex codes.
These devices, each cunning and innovative in its own way, were all destined for the scrap heap of history.
And that's because in the 1840s, they were superseded by a way of sending signals that still endures to this day.
It was developed by artist and entrepreneur Samuel Morse, together with his colleague Alfred Vail.
What was so special about their system wasn't the technology that was used to carry their messages, but the incredibly simple and effective code they used to send them.
Just like Jacquard's punch cards, the genius of Morse and Vail's code lay in its simplicity.
Using a collection of short and long pulses of electrical current, they could spell out the letters of the alphabet.
Vail suggested that the most frequent letters in the English language get the shortest codes.
So an E is sent like this.
While an X is sent like this.
This means that messages can be sent quickly and efficiently.
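As a rough illustration - using only a fragment of the real Morse alphabet, chosen here for the example - the Vail principle that common letters get short codes can be sketched like this:

```python
# A small illustration of the Vail/Morse idea: common letters get short codes.
# Only a fragment of the real Morse alphabet is included here.
MORSE = {
    "E": ".",   "T": "-",    "A": ".-",   "I": "..",
    "S": "...", "H": "....", "X": "-..-", "Q": "--.-",
}

def encode(text):
    """Spell a word as dots and dashes, letters separated by spaces."""
    return " ".join(MORSE[letter] for letter in text.upper())

print(encode("see"))  # frequent letters -> a very short signal: ... . .
print(encode("xq"))   # rare letters -> a much longer one: -..- --.-
```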
Figuring out the code part of it, the software if you like, was as complicated as figuring out the hardware side of things with the batteries and the wires, and together they made an entirely new technology which is the electric telegraph.
The telegraph had once again revealed the power of translating information from one medium to another.
Information had at first been fixed in human brains.
Then held in symbols in clay and paper and punched cards.
Now, thanks to Morse, information could reside in electricity, and this made it unimaginably lighter and quicker than it had ever been before.
In just a few short years, the telegraph network would spread around the entire globe, laying the foundations of the modern information age.
Between them, Jacquard and Morse had found novel ways to manipulate, process and transmit information.
What had begun with the invention of writing thousands of years ago had culminated in the binding of the entire planet in a lattice of wires carrying highly abstracted information at incredible speeds.
For people at the end of the 19th century it may have seemed that humanity's ability to manipulate and transmit information was at its zenith.
They couldn't have been more wrong.
Information would reveal itself to be a more important, more fundamental concept than anyone could have imagined.
It would soon become apparent that information wasn't just about human communication.
It was a much further-reaching idea than that.
The true nature of information would first be hinted at thanks to a strange problem, one dreamed up by a brilliant Scottish physicist who appeared to be thinking about something else entirely.
James Clerk Maxwell was one of the great minds of the 19th century.
Among his many interests, Maxwell became fascinated by the science of thermodynamics - the study of heat and motion that had sprung up with the birth of the steam engine.
Maxwell was one of the first to understand that heat is really just the motion of molecules.
The hotter something is, the faster its molecules are moving.
This idea would lead Maxwell to dream up a very bizarre thought experiment in which information played a crucial role.
Maxwell theorised that simply by knowing what's going on inside a box full of air, it would be possible to make one half hotter and the other half colder.
Think of it like building an oven next to a fridge without using any energy.
It sounds crazy, but Maxwell's argument was extremely persuasive.
It goes like this.
Imagine a small demon perched on top of the box, who has such excellent eyesight that he could observe accurately the motion of all the molecules of air inside the box.
Now, crucially, he's in control of a partition that divides the box into two halves.
Every time he sees a fast-moving molecule approaching the partition from the right-hand side he opens it up, allowing it through to the left.
And every time he sees a slow moving molecule approaching the partition from the left, he opens it up, allowing the molecule through to the right.
Now, you can see what's going to happen.
Over time, all the fast-moving hot molecules will accumulate on the left-hand side of the box, and all the slow-moving cold molecules on the right.
Crucially, the demon has done this sorting with nothing more than information about the motion of the molecules.
Maxwell's demon seemed to say that just by having information about the molecules, you could create order from disorder.
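A toy sketch of the demon's sorting rule, with made-up molecule speeds and the physics of the partition ignored (the real demon only opens the door for molecules approaching it; here the sorting is done in one pass for simplicity), might look like this:

```python
import random

# A toy model of the demon's sorting rule: "left" and "right" are just two
# lists of molecule speeds, and anything faster than THRESHOLD counts as "hot".
random.seed(1)
left = [random.uniform(0, 1) for _ in range(500)]
right = [random.uniform(0, 1) for _ in range(500)]

THRESHOLD = 0.5

# The demon lets fast molecules through to the left, slow ones to the right.
new_left = [v for v in left if v > THRESHOLD] + [v for v in right if v > THRESHOLD]
new_right = [v for v in left if v <= THRESHOLD] + [v for v in right if v <= THRESHOLD]

avg = lambda xs: sum(xs) / len(xs)
print("before:", round(avg(left), 2), round(avg(right), 2))          # roughly equal
print("after: ", round(avg(new_left), 2), round(avg(new_right), 2))  # hot side, cold side
```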
This idea flew in the face of 19th-century thinking.
The science of thermodynamics had shown very clearly that over time, the entropy of the universe, its disorder, would always increase.
Things were destined to fall apart.
But the demon seemed to suggest that you could put things back together without using any energy at all.
Just by using information, you could create order.
It would prove to be a fiendishly difficult problem to solve, not least because the brilliant Maxwell had come up with an idea far, far ahead of its time.
It's amazing, the impact that he had on physics, and that he came up with this very intricate concept and that he already, in some sense, anticipated the notion of information.
It wasn't actually there at the time, there was no such thing.
I think this idea was astonishing.
He didn't really have a resolution, he raised it as a concern and he left it open.
And I think what followed is more or less 120 years of extremely exciting debate and development to try to resolve and address this concern.
So what was going on with Maxwell's demon? It may sound far-fetched and fanciful, but imagine the possibilities if we could build a machine in the real world that could mimic the actions of the demon.
I could use it to heat a cup of coffee, or run an engine, or power a city all using nothing more than pure information.
It's as though we could create order in the universe without expending any energy.
Scientists felt intuitively that it had to be wrong.
The problem was, it would take over 100 years to solve.
While Maxwell's riddle rumbled on, something quite unexpected was to happen: a new device was dreamt up that could perform quite incredible and complex tasks simply by processing information.
What's more, this was a device that could actually be built.
The machine would come to be known as the computer, and the idea behind it came from a quite remarkable and visionary scientist.
Alan Turing was the first person to conceive of the modern computer, a machine whose sole function is to manipulate and process information.
A machine that harnesses the power of abstract symbols.
A machine that enables almost every aspect of the modern world.
Turing's incredible idea would first appear in a now-legendary mathematical paper published in 1936.
In his brief life, Alan Turing brought fresh, groundbreaking ideas to a whole range of topics, from cryptography through to biology.
The sheer breadth of his thinking is breathtaking.
But for most scientists, it's the concepts he outlined in these 36 pages that mark him out as truly special.
It's this work that makes him worthy of the title "Genius".
Published when Turing was just 24 years old, On Computable Numbers With An Application To The Entscheidungsproblem tackles the foundations of mathematical logic.
What's amazing about it is that the idea for the modern computer emerged simply as a consequence of Turing's brilliant reasoning.
He was thinking about something else entirely, he wasn't, you know, sitting there thinking, "I want to try and invent the modern computer," he was thinking about this very abstract problem in the foundations of mathematics.
And the computer kind of fell sideways out of that research, completely unexpectedly.
I mean, nobody could have guessed that Turing's very abstract, abstruse research in the foundations of mathematics could produce anything of any practical value whatsoever, let alone a machine that was going to change the lives of, you know, nearly everyone on the planet.
Turing had set out to understand if certain processes in mathematics could be done simply by following a set of rules.
And this is what would get him thinking about computers.
In 1936, the word "computer" had a very different meaning to what it does today.
It meant a real person with a pencil and paper, engaged in arithmetical calculations.
Banks hired many such people, often women, to work out interest payments.
The Inland Revenue employed them to work out how much tax to charge.
Observatories hired them to calculate navigational data.
Human computers were vital to the modern world, dealing with the huge amounts of information produced as science and industry grew ever more complex.
What Turing did in his 1936 paper was ask a simple but profound question.
"What goes on in the mind of a person carrying out a computation?" To do this, he first had to discard all the superfluous detail, so that only the very essence of the process of computation remained.
So, first off went the inkpot.
Then the pen, then the slide-rule.
Then the pencils and the pads of paper.
All these things made it easier, but none of them were absolutely crucial to the person carrying out the computation.
Now Turing asked, "What goes on in the brain of a human computer?" It's a vastly complex biological system, capable of consciousness, thoughts and insights, but to Turing, none of these was critical to the process of computation either.
Turing realised that to compute something, a set of rules had to be followed precisely.
That was all.
It takes the higher level intelligence that was presupposed to be involved in calculation, which was thinking, and says you can have a mechanical process - and by mechanical, he means an unthinking process - to perform the same act.
And therefore eliminates the necessity of human agency, with all its high-level functions.
And that is what is revolutionary about what he tries to do.
Turing's brilliant mind saw that any calculation had two aspects: the data, and the instructions for what to do with the data.
And this would be the key to his insight.
Turing had to find a way of getting machines to understand instructions like "add," "subtract," "multiply," "divide" and so on, in the same way that humans do.
In other words, he had to find a way of translating instructions like these into a language that machines could understand.
And with flawless, impeccable logic, Turing did exactly that.
This may look like a random series of ones and zeroes, but to a computing machine, it's a set of instructions that can be read off step by step, telling the machine to behave in a certain way.
So, while a human computer could look at this symbol and understand the process that was required, the computing machine had to have it explained, like this.
This paper tape that Turing envisaged is what we would now call the memory of the computer.
But Turing didn't stop there.
Turing realised that feeding a machine instructions in this way had an amazing consequence.
It meant that just one machine is needed to perform almost any task you can think of.
It's a beautifully simple concept.
In order to get the machine to do something new, all you had to do was feed it a new set of instructions, new information.
This idea became known as the Universal Turing Machine.
The more you wanted your machine to do, the longer the tape had to be.
Bigger memories could hold complex, multilayered instructions about how to process and order any kind of information imaginable.
With a big enough memory, the computer will be capable of an almost limitless number of tasks.
This idea of Turing's, that a multitude of different tasks can be carried out simply by giving a computing machine a long sequence of instructions, is his greatest legacy.
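A minimal sketch of that "one machine, many tapes" idea - nothing like Turing's formal machine, just an illustration with a made-up instruction set - could look like this:

```python
# A minimal sketch of the "one machine, many instruction tapes" idea.
# The "hardware" below never changes; only the tape of instructions fed to it
# does. The instruction names (ADD, MUL, PRINT) are invented for this example.

def run(tape, value=0):
    """Follow a tape of (instruction, argument) pairs, step by step."""
    for instruction, argument in tape:
        if instruction == "ADD":
            value += argument
        elif instruction == "MUL":
            value *= argument
        elif instruction == "PRINT":
            print(value)
    return value

doubler_tape = [("ADD", 21), ("MUL", 2), ("PRINT", None)]
countdown_tape = [("ADD", 3), ("MUL", -1), ("ADD", 3), ("PRINT", None)]

run(doubler_tape)    # same machine, one tape: prints 42
run(countdown_tape)  # same machine, different tape, different behaviour: prints 0
```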
Since his paper, Turing's dream has been realised.
So, calculations, making phone calls, recording moving images, writing letters, listening to music - none of these require bespoke machines.
They can all be carried out on a single device.
A computing machine.
This phone is a modern incarnation of Turing's amazing idea.
Inside here are many, many instructions.
What we call programmes, or software, or apps - nothing more than long sequences of numbers telling the phone what to do.
What's amazing about Turing's idea is its incredible scope.
The sets of instructions that can be fed to a computer could tell it how to mimic telephones or typewriters.
But they could also describe the rules of nature, the laws of physics.
The processes of the natural world.
This is a simulation of many millions of particles behaving like a fluid.
To work out how it flows, the computer simply follows a set of instructions held in its memory.
This only begins to hint at the power of computing machines.
This is a computer simulation of the large-scale structure of the entire universe.
And it reveals the true power of Turing's idea.
Turning instructions into symbols that a machine can understand allows you to recreate not just a simple picture or sound, but a process, a system, something that is changing and evolving.
By manipulating simple symbols, computers are capable of capturing the essence, the order of the natural world itself.
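As a toy example of what "following a set of instructions held in memory" means in practice - far simpler than the fluid or cosmological simulations shown here, and with arbitrary numbers - a simulation can be as small as this:

```python
# A toy "simulation of nature": a few particles falling under gravity,
# updated step by step by a rule held in memory. The numbers are arbitrary.

GRAVITY = -9.8  # metres per second squared
DT = 0.1        # time step in seconds

# Each particle is just (height, velocity).
particles = [(10.0, 0.0), (5.0, 2.0), (2.0, -1.0)]

for step in range(3):
    # The rule: move by current velocity, then let gravity change the velocity.
    particles = [(h + v * DT, v + GRAVITY * DT) for h, v in particles]
    print(f"t={(step + 1) * DT:.1f}s:", [round(h, 2) for h, v in particles])
```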
By thinking about how the human brain processes and computes information, Alan Turing had had one of the most important ideas of the 20th century.
The power of information was revealing itself.
It would be very easy to think that after Turing's ideas were made real, the true power of information would be unleashed.
But Turing was only half the story.
The modern information age would require another idea, one that would finally pin down the nature of information, and its relationship to the order and disorder of the universe.
It was an idea that would be dreamt up by a gifted and eccentric mathematician and engineer.
Claude Shannon was a true maverick, and his desire to tackle unusual problems would lead to a revolutionary new idea.
One that would uncover the fundamental nature of information, and the process of communication in all its varied forms.
This is Claude Shannon's paper, The Mathematical Theory Of Communication.
Now, the title may sound a bit dry, but trust me, it's one of the most important scientific papers of the 20th century.
Not only did it lay the foundations for the modern world's communication network, it also gave us fresh insights into human language, into things we do intuitively, like speaking and writing.
The paper was published in 1948, while Shannon was working at the Bell Labs in New Jersey - the research arm of the vast Bell Telephone Network.
It was an institution famous for its forward-thinking, relaxed atmosphere.
The mathematicians were free to work on any problem that interested them.
The only thing that the laboratory management required of them was that they keep an open door, and if anybody from any other department came with a problem, that they would at least think about it.
Otherwise they were absolutely free, and the atmosphere was incredible.
People were playing, and encouraged to play.
Hello.
I'm Claude Shannon, a mathematician here at the Bell Telephone Laboratory.
Claude Shannon in particular was given free rein to do pretty much whatever he wanted.
This is Theseus.
Theseus is an electrically controlled mouse.
Oh, they treated him as their darling.
I never saw him juggle, but I certainly saw him ride his unicycle.
He brought it to work one day, and he must have cost Bell Labs at least a hundred man-hours of time.
But despite the frivolity, the Bell Telephone Network faced a huge problem.
Every day, they transmitted vast amounts of electronic information all across the world.
But they had no real idea of how to measure this information properly, or how to quantify it.
In short, their entire business was built on something they didn't actually understand.
Amazingly, their superstar employee Claude Shannon would give them exactly what they needed.
In this paper, Shannon did something absolutely incredible - he took the vague and mysterious concept of information and managed to pin it down.
Now, he didn't do this using some cleverly-worded, philosophical definition.
He actually found a way to measure the information contained in a message.
Amazingly, Shannon realised that the quantity of information in a message had nothing to do with its meaning.
Instead, he showed it was related solely to how unusual the message was.
Information is related to unexpectedness.
So news is news because it's unexpected and the more unexpected it is, the more newsworthy it is.
So if today's news was the same as yesterday's news, there would be no news at all.
And that information content would be zero.
So suddenly you have a relationship between unexpectedness and information.
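The standard way to make that relationship precise - not spelled out in the film, but implied by it - is the surprisal formula: a message with probability p carries log2(1/p) bits. A quick sketch:

```python
import math

# Shannon's measure ties information to surprise: a message with
# probability p carries log2(1/p) bits - the rarer it is, the more it tells you.
def surprisal_bits(probability):
    return math.log2(1 / probability)

print(surprisal_bits(1.0))     # today's news is the same as yesterday's: 0.0 bits
print(surprisal_bits(0.5))     # a fair coin toss: 1.0 bit
print(surprisal_bits(1 / 64))  # a very unexpected message: 6.0 bits
```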
But Shannon was to go further and give information its very own unit of measurement.
So, how did he do this? Well, he showed that any message you cared to send could be translated into binary digits - a long sequence of ones and zeros.
So a simple greeting like "Hello" could be written like this.
Or like this.
Just think of this as another way of writing the same message.
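The film doesn't say which encoding appears on screen; one common choice, standard ASCII bytes, turns "Hello" into ones and zeros like this:

```python
# One common way (standard ASCII bytes, eight bits per character) to write
# "Hello" as ones and zeros - the film doesn't specify the encoding it shows.
message = "Hello"
bits = "".join(format(byte, "08b") for byte in message.encode("ascii"))
print(bits)               # 0100100001100101011011000110110001101111
print(len(bits), "bits")  # 40 bits: 5 characters x 8 bits each
```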
Shannon realised that transforming information into binary digits would be an immensely powerful act.
It would make information manageable, exact, controllable and precise.
In his paper, Shannon showed that a single binary digit - one of these ones or zeros - is a fundamental unit of information.
Think of it as an atom of information - the smallest possible piece.
Then, having defined this basic unit, he even gave us a name for it, one we're all familiar with today.
He used a shortening of the phrase, "binary digit" - "bit".
The humble bit turned out to be an enormously powerful idea.
The bit is the smallest quantity of information.
It is highly significant because it's the fundamental atom.
It is the smallest unit of information in which there's sufficient discrimination to communicate anything at all.
The power of the bit lay in its universality.
Any system that has two states, like a coin with heads or tails, can carry one bit of information.
One or zero.
Punched or not punched.
On or off.
Stop or go.
All of these systems can store one bit of information.
Thanks to Shannon, the bit became the common language of all information.
Anything - sounds, pictures, text - can be turned into bits and transmitted by any system capable of being in just two states.
Shannon had founded a new, far-reaching theory.
The ideas he began to explore would form the cornerstone of what we now call, "information theory".
He'd taken an abstract concept - information - and turned it into something tangible.
What had been just a vague notion was now measurable - something real.
The idea of converting information into bits, of making things digital, would fundamentally transform many aspects of human society.
But information isn't just something humans create.
We're beginning to understand that this concept lies at the heart, not only of 21st-century human society, but also at the heart of the physical world itself.
Every "bit" of information we've ever created, every book, every film, the entire contents of the internet, amounts to pretty much nothing when compared with the information content of nature.
And that's because even the most insignificant event contains a spectacular amount of information.
Let me show you.
Imagine how many bits of information you would need to describe this.
The beautiful and intricate interplay of physical laws taking place at scales and timeframes that are normally imperceptible to us.
But here you're still only seeing a fraction of the complexity of nature.
Imagine the interplay between the trillions upon trillions of atoms.
The number of bits you would need to describe this is almost unimaginable.
But what's amazing is that now, thanks to the ideas of Turing and Shannon, we're able to describe, model and simulate nature in ever greater detail.
But this isn't the end of the story.
Information, it seems, isn't just a way of describing reality.
In the last few years, we've discovered that information is actually an inseparable part of the physical world.
It's a really difficult idea to get to grips with but information, everything from a Beethoven symphony to the contents of a dictionary, even a fleeting thought, all information needs to be embodied in some form of physical system.
Amazingly, the reason we understand the true connection between information and reality is because of Maxwell's demon.
Remember, it seemed like the demon could use information to create order in a box of air that started out completely disordered.
Moreover, it could do this without expending any effort.
Information seemed to be able to break the laws of physics.
Well, that's not true - it can't.
The reason why Maxwell's demon can't get energy for free lies here - in his head.
What was discovered was this - the demon really is using nothing more than information to create useful energy.
But this doesn't mean that he's getting something for nothing.
Remember how the demon works? He spots a fast-moving molecule on one side of the box, opens a partition and lets it through to the other side.
But each time he does that, he has to store information about that molecule's speed in his memory.
Soon his memory will fill up and then he can only continue if he starts deleting information.
Crucially this deletion would require him to expend energy.
The demon needs to keep a record of which molecules are moving where, and if the record-keeping device is only of finite size, at some point the demon is going to have to erase it.
That's an irreversible process that increases the entropy of the universe.
It's the erasure of information that increases entropy once and for all.
What was discovered is that there's a certain, specific minimum amount of energy, known as the Landauer limit, that's required to delete one bit of information.
It's tiny, less than a trillion trillionth of the amount of energy in a gram of sugar, but it's real.
It's a part of the fundamental fabric of the universe.
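That minimum is k_B T ln 2 per erased bit. A quick back-of-the-envelope check at room temperature, assuming roughly 17 kJ of chemical energy in a gram of sugar:

```python
import math

# Landauer's limit: erasing one bit costs at least k_B * T * ln(2) of energy.
BOLTZMANN = 1.380649e-23  # joules per kelvin
T_ROOM = 300              # kelvin, roughly room temperature

limit_joules = BOLTZMANN * T_ROOM * math.log(2)
print(f"Energy to erase one bit: {limit_joules:.2e} J")  # about 2.9e-21 J

# Compare with the chemical energy in a gram of sugar (~17 kJ, an approximation).
SUGAR_JOULES = 17_000
print(f"Fraction of a gram of sugar: {limit_joules / SUGAR_JOULES:.1e}")  # ~1.7e-25
```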
Amazingly, we can now do real experiments that test aspects of Maxwell's idea.
By using lasers and tiny particles of dust, scientists around the world have explored the relationship between information and energy with incredible accuracy.
Maxwell's thought experiment, dreamt up in the age of steam, still remains at the cutting edge of scientific research today.
Maxwell's demon links together two of the most important concepts in science - the study of energy and the study of information and shows that the two are profoundly linked.
What we now know is that information, far from being some abstract concept, obeys the same laws of physics as everything else in the universe.
Information is not just an abstraction, just a mathematical thing or formula that you write on the paper.
Information is actually carried by something.
So it is encoded onto something - a stone, a book, a CD.
Whatever it is, there is a carrier that the information is on.
That means that information behaves according to those laws of physics.
So it cannot break the laws of physics.
What humanity has learnt over the last few millennia is that information can never be divorced from the physical world.
But this is not a hindrance.
What makes information so powerful is the fact it can be stored in any physical system we choose.
From using stone and clay to allow information to be preserved over eons to using electricity and light so it can be sent quickly, the medium that stores information gives it unique properties.
Today, scientists are exploring new ways of manipulating information, using everything from DNA to quantum particles.
They hope that this work will usher in a new information age, every bit as transformative as the last.
What we now know is that we are just at the beginning of our journey to unlock the power of information.
It's always been clear that creating physical order - the structures we see around us - has a cost.
We need to do work, to expend energy, to build them.
But in the last few years, we've learnt that ordering information, creating the invisible, digital structures of the modern world, also has an inescapable cost.
As abstract and ethereal as information seems, we now know it must always be embodied in a physical system.
I find this an incredibly exciting idea.
Think about it this way - a lump of clay can be used to write a poem on.
Molecules of air can carry the sound of a symphony.
And a single photon is like a paintbrush.
Every aspect of the physical universe can be thought of as a blank canvas, which we can use to build beauty, structure and order.
