Moore’s Law and The Secret World Of Ones And Zeroes



Behold! The transistor, a tiny switch about
the size of a virus that can control the flow of a small electrical current. It’s one of
the most important inventions ever because when it’s on, it’s on and when it’s off, it’s
off. Sounds simple. Probably too simple. But this “either/or” situation is incredibly useful
because it is a binary system: on or off, yes or no, one or zero. And with enough transistors
working together, we can create limitless combinations of “ons” and “offs”, “ones” and “zeroes”, to
make a code that can store and process just about any kind of information you can imagine. That’s how your computer computes, and it’s
how you’re watching me right now. It’s all because those tiny transistors can be organized,
or integrated, into integrated circuits, also known as microchips or microprocessors, which
can orchestrate the operation of millions of transistors at once. And until pretty recently,
the only limitation to how fast and smart our computers could get was how many transistors
we could pack onto a microchip. Back in 1965, Gordon Moore, co-founder of
the Intel Corporation, predicted that the number of transistors that could fit on a
microchip would double every two years. So, essentially, every two years computers would
become twice as powerful. This is known in the tech industry as Moore’s Law, and for
forty years it was pretty accurate: we went from chips with about 2,300 transistors in
1972 to chips with about 300 million transistors by 2006. But over the last ten years we’ve fallen behind
the exponential growth that Moore predicted. The processors coming off assembly lines now
have about a billion transistors, which is a really big number, but if we were keeping
up with Moore’s Law, we’d be up to four or five billion by now.
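
To make that doubling concrete, here’s a quick back-of-the-envelope check in Python; it’s just repeated doubling from the figures above, not real chip data:

    # Moore's Law as repeated doubling, starting from ~2,300 transistors in 1972.
    start_year, start_count = 1972, 2300
    for year in (2006, 2014):
        doublings = (year - start_year) // 2
        print(year, f"{start_count * 2 ** doublings:,}")
    # 2006 301,465,600    (close to the ~300 million above)
    # 2014 4,823,449,600  (Moore's "four or five billion by now")
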
So why is the trend slowing down? How can we get more transistors onto a chip? Are there
entirely different technologies we could be using instead, ones that pose no such limitations?
And how do billions of little on/off switches turn into movies and music and YouTube videos
about science that display on a glowing, magical box? Spoilers: it’s not magic; it’s science. [SciShow intro music] To understand the device that you’re using
right now as well as the challenges computer science is facing, and what the future of
computing might look like, you have to start small with that transistor. A transistor is
essentially a little gate that can be opened or shut with electricity to control the flow
of electrons between two channels made of silicon, which are separated by a little gap.
They’re made of silicon because silicon is a natural semiconductor. It can be modified
to conduct electricity really well in some conditions or not at all in other conditions.
In its pure state, silicon forms really nice, regular crystals. Each atom has four electrons
in its outer shell that are bonded with the silicon atoms around it. This arrangement
makes it an excellent insulator. It doesn’t conduct electricity very well because all
of its electrons are spoken for. But you can make that crystalline silicon conduct electricity
really well if you dope it. You know, doping, when you inject one substance into another
substance to give it powerful properties, like what Lance Armstrong did to win the
Tour de France seven times, only instead of super-powered tiger blood or whatever, the
silicon is doped with another element like phosphorus, which has five electrons in its
outer shell, or boron, which has three. If you inject these into pure crystal silicon,
suddenly you have extra unbonded electrons that can move around and jump across the
gap between the two strips of silicon. But they’re not gonna do that without a little
kick. When you apply a positive electrical charge to a transistor, that positive charge
will attract those electrons, which are negative, out of both silicon strips, drawing them
into the gap between them. When enough electrons are gathered, they turn into a current. Remove
the positive charge, and the electrons zip back into their places, leaving the gap empty.
Thus the transistor has two modes: on and off, one and zero.
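
If you like to think in code, here’s a toy model of that switch in Python; a real transistor is an analog device, so this captures only the logical behavior just described:

    # Toy transistor: positive charge on the gate -> current flows (1);
    # no charge -> the gap stays empty and nothing flows (0).
    def transistor(gate_charged: bool) -> int:
        return 1 if gate_charged else 0

    print(transistor(True), transistor(False))  # 1 0
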
All the information your computer is using right now is represented by sequences of
open and shut transistors. So how does a bunch of ones and zeroes turn into me talking
to you on your screen right now? Let’s just imagine eight transistors hooked up together.
I say 8 because one byte of information is made of 8 bits; that’s 8 on-or-off switches,
the basic unit of a single piece of information inside your computer. Now, the total number
of possible on/off configurations for those 8 transistors is 256. That means 256 combinations
of ones and zeroes in that 8-bit sequence. So let’s say our 8-transistor microchip is given
this byte of data: 01000011. That’s the number 67 in binary, by the way. Okay, so what now?
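
You can check both of those claims in a couple of lines of Python:

    # Eight bits give 2**8 possible on/off patterns.
    print(2 ** 8)               # 256

    # And the pattern 01000011, read as a binary number, is 67.
    byte = 0b01000011
    print(byte)                 # 67
    print(format(byte, "08b"))  # 01000011
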
The cool thing about binary data is that the same string of ones and zeroes can mean totally
different things depending on where it’s sent. Different parts of your computer use different
decoding keys to read the binary code. So if our teeny tiny little 8-transistor microchip
kicks that byte over to our graphics card, our graphics card will interpret it as one
of 256 colors: whichever one is coded as number 67. But if that same byte is sent over to
our sound card, it might interpret it as one of 256 different spots mapped onto a sound
wave. Each spot has its own sound, and our byte will code for spot number 67, so your
speaker will put out that sound. And if it’s sent over to the part of your computer that
converts data into written language using the UTF-8 encoding, it turns into the letter C.
Uppercase C, actually, not lowercase c, which is a different byte.
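
Here’s a sketch of that one-byte-three-meanings idea in Python. The UTF-8 part is the real encoding; the palette and the sound-wave mapping are made-up stand-ins for whatever a particular graphics or sound card actually uses:

    byte = 67  # our byte, 0b01000011

    # A graphics card might use it as an index into a 256-color palette.
    # (This grayscale palette is a hypothetical example.)
    palette = [(i, i, i) for i in range(256)]
    print(palette[byte])                  # color number 67

    # A sound card might read it as one of 256 levels on a wave,
    # like 8-bit audio mapped onto the range -1.0 to 1.0.
    print(byte / 255 * 2 - 1)             # spot number 67 on the wave

    # A text decoder reading UTF-8 sees an uppercase C.
    print(bytes([byte]).decode("utf-8"))  # C (lowercase c would be byte 99)
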
So our eight-transistor processor already has a lot of options; the problem is that it can
only manage one byte of data at a time, and even if it’s flying through bytes at a rate
of a few million per second, which your computer is doing right now, that’s still a serious
data chokepoint. So we need more transistors, and then more, and more, and more, and more! And for the past 50 years, the biggest obstacle
to cramming more and more transistors onto a single chip, and therefore increasing our
processing power, has come down to one thing: how small we can make that gap between the
two silicon channels. In the early days of computing, those gaps
were so big that you could see them with the naked eye. Today, a state-of-the-art microchip
has gaps that are only 32 nanometers across. To give you a sense of perspective, a single
red blood cell is 125 times larger than that. 32 nanometers is the width of only a few hundred
atoms. So, there’s a limit to how low we can go.
Maybe we can shave that gap down to 22 or 16 or even 10 nanometers using currently available
technology, but then you start running into a lot of problems. The first big problem is that when you’re
dealing with components that are so small that just a few stray atoms can ruin a chip,
it’s no longer possible to make chips that are reliable or affordable. The second big problem is heat. That many
transistors churning through millions of bytes of data per second in such a small space generates
a lot of heat. I mean, we’re starting to test chips that get so hot that they melt through
the motherboard, and then sometimes through the floor. And the third big problem is quantum mechanics.
Oh, quantum mechanics, you enchanting, treacherous minx. When you start dealing with distances
that are that small, you start to face the very real dilemma of electrons just jumping
across the gap for no reason, in a phenomenon known as quantum tunneling. If that starts
happening, your data is gonna start getting corrupted while it moves around inside your
computer. So, how can we keep making our computers even
faster when atoms aren’t getting any smaller? Well, it might be time to abandon silicon. Graphene, for example, is a more highly conductive
material that would let electrons travel across it faster. We just can’t figure out how to
manufacture it yet. Another option is to abandon electrons, because, and get ready to have
your mind blown, electrons are incredibly slow. Like, the electrons moving through the
wire that connects your lamp to the wall outlet? They’re moving at about 8 and a half
centimeters per hour. And that’s fast enough when electrons only have to travel 32 nanometers,
but other stuff can go a lot faster. Like light.
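
Taking those numbers at face value, the arithmetic is easy to check in Python:

    # How long does an electron drifting at 8.5 cm/hour take to cross 32 nm?
    drift_speed = 0.085 / 3600   # meters per second, about 2.4e-5
    gap = 32e-9                  # 32 nanometers, in meters
    print(gap / drift_speed)     # ~0.0014 seconds, a bit over a millisecond
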
Optical computers would move around photons instead of electrons to represent the flow of
data. And photons are literally as fast as anything can possibly be, so you can’t ask for
better than that. But, of course, there are some major problems with optical computing,
like the fact that photons are SO fast that it makes them hard to pin down long enough
to be used for data. And the fact that lasers, which are probably what optical computing would involve, are huge
power hogs and would be incredibly expensive to keep running. Probably the simplest solution to faster computing
isn’t to switch to fancy new materials or harness the power of light, but to just start
using more chips. If you’ve got four chips processing a program in parallel, the computer
would be four times faster, right? Welllll, yeah, I mean, yes, but microchips are
super expensive, and it’s also hard to design software that makes use of multiple processors.
We like our flows of data to be linear, because that’s how we tend to process information,
and it’s kind of a hard habit to break.
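
For a taste of what “making use of multiple processors” looks like, here’s a minimal sketch using Python’s standard multiprocessing module; the workload is a toy stand-in, and real programs rarely split this cleanly:

    # Split one job across four worker processes, then combine the results.
    from multiprocessing import Pool

    def count_one_bits(chunk):
        # Toy workload: count the 1-bits across a range of numbers.
        return sum(bin(n).count("1") for n in chunk)

    if __name__ == "__main__":
        numbers = range(4_000_000)
        chunks = [numbers[i::4] for i in range(4)]  # four interleaved slices
        with Pool(processes=4) as pool:
            print(sum(pool.map(count_one_bits, chunks)))
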
And then there are some really exotic options, like thermal computing, which uses variations
in heat to represent bits of data, or quantum computing, which deals in particles that are
in more than one state at the same time, thereby totally doing away with the whole on/off,
either/or system. So, wherever computers go next, there are
gonna need to be some big changes if we want our technology to keep getting smaller, and
smarter, and faster. Personally, I’m holding out hope for the lasers,
laser computer- I want one of those. Thanks for watching the SciShow Infusion,
especially to our Subbable subscribers. To learn how you can support us in exploring
the world, whether it’s inside your computer or outside in the universe, just go to subbable.com/scishow. And speaking of that whole universe, check
out our new channel, SciShow Space, where we talk about that, including the latest in space
news. And as always, don’t forget to go to youtube.com/scishow and subscribe, so that
you can always keep getting more of this, because I know you like it. [SciShow outro music]

100 Comments

  • odiousominious

    hey, what about organic computing, anything in that department? brains still process more, or am I wrong now? I don’t know if I want to be wrong about this Xd

  • Dyslexic Artist Theory on the Physics of 'Time'

    Could the wave-particle duality be acting like the bits, or zeroes and ones, of a computer? This would form an interactive process, continuously forming a blank canvas that we can interact with, turning the possible into the actual!

  • Ian Moriarty

    Instead of making transistors smaller or using multiple chips, why not make the chips bigger? Double the size of a current chip, double the number of transistors. I’m sure many people would be OK with having a marginally bigger machine if it meant it was more powerful in the end.

  • Hanvit Lee

    Well actually, the GPU’s (graphics processing unit’s) VRAM (video memory) remembers what has to go on the screen, and the GPU core processes this and changes it into a signal sent through a DisplayPort, HDMI, and/or DVI port.

  • Nate A

    For computers like desktops, servers, or anything else that is not small like a cell phone or tablet, it seems to me like they need to stop trying to make everything smaller for a little while. I understand the appeal of making things smaller and smaller for small devices, but those devices do not really require a huge increase in performance in the foreseeable future. I think if they were to make the chip itself physically a bit bigger, that would gain a huge amount of leeway. Intel CPUs in desktops are pretty small as it is, where a doubling in physical size would not make them all that much bigger. In servers it would be very beneficial to be able to buy chips that are, say, 4 times bigger physically than current ones. Imagine the number of transistors they could pack into a chip that size, and it could still fit in any normal case with a slightly larger heatsink.

  • Steven Bulfer

    As an electrical engineer, I cringe at the inaccuracies in your explanation, but yes, I get it, you gotta dumb it down for the masses.

  • johenrique21

    I don’t know about you guys, but I think this was his best show. I laughed so much! Thank you, man that talks fast… I mean… real fast!

  • traso

    if electrons are so slow, why is turning the light on etc. instant?

    well, it isn’t, but it is stupidly fast, because electrons don’t travel all of that distance; they push other electrons, and the push is chained. It’s sort of like having a pipe full of balls: you insert one and push it, and the ball at the end moves almost instantly.

  • Austin Harding

    I enjoy your videos, but really, the quick in-and-out witty remarks aren’t necessary. People who watch these are already interested in the subject, and if you want to grab a larger audience, people who have to be wooed by such fancies wouldn’t be into such videos in the first place.

  • Spartacus547

    I think maybe a combination of some of these ideas working together would probably bridge that gap. It might be hard to figure out how to get optical computers to work with quantum computers, and multi-chip processors that work on a nonlinear architecture would also be difficult, but not impossible. There’s never a single thing that can really be a silver bullet fix to large problems; it usually ends up being a multi-faceted solution, with multiple ideas and combinations of new technologies working together, not one single technology to basically control all. More of a single flow of ideas versus a single technology to flow all ideas.

  • rosenvitae

    The current of electrons from my outlet moves at about 8.5 cm/hour?? Omg, mind blown. But wait, then the speed of the current isn’t responsible for the near-instantaneous reaction time of computing power (it seems more on par with something like the speed of lightning, electromagnetic stuff). In that case, the argument for fiber-optic transistors using the speed of light compared to the speed of the current seems illogical; it’s more an argument of the speed of light vs. the speed of electromagnetic stuff?

  • Z2ZProductions

    What if someone made a website that automatically generated pictures, so every picture that could ever be made was made? That’s all

  • John Mariano

    Cool, now let me try.

    Transistor (Brain), do I do my homework?

    input (procrastinate = 1, responsibleStudent = 0)

    Procrastinate me, the cpu has spoken.

  • Karmanya Gupta

    I loved that statement “oh quantum mechanics, you enchanting, treacherous minx”
    also I think that we should switch to parallel computing like the SpaceX Dragon uses
    and also the processing could be switched to optics, and storage as usual.
    perhaps… that would help 😛

  • sev

    I understand how binary works, I can also read it, but I still don't understand how the chip INTERPRETS the 1's and 0's. Why does an off/on state mean something at all to the chip to begin with? How and why does it interpret it in the first place?

  • ¬ 黒 Black Moon 月¬

    any chance you could make a version of this video using only the 1,000 most used words in the english language? 🙂 i bet it'd be an even better (and weirder.) time than the one about our galaxy was. (*^_^)

  • Judith Priestess

    Thanks for this informative video. This might sound naive but, why do we need computers to be faster? Seems like each alternative is not worth its consequence. I understand that progression is integral to the human experience, however, I think it's important to distinguish between need and want. Do we NEED ever Moore (pun intended) faster computers to survive and progress?.. I don't know, do we? At what cost?
    Perhaps Moore's law's evident plateau is showing us that we must inevitably arrive at some state of resolution, where certain human advances are concerned.

  • Mya R

    What about ternary chips rather than binary? Instead of just 1 or 0, you could have -1 as well, and my logic would assume that would increase processing power by a whole 50%.

  • NA

    To answer the people who ask “why don’t we make the chips bigger with the transistors the same size”: you must remember that the people who make these chips are companies who do it to make money. If you make the chips bigger, it costs more, and they would need to change their skills from making things smaller to making things bigger, all for a temporary solution.

  • Teralcraft

    As a Computer Science / I.T. / CNET major, this video is excellent. We are actually working on alternatives to how chips are made, the biggest contender being gemstones, specifically emeralds. That’s why we have multi-core CPUs, to adjust for the more-chips issue. Software actually has not been able to keep up with hardware advances. We actually recently started bringing back multi-socketed motherboards again, and more monstrous CPUs like Threadripper.

  • Thomas Conrow

    In 1978, when we were using 16k x 1 chips in sets of eight (or nine for parity check), one problem was the radioactive impurities in the silicon, which generated ionizing radiation, which in turn could corrupt the data stored on the chip. – FYI, a set of 8 16k x 1 chips would hold a whopping 16k bytes of data.

  • Rafal D

    Worth mentioning that some bytes are treated as instructions for the CPU itself. For example, on Intel processors (x86 architecture), the byte value 67 causes the ebx register value to increase by 1.

  • Adrian Bartholomew

    Wow. Just to further illustrate how fast technology changes, since this video was posted, 3nm chips are already in development.

  • Kelvin Broder

    I love this channel but can’t help but notice that this guy seems to be the one person least likely to need a flannel shirt.

  • zerokmatrix

    Ah, 2014 Hank Green, 5 years younger, who in those days was obviously getting his blood exchanged with liquid cocaine before being placed inside a 3m square perspex box, to ensure no damage to other people or equipment from his arms, and recording an episode.

  • Muhammed Miqdad

    Why do some electrons of the chip in my phone suddenly stop…? What is the reason behind the hanging of electrons? 😐

  • Megan Perreault

    HaHA ! Moore is gonna hit a Turkish 40! Umm, it's not magic it's science! I could laugh, but it might not make anybody happy. I wish I wish you were happier! JK not first song done by a bully. He needs a 40  ;not a 40 ounce!

  • Ever Weaver

    So what you’re telling us is… making microbots IS ALREADY POSSIBLE. If we can make computing chips smaller than viruses, which a cell CAN HOLD MILLIONS OF… you can basically program a micro machine to do work in our bloodstream already.

  • Jason Baldauf

    It’s 5 years later and I have a GPU that is touted to be 7nm. Also, dual-core and quad-core CPUs were available back then. Servers had multiple processors, as a previous post mentioned. All processes that a computer runs are contained in threads. Older single-core, single-thread CPUs were able to do multiple tasks at a time due to how a multitasking OS like Windows 95 and newer would assign each thread a slice of processing time. The more apps you had open, the more the CPU had to divide its processing time, thus slowing down the system. This is why help desk tells end users to close apps if a computer is running slowly. Adding more CPUs increases multitasking capacity by allowing more threads to be processed at the same time; cores are basically multiple processors contained on a single processor package. Hyper-threading, or logical cores, is a method that uses unused clock cycles to process a second thread on one core. If you are using a single-core processor that has two logical cores, the OS sees it as a dual-core processor. However, it’s not as fast as a true dual-core processor. These days, common laptop computers have dual-core CPUs, with each core having two logical cores.
