Graphene Computing & 3D Integrated Circuits To Increase Computing Performance



Hi, thanks for tuning into Singularity Prosperity. This video is the fourth in a multi-part series discussing computing. In this video, we'll be discussing computing performance and efficiency, as well as how the computer industry plans on maximizing them. The performance of a computer isn't measured by its clock speed alone but by how many operations it can complete. Thus, a unit of measurement, the FLOPS, was introduced: the number of floating-point operations a computing device can perform per second. Observing the trend from the 1960s, the performance of computers has grown 11 orders of magnitude, in other words, 100 billion times: from 1 megaFLOPS, 1 million floating-point operations per second, in 1965, to just about 100 petaFLOPS, 100 quadrillion floating-point operations per second, in 2016, achieved by China's supercomputer, the TaihuLight. To put the scale of that number in perspective, 100 quadrillion seconds is approximately 3.2 billion years; 3.2 billion years ago, life on Earth consisted of little more than single-celled microbes. It is expected that the supercomputer industry will achieve the performance of one exaFLOPS, one quintillion floating-point operations per second, by 2020. That's roughly as many operations each second as there are grains of sand on Earth. At an exaFLOPS of performance, we may be able to simulate the human brain on a computer. This increase in computing performance isn't showing any signs of slowing down, with zettaFLOPS performance expected by 2030. Now at this point, you may be asking what the limit on computational operations is. According to current laws of physics, this could reach upwards of 10^50 FLOPS by utilizing a black hole as a computing device! This may sound absurd, but future videos on this channel will definitely delve into how this could be possible. Bringing it back to Earth, let's focus on some of the revolutionary new ways the computer industry plans on shifting to increase compute performance and efficiency.
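As a quick sanity check on the scales quoted above, the arithmetic can be done in a few lines of plain Python (standard SI prefixes and a 365.25-day year assumed):

```python
import math

# Back-of-the-envelope check of the FLOPS scales quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 seconds

# 100 quadrillion (1e17) seconds, expressed in years:
years = 1e17 / SECONDS_PER_YEAR
print(f"1e17 seconds is about {years:.2e} years")  # ~3.17e9, i.e. ~3.2 billion years

# Growth from 1 megaFLOPS (1e6) in 1965 to ~100 petaFLOPS (1e17) in 2016:
orders = math.log10(1e17 / 1e6)
print(f"growth: {orders:.0f} orders of magnitude")  # 11
```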
The computer industry is beginning to reach an inflection point, where clock rates are capped and current implementations of the transistor are approaching their minimum sizes. The clock rate is the speed at which the CPU executes instructions: a pulse is generated to keep everything in the processor synchronized, and with each pulse instructions can be executed. Clock rates rose steadily up until the early 2000s, from 100 Megahertz in the early 90s to 4 Gigahertz by the early 2000s, a 40-fold increase in the span of a decade, so you may be wondering why they plateaued afterwards and haven't recovered since. The answer is the end of Dennard Scaling. Dennard Scaling is another law, just like Moore's, stating that as transistors get smaller their power density remains constant. Essentially, this meant that as transistors got smaller, their power consumption and heat generation would remain constant or decrease. Once transistors crossed the sub-100 nanometer mark, this stopped holding true, and increasing the clock would result in massive power usage and extreme heat generation due to the density of transistors in such close proximity. As you can see, past about 4.5 Gigahertz on most processors, the power usage versus clock rate trade-off starts becoming very unfavourable. Thus, a collective decision by the industry was made to stop increasing the clock speed and to focus on adding more transistors as well as parallel hardware to chips, making them able to execute more instructions per clock cycle. Continual miniaturization of the transistor has been the industry's go-to solution for increasing performance and efficiency. Computing devices are just beginning to hit 10 nanometers, with a clear runway down to 3 nanometers by 2025. IBM estimates that scaling from 10 to 5 nanometers will yield a 40 to 50 percent boost in performance and a 75 percent boost in energy efficiency.
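The clock-versus-power trade-off can be illustrated with the classic CMOS dynamic-power relation P = C·V²·f: higher clocks typically also demand higher voltage, so power grows faster than linearly with frequency. A minimal sketch, where the capacitance value and the voltage-versus-frequency curve are made-up illustrative numbers, not real chip data:

```python
# Simplified dynamic-power model: P = C * V^2 * f (switching power only).
def dynamic_power(c_farads, volts, freq_hz):
    return c_farads * volts ** 2 * freq_hz

C = 1e-9  # effective switched capacitance (illustrative value)

# Assume voltage must rise mildly with frequency (illustrative model):
def required_voltage(freq_ghz):
    return 0.8 + 0.1 * freq_ghz

for f_ghz in (1, 2, 4, 8):
    v = required_voltage(f_ghz)
    p = dynamic_power(C, v, f_ghz * 1e9)
    print(f"{f_ghz} GHz @ {v:.1f} V -> {p:.2f} W")
```

Under this toy model, doubling the clock from 4 to 8 GHz more than triples the power draw, which is the kind of unfavourable trade-off that ended frequency scaling.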
Following that, scaling from 10 to 3 nanometers could potentially yield 60 to 80 percent gains in performance and 90 to over 100 percent gains in efficiency! However, within the next decade, based on these two factors of computing, transistor miniaturization and clock speeds, computing performance would stop growing. While hardware and software parallelism and optimization will always be ways to milk more performance and efficiency from current architectures, completely new paradigm shifts will be needed to continue the exponential performance trend. Before we continue, if you want more insight into transistor miniaturization and computing parallelism, be sure to check out the previous two videos in this computing series. Back on topic, the Defense Advanced Research Projects Agency, DARPA, seems particularly interested in investing nearly a quarter of a billion dollars into the development of 3D integrated circuits and research into new materials as the next major paradigm shifts in computing! The era of Silicon is coming to an end. There have been, and are, multiple research projects studying new materials that could replace the ubiquitous silicon semiconductor. Out of these, so far, most bets are being placed on Carbon; to be more specific, atomically thin Graphene and Carbon Nanotubes, a Carbon Nanotube simply being Graphene rolled into a cylinder. The goal is to use this material to replace every facet of not only computer but electronics design, from wiring to the chips themselves. Graphene has extremely high thermal and electrical conductivity, much higher than Silicon; simply put, this means it could handle more heat and use less power, meaning the computer industry could begin following Dennard Scaling once again. This translates to the ability to increase clock rates to insane levels, on the order of Terahertz, allowing for computers 1000 times faster than today's while using 1/100th the power!
Earlier in this video, I mentioned that at 1 exaFLOPS, we'll be able to simulate the human brain. The human brain, however, only uses about 20 Watts of power, whereas the computer we'd use to simulate it would consume power on the order of Megawatts. A Graphene-based computer could bring this down to the order of Kilowatts. This would also translate to consumer devices; imagine an iPhone with a battery that could last for a month on a single charge! Beyond the massive performance and efficiency increases Graphene brings, this super material also possesses other amazing properties, such as its greater-than-steel strength and its flexibility! These properties will shape not only the field of computing but every electronics-based field, such as a new era of sensors which will allow our devices to become smarter. Graphene can detect radiation at Terahertz frequencies, and it can change its electrical properties in the presence of different gases, effectively giving it a sense of smell. Imagine wearable medical sensors able to detect the slightest deviations in heartbeat, phones that could serve as carbon monoxide alarms, air pollution integration into Google Maps; the list goes on and on. The Internet of Things will also be massively affected by Graphene, ushering in the era of flexible electronics. Imagine digital clothing, smart walls and windows, phones that could turn into tablets and more! The use cases Graphene-based computational devices will open up are never-ending! The field of nano-engineering is still in its infancy, but rapidly developing. As it progresses, we'll discover more materials and better ways to utilize Graphene, Carbon Nanotubes and other Carbon structures. This channel will definitely be dedicating many videos to come to nano-engineering. 3D integrated circuits are exactly what the name suggests: stacking layers of transistors above each other and interconnecting them.
While there are a few approaches research teams are taking to achieve this, the DARPA-funded Monolithic 3D System-on-Chip (3DSoC) program has made the most progress. Nothing has been demoed to the public as of yet, but some of the performance and efficiency metrics they've released are astounding. The results compare a 2D 7 nanometer transistor node against a 3D implementation at 7 nanometers, across multiple machine learning models. As you can see from the results, the 3D implementation crushed the 2D process, yielding performance boosts on the order of 100 times! Beyond machine learning, here are the performance boosts over various other compute tasks, such as PageRank and regression, yielding gains up to 1,000 times better than 2D architectures! These gains span both energy efficiency and performance in terms of execution time. Much of this massive boost comes from the demolition of the memory wall. While computer processing performance has seen significant gains over the years, the processor-memory gap has only continued to widen, by approximately 50% per year, making it one of the computer's greatest bottlenecks. This is because accessing memory takes time, which drastically slows down overall performance; in some tasks, such as machine learning, 80 to 90% of the time is spent just accessing memory. With 3D integrated circuits, the memory and compute logic are placed on different layers, significantly increasing the memory bandwidth. This is why such massive gains in performance and efficiency are seen. Now, 3DSoCs are still Silicon-based, meaning heat dissipation will remain a major issue. This will only get worse with multiple stacked layers; the current solution, as with 2D architectures, is to bring clock rates down once again, potentially to the order of 1 to 2 Gigahertz. However, imagine the insane results that could be achieved once this technology matures and is redesigned with Graphene and Carbon Nanotube based processes.
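The memory-wall effect described above can be sketched with a toy latency model. The operation counts and latencies below are illustrative assumptions, not measurements of any real chip:

```python
# Toy model of the "memory wall": runtime = compute time + memory-stall time.
def runtime(n_ops, ops_per_sec, n_mem_accesses, mem_latency_sec):
    compute = n_ops / ops_per_sec
    memory = n_mem_accesses * mem_latency_sec
    return compute, memory

# A memory-bound workload: 1 billion ops on a 100 GFLOPS core,
# with 1 million off-chip accesses at 100 ns each (illustrative numbers).
compute, memory = runtime(1e9, 1e11, 1e6, 1e-7)
total = compute + memory
print(f"memory share of runtime: {memory / total:.0%}")  # ~91%

# 3D stacking mainly raises bandwidth / cuts effective latency; model a 10x cut:
_, memory_3d = runtime(1e9, 1e11, 1e6, 1e-8)
print(f"speedup from faster memory alone: {total / (compute + memory_3d):.1f}x")
```

Even this crude model lands in the 80-to-90-percent range quoted above, and it shows why attacking memory latency and bandwidth, rather than raw compute, yields such outsized gains.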
DARPA expects its first functioning 3D integrated circuit prototype by 2019. Up to this point, we've discussed how computing performance is measured and two massive paradigm shifts, the use of new materials and 3D integrated circuits, to continue increasing performance and efficiency. Now, these implementations are still very much in their development stage, with mass production of either not expected until the mid to late 2020s, right on par for when transistor scaling is expected to stop. While we are in this transitional period, more emphasis is being put on hardware and software, maximizing optimization through parallelism and other techniques. In terms of hardware parallelism, elaborating further on what we discussed in the previous video in this series, there are still many routes the computing industry can take to improve performance: making the pipeline wider, in other words superscalar pipelines; making the instruction pipeline longer; adding more instruction computation units, which would allow more instructions to be executed per clock cycle; more cores; architecture redesigns and more! Another major hardware topic we'll be covering in the next video in this series, and something we momentarily touched upon earlier, is overcoming the memory and storage gap. In terms of software parallelism, the industry is going through massive changes. Focus on multi-threading and fully utilizing hardware resources is becoming standard in every industry. As exemplified by machine learning, more advanced algorithms, probabilistic programming languages, new protocols and more, optimization is getting better and better, year after year, month after month. These topics and the evolution of software itself are topics for future videos; however, there are tons of great creators on this platform and resources online if you wish to learn more now.
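The limits of the parallelism techniques mentioned above are commonly summarized by Amdahl's law: if a fraction p of a program can be parallelized across n units, the overall speedup is 1 / ((1 - p) + p / n). A quick sketch:

```python
# Amdahl's law: overall speedup from parallelizing fraction p across n units.
def amdahl_speedup(p, n):
    """p: parallelizable fraction (0..1), n: number of parallel units."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, speedup saturates:
for n in (2, 8, 64, 1024):
    print(f"{n:>5} cores -> {amdahl_speedup(0.95, n):5.1f}x")

# The serial 5% caps the speedup at 1/0.05 = 20x, no matter how many cores.
```

This is why the industry chases both more parallel hardware and shrinking the serial fraction (better algorithms, multi-threaded software) at the same time.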
With both hardware and software, after years of stagnation, competition, the driving force of innovation, is becoming much fiercer. This is also great for us, the consumers, leading to better products at competitive price points. Now, the majority of our discussions over the past few videos have revolved solely around the CPU as the primary computing device. The CPU is general-purpose, not designed for any specific task. This is great for consumer desktops and laptops, but a huge bottleneck in terms of computational performance and efficiency. After covering solutions for decreasing the memory gap, the following videos will be on GPUs, FPGAs and application-specific integrated circuits, ASICs, as the driving force of the computing industry. Following our computing hardware videos, we'll focus on the huge paradigm shifts that will shake the computing industry, on top of the 3D integrated circuits and new materials we discussed in this video. These videos will include optical, quantum and bio computers as well as cloud computing! At this point the video has come to a conclusion. I'd like to thank you for taking the time to watch it. If you enjoyed it, consider supporting me on Patreon to keep this channel growing, and if you want me to elaborate on any of the topics discussed, or have any topic suggestions, please leave them in the comments below. Consider subscribing for more content, follow my Medium publication for accompanying blogs and like my Facebook page for more bite-sized chunks of content. This has been Ankur, you've been watching Singularity Prosperity and I'll see you again soon!

100 Comments

  • Ashutosh Samal

    The video is so well made, it's just amazing and really shows your efforts. I'd love to watch the last two private videos in this series.

  • Ricardo Lourizela

Graphene has no energy gap; it behaves more like a metal rather than a normal semiconductor, which limits its potential for transistor-type applications. There’s a lot of research on other two-dimensional materials, like Indium Selenide (InSe) or hexagonal boron-carbon-nitrogen (h-BCN), that have large bandgaps.

  • Mohamed M. Sabry

    Got this video as recommendation. Surprised that my work is mentioned here. Many thanks for such a clear and simple explanation

  • Alejandro

    Great video very informative and up to date, just a couple things, first the audio, I don't know what it is but it sounds like you're muffled or something, and second I know its a lot of information for 10 minutes but sometimes I think you went a bit too fast.

    Keep it up hope to see more!

  • Meta Cube

    So basically, if we manage to create a commercial CPU using graphene, even with today's IPC levels, we could boost the computation power by hundreds. Wow.

  • Eric Kitchen

    It's crazy how few subscribers you have for the quality you have! Keep it up and this will grow rapidly. Fantastic work.

  • Mark S

    QUESTION: Did anyone from outside China verify if their latest superfast computer worked to international standards as they claim? No, nobody independent from outside has verified that speed under proper conditions and temperatures. If you trust anything that this new dictator commie government has to say, then you are a fool.

  • Amir Abudubai

    A FLOP isn't a measurement instruction per second. IPS is the measurement of instructions per second however it doesn't work very well for modern high performance computer. This is mostly because newer systems are designed to perform more than one floating point operations in a single operations.

    Also, no bets are being placed on Carbon replacing Silicon. To make a graphene transistor, you have to control both the chemical and physical structure; we can hardly make a single transistor in lab. Whatever replaces Silicon will have to work with photolithography otherwise we won't be able to manufacturer it no matter how much better than Silicon it is.

  • Drew B

    Really great content. Although I don't have trouble keeping up, you should consider slowing down. This is not the only video of your to have many comments about you talking too fast. I'm only saying because maybe you could get more subs. On the other hand if that's just how you talk… meh. Anyhow you have a new sub

  • Matt Stiles

    Ya but the display is actually 90% of the battery drain in smartphones!
    We need transparent displays already with all the components in the bezel. (Yes, bring bezels back in exchange for see-thru screens)
    Apple needs to get on this first or they will fall behind even further lol!

  • KorAllRBare

    I'm an idiot please disregard the following paragraph under the dotted line.. As I miss heard you, having said that..
    You seriously need to slow down your ranting, yes ranting is what it sounds like..
    —————————————————————————————————————————————————————————————————-
    Sorry but I am adamant that graphene is not carbon nanotubes, My understanding is that graphene is actually a single layer of carbon atoms, a somewhat 2 dimensional Mass, I say somewhat as an atom is made up of standing waves or "Particles", Anyway I digress, So yeah: Yes.. Once a tube is formed by a sheet of Graphene, it is only then best to refer to that Mass or Structure as a Nanotube, I am guessing maybe that's why you received so many thumbs down, which BTW I haven't.. Mind you I also didn't give you a thumbs up..

  • Robert Galletta

    use the material from the space shuttle tiles as heat sync in between layer of the 3d chips to cool them

  • Steven Burrell

    Graphene would be nice but we can only make 400 tons a year. If we really want to have a graphene age we need to make over 10,000 metric tons a year to find a niche market. Silicon was made in quantities around 7.2 million metric tons a year.

  • Daniel Tule

    You just told me that within the next two decades computers can become trillions of times more powerful?

  • Mridul

    Very Informative tone of narration , it weirdly keeps you curious .. Great Video ! and the animation is very well done !

  • Nikolaos Skordilis

    3:19 It is always "either or", never "and". Your very own text about the IBM claim states that ("Or, more interestingly…"). Smaller fabrication nodes provide a higher power efficiency, which means that CPU and GPU companies face a trade-off between higher performance at the same TDP (i.e. heat, which is derived from how much power is consumed) or the same performance at a lower TDP. Or, of course, they can opt for a combination of the two.

    For instance if a new node can provide 20% higher performance (at the same TDP, and thus the same battery life for mobile phones, tablets and laptops) or 35% lower TDP (at the same performance) they can sacrifice some of the performance and target 15% higher performance and ~25% lower TDP.

    According to the first preliminary data about Intel's first couple of Cannon Lake CPUs that have been spotted in Asia, they have ~15% higher performance due to the transition to 10nm, despite having no architectural difference. That performance edge is due to the 10nm node's higher power efficiency, which allowed Intel to add a 10% higher clock to their base clock and the rest must be due to longer boost clock times and +1 MB of L3 cache.

    Yet their TDP is the same as the previous generation, which means Intel sacrificed their entire TDP gain in exchange for merely 15% higher performance. That does not paint their 10nm node in a very bright light, which is presumably one of the reasons they delayed full volume production until next year. Nope, despite your exascale and even zettascale projections the road to them is nowhere close to "this increase in computing performance doesn't show any signs of slowing down". The industry requires either new materials like III-V semiconductors or a replacement for CMOS.

  • David Cadman

    will you be doing a video on the recent MIT advance in industrial quantity production of graphene film… thanks… this is the source video I leaned about it…
    https://www.youtube.com/watch?v=gaoE9cAbgTs

  • cmscms123456

    In 1985 I worked for a company that was making Diamond substrates from carbon based gases. Once we had perfected the process, the FBI came in and SHUT US DOWN, TOOK EVERYTHING. Whats is Diamond substrates good for? Silicon IC have a problem, heat. Diamond IC's are near limitless in speed. 33 years later, you never see nor hear of "Diamond Processors".. and you won't. The Government has them. In the 1990's I worked for a company that made 'sub-micron' metrology devices. 1995 ALL of our customers told us, they could no longer buy our equipment, because their parts were "TOO SMALL" to be seen in a 1 micron field of view… TOO SMALL…!! That was 23 years ago… You can not even fathom where tech science is today.

  • Singularity Prosperity

    Become a YouTube member for many exclusive perks from early previews, bonus content, shoutouts and more! https://www.youtube.com/c/singularityprosperity/join – AND – Join our Discord server for much better community discussions! https://discordapp.com/invite/HFUw8eM

  • Mephmt

    Hey, I just found your channel and you've definitely earned my sub. I just wanted to say, you give off a very Isaac Arthur vibe, which is amazing. I hope you grow even more! Cheers!

  • Odilo von Steinitz

    I just came across your channel by accident today and immediately subscribed – what a fortuitous accident it was! Your work is top notch and very informative.

    I have an interest (venture capital investment) concerning the practical applications of graphene in general electronics (not only computing), photovoltaics, energy storage, in propulsion and transportation, desalination and the invention/production of brand-new industrial materials through combining and interspersing and/or sandwiching layers of different metals, carbon fibre, plastics etc with graphine layers.

    The problems associated with graphene as a new industrial material capable of solving multiple problems towards progress of otherwise feasible 21st century technologies, seemed to lie primarily in the production costs and scalability of graphene sheets. It would appear now that very recently this road block may have become a thing of the past.

  • Svetlin Totev

    It's funny how you animated a clock speed going up to 1 THz. Just due to the speed of light limit the entire processor, with all of the logic gates, control circuits and internal memory, has to fit within a sphere of 150 microns diameter just because of the speed of light limit (assuming the signals travel at that speed and the logic gates don't slow the signal down more than that).
    Technically, yes, you can make a processor that fast but it wouldn't really be able to do anything that useful except for doing some small number calculations that require the operations to be executed in a specific order that that is already done in normal processors in operations like addition that can be executed in a single cycle. But the clock rate for the whole processor has to include all the interpretation and execution of the instructions and one of the main limits is how much internal memory you want since it takes up a lot of space but if you have it externally it is much slower.
    So, yeah, you can get high clock speeds for specific tasks but not much higher for general purpose processing.

  • J Shysterr

    And yet the time it takes to boot up your computer hasnt changed since the first pc and upload and down load and web surfing time ist much faster than the best dial up speeds. How about the computer industry let speeds get faster before trying to cram in more memory than the computer and internet can handle?

  • Kyle Simon

    So what happens if you link nanotubes along their length so that all their walls intersect around a central cavity?

  • Rib Bs

    If the human brain is by far faster and more efficient than any supercomputer then we need to make the next computers out of fat and protein not nano tubes right??

  • Jm ZZ

    a very fast sculptor sculpts a rock say 100grams per second. if you come with a hammer and start chipping at the rock at 1grams per second, you are slow, so what's the solution? get a bigger hammer, and chip 20 50 60 75 then 100 grams per second. the rock is smitherines, doesn't make it a sculpture.

  • DarthRaver86 86

    Bro i spent like the last hour or so watching a bunch of your videos. You use computers everyday but rarely do you ever stop to think how it all actually works. You just know it does work so your questions stop there. Its cool to see how it all works together. Awesome vids!

  • Monkey Monkey

    Even though Summit currently runs at 200 petaflops, they too say that their upgrades will be around 1 exaflop around 2020 so that's right in line with this prediction.

  • GRASBOCK WindyOrange

    The assumption that one could increase the Clockspeed of a CPU to the 10^12 Herz just because heat generation sinks low is false. This is due to the fact that the wavelength of the electromagnetic waves becomes too small for the chip to remain controllable. There is a reason as to why CPU are as small as about 4cm, while the wavelenth is about 6cm at 5 GHz. At THz the CPU would have to be smaller then 300 micro meters.

  • Emanuel de Sousa

    Love your video, but would be even better if background music was lower and you slowed down a bit. Very hard to follow at times, had to watch a few times. Awesome content and graphics. Kudos to you.

  • Dr. Fresh_2k

    Is that how humans solve super advance mathematical equations and find new technologies, by allowing super computers to think for us?

  • Dr. Fresh_2k

    If 1 EXAFLOP will simulate the human brain, at that point, wouldn't the computer be smarter then humans because it will be able to access all that information while we as humans have that brainpower but don't/cannot access all parts of our brain?

  • Fergus Moffat

    Just a little correction , next year we are expecting to have a 34 exoflop supercomputer , tachyum is building it , it will be the power of 34 human brains , and will be significantly more powerful than the 1.2 exaflop supercomputer planned by US gov in 2021 and according to a the head of true North we will also get a neuromorphic chip as powerful as the human brain next year also. Mabye this will shake these AGI denyers out of their skepticism. I have no doubt we will have an AGI within 5 years after the breakthroughs that will be made this year at Darpa. Rey Kurzweil predicted an AGI as smart as a human by 2029 , but he never predicted the rapid progress in quantum computing that could enable a super intelligent AI within 5 years and he never predicted elon Musks neuralink that will enable us to scan whole cortical columns starting in 2019

  • Banisher of Evil SOUL

    How much is 1 to the power of 50 flops? I found no search results? And that's by using a black hole as a computing device

  • MysteriusBhoice

    if you simulate the human brain with transistors it wont be as efficient
    dedicated neural units maybe better but som1s gotta make and design them
    you could train them on a standard computer so they learn faster then move them onto dedicated hardware

  • MysteriusBhoice

    all the tech shown in this video was created 10 years ago or more
    its just that the computer industry worked with silicon and improved it and wont change to a better material because it will have to change its entire assembly line to make them
    now that they can no longer do that, they turn to material scientists with stored ideas from over a decade ago to improve computing!

  • Emperor

    I'm born in the right decade, or came just a bit too early and found out on all the good stuff when it is in its heyday.

  • Feynstein 100

    Mate, speak a little slower. If you talk so fast, most of what you say will just go over people's heads. There's no rush, you have all the time in the world. So please speak at a more comfortable speed for the viewer, not like someone trying to win a rap battle.

  • MotoZest

    Incredible video, as usual! Your timelines are just slightly off – we'll be getting 3nm transistors in 2021 (likely either GAAFET or MBCFET), not 2025; we also won't hit zettascale in 2030, more like 2033-2035.
