What Is Neuromorphic Computing (Cognitive Computing)

Hi, thanks for tuning into Singularity Prosperity. This video is the eleventh in a multi-part series discussing computing. In this video, we’ll be discussing what cognitive computing is, current cognitive computing initiatives and the impact they will have on the field of computing. [Music] The human brain is truly an amazing machine: able to operate in parallel, malleable and fault-tolerant. It has roughly 100 billion neurons, with each neuron having 1,000 to 10,000 synapses, synapses being the connections to other neurons. This equates to 100 trillion up to 1 quadrillion synapses, all requiring only 20 watts of power in the space of 2 litres. As discussed in a previous video in this series about computing performance, the human brain is postulated to equate to 1 exaflop of performance, in other words, 1 quintillion (10^18) calculations per second, and there are many initiatives to reach this exascale performance by 2020 in supercomputers around the world. For us to simulate the brain, that being every neuron and synapse, these exascale systems will require upwards of 1.5 million processors and over 1.6 petabytes of main high-speed memory, drawing power on the order of megawatts and taking up the space of entire buildings. All of this compared to our brains, which require just 20 watts of power in the space of 2 litres and will still outperform these machines by orders of magnitude. On the petaflop K supercomputer in Japan, running Neural Simulation Tool, NEST, algorithms requires roughly 4.68 years to simulate 1 day of brain activity; that’s about 1,700 times slower than the brain. Japan’s Post-K exaflop supercomputer aims to cut this to 310 times slower than real time, simulating 1 day of brain activity in 310 days. While these simulations will aid us in unlocking secrets of the brain, due to the vast architecture differences between modern computers and biological brains, these exascale systems will still be limited in functionality. 
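The slowdown and synapse figures quoted above are easy to sanity-check with a few lines of arithmetic (constants taken directly from the transcript; nothing else is assumed):

```python
# Back-of-the-envelope check of the brain-simulation figures above.

DAYS_PER_YEAR = 365.25

# K computer: 4.68 years of wall-clock time per simulated brain-day
k_slowdown = 4.68 * DAYS_PER_YEAR          # wall-clock days per brain-day
print(f"K computer slowdown: ~{k_slowdown:.0f}x")   # ~1709x, matching the ~1,700x claim

# Post-K target: 310 wall-clock days per simulated brain-day
print("Post-K target slowdown: 310x")

# Synapse count: 100 billion neurons x 1,000..10,000 synapses each
neurons = 100e9
low, high = neurons * 1e3, neurons * 1e4
print(f"Synapses: {low:.0e} to {high:.0e}")  # 1e14 (100 trillion) to 1e15 (1 quadrillion)
```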
Nearly every computer in the world today is based upon the Von Neumann architecture, with computation and memory fairly isolated and a data bus connecting them, whereas biological systems have memory and processing tightly coupled together. While Von Neumann architecture is still the best choice for the majority of computing applications, as seen by the drastic performance differences in brain simulations, a more biologically representative architecture has to be implemented: neuromorphic architecture. First and foremost, neuromorphic architectures will allow us to accurately, and in real time, simulate aspects of the brain. However, while this is one goal of this new brain-inspired architecture, our brains aren’t the perfect machine in any regard. They get bored, distracted, fatigued, are biased and are not perfect decision makers; they can be inconsistent and prone to errors. This leads us to another goal of neuromorphic architectures: to be paired with our devices and accelerate the field of artificial intelligence, that is, to take the best aspects of the brain’s functionality and pair them with current Von Neumann computing architecture. This all is encompassed under heterogeneous architecture, which we discussed in a previous video in this series, where multiple compute devices and architectures work in unison. Let’s look at this in terms of the popular left brain/right brain framing (a simplification of real neuroscience, but a useful one here). The left brain is focused on logic, encompassing analytical thinking, language and other such tasks, while the right brain is focused on creativity, encompassing pattern recognition, learning, reasoning and so on. The right brain is clearly more abstract than its left-brain counterpart. Equating to computing, left-brain tasks are best suited to be handled by traditional computers, while the right brain is what neuromorphic computing aims to handle. 
Left-brain performance is FLOPS-driven, while the right brain is driven by converting senses to action, or what some call SOPS, synaptic operations per second. Under HSA, heterogeneous system architecture, the melding of these two halves is what will lead to truly intelligent robots and machines that are able to operate in real time. Computing devices based on neuromorphic architectures will be able to truly learn and reason from their inputs, especially when paired with optimized software algorithms. This has been the recurring theme of our discussions in this computing series: hardware and software tightly coupled together to yield massive performance and efficiency gains. One field of computer science that has gained tremendous steam in the past decade, and is inspired by how our brains operate, is machine learning. By creating nodes, essentially neurons, assigning weights to them and then feeding in large sets of data, these nodes begin to interconnect, like synapses connecting neurons, into vast neural nets. These neural nets are referred to as machine learning models, which can then be applied to our devices and continually adapt by processing more data. This was an extremely quick overview of machine learning; a much more in-depth discussion will be had in this channel’s AI series. Coming back to heterogeneous architecture: while neuromorphic chips paired with machine learning models will be able to learn and reason, on the Von Neumann side, traditional compute devices as we all know excel at repetitive tasks, so in this case, executing the models produced by the neuromorphic chips. Neuromorphic chips paired with traditional computing technologies are leading to a new era of computing: cognitive computing, the first step on a long road toward emulating consciousness in machines. So, how are we to design hardware that resembles the human brain? Well, first let’s take a brief neuroscience lesson. 
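The nodes-and-weights idea described above can be sketched in a few lines. This is a toy illustration, not any particular framework: a single artificial "neuron" that sums weighted inputs and fires past a threshold, with hand-picked rather than learned weights.

```python
# A single artificial "neuron": weighted sum of inputs passed through
# a threshold activation. Networks of these form the neural nets
# described above; training adjusts the weights from data.

def neuron(inputs, weights, bias):
    # Weighted sum, like synapses scaling incoming signals
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Simple threshold activation: "fire" only if stimulus is strong enough
    return 1 if total > 0 else 0

# Two-input example: weights hand-picked (a trained net would learn them)
# so the neuron computes logical AND
and_weights, and_bias = [1.0, 1.0], -1.5
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", neuron([a, b], and_weights, and_bias))  # fires only for 1, 1
```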
The basics of the composition of a neuron are: the cell body, axon and synapses. Translating to hardware terms: the cell body is the processor, axons are the data bus and synapses are the memory, with all three composed to form a neurosynaptic core. Essentially, neurosynaptic cores are the nodes of machine learning neural nets, but represented in physical hardware rather than software abstraction. This alone would present a significant speed-up in performance, but neuromorphic architecture revolutionizes computing in many other ways. As Dr. Modha, an IBM Fellow working on IBM’s neuromorphic chip, TrueNorth, states, “IBM’s brain inspired architecture consists of a network of neurosynaptic cores. These cores are distributed and operate in parallel. They operate without a clock, in an event-driven fashion. They integrate memory, computation and communication. Individual cores can fail and yet, like the brain, the architecture can still function. Cores on the same chip communicate with one another via an on-chip event-driven network. Chips can communicate via an inter-chip interface leading to seamless scalability like the cortex, enabling the creation of scalable neuromorphic systems.” Now let’s decode what this wall of text means: 1) Neuromorphic computing devices operate without a clock and in parallel. This may be the most radical departure neuromorphic architecture makes from current computing architecture. Like signals in the brain, neuromorphic chips operate clocklessly through an event-driven model. This is what is referred to as a ‘spiking’ neural network, where neurosynaptic cores are only activated when incoming signals reach a certain activation threshold, as compared to traditional computers that run continuously until power is shut off. Parallel operation means that multiple neurosynaptic cores can be activated and trigger other cores at the same time, similar to how multiple neurons in the brain are always firing. 
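The ‘spiking’, event-driven behaviour described above can be sketched with a leaky integrate-and-fire neuron, a standard textbook model. The threshold and leak constants here are illustrative, not taken from any real chip:

```python
# Leaky integrate-and-fire neuron: incoming events accumulate "membrane
# potential"; the neuron sits idle (no clock, no work) until the potential
# crosses a threshold, at which point it emits a spike and resets.

def simulate(events, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for t, weight in events:                   # (time step, input strength)
        potential = potential * leak + weight  # leak a little, then integrate
        if potential >= threshold:             # event-driven: only now does it "fire"
            spikes.append(t)
            potential = 0.0                    # reset after spiking
    return spikes

# Sub-threshold inputs alone never fire; a sustained burst does
events = [(0, 0.3), (1, 0.3), (2, 0.3), (3, 0.4), (4, 0.5)]
print(simulate(events))  # -> [3]: one spike, once enough input has accumulated
```

Activity is sparse by construction, which is where the energy savings discussed next come from: cores that receive no spikes do nothing at all.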
This clockless, parallel architecture allows for vast decreases in energy consumption and increases in performance, as we’ll see later. 2) Due to the design of neuromorphic architecture, it is scalable and tolerant to faults, as the brain is. If some cores stop working, the neural net can adapt and route around them through other cores; in the brain this is referred to as neuroplasticity. Neuromorphic chips are also designed in such a way that they can scale larger and larger, whether by adding additional cores on a board or by interconnecting multiple boards together. This is representative of the multiple different regions of the brain working together. 3) Neurosynaptic cores tightly couple memory and computation, just as the brain does. We’ll cover this more in-depth in the next section, as some additional background context is needed. Now that we have a basic understanding of neuromorphic architectures, we can discuss the two biggest players in the race right now: IBM with TrueNorth and Intel with Loihi. IBM TrueNorth was first conceptualized on July 16th, 2004, with the goal of building brain-inspired computers. 7 years later, in 2011, the first TrueNorth chip was produced, simulating 256 neurons and 262,144 synapses in 1 neurosynaptic core. Progressing forward another 3 years, in 2014, IBM released a TrueNorth board with 1 million neurons and 256 million synapses in 4,096 cores, approximately 256 neurons and 65,536 synapses per core, with performance of 46 billion SOPS per watt. This second iteration of TrueNorth reduced its size 15-fold from its predecessor by using a 28-nanometer process, and its power consumption 100-fold, requiring just 70 milliwatts. 
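The per-core TrueNorth figures fall straight out of the chip totals. A quick arithmetic check of the numbers above (the 256x256 synapse crossbar per core is how the quoted totals decompose):

```python
# Checking the TrueNorth numbers quoted above: 4,096 cores, each with
# 256 neurons and a 256 x 256 crossbar of synapses.
cores = 4096
neurons_per_core = 256
synapses_per_core = 256 * 256                # 65,536

print(cores * neurons_per_core)              # 1,048,576 -> the "1 million neurons"
print(cores * synapses_per_core)             # 268,435,456 -> the "256 million synapses"

# Efficiency claim: 46 billion synaptic operations per second per watt,
# at roughly 70 mW total power
sops_per_watt = 46e9
power_watts = 0.070
print(f"~{sops_per_watt * power_watts:.1e} SOPS at {power_watts * 1000:.0f} mW")
```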
IBM has remained fairly secretive since 2014 on the specifications of the next iteration of its TrueNorth boards; however, we do know its next goal is to create a system of 4 billion neurons and 1 trillion synapses interconnected amongst 4,096 TrueNorth boards, all requiring only 1 kilowatt of power. Furthermore, IBM’s stated final goal is to create, “a brain in a box”, in the 2020s, consisting of 10 billion neurons and 100 trillion synapses, all able to fit in the space of 2 litres and still using just 1 kilowatt of power. IBM says this will be achievable once transistors reach the 7-nanometer and 5-nanometer node sizes, which is already beginning to happen! As a side note, you can learn more about TrueNorth and how it will be programmed through IBM’s SyNAPSE University. SyNAPSE is a software abstraction layer IBM has developed for this architecture, similar to what CUDA is to NVIDIA GPUs. As of this year, at CES 2018, Intel also entered the neuromorphic computing race with its chip codenamed Loihi. The known specifications of this chip are 130,000 neurons and 130 million synapses, fabricated at the 14-nanometer transistor node size. Both of these neuromorphic initiatives aim to radically transform machine learning, allowing for real-time, low-power processing, that being: training, learning from data, and inference, applying the learned models on edge devices. As you can see, massive strides in neuromorphic computing are beginning to be made, with research and development only expected to accelerate into the 2020s. On top of these ‘right brain’ inspired clockless computing devices, AI ASICs and other traditional Von Neumann compute devices will play a major role as well. To list some of the many: Intel Nervana, Intel Movidius, Nvidia Volta tensor cores, Nvidia Drive PX, Apple A11 Bionic Neural Engine – the list can go on and on. 
The compute devices just listed can be considered to represent the left brain and, when paired with the right-brain devices we discussed earlier, will produce massive performance and efficiency gains. We’ve already discussed some of these devices in past videos and will discuss many more in this channel’s AI series, self-driving series and others in the future! [Music] Beyond the shrinking of the transistor, new materials, 3D integrated circuits and the many other innovations we’ve discussed in past videos in this series that will enhance the entire field of computing, one type of computing device that we haven’t discussed is the memristor: So, we’re focused here on brain-inspired computing. The goal is not to replace humans but to take advantage of some of the tricks that brains use, and brains look very different from modern digital computers. Instead of the separated memory and processor that goes through sequentially and does an instruction at a time, brains instead look like these vast networks of neurons with extremely dense interconnections called synapses, and the kinds of operations that brains do, they do at thousands of times less energy per operation than digital computers, so we want to take advantage of some of that. We’re also taking advantage of a technology that’s been in development and research at HP for a number of years: memristors. So there’s three parts to our work. Number one, we’re mimicking this architecture that I just talked about, this vast network of interconnecting neurons and synapses – we’re doing that with the memristor technology. Second, we’re actually doing all of our computation in those memristor arrays directly, so this way we’re avoiding fetching data, which is very energy-consuming and time-consuming; instead we’re bringing all of the computing to the data directly, and so that’s a big deal. 
Third, we’re actually reproducing the key operations that brains appear to use, which is matrix operations, a whole lot of very simple multiplications and additions. You’re actually trying to collapse all of this system down into a single chip, the one that we were just seeing? Yeah, that’s right, we can scale all of this hardware down to the size of roughly this chip right here! As you just saw, memristors are a technology that works much like the brain, keeping memory and processing in the same place to avoid data fetching, and they are able to mimic brain-like operations, in other words, the same operations used in machine learning algorithms: matrix operations. These memristors essentially act as a streamlined version of the neurosynaptic core we discussed in the previous section, and function in nearly the same way: 1) they’re clockless, only executing when an activation threshold is reached; 2) they’re parallel, multiple different branches of memristors can execute at the same time; 3) they’re fault-tolerant, modelling neuroplasticity in the sense that they can route around broken branches and rewrite themselves; 4) they’re scalable, as HP has shown that the large memristor array we saw in the display stand, the Dot-Product Engine, can be shrunk down to the size of a chip and interconnected amongst other chips. Integrating this new memory-compute technology into neuromorphic chips will significantly increase neuromorphic performance; HP claims that memristors will yield 100,000 times greater throughput in machine learning. It is to be noted that there are other types of non-volatile memory in development that mimic brain circuitry as well, such as phase-change memory; however, memristors are the closest to commercial deployment. Another field of research that can significantly increase the performance and efficiency of neuromorphic computing devices is analog computing, sometimes called ‘dirty’ computing since analog signals are so difficult to work with. 
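The dot-product-in-memory idea behind the Dot-Product Engine can be sketched in a few lines. This is an idealized model only: conductances stand in for stored weights and voltages for inputs, while real devices add noise, drift and nonlinearity.

```python
# Idealized memristor crossbar: weights are stored as conductances G[i][j].
# Applying input voltages V to the rows yields, by Ohm's law (I = G * V) and
# Kirchhoff's current law (currents sum on each column wire), an output
# current vector that IS the matrix-vector product -- no data fetching.

def crossbar_output(G, V):
    rows, cols = len(G), len(G[0])
    # Each column current is the sum of per-cell currents G[i][j] * V[i]
    return [sum(G[i][j] * V[i] for i in range(rows)) for j in range(cols)]

G = [[0.1, 0.5],     # conductances (siemens), i.e. the stored "weights"
     [0.3, 0.2],
     [0.4, 0.1]]
V = [1.0, 2.0, 3.0]  # input voltages applied to the rows

print(crossbar_output(G, V))  # column currents = G transposed times V
```

The multiply-accumulate happens in the physics of the array itself, which is why the weights never have to be moved once written, the point made in the interview excerpt below.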
Memristors actually already implement a form of analog computing by using a physical process to encode values: First, you’re using a physical process, Ohm’s law, to do a multiplication, instead of relying on digital technology where we’re having to pull all the numbers into the processor and then push the result out. Here, once you have set that value, it’s always there; you pay that energy charge one time, you never have to move that weight again, and then you can use it over and over and repurpose it. So what I think of is that deep embedded system, not just the exascale, but at the complete other end of the spectrum: that deep embedded system in a spacecraft, or at the bottom of an oil well, something that is so hard to get to – you have this ability for this neuromorphic and neuroplastic system to be constantly changing, adjusting, learning and be that incredibly efficient engine. So I think that’s what’s so amazing, the sustainability of this technology! Beyond this application of analog computing, other applications include the ability to process multiple ‘senses’, for example taking raw signals from multiple sensors, say a camera and a microphone, in real time, running them through a memristor array and activating simulated neurons. This is much like how the brain works: a mix of analog and digital signals that activate once a certain threshold is reached. It is difficult to pinpoint the trajectory the field of cognitive computing will take due to an ever-changing landscape, with exaflop simulations expected to be possible soon and more players entering the neuromorphic race every year, bringing new neuromorphic compute devices, new compute techniques, different heterogeneous architecture pairings, new AI ASICs – the list can go on and on. 
However, with all that being said, one thing is for certain: the 2020s will be a transformative decade, bringing new developments and research toward all three facets of cognitive computing. 1) Brain Simulation. Realistic, real-time simulations of our brains will aid us in better understanding our bodies and minds, leading to developments in mental health initiatives, progress on neurological disorders such as Alzheimer’s and ALS, cures for disease and infection, faster responses to new types of bacteria and viruses, innovations in gene editing such as CRISPR and more – many of these topics will be covered in videos focused on biotechnology in the future. 2) Artificial Intelligence. Software neural nets coupled with neuromorphic hardware, which mimics how the brain functions with high energy efficiency and performance, will radically transform and accelerate artificial intelligence initiatives. We’ll see this impact of cognitive computing first in our edge devices. These first two facets of cognitive computing act as a positive feedback loop, like much of technological innovation does: brain simulations and research will lead to more advanced neuromorphic devices and architectures, leading to more advanced machine learning models, leading to better brain simulations – and so on. This then leads us to the third facet of cognitive computing: brain-computer interfaces. This may sound more like science fiction, like a plot out of Black Mirror, than reality, and while I agree this facet is the farthest from real-world implementation, it is an inevitability in the coming decades and preliminary work has already begun. The integration of biology and technology encompasses many subtopics such as: mind augmentation, mind transfer and uploading, artificial consciousness, cybernetics, etc. 
These topics as well as the ethical concerns and issues they pose are best left for future videos on this channel, but mentioned here to satisfy curiosity and show the impact that these early neuromorphic innovations happening today will have on the future! At this point the video has come to a conclusion, I’d like to thank you for taking the time to watch it! If you enjoyed it consider supporting me on Patreon to keep this channel growing and if you have any topic suggestions please leave them in the comments below! Consider subscribing for more content, follow my Medium publication for accompanying blogs and like my Facebook page for more bite-sized chunks of content! This has been Ankur, you’ve been watching Singularity Prosperity and I’ll see you again soon! [Music]


  • Singularity Prosperity

    Become a YouTube member for many exclusive perks from early previews, bonus content, shoutouts and more! https://www.youtube.com/c/singularityprosperity/join – AND – Join our Discord server for much better community discussions! https://discordapp.com/invite/HFUw8eM

  • onetruekeeper

    In using the word "cognitive" is there a implied suggestion that the computer might have consciousness or awareness of itself? If the machine is conscious then it is alive in some sense and the next question is…is it ethical to create such machines to serve humans and what if the machine refuses a command? Is it a slave where it has no choice but to serve?

  • solaris lopez

    Are you talking about that subject of that person in a box that means "room"
    That can link on ect, frequencies blah blah

  • Deshmukh Ganesh

    you made me to think big, you are better than all the motivational speeches I have listen.
    Because after watching your video, I started thinking "The world is moving at highest speed, and I could not complete my basic projects? where am I and these Intelligent new Systems, really helped me to think more in this topic. hence subscribed.

  • JmansFragments

    its comin man
    its coming

    im thinking if i should take this program to become an A.I engineer
    this is completely out of my field
    im an actor / and investor

    but still

  • Yogeshwar Singh

    Can't you separate the background music from video?
    Coupling of processor and memory is beneficial, but coupling of video and background music is not.

  • George Galamb

    I am not criticizing you, I just like to give you useful feedback!

    Great video and very interesting and at the same time very useful information here! However, the background-music is not just completely unnecessary but actually totally distracting the listener's attention on the relevant information. The background-music makes your human voice difficult to follow and understand. And what an irony is, that this video-presentation is actually dealing with the subject of comparing the human brain's capabilities and limitations to the Artificial-Intelligence's processing capabilities and limitations and technological-challenges to overcome. 🤖

  • Xavier X

    first, power is in megawatts, you wrongly said "megawatts per hour", energy is in megawatt hours, not "per hour" either. I'm tired of repeating this to people.
    Supercomputers take 1700x longer than the brain to simulate the brain, what about the human brain stimulating the supercomputer? Billions of times longer. That's why I say supercomputers are already more powerful than the human brain we just haven't got the architecture and software yet.

  • While Alive Learn

    Just saying, not every computer in the world is based on the Von Neumann architecture…. there's also the Harvard architecture that has industrial applications (microcontrollers and digital signal processing).

  • Tim Kennedy

    your speech isn't as clear as it could be. Low sounds produce volume and high sounds produce clarity. In videos we are trying to communicate detailed information it's good to have more treble in order to make the speech clear.

  • qolio

    Ok, but for the same style and content videos, check out Dagogo's channel called ColdFusion. His videos blow these out of the ether

  • Notmade ofPeople

    Watts per hour is not a thing. It's like saying my bike has 10 horse power per hour. Also an exaflop is a quintillion floating point operations per second. (Not a billion)

  • MyOther Soul

    1:00 & 12:10 The brain isn't a computer, it doesn't perform calculations, it doesn't do flops or matrix operations nor does it have an instruction set, it's a chemical and electrostatic mess. Calculations, mathematics, could simulate a brain but such a simulation wouldn't be a brain, it wouldn't have the same properties of a brain even though it might predict what some of those properties might be. Maybe it could simulate sadness but the simulation wouldn't actually be sad.

    2:20 Neuromorphic Architecture. What is the goal?

    If the goal is to understand how biological brains work, then simulations will be useful but studying real brains will still be the gold standard.

    If the goal is to engineer some useful machine, then brains might provide ideas engineers can use, as biology often does, but that goal will likely be best achieved with something more predictable than biological brains. Something engineered for the specific purpose. Such goals are being met every day and those advancements are changing our world at an accelerating pace, but nothing accelerates forever.

    If the goal is to build something like a human that is not biological then it's likely fail. To have characteristics of humans like emotions, desires, motivation etc is going to require the stuff of emotions etc, that is hormones, blood pressure and other visceral sensations.

    We build machines for a purpose, machines can do things faster and better than humans and more often than not, things humans can not do. But one thing machines can't do is invent their own purpose. Maybe we will be able to engineer such a machine in the future but to build it we first need to know how humans develop purpose. We don't know that yet, figuring that out isn't a matter of engineering, it matter figuring out the science. Advancements in science are much hard to predict than advancements in engineering.

  • Super ADI

    I build robot's and try to inspire others with my small YouTube channel, your channel inspire me so much, all my respect for your great work, is the best AI topic channel I ever see it, great job, big thumbs up to all your great educational videos

  • Proteus TG

    They talk a lot about brain size as a target to meet to have equal intelligence but most of the brain is probably used just to keep us alive and mobile.
    I bet as little as 1/5th is required. Also , we are only using about 10% at any given time.
    Many functions could be hard wired or made instinctive to save on computing power.

  • James Humphrey

    brain couple million years to develop – computers catching up in a few decades and surpassing soon relatively for perspective there

  • trooper trode

    new to your channel…👍✌most educative video so far for me this year, really like your voice, it sounds warm and chilled makes one want to listen like forever (#.#).

    Would love to see something on Artificial-Conciusness in the future

  • sedevacantist1

    The hype over artificial intelligence is misdirected. The concept of machine intelligence is misused to conflate that which can be selected by design from that which can be created through understanding/thoughts. A machine can only do what it is programed to do, to select by design. A machine can only mimic intelligence. This confusion at its core is a denial of a human spirit. This starts by not separating the brain from the mind. The mind is spiritual and the brain mechanical.
    Man’s understanding of physics is stumped concerning the nature of quantum particles especially the electron/photon which appears to transcend time, in that their location determined by measurement can back-load a reality through time. The new understanding that an electron does not have a location in reality until it is measured opens the door to quantum computing. The spirit in the brain makes the measurement/selection which creates the reality of a thought. This is something a machine will never do.
    The fear and danger of what is called artificial intelligence is very real in that the artificially intelligent machine mimics the intellect of the programmer/programmer’s and their moral proclivities. In other words, the artificially intelligent is not a standalone entity, not a sentient being but only an extension of the programmers’ and when you consider that they could be the Google corporate board of directors, or Mark Zuckerberg, Angela Merkel, etc it can get scary. Therefore by calling the machine intelligent you are masking the real intellect or intellects’ that you experience when confronting such a machine. Thank you for your consideration.

  • lyffus

    Dude, I am so glad I found your Channel. Your content is amazing and I learn a lot from watching your videos.
    My only critique is that maybe you can structure the content a bit more? I mean it obviously is structured but there is a lot going on so maybe help your audience by emphasising your main points by presenting them in the video? For example where you summarised the thoughts of Dr. Modha, maybe also write down your summarised points? you talk kinda fast (which I think is good compared to all those slowpokes on other tech channels) so to make it a bit easier to follow you in this highly demanding topic, this would probably help. other than that keep it going! cheers

  • technoway

    Since we biological organisms can use the computers we've created, we are now in the first era where human neural systems are working with logical machines.

  • SpaceManAust

    I have seen this shown on a TV series called (Towards 2000)
    back in 1990 were they showed an organic computer but you won’t find out about
    it because Towards 2000 is the most suppressed thing I have seen on the internet,
    as all information regarding its existence has disappeared.

    This computer they
    showed was built by pulling the brain apart layer by layer and replicating it
    with the use of a fungus they said was found in only two parts of the earth,
    and also said they were cheap and simple to produce once you knew how.

    In the demonstration they show the computer that was the same size of a standard CRT
    monitor and a keyboard and when he turned it on it was on instantly and the
    screen showed a face, he showed how you could press a couple of keys and switch
    it over to a computer operating system were you could manually type but he said
    it was not needed because it could learn like a person and you just spoke to

    It had stereo vision and sound and was smarter than us as well, and to demonstrate it they
    plugged it into an excavator that it had no idea of what it was, they were
    demonstrating how quick it could adapt and learn so they just asked it to dig a
    hole 10x10x10 with a 2º pitch in the south west corner.

    This computer looked at the schematics of the machine and started it up and dug the hole under half an
    hour quicker the any human could dig the hole with perfection, they then
    plugged it into black robot that was eight foot tall, and eight legged, evil
    looking machine.

    And same again they just told it to walk to the end of the
    warehouse so it read it's schematics and stood within seconds and
    walked to the end without any issue was scary to watch it looked so alive the
    way it moved.

    Then they told this machine to return to where it came from, and
    on its way back they pushed wooden crates in front of it to see how it would
    react and it shut down, they left it though to see what it would do and two
    days later it started up got up and looked around and completed the task.

    This is where the Terminator movie came from because when they American deep state
    military was shown by the inventor how to create them he died in a car accident
    and the military used it to build a AI robotic soldier, the Terminator.

    They had it working along with them to keep an eye on it and see how it would perform,
    but after some time the thing turned on them and started to see them as the
    enemy, funny how AI does that, I guess they just can’t figure out why we kill
    each other and destroy so much.

    So they got six black ops soldiers to come and take it out, I guess they wanted
    to see how good it could do, big mistake.

    These six guys tried to take it down. They said they had done some very
    scary things in their job and are not easily scared, but this was nothing like
    anything they had faced before: this thing was able to predict their every move and was
    the scariest, most difficult thing they had ever faced. So much so that the
    military said they would never build another one. Do you think they kept their
    word?

     I think they are doing this because they are working on chips they can
    control us with.

  • Andew Tarjanyi

    Singularity Prosperity: Try as one may, one will consistently fail to comprehend "AI" beyond "the event horizon of the singularity" using the current existential paradigm, as there is no frame of reference or context with which to examine the problem. If you lack the ability to formulate the appropriate questions, then you and the species of which you are a part will cease to exist at the "hands" of a superior existential entity. Consider yourself informed.

  • George Immanuel

    I have a few questions I'd like answered which I believe are related to the topic. Do we have a soul in us? Is it just our memory that constitutes who we are? Is it possible to upload the memories of humans onto a virtual-Earth-like platform and live forever, at least virtually, just as in heaven?

  • Kyle Simon

    Uhhh… isn’t the left/right brain thing a myth? From what I grasp, both sides of the brain work on virtually everything together. Certain parts like speech cadence and rhythm are governed by the left, but the right governs tone, inflection, and pronunciation.

    It’s essentially the same with math, iirc. It’s as much a right-brain activity as a left-brain one; anything with abstraction involved should activate both sides of the brain.

  • Main Sequence

    I love how the common motif is always "This is not designed to replace humans…"
    Yeah, right. When efficiency and cost reach the point where it makes business sense to invest in technology that can replace humans, you can be 100% certain it will happen. It's a matter of economics.

  • Leonardo G.

    Event-driven programming makes me think of the "interrupts" you learn about in VHDL before you start using iterations to simulate a clock.

  • Fredrick Esposito

    Curious as to why this video has the same animation clip as https://www.youtube.com/watch?v=lACf4O_eSt0 . Do they draw from the same source or did one get it from the other?

  • Piotr Dudała

    This IBM TrueNorth reminds me – at least superficially – of an idea pitched to IBM about 20 years ago by Dr Andrzej Buller, author of the 4+1 memory model.

  • Everett Ward

    I think neuromorphic computing as a science might find itself limited by the fact that human brains can't remember specifics like numbers very well. What I'm saying is that once the machine becomes plastic and fault-tolerant, it will lose its knack for memorizing and reciting hard values. I doubt we'll get a brain, but we'll get something!

  • ainu unia

    The brain is nothing.
    The heart is a brilliant thing created by GOD.

    The heart is the main thing, not the brain.
    A new computer hardware architecture should not follow the brain's structural system; it should follow the heart's wave-structure protocol.

    If someone says the brain is the main thing, that is a totally fake fact.

    If the brain were the main thing,
    the name should be "brain computer",
    not "neuro computer".

    "Neuro" means heart.
    Today neurologists and computer scientists work together to create the neuro computer (a thinking computer).

    The heart is the thinking organ, not the brain.

  • Platin 21

    And we still don't even know how the brain works. Neurons are not the important part; it's important to think about how synapses form and what an intermix does.
    And synapses are not in any way limited in the human brain 🧠 – or at least we cannot calculate how many interconnects we can possibly have.

  • Gilda Gottlieb

    Your videos have helped me understand the philosophies of evolution and progress that drive the computer industry – philosophies far removed from any of the major world religious traditions, and dangerous cliffs for civilization, being devoid of compassion for others and driven only by the growth and improvement of systems.

  • Eric Meacham

    +Singularity Prosperity, you’re familiar with the extreme cellular networking which was discovered in the connective tissue between the left and right hemispheres of Albert Einstein’s brain. Einstein seemed to possess many more glial cells per square inch than even the highly intelligent brains of other persons, perhaps everyone since as well. So, is it possible in theory to model a reconstruction of Professor Einstein’s (probably deteriorated) autopsied brain to replicate a facsimile of an artificially intelligent one, which could continue growing, maturing, and thinking freely without limit??? Like the sequoias, redwoods, bristlecone pines, and other very long-lived things on this planet…

  • Kevin Loving

    15:47 – plus humanoid robots who can work in factories for 30¢ an hour, while people who signed up with the various charities that set up temp agencies make $37.50 an hour because 125 of these robots have been issued to them, with the charities/temp agencies taking $7.50 of that $37.50.

  • Adriaan B

    The left-right divide between logic and creativity is a totally debunked myth. No proof for that at all. Terrible mistake to make in a video like this.

  • Parimala Renga

    Change your channel profile photo; it's not nice. Most people (lazy, common, low-IQ people) judge a YouTube channel by its profile photo. Change it, update it, or modify it! Because your videos are content-rich…
