The Ethical Implications of Mind-Machine Meld | Future You | NPR

[Elon Musk] Hello, everybody.

[Elise Hu] This is entrepreneur Elon Musk announcing his new company, Neuralink. The goal? Merging our human minds with the superintelligence of computers.

[Musk] So hopefully we can have a tertiary layer, which is, kind of, the digital superintelligence layer. And in fact, you already have this layer: it's your phone and your laptop.

[Hu] That's right. He wants to put computing devices inside us. Okay, pause. This brings us to the big question of this episode: do we want to link our minds directly with machines? For this special episode of Future You, we're talking with two big thinkers about augmenting our brains and bodies. How is the technology already available today upgrading, or downgrading, us? Our minds can already be networked with other ones. Our bodies can be augmented with exoskeletons. Our learning can happen in half the time …

[Dr. Dan Chao] … like, so that you're hitting motor cortex.

[Hu] … so long as our brains get an extra boost. The promises are exciting, but we can't ignore the pitfalls. In the race to the future, we should stop and look around for a minute. What could go wrong? How much enhancement do we really need? Let's start with the promise of this moment. And where else to talk about a bright future than Venice Beach in Southern California?

[Bryan Johnson] My name is Bryan Johnson. I am an entrepreneur, and I started a company called Kernel to build brain interfaces.

[Hu] Nailed it!

[Producer] It looked really good.

[Hu] Like Musk and Facebook's Mark Zuckerberg, he's hopeful about how this technology could upgrade our ability to solve big problems. And they're each spending millions on research trying to unlock the mysteries of our minds, so computers can better read them and better work inside them.

[Johnson] If we understood the inner workings of our brain, you could imagine that we could use that information to better ourselves. We all know what it's like to experience some kind of mental challenge. And we've just begun to scratch the surface in understanding our brains, and mostly that is a problem of the hardware we have to get access to it.

[Hu] Johnson argues that if we can unlock human capability and connect our cognitively enhanced brains to one another, every other problem in the world becomes easier to solve.

[Johnson] My estimation is, if we are to address climate change, the risk of pandemics, of wars, of terrorism, of mental illness, of everything we care to solve in the world, there's one thing they all have in common: our minds. And yet we have very primitive tools to actually explore how we might change our minds so that these problems don't occur in the first place.

[Hu] This has always been the hope for new technology. Whether it's cars, planes, the internet, or vaccines, we hope the tools connect us, heal us, and help us transcend our current limitations. Brain-machine interfaces are a way to help ourselves, and potentially rewire ourselves, at the same time. But technology ethicist Tristan Harris says: let's slow down. He left Google to become a leading voice on the harmful effects of Silicon Valley's creations.

[Tristan Harris] I really respect Bryan and what he's doing and trying to do. What I find interesting about it, though, is: if we don't even have an honest appraisal of how we've already been cognitively downgraded, then what does it mean to upgrade? Like, what are we upgrading?

[Hu] Most of us don't have implanted chips in our heads yet. Most of us don't have access to technology that stimulates our minds for better performance or learning. But Harris argues that even with the devices we hold in our hands and keep in our pockets, which we pick up an average of 86 times a day, we're already changing our brains, and it's not for the better.

[Harris] We have to ask ourselves what got us into the present situation, where our attention spans are 40 seconds on any computer screen. What got us there wasn't "let's make our attention spans short." What got us there was "let's give ourselves superpowers," and we didn't know ourselves well enough that when we gave ourselves superpowers, we debased our way of paying attention. And so I really do strongly push back on, and question the motivation of, "let's give our cognition superpowers," because I think that is actually omni lose/lose self-destructive. I think it's a bad idea. We have Paleolithic emotions, medieval institutions, and godlike technology. So when I say "increasing consequences," imagine a shark with shark instincts from ancient, Paleolithic times, thousands upon thousands of years ago; you're operating with shark instinct. So as a shark, what the world is like for you is, when there's blood in the water, you go [foom] like this with your head, right? And you just go over there. But imagine you attach exponential tech to a shark. Let's say it has a 1,000-mile-long drift net extending from its nose. It goes [schwoom], it sweeps through the whole ocean, and it just destroys its habitat. Because we gave exponential, consequential tech to something that has Paleolithic guidance. And now we're doing that with exponential cognition, because we've strapped on these brain-computer interfaces. We're sort of destroying the social and interior habitat of our mental health, of our sense of connection. I'm not saying it's all bad. But we don't even have a balance sheet of the harms, and we're just eager to race into the future and get to that next milestone. This is why it's not about focusing on the present so much as it's about understanding carefully, really subtly, where this is good, where it's in control, and where it's happening with wisdom when we're making technology.

[Hu] He's a self-described skeptic of brain boosting, but Harris did think of one way he wanted to see humans improve their minds.

[Harris] Well, much like we had hearing aids first to help people who are deaf hear better, and then the military might use something like that to extend hearing and get superhuman hearing. We'd want to apply that in the areas where we want superhuman powers. Superhuman empathy would be really good. A superhuman ability to find common ground would be really good right now. A superhuman ability to feel loving connection with people when we don't feel that. But if we're not recognizing that now, and we're just going to race off and create exponentiated cognition without seeing where the failure modes of cognition are, then we have to be really careful.

[Hu] In every conversation we have on this topic, the creators of powerful tech have also brought up their concerns about this technology's misuse.

[Atilla Kilicarslan, Ph.D.] … pretty powerful.

[Hu] Privacy: what's still ours when computers are inside us, and vice versa? Inequality: who has access to this kind of connectivity? And lethality: mind-machine melds can kill, perhaps more powerfully. The early examples of brain-machine interfaces are already here, so it's worth asking these questions now. How has existing technology already changed our brains, and is it worth it? What will it mean to be human when technology is inside us? Our entire season is dedicated to these questions, and I want to hear from you. Message me @elisewho or catch all the episodes at or subscribe to NPR on YouTube.


  • Calvin McClory

    The most vulnerable part of cyber security is between the chair and the keyboard.

    This tech is necessary for humans to compete in a post-singularity reality.

    This technology is necessary to achieve singularity. We are the circuits of the thinking machines.

  • Greg Hartwick

    The “blue screen of death” has a new connotation.
    No, no! Not Windows 10!
    Put me in a nice distro of Linux. Mint has a nice feel to it.

  • sciencetoymaker

    You can add to the list of negative consequences: deceit and fraud. Consumer retail headsets that claim to interface with brainwaves in fact only detect when you have put the thing on; then they just generate encouraging results. In other words, you will get the same result if you attach the headset to a bunched-up, wet towel as you would by actually putting it on.

  • Alonso Favela

    I just hope it doesn't end up like M.T. Anderson's "Feed": targeted advertising to your brain, increases in inequality between the rich and the poor, and an overall disconnection from nature.

  • CocoAnana

    This just seems like it would exacerbate inequality, bombard us with ads, and record everything we do (which could be used for ideological observation). I would NOT want this to become a thing.

  • Firestorm12345678910

    The ethical implication is that you won't die. Since immortalism tech is still a big taboo, mind-machine interfacing is also a very good way to get that first foot in the door. It appears that all such tech must be popularized first, and people in general must have some rudimentary knowledge about it, in much the same way as presidents get elected in democracies.

    I disagree with Tristan Harris, because being careful also means losing time to being careful, and therefore delaying the technology and/or not increasing the scientific knowledge needed to achieve physical immortality as quickly as possible for all human beings (and then other animals, etc.).
    I mean, if you want to talk about ethics, then the foundation of physical immortalism as, say, an ideology must be present (in laws/constitutions, etc.), and from that platform you can talk about "being careful," because putting chips into brains might mess up our minds/selves too much. Humanity suffers from a foundational problem first and foremost, and since obviously there is no foundation to speak of (a rational, logical one), we keep plugging fun little problem holes and masturbating ourselves to oblivion, philosophically speaking. If one wants to talk about "ethics" seriously, then the foundation has to be in place. Now, of course, you will get people opposing this and calling it the creation of a giant cult for the whole world or some such. Except, who are you gonna call (Ghostbusters reference) when you get sick? I thought so. Leave the mind-over-matter and spirituality things until AFTER physical immortality has been achieved and is rock-solid guaranteed to everyone, free of f-ing charge.
    Oh, and I forgot to add: once we figure out how our brains work (and this would obviously, hint, require a lot of time… sigh, why do I even try…), and therefore how our minds work, then it is either time for VR worlds or the much more dangerous external space exploration thing, where you unnecessarily bring your brain and body on a spaceship to explore space and foreign planets.

  • SoberRS

    lol, who cares about ethical or not? The companies don't care; as long as it makes money, they only ask to make you feel safe. lol, it's not about you.

  • Stefan Nikola

    I agree with Tristan, and I'd like to add that this brain augmentation is very half-brained, in that it's very logical and not connecting to the emotional half of our brain. Our human brain evolved to work both logically and emotionally, so if the tech is only going to augment the logical side, then there are going to be a lot of problems.
