[Elon Musk] Hello, everybody.

[Elise Hu] This is entrepreneur Elon Musk announcing his new company, Neuralink. The goal? Merging our human minds with the superintelligence of computers.

[Musk] So hopefully we can have a tertiary layer, which is the, kind of, digital superintelligence layer. And in fact, you already have this layer: it's your phone and your laptop.

[Hu] That's right. He wants to put computing devices inside us. Okay, pause. This brings us to the big question of this episode: Do we want to link our minds directly with machines? For this special episode of Future You, we're talking with two big thinkers about augmenting our brains and bodies. How is the technology already available today upgrading, or downgrading, us? Our minds can already be networked with other ones. Our bodies can be augmented with exoskeletons. Our learning can happen in half the time …

[Dr. Dan Chao] … like, so that you're hitting motor cortex.

[Hu] … so long as our brains get an extra boost. The promises are exciting, but we can't ignore the pitfalls. In the race to the future, we should stop and look around for a minute. What could go wrong? How much enhancement do we really need? Let's start with the promise of this moment. And where else to talk about a bright future than Venice Beach in Southern California?

[Bryan Johnson] My name is Bryan Johnson. I am an entrepreneur, and I started a company called Kernel to build brain interfaces.

[Hu] Nailed it!

[Producer] It looked really good.

[Hu] Like Musk and Facebook's Mark Zuckerberg, he's hopeful about how this technology could upgrade our ability to solve big problems. And they're each spending millions on research trying to unlock the mysteries of our minds, so computers can better read them and better work inside them.

[Johnson] If we understood the inner workings of our brain, you could imagine that we could use that information to better ourselves. We all know what it's like to experience some kind of mental challenge.
And we've just begun to scratch the surface in understanding our brains, and mostly that is a problem of the hardware we have to get access to it.

[Hu] Johnson argues that if we can unlock human capability and connect our cognitively enhanced brains to one another to solve problems, every other problem of the world becomes easier to solve.

[Johnson] My estimation is, if we are to address climate change, the risk of pandemics, of wars, of terrorism, of mental illness, of everything we care to solve in the world, there's one thing they all have in common: our minds. And yet we have very primitive tools to actually explore how we might change our minds so that these problems don't occur in the first place.

[Hu] This has always been the hope for new technology. Whether it's cars, planes, the internet, vaccines … we hope the tools connect us, heal us, and help us transcend our current limitations. Brain-machine interfaces are a way to help ourselves and potentially rewire ourselves at the same time. But technology ethicist Tristan Harris says: let's slow down. He left Google to become a leading voice on the harmful effects of Silicon Valley's creations.

[Tristan Harris] I really respect Bryan and what he's doing and trying to do. What I find interesting about it, though, is if we don't even have an honest appraisal of how we've already been cognitively downgraded, then what does it mean to upgrade? Like, what are we upgrading?

[Hu] Most of us don't have implanted chips in our heads yet. Most of us don't have access to technology that stimulates our minds for better performance or learning. Harris argues that even with just the devices we hold in our hands and keep in our pockets, which we pick up an average of 86 times a day, we're already changing our brains, and it's not for the better.

[Harris] We have to ask ourselves: what got us into the present situation, where our attention spans are 40 seconds on any computer screen? What got us there wasn't "let's make our attention spans short."
What got us there was "let's give ourselves superpowers," and we didn't know ourselves well enough that when we gave ourselves superpowers, we debased our way of making attention. And so I really do strongly push back and question the motivations of "let's give our cognition superpowers," 'cause I think that is actually omni-lose/lose self-destructive. I think it's a bad idea. We have Paleolithic emotions, medieval institutions, and godlike technology. So when I say, like, "increasing consequences," imagine a shark with the shark instincts of ancient, Paleolithic times, like, thousands upon thousands of years ago; you're operating with shark instinct. So as a shark, you know, what the world's like for you is: when there's blood in the water, you go [foom] like this with your head, right? And you just go over there. But imagine you attach exponential tech to a shark. Let's say it has a 1,000-mile-long drift net extending from its nose. It goes [schwoom] and it sweeps through the whole ocean, and it just destroys its habitat. Because we gave exponential, consequential tech to something that has Paleolithic guidance. And now we're doing that with exponential cognition, 'cause we've strapped on these brain-computer interfaces. We're sort of destroying the social and interior habitat of our mental health, of our sense of connection. I'm not saying it's all bad, but if we don't even have a balance sheet of "these are the harms," and we're just eager to race into the future and get to that future milestone … This is why it's not about focusing on the present so much as it's about understanding carefully, like really, really subtly: where is this good, and where is it in control, and where is it happening with wisdom, when we're making technology?

[Hu] He's a self-described skeptic of brain boosting, but Harris did think of one way he wanted to see humans improve their minds.
[Harris] Well, I mean, almost like we had hearing aids first to help people who are deaf hear better, and then the military might use something like that just to extend and get superhuman hearing. You know, we'd want to apply that in the areas where we want, you know, superhuman powers. Superhuman empathy would be really good. Superhuman ability to find common ground would be really good right now. Superhuman ability to feel loving connection with people when we don't feel that. But if we're not recognizing that, you know, now, and we're just going to race off and create, you know, exponentiated cognition without seeing where the failure modes of cognition are, then, you know, we have to be really careful.

[Hu] In every conversation we have on this topic, the creators of powerful tech have also brought up their concerns about this technology's misuse.

[Atilla Kilicarslan, Ph.D.] … pretty powerful.

Privacy: what's still ours when computers are inside us, and vice versa? Inequality: who has access to this
kind of connectivity? And lethality: mind-machine melds can kill, perhaps more powerfully. The early examples of brain-machine interfaces are already here, so it's worth asking these questions now. How has existing technology already changed our brains, and is it worth it? What will it mean to be human when technology is inside us? Our entire season is dedicated to these questions, and I want to hear from you. Message me @elisewho, or catch all the episodes at npr.org/FutureYou, or subscribe to NPR on YouTube.