Neuralink is another of Musk’s many companies. It’s part of the growing brain-computer interface (BCI) industry. The implant he referenced is basically a tiny computer – a chip and electrodes – that gets sewn into a person’s brain. The device would allow someone to control a phone or a computer with just their thoughts. That’s the idea, anyway.
Eran Klein, a neuroethicist at the Center for Neurotechnology at the University of Washington, spoke to the Texas Standard about some of the ethical considerations of this emerging technology. Listen to the interview above or read the transcript below.
This transcript has been edited lightly for clarity:
Texas Standard: You know, I think research in this field has been underway since the ’60s at least. Could you explain a bit about why a device like this or how a device like this might be used in the near future?
Eran Klein: Yeah, sure. So this brain-computer interface research, as you mentioned, has been around for a while, but it’s really accelerated in the last 20 years or so.
And, you know, this kind of technology could be particularly helpful for people who have different kinds of disabilities, like Lou Gehrig’s disease (ALS), stroke, anything where the electrical transmission from the brain to a limb, for instance, is interrupted. And so this kind of provides an alternative pathway to get information from the brain to what the brain controls.
In fact, I understand that Neuralink’s first trials have targeted people who’ve lost motor function somehow, right?
What about actually developing these devices for humans? I mean, none of these devices have yet hit the market. And I guess a lot of questions remain about whether or not this can be done safely. And if not, might folks try to take the risk anyway, given the promise of the benefits?
Yeah. So although we’re hearing about this because of Neuralink, I mean, there’s been a lot of publicly funded research in academic medical centers around the world to really kind of advance this technology. And that research – with people who are volunteering, kind of putting their brains out there for science – has been held to really high safety standards.
So, you know, at present this research has been done quite safely. It’s hard to know with Neuralink, given that it’s a private company and all the information that they have has not been kind of fully shared with the scientific community.
Well, it’s my understanding that there’s been a lot of research done on animals and a lot of those animals in the course of that research have died. And some have alleged that more than one would expect have actually perished in the testing phase of this. What do you see as the number one ethics question facing something like Neuralink?
Well, I think the relationship between a private company and the motivations for profit on the one hand, and then safety and scientific advancement on the other.
And, you know, you hope that those two things can go together, but the safety standards are different in the private world. And so the risk there is that the push for scientific advance in industry might go beyond kind of our current standards of safety.
But, I think there are other questions that folks have. Like, for instance, if this does become a product, how would you protect your privacy if you’re talking about an implant and the potential for a third party having access to your thoughts? I mean, can you provide consent?
I know that that’s something Neuralink says, that individuals participating would provide their consent. But what does that mean, assuming companies like Neuralink might be collecting and using this data that they gather from your thoughts?
Yeah, I think that’s an excellent question. And I think we just don’t know, at this point, all of the data and how meaningful that data is that Neuralink would be collecting.
But you can imagine that, right now, if I have a thought and I decide that I don’t want to share it with you at the very last minute, then I can hold that back. But if I have a device in my head that’s recording my intention to speak, that thought might be verbalized on a computer screen or through a computerized voice and it changes the way in which we communicate and maybe how we communicate. And so I think there are some real privacy issues to be thought through here.
Of course, Neuralink does insist that we’re talking about an opt-in option, right? And presumably – at least if it’s transparent and open and detailed – that could communicate some of those risks to the users. Does that mitigate the ethical concerns as you see it?
I think it mitigates some of them. I think part of the issue, though, is we just don’t know what kind of privacy risks there are.
And so you may consent to what you and the researchers think the device can do. And it turns out the device can do other things. And so I think there still are going to be privacy concerns, even with a robust, informed consent process.
Well, is it your sense that implants like Neuralink’s will become mainstream in the not-too-distant future, or is there more hype around this than we realize?
That’s a difficult question. I think there have been tremendous advances in BCI technology in the last ten years, even.
But, you know, to be mainstream in the clinical world… So could I see people with spinal cord injuries using brain-computer interfaces in the next 5 or 10 years? Yeah, I could actually see that. Could I see people without medical conditions or disabilities electively having brain surgery to have this sort of device put in their head? That, I think is hard for me to fathom.