
Ethical and social implications of neural–digital interfaces: an interview with Suzanne Rivera


Suzanne Rivera is the Vice President for Research and Technology Management at Case Western Reserve University (OH, USA). In this role, she supports and oversees research across all disciplines practiced at the university, which can mean anything from working with a poet to get their chapbook published, to helping a physician–scientist get a clinical trial up and running, to assisting an engineer in securing funding for their study. Suzanne is also on the faculty of the Department of Bioethics at the university, where she teaches research ethics and conducts her own research and scholarship on research ethics and science policy.

In this interview, we spoke with Suzanne to discover more about her discussion at SfN Neuroscience 2019 (19–23 October 2019, Chicago, IL, USA) on the ethical and social issues raised by neural–digital interfaces. We were also intrigued to find out more about the prospect of human enhancement and whether creating ‘superhumans’ is within the realm of possibility, or whether the idea is far-fetched. Take a look at the full discussion below.


1. Can you provide us with an overview of your talk at the round table on the ethical and social issues raised by neural–digital interfaces?

Our panel consisted of seven people from very different disciplines across various institutions, including one research participant. The panel was diverse in a variety of ways and demonstrated – through how it was formed and through the different perspectives we each expressed – the ethical, legal and social implications (ELSI) approach to thinking about science, medicine and technology. You can think of the ELSI methodology as a transdisciplinary approach for examining science and technology and their human impacts, both for individuals and for society.

What our panel was trying to do was ask and explore important questions about what it means to have technology communicate directly with people’s neural systems through implants – either to restore function (e.g., in the case of an amputee who can now be given a limb that provides a sense of touch), for purposes of enhancement, or to address certain needs in the field of work. For example, in ordnance disposal, if you don’t want soldiers physically touching bombs with their own hands, you could potentially give them that sense of touch through a robotic hand halfway around the world. That could be a really important advance for society because it would reduce injury and harm.

So, what we were really trying to do was ask: if we’re going to develop these kinds of technologies – and clearly we are – then what important questions should we be asking around things such as equity and inclusion, to make sure that we’re not deepening the digital divide but instead doing something to bridge it? But also, what are we doing to mitigate the risks of use by bad actors? What are we doing to address privacy concerns? And what are we doing to think about not only how this technology might change human capabilities, but how those changes might affect how we define humanity itself?

2. The ethics of neural–digital interfaces has been a big topic at many recent conferences. Do you think it’s possible for these machines to develop ethical principles or procedures to overcome the ethical dilemmas they may come across?

Well, I don’t really think of machines as being capable of developing ethical principles themselves, since any so-called intelligence a machine has was created by humans. What I think is more important is for us to be extremely thoughtful and intentional about who we put on the teams that create the algorithmic programs machines are asked to execute. If those teams are not inclusive and not thoughtful about important ethical considerations, you could end up with technologies created with blind spots that end up harming people.

