The Neurotech Revolution Could Lead To 'Frankenstein' Brains. Here's How We Avoid It


By Dr. Daphne Bavelier, Dr. Simone Schurle and Alvaro Fernandez 

It's the year 2030, and your college-age daughter, who has normal hearing, has been pestering you to get the latest hearing aid, one that cancels out noise on demand, amplifies selected ambient conversations at will, and connects easily to the music store. Should you buy one for her? Maybe you should buy one not only for her, but also for yourself as you enter your 60s?

Would you still be interested if the device had to be surgically implanted in your cochlea? What if it were augmented with AI, opening the possibility of eavesdropping on remote conversations or getting automatic, real-time translation from any language in the world?

These are not far-fetched questions. Right now, in 2017, some of us with normal attention capacity are already taking drugs that enhance concentration in order to excel at an exam or cope with demanding job conditions. According to a 2013 NIH report, the use of stimulants such as Adderall and Ritalin nearly doubled between 2008 and 2013, with usage growing especially among college students.

Others, for example the close to 10,000 subscribers to Reddit’s tDCS forum, are tinkering with DIY electrical brain stimulation devices in the hope of increasing their sharpness. And millions more are of course purchasing all kinds of brain supplements and brain training apps.

But wait: what if habitually taking concentration-enhancing drugs leads to a higher risk of dementia 20 years down the line? What if electrode implants over the frontal cortex become mandatory for serving in police and military forces?

Or what if only the top 1% can afford to access and to appropriately use evidence-based brain enhancers, creating an ever-expanding Cognitive Divide?

Thank you for joining us in reflecting on some amazing Human Enhancement opportunities—and some significant risks.

Brain enhancement can be anything from a cup of coffee, to performance-enhancing drugs, to the potential eradication of Alzheimer’s disease through gene editing, and more.

And, as a recent article in The Wall Street Journal illustrated, we are in the midst of a neurotech revolution. Facebook is experimenting with a Typing-by-Brain platform. A start-up co-founded by Tom Insel — the former Director of the National Institute of Mental Health (NIMH) — just raised $14 million to transform the diagnosis and treatment of neuropsychiatric disorders through artificial intelligence (AI) and smartphones. More controversially, entrepreneurs such as Elon Musk and Bryan Johnson have funded efforts to develop wearable, miniaturized brain implants to “read your mind” and let you communicate directly with computing systems.

Granted, we are still far from neuroprosthetics that know what you want for breakfast or can interface your brain directly with your music library; yet this endeavor is on the march, and it is high time to talk about what we as a society may wish for. As history has shown us time and time again, once the technology is out, there is no going back.

How do the risks weigh against the benefits? To what extent are enhancements desirable on a personal and a societal level? What if people who undergo such enhancement lose their ability to relate to others who have not? If a device can enhance decision-making, how will users know that a decision is truly theirs, made in their best interest, rather than manipulated by the device’s designers?

Anticipating such concerns, the World Economic Forum last year created a Council on the Future of Human Enhancement. As a diverse group of more than 20 research and industry leaders, our objective is to provide pioneers and all innovation stakeholders with a roadmap to harness the opportunity in a positive direction, avoiding what we might call a “Frankenstein effect.”

From our work so far, we believe that Human Enhancement technology innovations should meet all three of these criteria:

1. Increase functional abilities needed to improve quality of life across the life span (otherwise it would be an exercise in vanity or pointless tinkering)

2. Ensure durable and beneficial effects (not just momentary ones, and certainly not counterproductive in the long term)

3. Transfer to wider societal benefits, helping make communities more inclusive, cohesive, and resilient (helping bridge, not expand, a Cognitive Divide)

These three criteria would help ensure that the resulting technologies meet core ethical values such as enhancing well-being, respecting autonomy and freedom, and enabling justice in the public interest, with particular attention to reducing inequalities.

Do these three criteria help separate the wheat from the chaff in the scenarios posed above? Do you have suggestions for refining them?

Technology doesn’t simply unfold. Its development both shapes and is shaped by the values and actions of everyone involved, from creators to users, from investors to regulators, from scientists to engineers.

Right now, with consumer interest and industry activity heating up, is the perfect time for this discussion. You are part of it. Public engagement in this conversation is more pressing than ever: it is not only for you, it is about you.

Dr. Daphne Bavelier is a professor of cognitive neuroscience at the University of Geneva. Dr. Simone Schurle is a postdoctoral fellow in biomedical engineering at MIT. Alvaro Fernandez is the CEO and Editor-in-Chief of SharpBrains. All three belong to the World Economic Forum’s Council on the Future of Human Enhancement.