The EU Is Proposing Regulations On AI—And The Impact On Healthcare Could Be Significant


Artificial intelligence (AI) is developing rapidly, with innovators across the globe working to create more viable use cases for this groundbreaking technology. AI has penetrated nearly every major industry, including manufacturing, retail, infrastructure, financial services, defense, and healthcare, among countless other sectors.

Healthcare in particular has attracted an incredible amount of attention in the AI space. The value proposition of AI in healthcare is undoubtedly extensive, especially as the industry is poised to surpass $11 trillion in market valuation, and given that healthcare is such an inherently data-rich, innovation-heavy, and operationally nuanced field.

Last week, the European Union (EU) put forth its “Proposal for a Regulation on a European approach for Artificial Intelligence,” intending to create “the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.”

The proposal mentions: “the same elements and techniques that power the socio-economic benefits of AI can also bring about new risks or negative consequences for individuals or the society. In light of the speed of technological change and possible challenges, the EU is committed to strive for a balanced approach […] Rules for AI available in the Union market or otherwise affecting people in the Union should therefore be human centric, so that people can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights.”

But the wider arc of artificial intelligence does not lend itself to easily drawn boundaries, simply due to the nature of the technology. Hence, in the coming months and years, legal scholars, regulatory authorities, scientists, and AI innovators will have to scrupulously navigate what exactly the real-world effects of this new proposal will entail.

In a recent piece for Politico, Melissa Heikkila writes about the “key battles ahead for Europe’s AI law.” She explains that some of the proposal’s key stipulations, such as those regarding “banned” artificial intelligence applications (see Section 5.2.2, Prohibited Artificial Intelligence Practices) or “high-risk” systems (see Section 5.2.3, High-Risk AI Systems, Title III), may require closer analysis to understand how they will actually change the realm of AI.

A few key statements from the proposal that have garnered particular attention include:

  • Section 1 CONTEXT OF THE PROPOSAL: “The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them. AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.”
  • Section 5.2.2. PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES (TITLE II): “The list of prohibited practices in Title II comprises all those AI systems whose use is considered unacceptable as contravening Union values, for instance by violating fundamental rights. The prohibitions covers practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm.”
  • Section 5.2.4. TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS (TITLE IV): “Transparency obligations will apply for systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’). When persons interact with an AI system or their emotions or characteristics are recognised through automated means, people must be informed of that circumstance.”

Though regulators and innovators may not yet know how this proposal will impact healthcare in concrete terms, it will likely set a precedent and framework moving forward, depending on how it is received in the coming months.


Indeed, healthcare is an inherently sensitive industry, dealing literally with matters of life and death. Hence, the standards are, and should be, higher.

Innovation in the healthcare space continues to push the bounds of what was once thought possible and, if done correctly, may hold many societal benefits. Take, for example, Elon Musk’s Neuralink. Earlier this month, after a significant demonstration by the company of its new technology, I wrote about how Neuralink continues to embrace innovation in brain-machine interfaces, with the hope of providing respite to individuals with paralysis. Although Neuralink is entering relatively uncharted territory, which may create room for significant consequences, the technology also holds immense potential value if it can be developed in a safe and responsible manner.

More specific to innovation in artificial intelligence, consider the AI system developed at the height of the coronavirus pandemic to help detect Covid-19 on patient x-rays. This was a significant achievement, especially at a time when physicians were already overwhelmed by skyrocketing volumes of Covid-19 patients. Similarly, Microsoft is expanding its presence in the healthcare AI space, aiming to “enable better functions in medical research, health equity, and data collaboration.”

Overall, market analytics indicate that the healthcare AI market will be worth more than $61 billion within the next decade, underscoring the massive interest and investment in this sector.

Regulatory frameworks for AI, such as the one put forth by the EU, and healthcare policy frameworks more generally, will undoubtedly continue to grow, especially as innovators attempt to solve increasingly difficult problems with technology. Some amount of regulation may indeed be necessary in order to prioritize patient safety, health, and privacy. The key will be to strike the right balance: upholding the highest standards of patient safety while also ensuring continued progress in innovation.
