Using AI ethically to protect health data from bias

With the right partners, federal health agencies can build a solid foundation of high-quality data to implement ethically sound AI tools, says an Optum report.

As federal leaders at health agencies look to use more artificial intelligence tools, they will first need to make sure they have a solid foundation built on high-quality data and diverse perspectives, says a recent report.

That requires data that is standardized, tested and collected from multiple sources, as well as running a variety of conceptual models and drawing on a range of analyst perspectives to help avoid bias.
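
As an illustration of what that foundation work can look like in practice, the short sketch below profiles a batch of incoming health records for source coverage, missing required fields and distinct patients before they feed an AI model. It is a minimal, hypothetical example; the field names and checks are assumptions for illustration, not drawn from the report.

```python
# A minimal, hypothetical sketch of a data-foundation check: profile incoming
# records for source coverage, missing required fields and distinct patients.
# Field names are illustrative assumptions, not a real schema.
from collections import Counter

REQUIRED_FIELDS = ["patient_id", "source", "diagnosis_code", "service_date"]

def profile_records(records):
    """Return simple quality metrics for a list of record dictionaries."""
    by_source = Counter(r.get("source", "unknown") for r in records)
    missing_rate = {
        field: sum(1 for r in records if not r.get(field)) / len(records)
        for field in REQUIRED_FIELDS
    }
    patient_ids = {r["patient_id"] for r in records if r.get("patient_id")}
    return {
        "records_per_source": dict(by_source),  # is more than one source represented?
        "missing_rate": missing_rate,           # is each required field populated?
        "distinct_patients": len(patient_ids),  # how broad is patient coverage?
    }

if __name__ == "__main__":
    sample = [
        {"patient_id": "A1", "source": "claims", "diagnosis_code": "E11.9", "service_date": "2020-01-03"},
        {"patient_id": "A2", "source": "ehr", "diagnosis_code": "", "service_date": "2020-01-05"},
        {"patient_id": "A1", "source": "claims", "diagnosis_code": "I10", "service_date": "2020-02-11"},
    ]
    print(profile_records(sample))
```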

Read the full report.

“While a seemingly endless amount of data exists, some agencies are not using data to the fullest potential,” says the report from OptumServe, a leading provider of information, data analytics, technology and clinical insights aimed at improving overall health system performance.

For public sector organizations, this can stem from two factors. One is the inability to generate and collect a variety of data from multiple sources, in many different formats, at scale. The other is siloed systems that limit organizations’ ability to curate and clean data into a useful form.
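
Siloed systems often mean the same kind of record arrives in different shapes. The sketch below is a hypothetical example, not a description of OptumServe’s pipeline: it maps a claims-style row and an EHR-style document onto one shared schema so they can be curated and cleaned together. The source layouts and field names are assumptions.

```python
# Hypothetical sketch: normalize records from two siloed feeds into one minimal
# shared schema. The layouts and field names below are illustrative assumptions.
from datetime import datetime

def from_claims_row(row):
    """Claims feed: flat dict with its own column names and MM/DD/YYYY dates."""
    return {
        "patient_id": row["member_id"],
        "code": row["dx_code"],
        "date": datetime.strptime(row["svc_dt"], "%m/%d/%Y").date().isoformat(),
        "source": "claims",
    }

def from_ehr_document(doc):
    """EHR feed: nested dict with ISO-formatted dates."""
    return {
        "patient_id": doc["patient"]["id"],
        "code": doc["encounter"]["diagnosis"],
        "date": doc["encounter"]["date"],
        "source": "ehr",
    }

if __name__ == "__main__":
    unified = [
        from_claims_row({"member_id": "A1", "dx_code": "E11.9", "svc_dt": "01/03/2020"}),
        from_ehr_document({"patient": {"id": "A2"},
                           "encounter": {"diagnosis": "I10", "date": "2020-01-05"}}),
    ]
    for record in unified:
        print(record)
```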

If agencies can overcome these challenges, they will be better equipped to realize the benefits of AI to automate repetitive tasks, make data-driven decisions and improve their ability to meet their missions.

OptumServe — Optum’s government service branch — is already working with federal agencies to use AI for clinical, financial, administrative, social and policy applications. For example, the report shares how it has partnered with the Department of Defense to evaluate future health outcomes of 3.2 million TRICARE Prime beneficiaries in order to reduce smoking-related diseases and medical costs for active-duty and retired military service members.

“With the convergence of algorithmic advances, data proliferation and tremendous increases in computing power and storage, the opportunities for AI in healthcare will only keep growing,” says the report.

The private health sector is already using AI to drive better performance, generate better outcomes and produce a better patient or user experience. The report highlights use cases such as:

  • Using AI to automate repetitive tasks and help discover patterns and anomalies that lower the total cost of care (a simple sketch of this kind of anomaly check appears after this list).
  • Quickly processing data, personalized for each patient, to provide information about risks, support decision-making in diagnostic and therapeutic processes, identify other health issues and motivate preventative actions.
  • Using AI to break down care silos, translate complicated information into easy-to-understand terms and help improve the patient care experience.
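
The sketch below, referenced in the first item above, is a deliberately simple stand-in for that kind of anomaly detection: it flags claim lines whose billed cost is several times the typical cost for the same procedure. The data, field names and threshold are assumptions for illustration, not the report’s method.

```python
# Hypothetical sketch of a simple anomaly check on claims data: flag claim lines
# whose cost is far above the median for their procedure. All values are made up.
from collections import defaultdict
from statistics import median

def flag_cost_outliers(claims, ratio=3.0):
    """Return claims whose cost exceeds `ratio` times the median cost for their procedure."""
    by_procedure = defaultdict(list)
    for claim in claims:
        by_procedure[claim["procedure"]].append(claim)

    flagged = []
    for group in by_procedure.values():
        typical = median(c["cost"] for c in group)
        flagged.extend(c for c in group if c["cost"] > ratio * typical)
    return flagged

if __name__ == "__main__":
    claims = [
        {"claim_id": 1, "procedure": "99213", "cost": 110.0},
        {"claim_id": 2, "procedure": "99213", "cost": 95.0},
        {"claim_id": 3, "procedure": "99213", "cost": 120.0},
        {"claim_id": 4, "procedure": "99213", "cost": 980.0},  # unusually expensive line
    ]
    print(flag_cost_outliers(claims))  # flags only the $980 claim
```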

OptumServe has extensive knowledge of diverse data sets, from claims and clinical data to genomic and electronic health record (EHR) data, which it uses to help federal agencies verify whether they have a strong foundation on which to build AI methods.

However, the report stresses the importance of working with a partner who can ensure that AI tools are implemented in an ethically sound manner, one that uncovers and then mitigates sources of bias so that analytics of all types are as useful as possible.
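
One common way such a bias check can work, shown in the hypothetical sketch below, is to compare an AI tool’s error rate across patient subgroups rather than relying on a single overall accuracy number. The group labels and predictions here are invented for illustration and are not drawn from the report.

```python
# Hypothetical sketch of a subgroup bias check: compare error rates across
# patient groups instead of looking only at overall accuracy. Data are invented.
from collections import defaultdict

def error_rates_by_group(examples):
    """examples: iterable of (group, predicted_label, true_label) tuples."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in examples:
        totals[group] += 1
        errors[group] += int(predicted != actual)
    return {group: errors[group] / totals[group] for group in totals}

if __name__ == "__main__":
    audit_sample = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
        ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
    ]
    # A large gap between groups is a signal to investigate the data and model.
    print(error_rates_by_group(audit_sample))
```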

Read more about accelerating efficiencies and effectiveness in federal health agencies with artificial intelligence.

This article was produced by FedScoop for, and sponsored by, OptumServe.
