Future Tense

Down With the Frequent-Flyer Icon

How health technology can reinforce biases against the mentally ill.


Imagine it is 2 a.m. in the ER at High Tech University Hospital. The charge nurse turns to his computer monitor to see who waits in triage. Three new patients. Alongside one of their names sits the dreaded red airplane icon. This means the patient has visited the ER more than five times in the past 30 days—she’s a “frequent flyer.”

He doesn’t remember her. Her name is Mrs. Simmons, and she has bipolar disorder, an illness that affects about 4 percent of the population and is characterized by extreme emotional swings and sometimes psychotic episodes. Though she arrived before the other two patients, our nurse decides to go to her room last, thinking it’s likely not an urgent problem since she’s here so often.

The nurse finally gets to Mrs. Simmons at 4 a.m. and takes her to a bed. Soon, she is shouting, “My head hurts!”

“OK, Mrs. Simmons,” the nurse answers with a note of contempt in his voice. As he leaves the room, she falls back, clutching her head.

Mrs. Simmons is familiar with this emergency department. Usually, she walks in when she runs out of psychiatric medications and slips into a frightening psychosis and hears a cacophony of disturbing voices in her head. Other times, the police bring her in, agitated and yelling about people stealing her ideas for revolutionary inventions.

As Mrs. Simmons waits, she gets confused. Nurses and doctors pass by her room and notice her frenetic state, her arms flexing uncontrollably. Many think, “It’s just Simmons being Simmons; she’s just manic. She can wait until the next shift, after she calms down.”

But Mrs. Simmons can’t wait. By the time the doctor on the next shift reaches her room, her brain is bleeding and she has lost consciousness. Mrs. Simmons was not just being Mrs. Simmons. She has severe meningitis and is near death.

* * *

This story—though fictional—reflects an all-too-common reality. Patients with chronic and serious mental health problems often receive poor physical health care. Clinicians tend to assume their physical symptoms are related to their underlying psychiatric conditions. Those arm movements Mrs. Simmons was having? A seizure related to the brain infection—not bipolar disorder.

The evidence of this disregard is striking. Study after study has shown that this phenomenon—known as diagnostic overshadowing—is real and dangerous. It happens when a patient’s mental illness overshadows her other medical conditions. Many hospitals now aim to improve this situation and have made strides in integrating psychiatric and medical care. But subtle factors continue to undermine that effort, at once cordoning off mental health care from medical care and stigmatizing people with mental health problems.

The airplane icon—which is a feature of a widely used electronic health record system—is a simple case in point. It symbolizes the concept of the frequent flyer, but this isn’t about racking up free travel miles. Instead, it is a derogatory phrase used among clinicians that tacitly communicates: “watch out, this is a problem patient.” It implies the patient is a “drug seeker,” “a liar,” or a patient looking for “two hots and a cot.” In the parlance of Samuel Shem’s novel The House of God, the frequent flyer is a GOMER—“get out of my emergency room.”

The intent of the airplane icon was probably innocent enough. Computer programmers likely created it to help clinicians know that someone comes to the hospital often. They may have thought the icon could help hospitals provide better care by pointing out individuals in need of extra resources or close follow-up plans. And it is hard to imagine a computer programmer deciding to draw upon the concept of the frequent flyer without advice from clinicians who know and use that colloquialism.

Here is the problem. For one, the icon could worsen health care delivery—essentially announcing “this patient is here all the time, it’s likely not a critical problem.” The fact is that many of the people who repeatedly visit emergency rooms have mental health problems, and they wait anywhere from eight to more than 24 hours longer for care. Sadly, studies have found that people with serious mental illness are 3½ times more likely to die than the general population, losing about 30 years of life on average, largely due to physical health problems. They don’t have time to wait.

Furthermore, the airplane symbol derogatorily tags this population. By labeling individuals as “frequent flyers,” the icon reinforces the stigma, prejudice, and discrimination directed at people with psychiatric disorders. Symbols aren’t benign. They communicate meanings, including values, judgments, and morals. We are aware of this when battles over the Confederate flag arise or someone displays a swastika tattoo. But stigma is taking new, subtler forms in the era of digital technology. For example, Americans perceived Apple’s recent decision to replace the pistol emoji with a squirt gun as a major political statement, triggering some rather vocal backlash.

Technology and health care have long been bedfellows. But in recent years, advances in big data and social media technologies have prompted the development of powerful tools to inform clinical practice and empower patients. For instance, new applications provide clinicians with real-time updates on their patients’ well-being and medication adherence. We’re seeing a particular rise in technology intended to help patients with mental health problems: Some social media platforms now provide crisis intervention tools, and researchers are investigating the potential of artificial intelligence to diagnose and treat mental illness.

But we also need to be mindful that the people designing algorithms and computer systems insert intentional and unintentional biases into the way they function. Recent allegations that Pokémon Go and Snapchat reflect underlying racial biases are prime examples. On a larger scale, author Cathy O’Neil asserts that algorithms and big data are bolstering racism and inequality through mechanisms such as criminal sentencing, insurance rates, and employment opportunities. Technologies order the world according to the priorities and values of those in power. And unsurprisingly, patients with mental health problems often wield less power.

The airplane icon on a hospital’s records system is a small example, albeit one with important consequences. But one could imagine such biases playing out more broadly in the health technologies of the near future. All sorts of information about you can be gleaned from your social media posts and your smartphone—think speech patterns, routes traveled, and body movements. How might that data be used to detect behavioral health problems? One could envision positive outcomes, like alerting an individual to a looming manic episode or alcohol relapse. But prejudice could also creep into these programs in ways that limit care, deepen stigma, erode privacy, or otherwise harm individuals with mental health problems.

Health IT developers should be aware of potential biases and create systems that at the very least minimize harm to patients. While technologies may not be able to escape values, they don’t need to reinforce damaging ones. If developers collaborate with patients, consumers, clinicians, social scientists, and ethicists, perhaps new technologies can encourage ethical behavior and respectful treatment, no matter a person’s symptoms or diagnosis.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.