
Professor discusses sources of, responses to algorithmic bias in lecture

Cathy O’Neil, a mathematician, data scientist and New York Times bestselling author, gave a keynote presentation on algorithmic bias Friday afternoon.

O’Neil received her PhD in math from Harvard, and she has held academic positions at MIT and Barnard College, where she researched arithmetic algebraic geometry. She also founded O’Neil Risk Consulting and Algorithmic Auditing (ORCAA).

Her address was part of an event hosted by the Technology Ethics Center at Notre Dame. Following O’Neil’s presentation, there were a few panels featuring Notre Dame faculty members, who analyzed the sources of algorithmic bias as well as the ethical obligations that institutions have to account for bias in algorithmic decision-making.

Keynote speaker Cathy O'Neil presenting on algorithmic bias.


She began her talk by laying out her argument against the authoritativeness of big data, a term for datasets so large, complex and fast-moving that they are difficult to analyze with traditional methods.

“[Big data] is presented to us as factual, scientific and authoritative. I’m going to argue that it is the opposite of that,” O’Neil said. “We’re told not to look behind the math, and we’re told to trust it blindly… that is a problem of power.”

To analyze big data, predictive algorithms are used to sort and compile it. O’Neil said we rely on predictive algorithms on a daily basis, and she broke down their two basic elements using the example of choosing what to cook for dinner.

“When I cook dinner for my kids, the data I have on hand is the ingredients in my kitchen. The historical data is all the memories of what my kids eat and don’t eat,” O’Neil said. “I need a definition of success. What does it mean, for me, for my kids, that the dinner was a success?”

She said that the definition of success could contribute to bias.

“I am inserting my agenda into my meal preparation,” she said, continuing with the dinner analogy.
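
O’Neil’s two ingredients, historical data and a chosen definition of success, can be made concrete in a few lines of code. The toy sketch below is purely illustrative and is not from her talk; every meal name and number in it is hypothetical. It shows how the same history yields a different “best” dinner once the definition of success changes.

    # Purely illustrative sketch of the dinner example, not code from
    # the talk: a predictive algorithm needs (1) historical data and
    # (2) a chosen definition of success. All values are hypothetical.

    # Historical data: memories of what the kids ate when each meal was served.
    history = {
        "pasta":    {"eaten": 9, "served": 10},
        "salad":    {"eaten": 2, "served": 8},
        "stir_fry": {"eaten": 5, "served": 7},
    }

    def success_kids_happy(meal):
        """Success means the kids ate it."""
        h = history[meal]
        return h["eaten"] / h["served"]

    def success_vegetables(meal):
        """Success means vegetables got eaten; a different agenda."""
        veggie_weight = {"pasta": 0.1, "salad": 1.0, "stir_fry": 0.7}
        return veggie_weight[meal] * success_kids_happy(meal)

    # Same data, different definition of success, different "best" dinner.
    for metric in (success_kids_happy, success_vegetables):
        print(metric.__name__, "->", max(history, key=metric))

Note that swapping the metric, not the data, is what changes the answer; that is exactly where, in O’Neil’s telling, the agenda enters.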

The same kind of agenda-driven bias, she said, can be seen in the technological realm.

“[Facebook] deliver[s] news items to you that will keep you on Facebook for as long as possible. That is an agenda,” she said. “That’s the sort of proxy for money, because the longer you spend on Facebook, the more you click on ads, which is where they get money from.”

Although this may benefit Facebook, it often does not benefit the individual scrolling through their feed.

“Facebook [gets] to decide what their definition of success is and how to optimize it to their benefit,” she said. “And I would argue that [clickbait] is not to our benefit, as a nation, as a world, because we’ve seen that their definition of success is actually promoting misinformation and disputes and arguments.”
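
Her point about proxies can be sketched the same way. The toy ranker below is purely illustrative and is not a description of Facebook’s actual system; the headlines and scores are hypothetical. It shows how a feed that optimizes an engagement proxy surfaces clickbait ahead of more informative items.

    # Purely illustrative sketch, not Facebook's actual system: a feed
    # ranker whose definition of success is predicted engagement, a
    # proxy for ad revenue. All items and scores are hypothetical.

    items = [
        # (headline, predicted_minutes_on_site, informativeness)
        ("Outrageous claim you won't believe", 8.0, 0.1),
        ("Measured explainer on a policy change", 3.0, 0.9),
        ("A friend's vacation photos", 5.0, 0.5),
    ]

    # Optimize the proxy: rank purely by predicted time on site.
    feed = sorted(items, key=lambda item: item[1], reverse=True)

    for headline, minutes, info in feed:
        print(f"{headline!r}: {minutes} min, informativeness {info}")
    # The clickbait item ranks first even though it is the least
    # informative: the metric the platform optimizes is not
    # necessarily the one that benefits the reader.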

O’Neil said people commonly defend algorithms by pointing to the complex mathematics behind them and writing off challengers who lack formal training in it.

“But ultimately, what is going on behind [the algorithms], is deciding what we deserve,” O’Neil said. “That is essentially what big data does. Pretty big algorithms have been replacing your bureaucracy at every turn. So anytime you’re interacting with a bureaucracy, like applying for a job, applying for college, an algorithm is probably deciding whether you deserve or don’t deserve something.”

Additionally, algorithmic bias has larger implications for society. The likelihood of incarceration, for example, can be algorithmically predicted from one’s ethnicity, prior incarceration, family history and socioeconomic status.

“The context matters. An algorithm needs to be understood within its context. The ethical dilemmas represented by an algorithm in context will very much depend on the details,” O’Neil said. “And that’s one of the reasons that, as an algorithmic auditor myself, I do not believe whatsoever in automated AI ethics tools. There is no such thing as automated ethics.”

Referring to the data science community, O’Neil said data scientists as a whole are overloaded.

“We are expected to not only build algorithms, but to solve ethical dilemmas. And it’s way too much,” she said. “Our job is to translate those decisions about values to code, rather than to make those decisions.”