Controversy Over Facebook Emotional Manipulation Study Grows As Timeline Becomes More Clear


In a controversial study, Facebook reported the results of a massive psychological experiment on 689,003 users. The authors were able to conduct the research because, in their words, automated testing “was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

Most of us who covered the story relied on that statement from the academic journal for evidence of Facebook's efforts to gain informed consent.  Well, it turns out that was wrong.

My colleague Kashmir Hill just reported that Facebook conducted its news feed manipulation four months before the term "research" was added to its data use policy. She writes:

However, we were all relying on what Facebook’s data policy says now. In January 2012, the policy did not say anything about users potentially being guinea pigs made to have a crappy day for science, nor that “research” is something that might happen on the platform.

Four months after this study happened, in May 2012, Facebook made changes to its data use policy, and that’s when it introduced this line about how it might use your information: “For internal operations, including troubleshooting, data analysis, testing, research and service improvement.” Facebook helpfully posted a “red-line” version of the new policy, contrasting it with the prior version from September 2011— which did not mention anything about user information being used in “research.”

Kashmir's story is worth reading in full, along with her earlier piece that digs deeper into the ethical and institutional review board issues, including a statement from Cornell saying its IRB passed on reviewing the study because the part involving actual humans was done by Facebook, not by the Cornell researcher involved in the study.

Facebook seems unfazed, releasing a statement saying "To suggest we conducted any corporate research without permission is complete fiction. Companies that want to improve their services use the information their customers provide, whether or not their privacy policy uses the word ‘research’ or not."

As Dan Solove points out in a recent LinkedIn Influencer post:

The problem with obtaining consent in this way is that people often rarely read the privacy policies or terms of use of a website. It is a pure fiction that a person really “agrees” with a policy such as this, yet we use this fiction all the time...

Contrast this form of obtaining consent with what is required by the federal Common Rule, which regulates federally-funded research on human subjects. The Common Rule requires as a general matter (subject to some exceptions) that the subjects of research provide informed consent. The researchers must get the approval of an institutional review board (IRB). Because the research involved not just a person at Facebook, but also academics at Cornell and U.C. San Francisco, the academics did seek IRB approval, but that was granted based on the use of the data, not on the way it was collected.

Informed consent, required for research and in the healthcare context, is one of the strongest forms of consent the law requires. It is not enough simply to fail to check a box or fail to opt out. People must be informed of the risks and benefits and affirmatively agree.

The problem with the Facebook experiment is that it exposed the rather weak form of consent that exists in much of our online transactions. I’m not sure that informed consent is the cure-all, but it would certainly have been better than the much weaker form of consent involved with this experiment.

These are huge issues, which my other Forbes colleague, Ryan Calo, explores in detail in his scholarly article Digital Market Manipulation. He writes that companies “will increasingly be able to trigger irrationality or vulnerability in consumers—leading to actual and perceived harms that challenge the limits of consumer protection law, but which regulators can scarcely ignore."

Perhaps this is something consumers just need to get used to. The Institutional Review Blog has a post providing some historical perspective on issues related to psychological experiments, noting that this is not a new phenomenon. Zachary Schrag writes: "critics of the Facebook experiment should at least be aware that we are talking about a mode of research that existed long before Facebook, and that federal ethics advisors and regulators specifically decided that it should proceed."

Gregory S. McNeal is a professor specializing in law and public policy.  You can follow him on Twitter @GregoryMcNeal or on Facebook.