A study not to ‘like’

Facebook’s study on emotional contagion may not have broken any laws, but it has exposed the unfettered power that big-data companies wield under opaque user policies.

For one week in 2012, researchers from Facebook, Cornell and the University of California skewed the emotional content of almost 700,000 news feeds to test how users would react. They found that people would write slightly more negative posts when exposed to negative feeds and vice versa. News of the study spread on the Internet on Monday, angering users who thought Facebook had treated them as “lab rats” and sparking European legal probes. Facebook executive Sheryl Sandberg eventually apologized for “poorly” communicating the study, but Facebook stood firm. “When someone signs up for Facebook, we’ve always asked permission to use their information,” the company said in a statement. “To suggest we conducted any corporate research without permission is complete fiction.”

Facebook is half right. Users agree to terms and conditions when they join the social network. In-house experiments, called “A/B testing,” are routine, too. They observe how users react to small changes in format and content, such as a bigger icon or a different shade of blue. The purpose is to improve user experience on the site.
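For readers unfamiliar with the mechanics, the sketch below (in Python) is a rough, hypothetical illustration of an A/B test, not Facebook’s actual tooling; the function names, bucketing scheme and click-through metric are invented for this example. Users are split into a control group and a variant group, and an engagement metric is compared between the two:

    import random
    from statistics import mean

    def assign_variant(user_id):
        """Bucket a user into 'control' or 'variant' (hypothetical scheme;
        real platforms typically hash a salted user ID)."""
        random.seed(user_id)  # stable assignment per user
        return "variant" if random.random() < 0.5 else "control"

    def run_ab_test(click_rates):
        """Compare the average click-through rate of the two groups."""
        groups = {"control": [], "variant": []}
        for user_id, rate in click_rates.items():
            groups[assign_variant(user_id)].append(rate)
        return {name: mean(rates) for name, rates in groups.items() if rates}

    # Toy data: user_id -> observed click-through rate on the tested layout
    print(run_ab_test({1: 0.12, 2: 0.15, 3: 0.11, 4: 0.19, 5: 0.14, 6: 0.17}))

If the variant group’s average is meaningfully higher, the change ships; if not, it is discarded.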

But this crossed an important line: Unlike typical A/B testing, Facebook tried to directly influence emotions, not behaviors. Its purpose was not to improve user experience but rather to publish a study.

Almost all academic research requires informed consent from participants, which Facebook assumed from acceptance of its terms of service. Yet Facebook’s data-use policy at the time of the study did not explicitly state that user data would be used for “research,” which means the company likely justified the study under one of the policy’s broad provisions. To give meaningful consent, a user would have had to read tens of thousands of words of the agreement and then guess at how they might be interpreted. That is a far cry from the offline standard, under which subjects must understand the full risks and benefits of a study and have the option to decline. Federally funded research institutions are required to follow these rules; many others follow them voluntarily for ethical reasons.

Recent lawsuits against Facebook and Google, including the case behind the European Court of Justice’s ruling in favor of a “right to be forgotten,” have focused on the ownership and use of data the companies already hold. This study reveals a new arena, in which users are manipulated to create new data for purposes beyond the companies’ narrow commercial ones.

While Facebook has implemented internal review mechanisms since the study, the underlying problem remains: permission still rests on ineffectual terms-of-service agreements. Users do not know what to expect from these services, and companies push the limits because they know users won’t leave.

President Barack Obama’s 2012 proposal for a “Consumer Privacy Bill of Rights” and the White House’s 2014 “Big Data” report have failed to produce much progress on transparency. This Facebook study should prompt renewed debate, in and out of government, over how to manage big-data practices.