Nothing is free.


Every day, a billion people — one-seventh of the world’s population — log onto Facebook. They chat with coworkers, they catch up with former classmates or old Army buddies, they look at photos and upload their own, they flirt with girlfriends and boyfriends, and they present a hand-crafted image of themselves to the world, exactly as they want to be seen.


And no one asks them for a nickel. But they pay with their data.


Terms and Conditions


When you signed up for Facebook, you agreed to their terms and conditions (if you didn’t, you wouldn’t have a Facebook account).


You almost certainly did not, however, read all 14,000 words of the now-infamous document. Facebook — like all social-media sites — uses its terms to claim, in essence, ownership of your online content. The dense legalese in which these terms are written is — seemingly on purpose — non-conversational, complicated, convoluted and completely foreign to the laypeople who often do little more than skim the document.


And it works.


When you post a photo to Twitter, you’re giving Twitter the right to use it, transmit it or modify it in any way they see fit. LinkedIn reserves the right to distribute and even modify professional content uploaded by users.


But with the addition of a single word to its terms in May 2012, Facebook changed the entire game. That word was “research”.


Customer Service Analytics


Four months earlier, in January 2012, Facebook had conducted a then-secret, week-long study involving roughly 700,000 unwitting participants — its own customers. During the study, Facebook altered the newsfeeds of the researchers’ unknowing “volunteers”.


On the surface, that’s no big deal.


Facebook has always publicly admitted to manipulating the newsfeeds of its users, often without full disclosure of why or by what methodology. It was usually presumed — often correctly — that Facebook’s grip on its users’ newsfeeds was directly related to its bread-and-butter targeted ads.


But this time, things were different. The mechanics of the study weren’t known until its findings were published in the prestigious Proceedings of the National Academy of Sciences in 2014.


Facebook had been manipulating its users’ newsfeeds to see if it could directly affect their moods. By removing positive posts and adding negative ones, could the user with the altered newsfeed be subconsciously nudged into posting more negatively themselves?


The answer was “yes”. The emotions of individual people could be manipulated depending on the flood of content put in front of them — and those emotions were contagious across social media.


In the words of Forbes, nearly 700,000 people were forced to “have a crappy day for science”.


How Much is Too Much?


Although Facebook characterized it as merely a week-long “customer service” study, the revelations caused immediate and almost universal backlash. American legislators pressured the Federal Trade Commission to act. European regulators questioned the company’s executives.


But perhaps more importantly, the general public asked themselves a very important question: How much is too much?


It was always public knowledge that giants like Google and Facebook — companies whose business is big data — gather, store, analyze and use the data we turn over to them every time we use their free services.


That’s always been the tradeoff. If you Google the search string “gutter guards single-family home”, the next time you log onto Google — voilà! — there are ads for gutter guards on the right side of the page.


Nothing is free. These companies invented miraculous technology and provided us with real, genuinely life-changing services. Unlike the cable company, the satellite radio company, the streaming video company and the Internet company, we pay them nothing.


They have bills. They need to sell ads. Fine.


But gathering data, and then knowingly and purposely using it to change the behavior of their customers, seems to have crossed a line that no other marketing campaign ever has.


Crossing the Line


With a line clearly crossed — at least in the realm of public perception — Facebook and the rest of the tech giants may be forced to take a second look at their ethical protocols. Since the minimum age for a Facebook account is just 13, it is all but a mathematical certainty that minors were included in the study. And what if people who were clinically depressed, suicidal or mentally ill had been included in “research” that manipulated their emotions by bombarding them with negative content?


Already, Facebook has modified its guidelines to ensure that future studies go through stricter internal reviews. But for companies like Google, which routinely use “A/B” testing to try to get users to click more ads or hit more links, a larger question is lingering in the background.


How much control is it appropriate for us to bestow upon any company with unbridled access to our most intimate data?


In the eyes of Facebook, it offered terms and conditions that included the word “research”. The users willingly accepted those terms and used the product. But according to surgeon, researcher and science blogger David Gorski, “It’s absolutely ridiculous to suggest that clicking a box on a website constitutes informed consent” to participation in an extended psychological experiment in which the “participants” didn’t know the variables, the purpose, the methods or the results of the research — or even that they were part of research at all.


In 1999 — when Facebook Founder Mark Zuckerberg was just 15 years old — Sun Microsystems CEO Scott McNealy famously said, “You have zero privacy anyway. Get over it.”


Have we gotten over it? Should we? These questions were only sharpened when a company that counts one-seventh of the world’s population as its customers used nearly three-quarters of a million of those human beings as unwitting guinea pigs.


How much control is too much for tech giants to have, and how little privacy is too little for the rest of us to accept?