Escaping the traps of Facebook, Google and other centralized data hoards

A furor erupted this week over a research project conducted by Facebook in which they manipulated the feeds of over 600,000 users in order to measure their emotional responses. To many, this sounds like a trivial intrusion, perhaps on par with the insertion of advertising content. But several scientists have argued that it constitutes a serious breach of established research ethics, namely the requirement for informed consent. In the world of scientific research, the bar for informed consent is quite high. Facebook chose to rely on their Data Use Policy as a proxy for informed consent, but that is unacceptable and would establish a dangerous precedent for eroding the rights of future study participants. An author at the Skepchick network contributed this critique of Facebook’s behavior:

What’s unethical about this research is that it doesn’t appear that Facebook actually obtained informed consent. The claim in the paper is that the very vague blanket data use policy constitutes informed consent, but if we look at the typical requirements for obtaining informed consent, it becomes very clear that their policy falls way short. The typical requirements for informed consent include:

  • Respect for the autonomy of individual research participants
  • Fully explain the purposes of the research that people are agreeing to participate in, in clear, jargonless language that is easy to understand
  • Explain the expected duration of the study
  • Describe the procedures that will happen during the study
  • Identify any experimental protocols that may be used
  • Describe any potential risks and benefits for participation
  • Describe how confidentiality will be maintained
  • A statement acknowledging that participation is completely voluntary, that a participant may withdraw participation at any time for any or no reason, and that any decision not to continue participating will incur no loss of benefits or other penalty.

Of course, this level of detail cannot be covered by a blanket data use policy that applies to all users of a general-purpose communication platform. Slate’s Katy Waldman agrees that Facebook’s study was unethical:

Here is the only mention of “informed consent” in the paper: The research “was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

That is not how most social scientists define informed consent.

Here is the relevant section of Facebook’s data use policy: “For example, in addition to helping people see and find things that you do and share, we may use the information we receive about you … for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

So there is a vague mention of “research” in the fine print that one agrees to by signing up for Facebook. As bioethicist Arthur Caplan told me, however, it is worth asking whether this lawyerly disclosure is really sufficient to warn people that “their Facebook accounts may be fair game for every social scientist on the planet.”

Of course Facebook is no stranger to deceptive and unethical behavior. We may recall their 2012 settlement with the Federal Trade Commission, which charged “that Facebook deceived consumers by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public.”

The problem is simple: Facebook is a centralized service that aggregates intimate data on millions of users. They need to find ways to profit from that data, our data, and we have little control over how their activity might disadvantage or manipulate us. Their monetization strategies go beyond their already troubling project to facilitate targeted ads from third-party apps, apps that you might assume have no relationship to your Facebook activities. Facebook also manages the identity and contact networks of those users, making it difficult to leave the platform without becoming disconnected from your social network. It is a trap. Last week a Metro editorial claimed that it is getting worse and recommended that we all quit “cold turkey.” Some users have migrated to Google services as an escape, but Google has faced similar FTC charges that reveal it is no better. So Google is just another mask on the same fundamental problems.

So what is the fix? I’m putting my money on the Red Matrix, a solution that supports distributed identity, decentralized social networking, content rights management and cloud data services.


The core idea behind the Red Matrix is to provide an open specification and protocol for delivering contemporary internet services in a portable way, so that users are not tied to a single content provider. The underlying protocol, called “zot,” is designed to support a mix of public and privately shared content, providing encryption and separating a user’s identity from their service provider.
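To make the “portable identity” idea concrete, here is a minimal sketch in Python, assuming a simplified model of a zot-style channel. The class names, fields and helpers below are illustrative inventions, not the actual zot message format; the point is only to show an identity derived from the channel’s own key material rather than from any one server.

```python
# Illustrative sketch only -- field names and helpers are assumptions,
# not the real zot wire format.
import hashlib
from dataclasses import dataclass, field
from typing import List


@dataclass
class HubLocation:
    """One server (hub) where a channel's content currently lives."""
    url: str       # e.g. "https://example-hub.net"
    callback: str  # endpoint the hub exposes for message delivery


@dataclass
class Channel:
    """A portable identity: defined by its key material, not by any hub."""
    nickname: str
    public_key: str                               # PEM text in a real system
    locations: List[HubLocation] = field(default_factory=list)

    @property
    def guid(self) -> str:
        # The identity is derived from the key, so it stays the same
        # no matter which provider hosts the channel.
        return hashlib.sha256(self.public_key.encode()).hexdigest()

    def move_to(self, new_hub: HubLocation) -> None:
        # "Nomadic" relocation: add a new primary hub without changing
        # the identity that friends and followers already know.
        self.locations.insert(0, new_hub)


# Usage: the channel keeps the same guid before and after moving hubs.
alice = Channel("alice", public_key="-----BEGIN PUBLIC KEY----- ...")
alice.move_to(HubLocation("https://hub-a.example", "/post"))
old_guid = alice.guid
alice.move_to(HubLocation("https://hub-b.example", "/post"))
assert alice.guid == old_guid  # identity survives the change of provider
```

The design choice this sketch is meant to highlight is the separation itself: because the identity does not depend on a server address, a channel can add or change hubs without breaking the connections other users already have.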

While still in its early stages, the Red Matrix provides core features comparable to those of WordPress, Drupal, Dropbox and Evernote, along with, of course, social networking. It is hard to summarize the possibilities of this emerging platform; I’m still discovering new ways to use it, from personal note management to blogging. Although the Red Matrix is small, it is an open source project with a fanatical base of users and developers, which makes it likely to endure and grow.

This seems like a good time to announce the Red Matrix companion channel for this site: www.bawker.net/channel/FairCoinToss. This channel acts as a “stream of consciousness” for material related to this blog, containing supplemental information, technical posts, short comments, reposts of news items, and other miscellanea. The primary WordPress site will be reserved for more detailed posts. Readers are welcome to comment or otherwise interact by joining the Red Matrix at my server or at one of the other public servers in the Red Matrix network.