June 13, 2016

How much is privacy worth to today’s consumers?

Reexamining the economics of privacy in the information age


Privacy is not a new concept, but we may be in the midst of a redefinition of the term. Companies are honing new techniques to get inside the heads of consumers, taking advantage of the fact that people spend so much of their commercial, social, and entertainment time online. Every search query entered into Google, each second spent browsing reviews for a new pair of headphones, every political rant on Facebook or Twitter: each online action can be recorded and analyzed in the hopes of gaining useful information and, ultimately, making more money.

Shadowy data brokers are helping consolidate consumer information from disparate sources, allowing curious companies to fill out psychological profiles of the customers they hope to eventually target with personalized offers. These techniques are still maturing, but companies have shown they can use this wealth of data to predict how much certain people are willing to pay for products, and they might one day figure out how to tempt us with offers we can’t refuse at exactly the moments we’re most susceptible.

Is there an economic case for new rules or regulations that would help people keep their personal information to themselves? Research has shown that the impact of privacy protections is economically ambiguous. There are times and places when restrictions on sharing can clearly benefit consumers, and others when restrictions on the flow of information can put a crimp on commerce and might even slow the development of information technology. An article appearing in this month’s issue of the Journal of Economic Literature surveys recent work by economists in this evolving field.

In The Economics of Privacy (PDF), authors Alessandro Acquisti, Curtis Taylor, and Liad Wagman argue that “privacy” is best defined not as the opposite of sharing information, but rather as control over the sharing of one’s own information. One non-obvious question is who should own personal data in the first place. It may seem intuitive that people should own information about themselves, their actions, and their interests and desires. Yet this would be hard to enforce: companies often need crucial information about you, like your address and which products you want to buy, just to offer their services at all, and strict ownership rules could preclude innocuous, user-friendly practices like keeping an order history on a website.

A line of research during the 1990s imagined a Coasian solution to this dilemma. Regardless of who had the rights to information, consumers and businesses would negotiate a mutually beneficial agreement to reach the economically optimal level of privacy.

In these models, the value to businesses of collecting private information such as a person’s race, income, or internet browsing habits is balanced against the value to consumers of having that information kept confidential. Depending on how property rights over personal data were assigned by law, people would either charge companies to collect the data they were willing to divulge, or pay them not to collect the data they wanted to protect.
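To see the Coasian logic at work, consider a minimal sketch, not drawn from the paper; the valuations and the function below are hypothetical. With frictionless bargaining, the data ends up being collected exactly when the firm values it more than the consumer values confidentiality, and the assignment of property rights only determines who compensates whom.

```python
# Hypothetical illustration of the Coase-theorem logic described above.
# The numbers and the function name are invented for this sketch.

def coasian_outcome(value_to_firm, value_of_privacy, consumer_owns_data):
    """Return (data_collected, transfer) after frictionless bargaining.

    transfer > 0 means the firm pays the consumer; transfer < 0 means the
    consumer pays the firm. The bargaining surplus is split evenly here,
    but any split leaves the collection decision unchanged.
    """
    collect = value_to_firm > value_of_privacy   # the efficient decision
    surplus = abs(value_to_firm - value_of_privacy)
    if consumer_owns_data and collect:
        transfer = value_of_privacy + surplus / 2    # firm buys access to the data
    elif not consumer_owns_data and not collect:
        transfer = -(value_to_firm + surplus / 2)    # consumer buys confidentiality
    else:
        transfer = 0.0                               # rights holder keeps the status quo
    return collect, transfer

# The data is collected under either rights regime; only the payment changes.
print(coasian_outcome(value_to_firm=5.0, value_of_privacy=3.0, consumer_owns_data=True))
print(coasian_outcome(value_to_firm=5.0, value_of_privacy=3.0, consumer_owns_data=False))
```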

[Chart: Over time, Facebook users grew more reluctant to share personal details. The figure shows the information visible on 5,076 Facebook profiles in the Carnegie Mellon University network, 2005–13; information was coded as “visible” if it could be seen by another user on the Carnegie Mellon network who was not a friend. After declining for years, sharing rose sharply between 2009 and 2010. This reversal may have been partially unintentional, as it coincided with an update to Facebook’s privacy settings that many users found confusing. Source: adapted from Table 3 of Stutzman et al. (2012).]

One paper from 1996 even proposed an explicit National Information Market where consumers could sell the rights to access their own information. In principle, people could have detailed control over how their data was used, whom it was shared with, and how long it would remain available, and they could trade very specific rights to use or share their data to companies in exchange for cash, discounts, or access to services.

Needless to say, that isn’t how the internet developed over the last 20 years. In a certain sense, these sorts of trades do happen every day: people are trading information about what they are interested in at the moment in exchange for Google providing them relevant search results, and people are trading information about who they know and who they want to talk to in exchange for a platform like Facebook or Twitter that facilitates new kinds of connections. But most people probably don’t think of these actions as trading their privacy away for a service; they see themselves as consuming a service for free.

Without a big group of people who are aware of the value of the data they are giving away, and a large number of firms that are willing to acknowledge the data rights of customers, it will be hard to get a centralized, widely-recognized market off the ground.

That doesn’t mean that people think privacy is worthless, though. One recent study of web users in Spain found that the average person was willing to sell information about their presence on a website for about $10, even though individual web browsing history elements routinely sell for about $0.0005 apiece on the secondary market. Another study found that people are willing to pay more to patronize e-commerce sites that do a better job of protecting their privacy, although users were alerted to each site’s privacy policy during the study; a typical online shopper might not even notice the difference.

Clearly there is a disconnect between how consumers value their privacy and how businesses value it, but for the most part, in the U.S. at least, websites have free rein to collect information about site visitors and use that information in various ways.

How do companies use this data, and does it ultimately hurt consumers? Much of the research has focused on price discrimination, which in this context entails guessing the most a customer is willing to pay for a product and then making sure they are offered a price as close as possible to that limit.

Personal data, especially data on income, location, and past online behavior, makes this job a lot easier for online retailers. Do you usually buy luxury items online? Are you located near a competitor’s store? Do you keep checking a specific product page every day, which might be a sign of bargain seeking? Are you accessing the site from a Mac? Each bit of information is a clue to how much you are willing and able to pay, and recent research has shown companies using each of these approaches to glean something about consumer intent and then offering them different products or prices.
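As a purely illustrative sketch, a retailer’s pricing logic along these lines might look roughly like the snippet below. None of the weights or signals come from the studies above; the function name, adjustments, and base price are invented for illustration.

```python
# Hypothetical example of signal-based price discrimination. The signals echo the
# questions above, but the adjustments and base price are made up for this sketch.

def personalized_price(base_price, uses_mac, near_competitor, daily_page_checks):
    """Quote a price nudged up or down by crude willingness-to-pay signals."""
    price = base_price
    if uses_mac:
        price *= 1.10      # device choice taken as a proxy for higher spending power
    if near_competitor:
        price *= 0.95      # easy access to a rival argues for a small discount
    if daily_page_checks >= 3:
        price *= 0.93      # repeated checks on one product suggest bargain hunting
    return round(price, 2)

# A shopper on a Mac, far from any rival store, who has checked the page four times:
print(personalized_price(100.00, uses_mac=True, near_competitor=False, daily_page_checks=4))
```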

At the same time as they profess their need for privacy, most consumers remain avid users of information technologies that track and share their personal information with unknown third parties. [The adoption of privacy-enhancing technologies] lags vastly behind the adoption of sharing technologies (for instance, online social networks such as Facebook).

Acquisti et al. (2016)

As people grow more wary of the data-slurping habits of online companies, they might demand more regulations requiring that companies at least notify them before recording certain forms of data and offer them the option not to be tracked. Authorities in the EU have taken a more aggressive approach: the ePrivacy Directive, adopted in 2002 and later amended, requires websites to notify visitors about data tracking and hidden identifiers and to obtain their opt-in consent before tracking them.

But opt-in policies could have their own pitfalls. Making sure consumers know how their privacy might be compromised seems like a commonsense proposal, but a 2013 experiment found that giving consumers more control over their own information produced a false sense of security and paradoxically made them more willing to share sensitive information.

Some economists also noted that requiring opt-in to data collection could have the unintended effect of privileging well-established, widely-recognized companies over small upstarts. If consumers trust big companies with their personal information but not a new company with an unproven track record, monopolists could use their data advantage to become more entrenched and fend off new innovators more easily.  

Perhaps trading markets for personal data will catch on in coming years, although they face some significant technical hurdles, not the least of which are privacy concerns about what ultimate authority will safeguard the sensitive data and make sure that it is only used as agreed. In the meantime, consumers will continue to give away their personal data en masse to companies in trillions of online interactions each day. ♦


“The Economics of Privacy” appears in the June 2016 issue of the Journal of Economic Literature.