The Privacy Ceiling


Scott Craver of Binghamton University has a very interesting post summarising the concept of a ‘privacy ceiling’:

“This is an economic limit on privacy violation by companies, owing to the liability of having too much information about (or control over) users.”

It’s the “control over users” that makes this especially relevant for designers and technologists to consider: that control is consciously designed into products and systems, but how much thought is given to the extremes to which it might be exercised, especially in combination with the wealth of information gathered on users?

“Liability can come from various sources… [including]
Vicarious infringement liability.
Imagine: you write a music player (like iTunes) that can check the Internet when I place a CD in my computer. You decide to collect this data for market research. Now the RIAA discovers that this data can also identify unauthorized copies. Can they compel you to hand over data on user listening habits?
Your company is liable for vicarious infringement if (1) infringement happens, (2) you benefit from it, and (3) you had the power to do something about it–which I assume includes reporting the infringement. So now you are possibly liable because you have damning information about your users. This also applies to DRM technologies that let you restrict users.
Note that you can’t solve this problem simply by adopting a policy of only keeping the data for 1 month, or being gentle and consumer-friendly with your DRM. The fact is, you have the architecture for monitoring and/or control, and you may not get to choose how you use it.”
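
To make the iTunes-style scenario above concrete, here is a minimal sketch in Python (the lookup URL and function names are hypothetical, invented purely for illustration): one client logs a user identifier alongside every disc lookup and so accumulates exactly the kind of listening-history record a copyright holder could later compel; the other sends only an anonymous disc fingerprint and retains nothing, so there is nothing to hand over.

```python
import hashlib

# Hypothetical lookup endpoint -- illustrative only, not a real service.
LOOKUP_URL = "https://example.com/cd-lookup"

def disc_fingerprint(track_offsets):
    """Derive an anonymous disc ID from the CD's table of contents."""
    return hashlib.sha256(",".join(map(str, track_offsets)).encode()).hexdigest()

# Architecture A: keeps a compellable record of who listened to what.
listening_log = []  # (user_id, disc_id) pairs retained "for market research"

def lookup_with_logging(user_id, track_offsets):
    disc_id = disc_fingerprint(track_offsets)
    listening_log.append((user_id, disc_id))  # the monitoring capability
    return f"{LOOKUP_URL}?disc={disc_id}&user={user_id}"

# Architecture B: no user identifier is sent or stored, so there is
# nothing that can later be handed over.
def lookup_anonymous(track_offsets):
    disc_id = disc_fingerprint(track_offsets)
    return f"{LOOKUP_URL}?disc={disc_id}"
```

The point is architectural: in the second design no subpoena or market-research rationale can retroactively produce a listening history, because one was never collected.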

Other sources of liability described include: being drawn into criminal investigations based on certain data which a company or other organisation may have – or be compelled to obtain – on its users; customers suing in relation to the leaking of supposedly private data (as in the AOL débâcle); and “random incompetence”, e.g. an employee accidentally releasing data or arbitrarily exercising some designed-in control with undesirable consequences.
Scott goes on:

“Okay, so there is a penalty to having too much knowledge or too much control over customers. What should companies do to stay beneath this ceiling?
1. Design an architecture for your business/software that naturally prevents this problem.
It is much easier for someone to compel you to violate users’ privacy if it’s just a matter of using capabilities you already have. Mind, you have to convince a judge, not a software engineer, that adding monitoring or control is difficult. But you have a better shot in court if you must drastically alter your product in order to give in to demands.

2. Assume you will monitor and control to the full extent of your architecture. In fact, don’t just assume this, but go to the trouble to monitor or control your users.
Why? Because in an infringement lawsuit you don’t want to appear to be acting in bad faith… if you have the ability to monitor users and refuse to use it, you’re giving ammunition to a copyright holder who accuses you of inducement and complicity.

But … the real message is that you should go back to design principle 1. If you want to protect users, think about the architecture; don’t just assume you can take a principled stand not to abuse your own power.
The third principle is really a restatement of the first two, but deserves restating:
3. Do not attempt to strike a balance.
Do not bother to design a system or business model that balances user privacy with copyright holder demands. All this does is insert an architecture of monitoring or control, for later abuse. In other words, design an architecture for privacy alone. Anything you put in there, under rule #2, will one day be used to its full extent.
I have seen many many papers over the years, in watermarking tracks, proposing an end-to-end media distribution system balancing DRM with privacy. Usually, the approach is that watermarks are embedded in music/movies/images by a trusted third party, the marks are kept secret from the copyright holder, and personal information is revealed only under specific circumstances in which infringement is clear. This idea is basically BS. Your trusted third party does not have the legal authority to decide when to reveal information. What will likely happen instead: if a copyright holder feels infringement is happening, the trusted third party will be liable for vicarious infringement.”
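
To illustrate Scott’s third principle, here is a rough sketch (hypothetical names and data structures, not any real scheme) contrasting the ‘balanced’ watermark-escrow approach with a privacy-only design: the escrowed watermark-to-customer mapping is itself the architecture of control, and its mere existence is what the trusted third party can be compelled to use.

```python
from dataclasses import dataclass
import secrets

# "Balanced" design: a trusted third party escrows the watermark-to-customer
# mapping, to be revealed "only when infringement is clear". Once the mapping
# exists, the third party can be compelled to use it.
@dataclass
class EscrowRecord:
    watermark_id: str
    customer_name: str
    customer_email: str

escrow_db = {}  # watermark_id -> EscrowRecord, held by the "trusted" party

def issue_copy_balanced(customer_name, customer_email):
    watermark_id = secrets.token_hex(16)
    escrow_db[watermark_id] = EscrowRecord(watermark_id, customer_name, customer_email)
    return watermark_id  # embedded in the media, traceable back to a person

# Privacy-only design: a copy may still carry a random serial (for support,
# say), but no table linking it to a person is ever created, so there is
# nothing to disclose under pressure.
def issue_copy_privacy_only():
    return secrets.token_hex(16)
```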

Summing it up: any capability you design into a product or system will be used at some point – even if you are forced to use it against the best interests of your business. So it is better to design to avoid being drawn into that position in the first place: build systems without the ability to monitor or control users, and you will be much safer from liability issues.
The privacy ceiling concept – which Scott is going to present in a paper along with Lorrie Cranor and Janice Tsai at the ACM DRM 2006 workshop – really does seem to have significant implications for many of the architectures of control examples I’ve looked at on this site.
For example, the Car Insurance Black Boxes mostly record mileage and time data so that premiums can be charged according to the risk factors that interest the insurance company; but the boxes clearly also record speed, and whether that information would be released to, say, law enforcement authorities on request is an immediate concern.
Looking further, though, the patent covering the box used by a major insurer mentions an enormous number of possible types of data that could be monitored and reported by the device, including exact position, weights of occupants, driving styles, use of brakes, what radio station is tuned in, and so on. Whether any insurance company would ever implement all of these is, of course, another question, and it would require much tighter integration into a vehicle’s systems; nevertheless, as Scott makes clear, whatever possibilities are designed into the architecture will be exploited at some point, whether through pressure (external or internal) or incompetence.
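
The same design choice can be sketched for the black-box case (again hypothetically – the field names below are illustrative and not taken from the patent or any real device): a box that aggregates mileage into coarse billing bands on the device and discards raw speed and position simply has far less that it can be compelled to disclose than one that transmits the full telemetry log.

```python
from dataclasses import dataclass

# Hypothetical telematics sketch -- field names are illustrative, not taken
# from any real insurer's device or from the patent mentioned above.

@dataclass
class TelemetrySample:
    timestamp: float   # seconds since the epoch
    speed_kmh: float
    latitude: float
    longitude: float

@dataclass
class BillingSummary:
    night_km: float = 0.0  # mileage in the higher-risk night band
    day_km: float = 0.0

def aggregate_on_device(samples, interval_s=1.0):
    """Keep only what the premium calculation needs; discard speed and position."""
    summary = BillingSummary()
    for s in samples:
        km = s.speed_kmh * (interval_s / 3600.0)   # distance covered in this sample
        hour = int(s.timestamp // 3600) % 24       # hour of day (UTC, roughly)
        if hour >= 23 or hour < 6:
            summary.night_km += km
        else:
            summary.day_km += km
    return summary  # only this coarse summary ever leaves the vehicle
```
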
I look forward to reading the full paper when it is available.