IoT Privacy in a Global World
By Team GotIoT
As devices become more intelligent and networked, the makers and vendors of those devices gain access to greater amounts of personal data. In the extreme case of the washing machine, the kind of data — who uses cold versus warm water — is of little importance. But when the device collects biophysical information, location data, movement patterns, and other sensitive information, data collectors have both greater risk and responsibility in safeguarding it. The advantages of every company becoming a software company — enhanced customer analytics, streamlined processes, improved view of resources and impact — will be accompanied by new privacy challenges.
A key question emerges from the increasing intelligence of and monitoring by devices: will the commercial practices that evolved on the web be transferred to the Internet of Things? The amount of control users have over data about them is limited. The ubiquitous end-user license agreement tells people what will and won't happen to their data, but it offers little choice. In most situations, users can either consent to having their data used or they can take a hike. They do not get to pick and choose how their data is used, except in some blunt cases where they can opt out of certain activities (an option often forced by regulators). Those who don't like how their data will be used can simply elect not to use the service. But what of the emerging world of ubiquitous sensors and physical devices? Will such a take-it-or-leave-it attitude prevail?
In November of 2014, the Alliance of Automobile Manufacturers and the Association of Global Automakers released a set of Privacy Principles for Vehicle Technologies and Services. Modeled largely on the White House's Consumer Privacy Bill of Rights, the automakers' privacy principles are certainly a step in the right direction, calling for transparency, choice, respect for context, data minimization, and accountability. Members of the two organizations that adopt the principles (which are by no means mandatory) commit to obtaining affirmative consent to use or share geolocation, biometrics, or driver behavior information. Such consent is not required, though, for internal research or product development, nor is consent needed to collect the information in the first place. A cynical view of such an arrangement is that it perpetuates the existing power inequity between data collectors and users. One could reasonably argue that location, biometrics, and driver behavior are not necessary to the basic functioning of a car, so there should be an option to disable most or all of these monitoring functions. The automakers' principles do not include such a provision.
For many years, there have been three core security objectives for information systems: confidentiality, integrity, and availability — sometimes called the CIA triad. Confidentiality relates to preventing unauthorized access, integrity to authenticity and preventing improper modification, and availability to timely and reliable system access. These goals have been enshrined in multiple national and international standards, such as the US Federal Information Processing Standards Publication 199, the Common Criteria, and ISO 27002. More recently, we have seen the emergence of “Privacy by Design” (PbD) movements — quite simply the idea that privacy should be “baked in, not bolted on.” And while the confidentiality part of the CIA triad implies privacy, the PbD discourse amplifies and extends privacy goals toward the maximum protection of personal data by default. European data protection experts have been seeking to complement the CIA triad with three additional goals: transparency, unlinkability, and intervenability:
- Transparency helps people understand who knows what about them — it's about awareness and comprehension. It covers with whom data is shared, how long it is held, and how it is audited; importantly, it defines the privacy risks.
- Unlinkability is about the separation of informational contexts, such as work, personal, family, citizen, and social. It’s about breaking the links of one’s online activity. Simply put, every website doesn’t need to know every other website you’ve visited.
- Intervenability is the ability for users to intervene: the right to access, change, correct, block, revoke consent, and delete their personal data. The controversial “right to be forgotten” is a form of intervenability — a belief that people should have some control over the longevity of their data.
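Intervenability in particular maps naturally onto a small set of concrete operations. The sketch below is purely illustrative — the class and method names are hypothetical, not any vendor's or regulator's API — but it shows how the rights to access, correct, revoke consent, and delete might look as a minimal data-subject interface:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of intervenability operations a device maker could
# expose. Names and structures here are illustrative assumptions only.
@dataclass
class PersonalDataStore:
    records: dict = field(default_factory=dict)   # user_id -> attributes held
    consents: dict = field(default_factory=dict)  # user_id -> set of purposes

    def access(self, user_id):
        """Right of access: return everything held about the user."""
        return {"data": self.records.get(user_id, {}),
                "consents": sorted(self.consents.get(user_id, set()))}

    def correct(self, user_id, key, value):
        """Right of rectification: fix an inaccurate attribute."""
        self.records.setdefault(user_id, {})[key] = value

    def revoke_consent(self, user_id, purpose):
        """Withdraw consent for one processing purpose (e.g. 'analytics')."""
        self.consents.get(user_id, set()).discard(purpose)

    def delete(self, user_id):
        """Right to erasure: remove the user's data and consents entirely."""
        self.records.pop(user_id, None)
        self.consents.pop(user_id, None)

store = PersonalDataStore()
store.correct("u1", "home_city", "Lisbon")
store.consents["u1"] = {"analytics", "service"}
store.revoke_consent("u1", "analytics")
assert store.access("u1") == {"data": {"home_city": "Lisbon"},
                              "consents": ["service"]}
store.delete("u1")
assert store.access("u1") == {"data": {}, "consents": []}
```

The point of the sketch is not the implementation but the shape of the contract: each right becomes a verifiable operation the user can invoke, rather than a paragraph in a license agreement.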
Most discussion of these goals happens in the field of identity management, but there is clear application within the domain of connected devices and the Internet of Things. Transparency is specifically cited in the automakers' privacy principles, but the weakness of their consent principle can be seen as a failure to fully embrace intervenability. Unlinkability can be applied generally to the use of electronic services, irrespective of whether the interface is a screen or a device — e.g., your Fitbit need not know where you drive. Indeed, the Article 29 Working Party, a European data protection watchdog, recently observed, "Full development of IoT capabilities might put a strain on the current possibilities of anonymous use of services and generally limit the possibility of remaining unnoticed."
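One common engineering approach to unlinkability is pseudonymization with context-specific keys: the same person gets a different, stable identifier in each informational context, so records from one context cannot be joined with another without the corresponding secret. A minimal sketch, in which the context names and keys are hypothetical:

```python
import hmac
import hashlib

# Illustrative only: each informational context (driving, fitness, ...) gets
# its own secret key. In practice these would be separately managed secrets.
CONTEXT_KEYS = {
    "driving": b"key-driving-context",
    "fitness": b"key-fitness-context",
}

def pseudonym(user_id: str, context: str) -> str:
    """Derive a context-specific pseudonym via keyed hashing (HMAC-SHA256)."""
    key = CONTEXT_KEYS[context]
    return hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()

# The same user yields unlinkable identifiers across contexts:
a = pseudonym("alice@example.com", "driving")
b = pseudonym("alice@example.com", "fitness")
assert a != b                                          # not joinable across contexts
assert a == pseudonym("alice@example.com", "driving")  # stable within one context
```

Keyed hashing is only one building block — true unlinkability also depends on what attributes travel alongside the identifier — but it illustrates the principle that separation of contexts can be designed in, not merely promised.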
The goals of transparency, unlinkability, and intervenability are ways to operationalize Privacy by Design principles and aid in user empowerment. While PbD is part of the forthcoming update to European data protection law, it’s unlikely that these three goals will become mandatory or part of a regulatory regime. However, from the perspective of self-regulation, and in service of embedding a privacy ethos in the design of connected devices, makers and manufacturers have an opportunity to be proactive by embracing these goals. Some research points out that people are uncomfortable with the degree of surveillance and data gathering that the IoT portends. The three goals are a set of tools to address such discomfort and get ahead of regulator concerns, a way to lead the conversation on privacy.
Discussions about IoT and personal data are happening at the national level. The FTC just released a report on its inquiry into concerns and best practices for privacy and security in the IoT. The inquiry and its findings are predicated mainly on the Fair Information Practice Principles (FIPPs), the guiding principles that underpin American data protection rules in their various guises. The aforementioned White House Consumer Privacy Bill of Rights and the automakers' privacy principles draw heavily upon the FIPPs, and there is close kinship between them and the existing European Data Protection Directive.
Unlinkability and intervenability, however, are more modern goals that reflect a European sense of privacy protection. The FTC report, while drawing upon the Article 29 Working Party, has an arguably (and unsurprisingly) American flavor, relying on the "fairness" goals of the FIPPs rather than emphasizing an expanded set of privacy goals. There is some discussion of Privacy by Design principles, in particular the de-identification of data and the prevention of re-identification, as well as data minimization, all of which are cousins to unlinkability.
Certainly, the FTC and the automakers' associations are to be applauded for taking privacy seriously as qualitative and quantitative changes occur in the software and hardware landscapes. Given the IoT's global character, there is room for global thinking on these matters. The best of European and American thought can be brought into the same conversation for the betterment of all. As hardware companies become software companies, they can delve into a broader set of privacy discussions to select design strategies that reflect a range of corporate goals, customer preferences, regulatory imperatives, and commercial priorities.