What if we all had a personal assistant advising us, learning our privacy preferences, and aiding us in managing control over our data?

The Internet of Things (IoT) and Big Data are making it impractical for people to keep up with the many different ways in which their data can potentially be collected and processed. What is needed is a new, more scalable paradigm that empowers users to regain appropriate control over their data. We envision personalized privacy assistants as intelligent agents capable of learning the privacy preferences of their users over time, semi-automatically configuring many settings, and making many privacy decisions on their behalf. Through targeted interactions, privacy assistants will help their users better appreciate the ramifications associated with the processing of their data, and empower them to control such processing in an intuitive and effective manner. This includes selectively alerting users about practices they may not feel comfortable with, confirming privacy settings that the assistants are unsure how to configure, refining models of their users' preferences over time, and occasionally nudging users to carefully (re)consider the implications of some of their privacy decisions. Ultimately, these assistants will learn our preferences and help us more effectively manage our privacy settings across a wide range of devices and environments without the need for frequent interruptions.

Our project combines multiple research strands, each focusing on complementary research questions and elements of functionality. Our work is driven by user-centered design processes that translate personal privacy preference models, transparency mechanisms, and dialog primitives into personalized privacy assistant functionality. Lab experiments and pilot studies help us evaluate and refine this functionality.

Modeling and Learning People’s Privacy Preferences

We are developing user-oriented machine learning techniques to capture people’s privacy preferences and expectations. These models are used to help users manage an otherwise unmanageable number of privacy decisions. This includes recommending or semi-automating the configuration of many privacy settings for individual users.
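One common way such preference models can work is to cluster users' past allow/deny decisions into a small number of privacy "profiles," then match a new user to the closest profile and use it to recommend settings they have not yet configured. The sketch below is purely illustrative and assumes decisions encoded as binary vectors; the data, names, and the naive clustering seeding are all hypothetical, not the project's actual techniques.

```python
# Illustrative sketch: learn privacy "profiles" from users' past
# allow/deny decisions, then recommend settings for a new user.
# All data and names here are made up for illustration.

from collections import Counter

# Each row: one user's decisions for (camera, location, contacts, microphone)
# 1 = allow, 0 = deny
PAST_USERS = [
    (1, 1, 0, 1),
    (1, 1, 1, 1),
    (0, 0, 0, 0),
    (0, 1, 0, 0),
    (1, 1, 0, 1),
]

def hamming(a, b):
    """Number of positions where two decision vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def build_profiles(users, rounds=10):
    """Tiny k-means-style clustering over binary decision vectors
    (naive fixed seeding, k=2, for illustration only)."""
    centroids = [users[0], users[2]]
    for _ in range(rounds):
        clusters = [[] for _ in centroids]
        for u in users:
            idx = min(range(len(centroids)),
                      key=lambda i: hamming(u, centroids[i]))
            clusters[idx].append(u)
        # New centroid = per-setting majority vote within each cluster
        centroids = [
            tuple(Counter(col).most_common(1)[0][0] for col in zip(*c))
            if c else cen
            for c, cen in zip(clusters, centroids)
        ]
    return centroids

def recommend(profiles, partial_answers):
    """Match a new user (who answered only a few questions, given as
    {setting_index: decision}) to the closest profile; the profile then
    supplies values for the remaining, unanswered settings."""
    return min(profiles, key=lambda p: hamming(
        [p[i] for i in partial_answers], list(partial_answers.values())))

profiles = build_profiles(PAST_USERS)
# New user has answered two questions so far: denies camera and location
print(recommend(profiles, {0: 0, 1: 0}))
```

In practice the project's models are far richer than this (they must handle context, uncertainty, and evolving preferences), but the profile-then-recommend pattern conveys why a handful of answers can stand in for dozens of individual settings.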

Dialogs with Users, including Privacy Nudges

We are exploring the merits of different modes of interaction, different interaction primitives, and different interaction styles with the user. As we move towards Internet of Things scenarios, Personalized Privacy Assistants will have to be increasingly parsimonious and effective in the way in which they interact with their users. This includes being able to accommodate a wide range of contextual factors that impact the availability and effectiveness of different forms of communication with the user. It also includes studying the impact of different solutions on users' privacy decision making and, more generally, on their behavior. What does it take to get a user's attention? How much information is too much? When is the best time to interact with the user? What mode of interaction is most effective in a given context? How does one nudge users to carefully weigh privacy-utility tradeoffs associated with their decisions? And more.

Transparency Mechanisms for Big Data

We are developing transparency mechanisms for big data systems to inform users about data use practices of data holders. This includes identifying what data holders can infer from the data they collect and how they use the results. This analysis can also be used to help people better appreciate the ramifications of their privacy decisions.
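A concrete, if simplified, illustration of why such transparency matters: seemingly innocuous data often supports sensitive inferences. The hypothetical sketch below shows how a data holder could guess a user's home location simply from where a device most often reports itself overnight; the data and function are invented for illustration and are not the project's analysis tools.

```python
# Illustrative-only sketch of the kind of inference a data holder can
# draw from routine data: guessing a user's likely home from where their
# device most often reports its location at night. Data is made up.

from collections import Counter

# (hour_of_day, rounded_lat, rounded_lon) location reports
REPORTS = [
    (2, 40.44, -79.94),
    (3, 40.44, -79.94),
    (9, 40.45, -79.95),
    (14, 40.45, -79.95),
    (23, 40.44, -79.94),
    (1, 40.44, -79.94),
]

def likely_home(reports, night=(22, 6)):
    """Most frequent location observed during night-time hours
    (from `night[0]` through midnight to `night[1]`)."""
    start, end = night
    night_spots = [
        (lat, lon) for hour, lat, lon in reports
        if hour >= start or hour < end
    ]
    if not night_spots:
        return None
    return Counter(night_spots).most_common(1)[0][0]

print(likely_home(REPORTS))  # the overnight cluster stands out
```

Surfacing inferences like this one, rather than just raw collection practices, is what helps people appreciate the real ramifications of their privacy decisions.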

Modeling Privacy Policy Constructs and Restrictions

We are developing an architecture and elements of infrastructure to support the deployment of personalized privacy assistants across different mobile and Internet of Things (IoT) scenarios. This includes the identification of an extensible collection of privacy constructs that can be used by IoT resource owners to describe the data collection, use, and sharing practices associated with their resources (e.g. sensors, applications, services) in a machine-readable manner. These primitives can then be interpreted by Personalized Privacy Assistants and selectively communicated to their users.
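To make this concrete, here is a hypothetical, heavily simplified example of what a machine-readable description of an IoT resource's practices might look like, along with how an assistant could match it against a user's stated concerns so that it alerts selectively rather than for everything. The schema, field names, and URL are invented for illustration; the project's actual constructs may differ.

```python
# Hypothetical, simplified machine-readable description of an IoT
# resource's data practices. The schema and all values are illustrative.

import json

RESOURCE_DESCRIPTION = json.dumps({
    "resource": "lobby_camera_1",
    "type": "video_camera",
    "data_collected": ["video", "inferred_presence"],
    "purpose": ["security", "occupancy_analytics"],
    "retention_days": 30,
    "shared_with": ["building_management"],
    "opt_out": {"supported": True, "mechanism": "https://example.org/opt-out"},
})

def practices_of_concern(description_json, user_concerns):
    """Return only the declared practices the user has flagged as
    uncomfortable, so the assistant can alert selectively."""
    desc = json.loads(description_json)
    alerts = []
    for field, disliked in user_concerns.items():
        declared = desc.get(field, [])
        overlap = sorted(set(declared) & set(disliked))
        if overlap:
            alerts.append((field, overlap))
    return alerts

# A user uncomfortable with analytics uses and third-party advertisers:
concerns = {"purpose": ["occupancy_analytics"], "shared_with": ["advertisers"]}
print(practices_of_concern(RESOURCE_DESCRIPTION, concerns))
```

Because the camera declares "occupancy_analytics" but does not share with advertisers, only the former would be surfaced to this user, which is exactly the kind of selective communication the infrastructure is meant to enable.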

Learn More About This Project

Project Publications


P. Emami-Naeini, M. Degeling, L. Bauer, R. Chow, L. Cranor, M.R. Haghighat, and H. Patterson, "The Influence of Friends and Experts on Privacy Decision Making in IoT Scenarios", The 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW '18), Nov 2018

J. Wang, B. Amos, A. Das, P. Pillai, N. Sadeh, and M. Satyanarayanan, "Enabling Live Video Analytics with a Scalable and Privacy-Aware Middleware", ACM Transactions on Multimedia Computing, Communications and Applications (TOMM), Jun 2018 [pdf]

P. Story, S. Zimmeck, and N. Sadeh, "Which Apps have Privacy Policies?", Annual Privacy Forum, Jun 2018 [pdf]

A. Das, M. Degeling, D. Smullen, and N. Sadeh, "Personalized Privacy Assistants for the Internet of Things", 2018 IEEE Pervasive Computing: Special Issue - Securing the IoT, Apr 2018 [pdf]

More project publications are available here

  • Related Research: IoT Expedition
  • Related Research: Usable and Secure Passwords