Hanan Hibshi on usable privacy and security

Usable Privacy and Security, Requirements Engineering

Student Name: Hanan Hibshi

What if you had the expertise of the world’s foremost security experts at your fingertips?

Hanan Hibshi, a PhD student in Societal Computing, is interested in building a system that can do just that. Having completed a Bachelor’s in Computer Science at King Abdul-Aziz University and a Master’s in Information Security at Carnegie Mellon University’s Information Networking Institute (INI), Hibshi found herself excited by the collaborative environment and cross-disciplinary approach here at CMU. “While at INI, I began working with Dr. Lorrie Cranor and the CyLab Usable Privacy and Security Lab (CUPS),” Hibshi explains. “When I began to research doctoral programs, what ended up happening was that I wound up just hunting for research groups like CUPS, something more interdisciplinary that was focused on real-world problems.”

She found this interdisciplinary, real-world focus in the Institute for Software Research’s Societal Computing PhD program, where she now works with Dr. Travis Breaux, an expert in the field of requirements engineering. Her research centers on security requirements, with a focus on improving usability through the way those requirements are applied in practice.

Her work attempts to understand and model how experts make security decisions, with the end goal of building better support systems for security decision makers. The result can be thought of as a “security advisor,” Hibshi explains. “It isn’t recommending products or solutions; it is advising you on the security status of your system and how it might be improved based on what you already have available.” It may, for example, analyze the settings available to a developer and provide options to increase security or mitigate existing risks.

Mimicking Wisdom

This endeavor is massively complex, Hibshi notes. “This isn’t a project where you can just dive into building a tool or a model. There are many factors, at every juncture, that you need to figure out before you can proceed.”

Perhaps the most significant hurdle she has had to overcome so far is central to the research’s purpose: How do you build a model that mimics the decision-making process of security experts? “The approach to identifying and deciding upon the ‘most secure’ option available is hidden away inside the heads of our experts,” she explains. “What logical processes do they walk through in order to understand the context of a given situation, identify the components available to them, weigh and reason about the trade-offs, and arrive at a decision?” And, Hibshi notes, eliciting these processes from the experts is a delicate affair. “We have to be certain that we are asking the right questions to get enough data without inadvertently biasing their answers.”

To tackle this challenge, Hibshi and her fellow researchers constructed a study in which they showed security experts a series of scenarios, each incorporating a number of key security factors. “We asked the experts, ‘What do you think of this scenario?’ You make them feel as if their opinion matters and that you are seeking their help. This approach removes the psychological pressure of evaluating the person’s skills or knowledge.” Hibshi then changes a number of security factors in the next scenario. What the expert sees appears to be a new scenario, so they approach it again in the same way: they ask themselves a similar set of questions, apply their decision-making process, and arrive at an assessment. “We can then see which factors have more of an impact on their decisions than others. And looking across the entire study, we can both begin to understand expert decision-making processes and start to identify commonalities with regard to which security factors are considered more important than others.”
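
To make the design concrete, a minimal sketch in Python of how such scenario variants might be generated is shown below; the factor names and levels are illustrative assumptions, not the ones used in Hibshi’s actual study.

    from itertools import product

    # Hypothetical security factors and levels, for illustration only;
    # the factors and levels in the actual study may differ.
    factors = {
        "network": ["internal only", "internet-facing"],
        "encryption": ["none", "TLS"],
        "password_policy": ["weak", "strong"],
    }

    # Each scenario shown to an expert is one combination of factor levels.
    scenarios = [dict(zip(factors, levels)) for levels in product(*factors.values())]

    for i, scenario in enumerate(scenarios, start=1):
        print(f"Scenario {i}: {scenario}")

Because otherwise-identical vignettes differ only in their factor levels, shifts in an expert’s assessment from one scenario to the next can be attributed to specific factors.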

Modeling Uncertainty

These experts are human, however, and humans are notoriously “messy” when it comes to gathering data on opinion-based topics like decision-making. Hibshi notes that it is rare to have unified consensus across participants; the responses aren’t “certain.” Oftentimes, she explains, experts varied widely in their assessments and responses. “Sometimes there is disagreement between individuals - what we call interpersonal uncertainty - and at other times respondents change their minds when contexts change - what we call intrapersonal uncertainty.”

The challenge, says Hibshi, comes in choosing a mathematical modeling method that can incorporate this uncertainty. Leveraging the resources available to them at Carnegie Mellon, she and her fellow researchers reached out to Dr. Stephen Broomell, in Carnegie Mellon’s Department of Social and Decision Sciences, and Dr. Christian Wagner, in the University of Nottingham’s Intelligent Modelling and Analysis Research Group, to help them better understand the options available.

Eventually, Hibshi chose an approach that utilizes fuzzy logic to capture the uncertainty in her research data. “By basing the model on fuzzy logic, we are able to work with the variation in opinion that we see across responses,” she says. “Take, for example, an approach where we modeled based on the average response. Essentially what you are doing is removing some of the data. What about a response which is close to average but is not average?” Hibshi goes on to explain, “Such an approach doesn’t account for variation in definition. Say, for example, I wanted to rate the security of a system on a scale from 0 to 10. Would I always model ‘adequate’ to be 5? What if someone believes ‘adequate’ is closer to a rating of 3?”
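
That idea can be sketched with simple triangular membership functions; the labels, breakpoints, and 0-10 scale below are illustrative assumptions rather than values from the study. Instead of pinning “adequate” to a single point such as 5, each rating belongs to overlapping fuzzy sets to some degree.

    def triangular(x, a, b, c):
        """Membership of x in a triangular fuzzy set rising at a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Illustrative fuzzy sets over a 0-10 security rating scale.
    def inadequate(x): return triangular(x, -1, 0, 5)
    def adequate(x):   return triangular(x, 2, 5, 8)
    def strong(x):     return triangular(x, 5, 10, 11)

    # A rating of 3 is partly "inadequate" and partly "adequate"; the model
    # keeps both degrees instead of forcing a single label onto the response.
    for rating in (3, 5, 7):
        print(rating,
              round(inadequate(rating), 2),
              round(adequate(rating), 2),
              round(strong(rating), 2))

Under this sketch, a respondent who considers a rating of 3 to be “adequate” is not discarded as noise; their interpretation simply carries a different degree of membership.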

Fuzzy logic, Hibshi explains, enables her to take the data from her research and model it “as is,” including this layer of uncertainty. This approach, she notes, builds a layer of assurance into the eventual system that will allow the results to hold water. “These aren’t my personal assumptions forced upon the data,” Hibshi points out. “I can say, ‘This isn’t my bias coming through in the system; this is built from the decision-making processes of the top 100 security experts in the world.’”

And beyond faithfully representing the distribution of the data, fuzzy logic operates on a basis of if/then rules, allowing greater transparency. “Users can see into the decision-making process of the system,” she explains. “You can see a breakdown of the system’s reasoning, where it says, ‘If this is the network factor, this is your encryption, and this is your password, then this is your security level.’” From that point, Hibshi says, users can agree or disagree with an assessment, making adjustments accordingly. “If, for example, it is an exclusively internal network, maybe encryption isn’t such a high priority and your approach may change slightly.”
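
A rough illustration of how one such if/then rule might be evaluated follows; the factors, fuzzy sets, and rule are hypothetical and are not drawn from the system’s actual rule base.

    def triangular(x, a, b, c):
        """Membership of x in a triangular fuzzy set rising at a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Illustrative inputs on 0-10 scales: network exposure, encryption
    # strength, and password-policy strength for a hypothetical system.
    network_exposure, encryption, password = 7.0, 4.0, 6.0

    # One hypothetical Mamdani-style rule:
    #   IF network exposure is high AND encryption is weak
    #   AND the password policy is moderate, THEN security is low.
    high_exposure   = triangular(network_exposure, 5, 10, 11)
    weak_encryption = triangular(encryption, -1, 0, 6)
    moderate_pwd    = triangular(password, 3, 5, 8)

    # Fuzzy AND is taken as the minimum, so the rule fires only as strongly
    # as its weakest condition; every number in this chain can be surfaced
    # to, and questioned by, the user.
    rule_strength = min(high_exposure, weak_encryption, moderate_pwd)
    print(f"'security is low' fires with strength {rule_strength:.2f}")

Because each intermediate membership value is visible, a user who disagrees with, say, how heavily encryption is weighted on an internal network can see exactly where to push back.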

Leveraging the Interdisciplinary

In the long run, Hibshi hopes that her work will benefit not only security practitioners but also the field of computer science more broadly. “With my work I am trying to help the field of computer science to reach out to other disciplinary fields.” Too often, Hibshi points out, computer scientists are quick to jump into exploring a solution because the gulf between computer science and other fields can seem so wide. “We can find common ground, though,” she says. “And in finding that common ground, there is a wealth of resources available to help us build better systems that work as envisioned.”

By compelling researchers to question assumptions earlier in the project cycle, Hibshi believes, higher quality work will result. To illustrate this, she points to the term “security.” “What does it really mean when we say something is highly secure?” Hibshi asks. “We, as computer scientists, may enter into a project with an assumption of what high, medium, and low security means. This assumption may even be affirmed by our peers in our research group. But is that assumption true enough to base an entire system on?”

There are, she notes, many decades of research in the humanities and social sciences - particularly psychometrics - regarding how researchers can authoritatively define a quality scale, such as “security”, through rigorous testing and experimentation. “We test and question aspects of a system’s architecture or its implementation plan, but we need to also question the more fundamental components that lead to these higher level technical features,” Hibshi says. “It is my hope that my work and the approach that we are taking will help computer science to recognize this and work to become more interdisciplinary for this exact reason.”