Carnegie Mellon University

September 26, 2024

CyLab names 2024 Presidential Fellows

By Michael Cunningham

Each year, CyLab recognizes high-achieving Ph.D. students pursuing security- and/or privacy-related research with a CyLab Presidential Fellowship, which covers a full year of tuition.

This year’s CyLab Presidential Fellowship recipients are:

Quang Dao

Ph.D. Student, Computer Science Department
Advised by Aayush Jain, Assistant Professor, Computer Science Department, and Riad Wahby, Assistant Professor, Electrical and Computer Engineering

Quang’s research focuses on ensuring the long-term security of zero-knowledge proof systems (ZKPs) through two primary avenues: formal verification of ZKP implementations and post-quantum secure proof systems.

ZKP is a powerful cryptographic primitive that allows one party to prove arbitrary statements about their private data, without revealing anything beyond the validity of those statements. Recent advancements have made ZKPs vastly more succinct and efficient, leading to their widespread adoption on blockchains where they help secure billions of dollars and protect sensitive user details.
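
For readers unfamiliar with the primitive, the sketch below walks through the classic Schnorr protocol, one of the simplest interactive zero-knowledge proofs: the prover demonstrates knowledge of a secret without revealing it. It is a toy illustration with tiny, hard-coded parameters, not one of the succinct or post-quantum systems Dao studies:

```rust
// Toy Schnorr protocol: the prover convinces the verifier that it knows
// a secret x with y = g^x (mod p), without revealing x. All parameters
// here are tiny and hard-coded for illustration; real systems use large
// groups, fresh randomness, and non-interactive variants (Fiat-Shamir).

/// Modular exponentiation via square-and-multiply: base^exp mod m.
fn pow_mod(mut base: u128, mut exp: u128, m: u128) -> u128 {
    let mut acc = 1;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % m;
        }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

fn main() {
    // Public parameters: p = 2q + 1 with q prime, and g generating the
    // subgroup of order q.
    let (p, q, g): (u128, u128, u128) = (467, 233, 4);

    // Prover's secret x and public key y = g^x mod p.
    let x = 57;
    let y = pow_mod(g, x, p);

    // Commitment: the prover picks a nonce r and sends t = g^r mod p.
    let r = 101; // must be fresh and uniformly random in practice
    let t = pow_mod(g, r, p);

    // Challenge: the verifier replies with a random c.
    let c = 88;

    // Response: the prover sends s = r + c*x mod q; the nonce r masks x,
    // so the verifier learns nothing about the secret itself.
    let s = (r + c * x) % q;

    // Verification: accept iff g^s == t * y^c (mod p).
    let accepted = pow_mod(g, s, p) == t * pow_mod(y, c, p) % p;
    println!("proof accepted: {accepted}");
}
```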

However, the security of existing ZKP constructions falls short of what is needed for real-world deployments. Current implementations must contend with attack vectors not captured by standard security proofs, are often riddled with bugs, and face potential obsolescence with the advent of large-scale quantum computers.

“My research aims to develop post-quantum ZKPs that address both these theoretical and practical concerns,” said Dao. “I believe the key to resolving these issues lies in leveraging alternative post-quantum assumptions, such as Learning Parity with Noise, Multivariate Quadratic, or isogeny-based assumptions. By enhancing the long-term security and succinctness of ZKP systems, I hope to facilitate their widespread adoption, ultimately contributing to a more private and secure digital landscape.”
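
One of the assumptions Dao names, Learning Parity with Noise, can be made concrete with a short sketch: each sample hides a secret bit-vector behind a noisy inner product. The toy generator below is illustrative only, with dimensions far below cryptographic sizes, and is not drawn from Dao's work:

```rust
// Toy Learning Parity with Noise (LPN) samples. Each sample is a random
// bit-vector `a` together with b = <a, s> XOR e, where s is the secret
// and e is a rare noise bit. Without the noise, s falls to Gaussian
// elimination; with it, recovering s is believed hard even for quantum
// computers.

/// xorshift64: a tiny, non-cryptographic PRNG to keep the sketch
/// dependency-free.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let n = 16; // dimension of the secret (toy size)
    let mask = (1u64 << n) - 1;
    let mut rng = 0x9E3779B97F4A7C15u64;

    // Secret s in F_2^n, packed into the low n bits of a u64.
    let s = xorshift(&mut rng) & mask;

    for i in 0..4 {
        let a = xorshift(&mut rng) & mask;             // random a in F_2^n
        let inner = ((a & s).count_ones() & 1) as u64; // <a, s> over F_2
        let e = (xorshift(&mut rng) % 8 == 0) as u64;  // noise, rate ~1/8
        let b = inner ^ e;
        println!("sample {i}: a = {a:016b}, b = {b}");
    }
}
```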

Sara Mahdizadeh Shahri

Ph.D. Student, Electrical and Computer Engineering Department
Advised by Akshitha Sriraman, Assistant Professor, Electrical and Computer Engineering Department

Modern web services such as social media, online banking, and online healthcare require vast data centers with thousands of servers. Since these services are user-facing, they traditionally adopt a “performance-first” approach to quickly send responses to end users to enhance user experience, thus improving revenue.

However, building web systems using the “performance-first” approach can compromise privacy and equity, as service operators can improve performance by implicitly using user information to introduce request priorities, causing biased responses. Thus, there is a critical need to systematically study, identify, and reduce demographic bias in modern web systems.
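
As a purely hypothetical illustration of that mechanism (not code from Sara's research), the sketch below shows a “performance-first” scheduler that orders requests by an operator-derived score; if that score correlates with a demographic attribute, queue position, and hence latency, ends up correlated with it too:

```rust
// A hypothetical "performance-first" scheduler: requests carry a priority
// derived from user signals (here a made-up engagement score). Sorting by
// it shortens the queue for high-scoring users, so if the score correlates
// with a demographic attribute, tail latency does too; that gap is the
// kind of systemic bias at issue.

struct Request {
    user_group: &'static str, // demographic group (illustrative label)
    engagement_score: u32,    // operator-derived priority signal
}

fn main() {
    let mut queue = vec![
        Request { user_group: "A", engagement_score: 90 },
        Request { user_group: "B", engagement_score: 20 },
        Request { user_group: "A", engagement_score: 75 },
        Request { user_group: "B", engagement_score: 15 },
    ];

    // "Performance-first": serve the highest-scoring requests first.
    queue.sort_by(|a, b| b.engagement_score.cmp(&a.engagement_score));

    // Queue position is a proxy for latency; group B systematically waits.
    for (position, req) in queue.iter().enumerate() {
        println!("position {position}: group {}", req.user_group);
    }
}
```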

Sara’s research vision is to rethink the data center computing stack across hardware and software systems to enable demographic bias-free web systems that also preserve user privacy. Her proposed research makes a case for how web systems must consider demographic bias as a key systems concern (similar to performance or power).

“My goal is to identify demographic bias in modern web systems and to provide solutions to monitor and mitigate such biases,” said Mahdizadeh Shahri. “Specifically, I propose to investigate whether the ‘performance-first’ approach of modern web systems unintentionally introduces demographic bias, thereby discriminating against certain demographics. To the best of our knowledge, I am the first in the computer systems community to introduce demographic bias as a first-order systems concern. Thus, in terms of long-term impact, my work will identify and reduce bias, increase users' trust in service platforms, mitigate deep-rooted societal inequities, and improve the user base.”

Ian McCormack

Ph.D. Student, Software and Societal Systems Department
Advised by Jonathan Aldrich, Professor, Software and Societal Systems Department and Director, Software Engineering Ph.D. program; and Joshua Sunshine, Assistant Professor, Software and Societal Systems Department

Ian’s research addresses the challenges of foreign function interfaces in Rust — a programming language that is generating a lot of excitement because it is fast and low-level while still providing guarantees of memory safety and race freedom, both of which are critical to the security of modern applications. However, developers can choose to bypass Rust's safety restrictions by using a subset of “unsafe” features, and security vulnerabilities can result if “unsafe” code accesses memory in ways that violate Rust’s memory model.

Rust developers use the Miri dynamic analysis tool to check that “unsafe” Rust code respects the language’s memory model, and Ian is developing a tool to help them also find errors in foreign code.
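
The sketch below shows the kind of aliasing violation at issue. It is a minimal illustrative example, not code from Ian's tooling: the program compiles and appears to run correctly, yet Miri reports the final write as undefined behavior:

```rust
// A minimal example of "unsafe" code that violates Rust's aliasing rules:
// two live mutable references to the same memory, created through a raw
// pointer so the borrow checker cannot object. Running this under Miri
// (`cargo miri run`) reports undefined behavior, while a plain
// `cargo run` appears to work, which is exactly why dynamic checking
// matters.

fn main() {
    let mut value = 42i32;
    let ptr: *mut i32 = &mut value; // raw pointer: exempt from borrow checking

    unsafe {
        let a = &mut *ptr; // first mutable reference
        let b = &mut *ptr; // second mutable reference; under Rust's aliasing
                           // model, creating it invalidates `a`
        *b += 1;           // fine: `b` is the most recent borrow
        *a += 1;           // undefined behavior: `a` was invalidated, and
                           // Miri flags exactly this kind of access
    }
    println!("value = {value}");
}
```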

“Gradual verification of Rust’s aliasing model is the solution to this problem,” said McCormack. “After completing a prototype of our design, we will conduct a multi-stage empirical study which evaluates it against real-world instances of memory safety issues and functional correctness problems to demonstrate that gradual verification can provide a path toward static guarantees of security at all levels of abstraction. Through gradual verification, we will fix the correctness gap at the heart of the Rust ecosystem with a production-ready solution informed by developers' needs.”

Qi Pang

Ph.D. Student, Computer Science Department
Advised by Wenting Zheng, Assistant Professor, Computer Science Department, and Virginia Smith, Leonardo Associate Professor of Machine Learning

Qi’s research focuses on addressing the privacy and security issues in machine learning (ML) systems by revealing new attack vulnerabilities (ADI and Watermark Attack) and developing efficient and secure ML systems using cryptographic and ML techniques (MPCDiff and BOLT).

Despite the widespread deployment of ML systems in the real world, privacy remains a significant concern for workloads that use sensitive data: many inference attack studies have demonstrated that ML models can leak private training data. Differential privacy (DP) is a widely adopted technique to prevent such leakage by introducing randomness into the training algorithm, which inevitably compromises model performance. Qi’s research aims to address an important open problem here: making DP verifiable.
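
To make that privacy/utility trade-off concrete, the sketch below applies the textbook Laplace mechanism to a simple count query. It is a toy example with a hypothetical, non-cryptographic random number generator, not Pang's verification protocol:

```rust
// The Laplace mechanism, the textbook way DP randomizes a numeric query.
// A count query over individuals has sensitivity 1 (one person changes
// the answer by at most 1), so adding Laplace(sensitivity / epsilon)
// noise gives epsilon-DP: smaller epsilon means stronger privacy but a
// noisier, less useful answer.

/// xorshift64 mapped to [0, 1): a stand-in PRNG so the sketch has no
/// dependencies. Real DP deployments need cryptographically secure noise.
fn uniform(state: &mut u64) -> f64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    (*state >> 11) as f64 / (1u64 << 53) as f64
}

/// Sample Laplace(0, scale) by inverting its CDF.
fn laplace(scale: f64, state: &mut u64) -> f64 {
    let u = uniform(state) - 0.5; // uniform in [-0.5, 0.5)
    -scale * u.signum() * (1.0 - 2.0 * u.abs()).ln()
}

fn main() {
    let true_count = 1_000.0; // e.g., records matching a sensitive query
    let sensitivity = 1.0;
    let mut rng = 0xDEADBEEFDEADBEEFu64;

    // The privacy/utility trade-off: sweep epsilon and watch the error.
    for &epsilon in &[0.1, 1.0, 10.0] {
        let noisy = true_count + laplace(sensitivity / epsilon, &mut rng);
        println!("epsilon = {epsilon:>4}: noisy count = {noisy:.1}");
    }
}
```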

“An efficient and maliciously secure DP verification protocol that is applicable to both centralized and decentralized settings is essential,” said Pang. “In the upcoming academic year, I plan to address this problem by designing ZKP-friendly DP algorithms that can efficiently verify the execution of DP algorithms in the presence of untrusted parties. We believe that our protocol will have significant real-world impact, advancing towards practical and verifiable DP algorithms.”

Prasoon Patidar

Ph.D. Student, Software and Societal Systems Department
Advised by Yuvraj Agarwal, Associate Professor, Software and Societal Systems Department

Human Activity Recognition (HAR) and Human Activity Discovery (HAD) systems offer significant benefits for applications such as Active and Assisted Living (AAL), healthcare monitoring, security, surveillance, and tele-immersion. Traditional information-rich sensors, including cameras and audio devices, excel in detecting and discovering new human activities due to their detailed motion, spatial, and audio cues. However, continuous deployment of these sensors raises serious privacy concerns, as they can inadvertently capture sensitive personal information without explicit user consent, undermining user trust and compliance with privacy regulations.

Prasoon’s research focuses on developing an activity discovery system for unseen environments that optimizes the balance between model performance and user privacy protection. He aims to minimize the amount and sensitivity of data collected from users while maintaining high-quality activity recognition. To achieve this, he proposes a multi-faceted approach that leverages innovative training methods and prioritizes user control over data sharing.

“This approach distinguishes itself from existing methods by emphasizing user control, minimizing the collection of sensitive data, and gradually incorporating user input to improve model performance,” said Patidar. “By striking a balance between data utility and privacy protection, our proposed system aims to provide a more secure and personalized experience for users.”