Stanford Security Lunch
Winter 2022

January 05, 2022 SocialHEISTing: A Tale of Stolen Facebook Accounts

Speaker:  Jeremiah Onaolapo (University of Vermont)

Abstract:  In an enquiring mind, a question emerges: Do the demographic attributes of social accounts influence the behavior of cybercriminals when they break into those accounts? In this talk, I will discuss my investigations into the influence of the age and gender attributes of Facebook accounts on the activity of the cybercriminals who break into them. I will also describe the monitoring system at play and present the key findings that emerged from the study. For instance, cybercriminals who breached teen accounts wrote more messages and posts than those who broke into adult accounts. The implications of these findings, and more, will be discussed during the presentation.

Paper:  USENIX 2021

January 12, 2022 Using Honeypots to Fight Amplification DDoS

Speaker:  Johannes Krupp (CISPA)

Abstract:  Amplification DDoS attacks have plagued the Internet for a long time. Although quite simple from a technical point of view, these attacks can reach powerful attack bandwidths of several Tbps while also hiding the attacker behind a veil of IP spoofing. The latter in particular allows miscreants to go about their deeds without fear of prosecution. We therefore present three honeypot-based traceback mechanisms for amplification DDoS attacks, which enable practical identification of attack sources.
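The amplification effect the abstract refers to is commonly quantified as a bandwidth amplification factor (BAF): response bytes reflected at the victim per spoofed request byte. A minimal sketch of the idea (the per-protocol request/response sizes below are illustrative ballpark figures, not measurements from this talk):

```python
# Bandwidth amplification factor (BAF): response bytes aimed at the victim
# per request byte sent by the attacker with a spoofed source address.
def bandwidth_amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Ratio of reflected traffic to spoofed request traffic."""
    return response_bytes / request_bytes

# Illustrative sizes for protocols historically abused for amplification.
examples = {
    "DNS (open resolver, ANY query)": (64, 3_400),
    "NTP (monlist)": (8, 4_500),
    "Memcached (stats)": (15, 750_000),
}

for proto, (req, resp) in examples.items():
    baf = bandwidth_amplification_factor(req, resp)
    print(f"{proto}: ~{baf:,.0f}x amplification")
```

Because the request carries the victim's address as its (spoofed) source, the reflector, not the attacker, appears in the victim's logs, which is why traceback requires extra machinery such as the honeypots presented in this talk.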

January 19, 2022 Polls, Clickbait, and Commemorative $2 Bills: Problematic Political Advertising on News and Media Websites Around the 2020 U.S. Elections

Speaker:  Eric Zeng and Miranda Wei (UW)

Abstract:  Online advertising can be used to mislead, deceive, and manipulate Internet users, and political advertising is no exception. In this talk, we will present a measurement study of online advertising around the 2020 United States elections, with a focus on identifying dark patterns and other potentially problematic content in political advertising. We scraped ad content on 745 news and media websites from six geographic locations in the U.S. from September 2020 to January 2021, collecting 1.4 million ads. We performed a systematic qualitative analysis of political content in these ads, as well as a quantitative analysis of the distribution of political ads on different types of websites. Our findings reveal the widespread use of problematic tactics in political ads, often in the pursuit of fundraising or profit, such as bait-and-switch ads formatted as opinion polls to entice users to click, the use of political controversy by content farms for clickbait, and the more frequent occurrence of political ads on highly partisan news websites. We make policy recommendations for online political advertising, including greater scrutiny of political ads from organizations besides political campaigns, and comprehensive standards for political content across advertising platforms.

Paper:  IMC 2021

January 26, 2022 How WhatsApp's End-to-End Encrypted Backups Work

Speaker:  Slavik Krassovsky and Kevin Lewi (Meta)

Abstract:  WhatsApp provides end-to-end encryption (E2EE) by default so that messages can be seen only by the sender and recipient, and no one in between. And now, if people choose to enable E2EE backups, neither WhatsApp nor the backup service provider will be able to access their backup or their backup encryption key.
In this presentation, we will describe how our recently launched E2EE backups work, including the use of server-side HSMs that interact with clients using the OPAQUE (Eurocrypt 2018) password authentication protocol.
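As rough intuition for what E2EE backups must achieve, one can picture the backup key being wrapped under a password-derived key so that the stored blob alone reveals nothing. This is a deliberately simplified stand-in, not WhatsApp's actual design: OPAQUE never reveals the password or a password-equivalent to the server, and the HSM enforces guess limits, neither of which this sketch captures. The iteration count is illustrative.

```python
import hashlib
import hmac
import os

def wrap_backup_key(password: str, backup_key: bytes) -> dict:
    """Wrap a random 32-byte backup key under a password-derived key.
    Simplified illustration only; see the lead-in for what real OPAQUE adds."""
    salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    wrapped = bytes(a ^ b for a, b in zip(backup_key, kek))  # one-time-pad-style wrap
    tag = hmac.new(kek, wrapped, "sha256").digest()          # integrity check
    return {"salt": salt, "wrapped": wrapped, "tag": tag}

def unwrap_backup_key(password: str, blob: dict) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), blob["salt"], 100_000)
    expected = hmac.new(kek, blob["wrapped"], "sha256").digest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("wrong password or corrupted blob")
    return bytes(a ^ b for a, b in zip(blob["wrapped"], kek))

key = os.urandom(32)
blob = wrap_backup_key("correct horse battery staple", key)
assert unwrap_backup_key("correct horse battery staple", blob) == key
```

The point of the talk's HSM + OPAQUE construction is precisely to remove the weakness this sketch still has: an offline attacker holding the blob can grind passwords, whereas an HSM-mediated protocol can rate-limit and lock out guesses.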

February 02, 2022 Efficient Use of Cryptography for Authenticating Satellite Navigation Systems

Speaker:  Jason Anderson

Abstract:  Our current satellite navigation systems (e.g., GPS) remain vulnerable to spoofing threats, motivating their augmentation with cryptographic authentication. Given the data constraints of these systems and the way signal arrival times determine the information delivered, cryptographic authentication poses a significant challenge for existing and new systems. This talk discusses the field’s current strategies to tackle the challenge and recent contributions by Stanford’s GPS Lab, including proposed concepts that utilize Timed Efficient Stream Loss-tolerant Authentication (TESLA).
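TESLA is attractive in data-constrained broadcast settings because it authenticates with cheap symmetric primitives: the sender commits to a one-way hash chain, MACs each message with a chain key, and discloses that key only in a later interval, after receivers have buffered the message. A minimal sketch of the mechanism (the chain length, intervals, and message contents are invented for illustration):

```python
import hashlib
import hmac

def hash_chain(seed: bytes, length: int) -> list[bytes]:
    """Build a one-way chain so that keys[i] = H(keys[i+1]).
    keys[0] is the public commitment; keys[-1] is the secret seed."""
    keys = [seed]
    for _ in range(length):
        keys.append(hashlib.sha256(keys[-1]).digest())
    keys.reverse()
    return keys

chain = hash_chain(b"secret-seed", 4)
commitment = chain[0]  # distributed authentically ahead of time

# Sender, interval i: broadcast (msg, MAC under chain[i]); disclose chain[i] later.
msg = b"nav frame @ t=1"
mac = hmac.new(chain[2], msg, "sha256").digest()

# Receiver, once chain[2] is disclosed:
disclosed = chain[2]
# 1) authenticate the key itself by hashing forward to the commitment
assert hashlib.sha256(hashlib.sha256(disclosed).digest()).digest() == commitment
# 2) only then check the buffered message's MAC
assert hmac.compare_digest(hmac.new(disclosed, msg, "sha256").digest(), mac)
```

The security hinges on loose time synchronization: a receiver must be sure the key had not yet been disclosed when the MAC'd message arrived, which interacts directly with the signal-arrival-time constraints the abstract mentions.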

February 09, 2022 Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets

Speaker:  Alex Ozdemir

Abstract:  A zk-SNARK is a powerful cryptographic primitive that provides a succinct and efficiently checkable argument that the prover has a witness to a public NP statement, without revealing the witness. However, in their native form, zk-SNARKs only apply to a secret witness held by a single party. In practice, a collection of parties often need to prove a statement where the secret witness is distributed or shared among them.
We implement and experiment with collaborative zk-SNARKs: proofs over the secrets of multiple, mutually distrusting parties. We construct these by lifting conventional zk-SNARKs into secure protocols among N provers to jointly produce a single proof over the distributed witness. We optimize the proof generation algorithm in pairing-based zk-SNARKs so that algebraic techniques for multiparty computation (MPC) yield efficient proof generation protocols. For some zk-SNARKs, optimization is more challenging. This suggests MPC "friendliness" as an additional criterion for evaluating zk-SNARKs.
We implement 3 collaborative proofs and evaluate the concrete cost of proof generation. We find that over a 3 Gb/s link, security against a malicious minority of provers can be achieved with approximately the same runtime as a single prover. Security against N-1 malicious provers requires only a 2x slowdown. This efficiency is unusual: most computations slow down by several orders of magnitude when securely distributed. It is also significant: most server-side applications that can tolerate the cost of a single-prover proof should also be able to tolerate the cost of a collaborative proof.
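The starting point for a collaborative proof is a witness that no single prover holds in the clear. A minimal sketch of additive secret sharing over a prime field (the field and values are illustrative; a real system would use the SNARK's scalar field), showing the linear homomorphism that makes MPC-friendly provers efficient:

```python
import random

P = 2**61 - 1  # a Mersenne prime standing in for the SNARK's scalar field

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares mod P; any n-1 shares look random."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

w = 123456789  # the joint witness
shares = share(w, 3)
assert reconstruct(shares) == w

# Linear steps of proof generation can be evaluated share-by-share, with no
# interaction: additive shares are homomorphic under addition and scaling.
assert reconstruct([(5 * s) % P for s in shares]) == (5 * w) % P
```

This locality of linear operations is why, as the abstract notes, pairing-based provers whose heavy steps are multi-scalar multiplications and FFTs (both linear in the witness) distribute so cheaply, while SNARKs with more non-linear prover work are less "MPC-friendly."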

Paper:  ePrint

February 16, 2022 Envisioning Online Hate and Harassment as a Security Problem

Speaker:  Deepak Kumar

Abstract:  In this talk, I will discuss some of our recent papers that study human experiences with various forms of online hate and harassment, ranging from toxic content to cyber stalking. I will start with a taxonomy of major online hate and harassment attacks, demonstrate how many such attacks have direct security analogs, and discuss how this mapping can be used to inform potential defensive solutions for online hate and harassment. Through a study conducted for three years that surveyed Internet users around the world, I will next demonstrate that user experiences with online hate and harassment are highly varied, often based on a complex interplay between users' identities and previous digital experiences. Given these insights, I will finally discuss potential areas for personalizing users' online experiences, which range from simple personalized tuning of existing automated classifiers to privacy-preserving, personalized models of hate and harassment online. Such systems may be another tool in the toolbox for content moderators and platforms who seek to protect their users from unwanted digital abuse.

February 23, 2022 Supporting At-Risk Users: Recent Highlights

Speaker:  Sunny Consolvo (Google)

Abstract:  In this talk, I'll provide highlights of our recent work to support at-risk users. I'll give an overview of our research with people involved with political campaigns, who face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. I'll also present an early look at some systematization work we've been doing in collaboration with the University of Maryland. We've developed a framework for reasoning about at-risk users that includes 10 unifying contextual risk factors (such as marginalization or access to a sensitive resource) that augment or amplify digital-safety threats and their resulting harms, as well as the technical and non-technical practices that at-risk users adopt to attempt to protect themselves from digital-safety threats.

March 02, 2022 SiliFuzz: Fuzzing CPUs by proxy

Speaker:  Kostya Serebryany (Google)

Abstract:  CPUs are becoming more complex with every generation, at both the logical and the physical levels. This potentially leads to more logic bugs and electrical defects in CPUs being overlooked during testing, which causes data corruption or other undesirable effects when these CPUs are used in production. These ever-present problems may also have simply become more evident as more CPUs are operated and monitored by large cloud providers.
If the RTL ("source code") of a CPU were available, we could apply greybox fuzzing to the CPU model almost as we do to any other software [arXiv:2102.02308]. However, our targets are general-purpose x86_64 CPUs produced by third parties, for which we do not have the RTL design, so in our case CPU implementations are opaque. Moreover, we are more interested in electrical defects as opposed to logic bugs.
We present SiliFuzz, a work-in-progress system that finds CPU defects by fuzzing software proxies, like CPU simulators or disassemblers, and then executing the accumulated test inputs (known as the corpus) on actual CPUs on a large scale. The major difference between this work and traditional software fuzzing is that a software bug fixed once will be fixed for all installations of the software, while for CPU defects we have to test every individual core repeatedly over its lifetime due to wear and tear. In this paper we also analyze four groups of CPU defects that SiliFuzz has uncovered and describe patterns shared by other SiliFuzz findings.
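The execution model described above (run corpus inputs on real cores and flag divergence from a known-good end state) can be caricatured as follows. This is an illustrative simulation, not Google's implementation: `run_snippet` stands in for executing a machine-code test snippet and hashing the resulting register/memory state, and the `cpu_bug` flag merely simulates a silent data-corruption defect.

```python
import hashlib

def run_snippet(snippet: bytes, cpu_bug: bool = False) -> bytes:
    """Stand-in for executing a corpus snippet on one core and returning a
    checksum of its end state. `cpu_bug` simulates a defective core that
    silently flips a bit in the result."""
    state = hashlib.sha256(snippet).digest()
    if cpu_bug:
        state = bytes([state[0] ^ 1]) + state[1:]
    return state

# Corpus inputs accumulated by fuzzing software proxies (simulators, disassemblers).
corpus = [b"snippet-%d" % i for i in range(5)]

# Expected end states, recorded once on a known-healthy core.
expected = {s: run_snippet(s) for s in corpus}

# Screening pass: replay the corpus on a (here, simulated) defective core and
# flag any snippet whose end state diverges from the recorded one.
suspect = [s for s in corpus if run_snippet(s, cpu_bug=True) != expected[s]]
assert len(suspect) == len(corpus)
```

The comparison-based design reflects the point the abstract makes: because defects are per-core and can emerge from wear, the same corpus must be replayed on every core repeatedly over its lifetime, unlike a software bug that is fixed once for everyone.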

Preprint:  arXiv

March 09, 2022 Axiomatic Hardware-Software Contracts for Security

Speaker:  Nicholas Mosier and Hanna Lachnitt

Abstract:  Microarchitectural attacks are side/covert-channel attacks that enable leakage/communication as a direct result of hardware optimizations. Secure computation on modern hardware thus requires hardware-software contracts which include in their definition of software-visible state any microarchitectural state that can be exposed via microarchitectural attacks. Defining such contracts has become an active area of research. In this talk, we will present leakage containment models (LCMs)—novel axiomatic hardware-software contracts which support formally reasoning about the security guarantees of programs when they run on particular microarchitectures. Our first contribution is an axiomatic vocabulary for formally defining LCMs, derived from the established axiomatic vocabulary used to formalize processor memory consistency models. Using this vocabulary, we formalize microarchitectural leakage—focusing on leakage through hardware memory systems—so that it can be automatically detected in programs. To illustrate the efficacy of LCMs, we first demonstrate that our leakage definition faithfully captures a sampling of (transient and non-transient) microarchitectural attacks from the literature. Next, we develop a static analysis tool, called Clou, which automatically identifies microarchitectural vulnerabilities in programs given a specific LCM. We use Clou to search for Spectre gadgets in benchmark programs as well as real-world crypto-libraries (OpenSSL and Libsodium), finding new instances of leakage. To promote research on LCMs, we design the Subrosa toolkit for formally defining and automatically evaluating/comparing LCM specifications.

March 16, 2022 Are iPhones Really Better for Privacy?

Speaker:  Konrad Kollnig (Oxford)

Abstract:  While many studies have examined the privacy properties of the Android app ecosystem, comparatively little is known about iOS, despite the fact that iOS holds nearly a two-thirds market share in the U.S.
In this talk, I will present our large-scale study of 24k Android and iOS apps from 2020 along several dimensions relating to user privacy. I will put particular focus on the unique challenges faced in analyzing iOS apps and how to overcome them.

Paper:  PETS 2022