Stanford Security Lunch
Fall 2020

September 16, 2020 Using SMT Solvers to Automate Chosen Ciphertext Attacks

Speaker:  Max Zinkus (JHU)

Abstract:  In this work we investigate the problem of automating the development of adaptive chosen ciphertext attacks on systems that contain vulnerable format oracles. Unlike previous attempts, which simply automate the execution of known attacks, we consider a more challenging problem: to programmatically derive a novel attack strategy, given only a machine-readable description of the plaintext verification function and the malleability characteristics of the encryption scheme. We present a new set of algorithms that use SAT and SMT solvers to reason deeply over the design of the system, producing an automated attack strategy that can entirely decrypt protected messages. Developing our algorithms required us to adapt techniques from a diverse range of research fields, as well as to explore and develop new ones. We implement our algorithms using existing theory solvers. The result is a practical tool called Delphinium that succeeds against real-world and contrived format oracles. To our knowledge, this is the first work to automatically derive such complex chosen ciphertext attacks.

Paper:  ePrint 2019/958, USENIX Security 2020

Slides:  slides
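Delphinium derives attack strategies of this kind automatically; for a hand-worked point of comparison, the sketch below is the classic adaptive chosen-ciphertext attack against a padding (format) oracle, recovering the last plaintext byte of a CBC block by malleating the ciphertext. It is a self-contained toy (the "block cipher" is an XOR with a fixed key, which is insecure and purely illustrative), not the paper's tool:

```python
# Toy CBC setup: block cipher E_K(x) = x XOR K (insecure, illustration only).
BLOCK = 8
KEY = bytes(range(1, BLOCK + 1))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def decrypt_block(c):
    # D_K(c) for the toy cipher
    return xor(c, KEY)

def padding_oracle(iv, c):
    """Format oracle: reveals only whether the plaintext has valid padding."""
    p = xor(decrypt_block(c), iv)
    n = p[-1]
    return 1 <= n <= BLOCK and p.endswith(bytes([n]) * n)

def recover_last_byte(iv, c):
    """Adaptive CCA: malleate the IV until the oracle accepts."""
    for guess in range(256):
        forged_iv = bytearray(iv)
        forged_iv[-1] ^= guess ^ 0x01   # last byte becomes 0x01 iff guess is right
        if padding_oracle(bytes(forged_iv), c):
            return guess
    raise ValueError("oracle never accepted")

plaintext = b"ATTACKS!"                     # one 8-byte block
iv = bytes(range(16, 24))
ciphertext = xor(xor(plaintext, iv), KEY)   # toy CBC encryption of one block
recovered = recover_last_byte(iv, ciphertext)
```

The attack only ever consults the yes/no oracle, never the key; Delphinium's contribution is synthesizing this kind of query strategy from a machine-readable description of the format check rather than from human cryptanalysis.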

September 23, 2020 Scaling Verifiable Computation Using Efficient Set Accumulators

Speaker:  Alex Ozdemir (Stanford CS)

Abstract:  Verifiable outsourcing systems offload a large computation to a remote server, but require that the remote server provide a succinct proof, called a SNARK, that the server carried out the computation correctly. Real-world applications of this approach can be found in several blockchain systems that employ verifiable outsourcing to process a large number of transactions off-chain. This reduces the on-chain work to simply verifying a succinct proof that transaction processing was done correctly. In practice, verifiable outsourcing of state updates is done by updating the leaves of a Merkle tree, recomputing the resulting Merkle root, and proving using a SNARK that the state update was done correctly. In this work, we use a combination of existing and novel techniques to implement an RSA accumulator inside of a SNARK, and use it as a replacement for a Merkle tree. We specifically optimize the accumulator for compatibility with SNARKs. Our experiments show that the resulting system can dramatically reduce costs compared to existing approaches that use Merkle trees for committing to the current state. These results apply broadly to any system that needs to offload batches of state updates to a remote untrusted server.

Joint Work with:  Riad Wahby, Barry Whitehat, and Dan Boneh

Paper:  USENIX Security 2020
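The underlying primitive can be sketched in a few lines of plain Python (this is not the paper's SNARK-embedded construction, and the tiny modulus here is insecure; real deployments use a large RSA modulus of unknown factorization and hash set elements to primes):

```python
from math import prod

# Toy RSA accumulator: the whole set commits to a single group element
# A = g^(product of elements) mod N, and each membership witness is one
# group element regardless of set size.
N = 61 * 53          # toy modulus (insecure, illustration only)
g = 2

def accumulate(elems):
    return pow(g, prod(elems), N)

def witness(elems, x):
    """Membership witness for x: the accumulator of everything except x."""
    rest = prod(e for e in elems if e != x)
    return pow(g, rest, N)

def verify(acc, x, w):
    # Constant-size check: w^x == acc (mod N)
    return pow(w, x, N) == acc

S = [3, 5, 7, 11]    # elements, represented as primes
A = accumulate(S)
w7 = witness(S, 7)
```

The appeal for SNARKs is that a membership check is one modular exponentiation against a constant-size value, rather than a logarithmic-depth path of hash evaluations as in a Merkle tree.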

September 30, 2020 Understanding and Designing for Security and Privacy in Multi-User AR Interactions

Speaker:  Kimberly Ruth (Stanford CS)

Abstract:  Augmented reality (AR) technologies, which overlay digital content atop the user’s perception of the real world, are on the brink of pervasive commercial deployment. AR brings new modes of interaction, eliciting unique user expectations and behaviors, but its user-facing security and privacy challenges remain underexplored. In particular, my work identifies multi-user AR – which includes such compelling use cases as in-person collaborative tools, multiplayer gaming, and telepresence – as an area containing many unsolved problems in handling users’ malicious or careless behavior. In this talk, I discuss my work on security and privacy for multi-user AR interactions. I argue that AR’s physical-world integration drives novel user expectations, that navigating this physicality poses crucial challenges for supporting secure and private AR content sharing, and that these challenges pose fundamental design questions that must be addressed while the technology is still nascent. I show that the security and privacy properties of an AR interaction are tied to interaction semantics, motivating a flexible AR content sharing module that can support many application needs. I systematize security and functionality design goals for such a module, and I introduce ShareAR, an application-level module for the Microsoft HoloLens that meets these goals. ShareAR is publicly available and has been qualitatively tested by two undergraduate developers. In addition to directly supporting secure multi-user AR interactions, this work provides a scaffold for defining security and privacy for multi-user AR content sharing, taking steps toward allowing AR to securely reach its full potential.

October 07, 2020 Who is targeted by email-based phishing and malware? Measuring factors that differentiate risk

Speaker:  Camelia Simoiu (Stanford Computational Social Science)

Abstract:  As technologies to defend against phishing and malware often impose an additional financial and usability cost on users (such as security keys), a question remains as to who should adopt these heightened protections. We measure over 1.2 billion email-based phishing and malware attacks against Gmail users to understand what factors place a person at heightened risk of attack. We find that attack campaigns are typically short-lived and at first glance indiscriminately target users on a global scale. However, by modeling the distribution of targeted users, we find that a person's demographics, location, email usage patterns, and security posture all significantly influence the likelihood of attack. Our findings represent a first step towards empirically identifying the most at-risk users.

October 14, 2020 On the Origin of Scanning: The Impact of Location on Internet-Wide Scans

Speaker:  Gerry Wan (Stanford CS)

Abstract:  Fast IPv4 scanning has enabled researchers to answer a wealth of security and networking questions. Yet, despite widespread use, there has been little validation of the methodology's accuracy, including whether a single scan provides sufficient coverage. In this paper, we analyze how scan origin affects the results of Internet-wide scans by completing three HTTP, HTTPS, and SSH scans from seven geographically and topologically diverse networks. We find that individual origins miss an average 1.6-8.4% of HTTP, 1.5-4.6% of HTTPS, and 8.3-18.2% of SSH hosts. We analyze why origins see different hosts, and show how permanent and temporary blocking, packet loss, geographic biases, and transient outages affect scan results. We discuss the implications for scanning and provide recommendations for future studies.

Paper:  IMC '20
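The coverage-gap measurement reduces to simple set arithmetic over the responsive hosts each origin observes. A minimal sketch with hypothetical origin names and host sets (not the paper's data):

```python
# Hypothetical responsive-host sets observed by three scan origins.
origins = {
    "us-west": {"a", "b", "c", "d", "e"},
    "eu":      {"a", "b", "c", "e", "f"},
    "asia":    {"a", "b", "d", "f"},
}

# Approximate ground truth as the union over all origins, then compute
# each single origin's miss rate relative to it.
union = set().union(*origins.values())
miss = {name: 1 - len(seen) / len(union) for name, seen in origins.items()}
```

Each single-origin scan undercounts relative to the union, which is the same effect the paper quantifies at Internet scale for HTTP, HTTPS, and SSH.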

October 21, 2020 VoteAgain: A scalable coercion-resistant voting system

Speaker:  Wouter Lueks (EPFL)

Abstract:  The strongest threat model for voting systems considers coercion resistance: protection against coercers that force voters to modify their votes, or to abstain. Existing remote voting systems either do not provide this property; require expensive operations for tallying; or burden users with the need to store cryptographic key material and with the responsibility to deceive their coercers. We propose VoteAgain, a scalable voting scheme that relies on the revoting paradigm to provide coercion resistance. VoteAgain uses a novel deterministic ballot padding mechanism to ensure that coercers cannot see whether a vote has been replaced. This mechanism ensures tallying takes quasilinear time, making VoteAgain the first revoting scheme that can handle elections with millions of voters. We prove that VoteAgain provides ballot privacy, coercion resistance, and verifiability; and we demonstrate its scalability using a prototype implementation of its core cryptographic primitives.

Paper:  USENIX Security 2020
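A simplified illustration of the padding idea (not VoteAgain's exact mechanism): pad each voter's ballot count up to a deterministic bound, here the next power of two, with dummy ballots, so that the padded counts carry only coarse information about whether and how often a voter revoted, while keeping the total padding overhead small:

```python
# Simplified ballot-padding sketch (hypothetical; illustration only).
def next_pow2(n):
    """Smallest power of two that is >= n (and >= 1)."""
    return 1 if n <= 1 else 2 ** (n - 1).bit_length()

def pad_ballots(ballots):
    """Pad one voter's ballot list with dummies up to the next power of two."""
    target = next_pow2(len(ballots))
    return ballots + ["dummy"] * (target - len(ballots))
```

Because the bound is deterministic and the dummies are indistinguishable from real ballots under encryption, a coercer watching ballot counts cannot tell a padded single vote from a padded revote within the same bucket; the power-of-two buckets are also what keeps total work quasilinear.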

October 28, 2020 Everything is a Race and Nakamoto Always Wins

Speaker:  Ertem Nusret Tas (Stanford EE)

Abstract:  Nakamoto invented the longest chain protocol and claimed its security by analyzing the private double-spend attack, a race between the adversary and the honest nodes to grow a longer chain. But is it the worst attack? We answer the question in the affirmative for three classes of longest chain protocols, designed for different consensus models: 1) Nakamoto's original Proof-of-Work protocol; 2) Ouroboros and SnowWhite Proof-of-Stake protocols; 3) Chia Proof-of-Space protocol. As a consequence, exact characterization of the maximum tolerable adversary power is obtained for each protocol as a function of the average block time normalized by the network delay. The security analysis of these protocols is performed in a unified manner by a novel method of reducing all attacks to a race between the adversary and the honest nodes.

Paper:  ePrint 2020/601
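The flavor of the resulting characterization, stated here for the Proof-of-Work case in a simplified form: with honest block production rate lam_h, adversarial rate lam_a, and network delay delta, the honest chain grows at roughly lam_h / (1 + lam_h * delta) blocks per unit time, and the private double-spend attack, a pure race, succeeds exactly when the adversary's rate exceeds that growth rate. A small sketch of this threshold (a simplified model, not the paper's full analysis):

```python
# Simplified race threshold for a longest-chain protocol with network delay.
def honest_growth_rate(lam_h, delta):
    """Honest chain growth rate: delay slows growth by a factor 1 + lam_h*delta."""
    return lam_h / (1 + lam_h * delta)

def secure(lam_a, lam_h, delta):
    """The adversary loses the race iff its rate is below honest chain growth."""
    return lam_a < honest_growth_rate(lam_h, delta)
```

With zero delay the tolerable adversarial fraction approaches one half; as the block time shrinks relative to the network delay, honest growth stalls and the tolerance drops, which is the trade-off the paper characterizes exactly for each protocol class.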

November 18, 2020 Resolving the Availability-Finality Dilemma, or, How to Fix Ethereum 2.0's Consensus Protocol

Speaker:  Joachim Neu (Stanford)

Abstract:  The CAP theorem says that no consensus protocol can be live under dynamic participation and safe under network partitions. To resolve this availability-finality dilemma, we formalize a new class of flexible consensus protocols, ebb-and-flow protocols, which support a full dynamically available ledger in conjunction with a finalized prefix ledger. The finalized ledger falls behind the full ledger when the network partitions but catches up when the network heals. Gasper, the current candidate for Ethereum 2.0's beacon chain, aims to achieve this property, but we discovered an attack in the standard synchronous network model. We present a construction of provably secure ebb-and-flow protocols with optimal resilience, based on black-box composition of an off-the-shelf dynamically available and an off-the-shelf partially synchronous BFT protocol.

Paper:  arXiv:2009.04987

Blog post:  Decentralized Thoughts

December 02, 2020 Retrofitting Fine Grain Isolation In the Firefox Renderer

Speaker:  Tal Garfinkel (Stanford)

Abstract:  Until recently, Firefox and other major browsers have relied exclusively on coarse-grain isolation, such as privilege separation and site isolation, to contain exploits. While these protect the local host and prevent cross-site attacks, attackers are still frequently able to use vulnerabilities in the browser to gain total control of a victim's application. To mitigate this threat, we are migrating Firefox to an architecture that isolates parts of the renderer in lightweight sandboxes, thus offering meaningful protection to individual applications. Retrofitting isolation can be labor-intensive, very prone to security bugs, and requires critical attention to performance. To help, we developed RLBox, a framework that minimizes the burden of converting Firefox to securely and efficiently use untrusted code. To enable this, RLBox employs static information flow enforcement and lightweight dynamic checks, expressed directly in the C++ type system. RLBox uses software-based fault isolation (NaCl, WebAssembly) to provide efficient and scalable isolation. Performance overheads in Firefox are modest and transient, and have only a minor impact on page latency. Firefox has been shipping with RLBox since April 2020. While the initial target of our sandboxing efforts was a third-party font-shaping library, Graphite, Firefox developers and security engineers are now using RLBox to sandbox both third-party libraries and legacy Mozilla code in domains including media decoding, spell checking, and even speech synthesis.

Work in Collaboration with:  Shravan Narayan, Craig Disselkoen, Nathan Froyd, Eric Rahm, Sorin Lerner, Hovav Shacham, and Deian Stefan

Paper:  USENIX Security 2020
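RLBox enforces its discipline statically in the C++ type system; the Python analogue below conveys the core idea only. Data returned from sandboxed code is wrapped as "tainted" and cannot be used until it passes an explicit, caller-supplied validator at the trust boundary (the sandboxed function here is a hypothetical stand-in, not RLBox's API):

```python
# Python analogue of RLBox's tainted-data discipline (illustration only;
# the real system enforces this at compile time in C++).
class Tainted:
    def __init__(self, value):
        self._value = value          # raw value from the untrusted sandbox

    def copy_and_verify(self, validator):
        """Run a caller-supplied check; only verified data escapes the wrapper."""
        v = self._value
        if not validator(v):
            raise ValueError("sandbox returned data that failed verification")
        return v

def sandboxed_glyph_count(text):
    # Hypothetical stand-in for a call into a sandboxed library like Graphite.
    return Tainted(len(text))

n = sandboxed_glyph_count("hello").copy_and_verify(lambda v: 0 <= v < 4096)
```

The point of the design is that forgetting to validate becomes a type error rather than a latent memory-safety or logic bug: untrusted values simply cannot flow into trusted code unwrapped.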

December 09, 2020 Application Security via Formal Methods for Systems

Speaker:  Caroline Trippel (Stanford)

Abstract:  Modern computer systems can introduce security issues into seemingly secure programs. This is because hardware does not execute program instructions atomically. Rather, individual instructions get “cracked” into a collection of hardware events that take place on behalf of the user-facing instruction. Events corresponding to one instruction can interleave and interact with the events corresponding to another instruction in a variety of different ways during a program’s execution. Some of these interactions can translate to program-level security vulnerabilities.

Evaluating the security of a program requires searching the space of all possible ways in which the program could run on a particular hardware implementation for execution scenarios that feature security violations. Fortunately, the field of automated reasoning has developed tools for conducting such an analysis, subject to the user’s ability to provide specifications of the target 1) hardware system and 2) security property. Both specifications impact the soundness, completeness, and efficiency of the final security verification effort. In this talk, I will give an overview of some of our work on applying formal methods techniques to the problem of hardware security verification in modern processor designs. In particular, I will talk about the CheckMate approach for modeling hardware systems in a way that makes them amenable to efficient formal security analysis, and I will discuss how we are addressing the specification challenge above to, for example, automatically lift formal hardware specifications for security analysis directly from RTL.
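The style of search described above can be sketched in miniature (this is an illustration of the idea, not CheckMate itself, and the event names are hypothetical): each instruction cracks into an ordered list of hardware events, the analysis enumerates every interleaving that preserves each instruction's own event order, and it flags executions that violate a stated property.

```python
from itertools import combinations

def interleavings(a, b):
    """All merges of sequences a and b that preserve each sequence's order."""
    n, m = len(a), len(b)
    for idx in combinations(range(n + m), n):
        out, ai, bi, pos = [], 0, 0, set(idx)
        for i in range(n + m):
            if i in pos:
                out.append(a[ai]); ai += 1
            else:
                out.append(b[bi]); bi += 1
        yield out

# Hypothetical event streams: a victim load that speculatively fills the
# cache and is later squashed, and an attacker cache probe. The "violation"
# is a probe that lands after the fill but before the squash.
victim = ["spec_fill", "squash"]
attacker = ["probe"]

def violates(trace):
    return trace.index("spec_fill") < trace.index("probe") < trace.index("squash")

bad = [t for t in interleavings(victim, attacker) if violates(t)]
```

Real tools replace this brute-force enumeration with SMT-backed happens-before reasoning so the search scales past toy event counts, but the shape of the question, "does any legal ordering of these hardware events violate the property?", is the same.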