Stanford Security Lunch
Spring 2017

April 05, 2017 Internet of Things (IoT) Security

Speaker:  Brian Witten (Symantec Labs)

Abstract:  This talk will describe the security mistakes behind some of the headlines of recent IoT security debacles and current best practices for protecting IoT systems end-to-end. With that as background, it will survey leading-edge research in network security and machine learning applicable to IoT security and walk through a sampling of those research efforts.

April 12, 2017 Upcoming Anti-Crypto Measures in Europe

Speaker:  Riana Pfefferkorn

Abstract:  Riana Pfefferkorn is the Cryptography Fellow at the Center for Internet and Society (CIS) at Stanford Law School. She will discuss growing efforts in Europe to enhance law enforcement access to encrypted information. In June, the European Commission plans to propose several options for police access to encrypted data, including binding legislation and non-legislative measures such as "voluntary" agreements with companies. Germany, France, and the UK have led the current push to regulate encryption, calling for "balance" between law enforcement interests, privacy, and security. The details of the proposals have not been announced, meaning it remains to be seen whether they will be (1) technologically coherent and (2) responsive and proportionate to law enforcement's actual needs. Riana will present the responses by 12 EU countries to a questionnaire about which encryption technologies the authorities encounter as an impediment to their investigations, how they respond to that challenge, and what they need in order to improve their investigatory capabilities. CIS plans to produce a report that analyzes the questionnaire responses and offers public-policy and technical recommendations on the forthcoming EC proposals, and seeks CS students interested in assisting with the report.

April 19, 2017 Securing the perimeter at LinkedIn

Speaker:  David Freeman

Abstract:  As the world's largest professional network, LinkedIn is subject to a barrage of fraudulent and/or abusive activity aimed at its member-facing products. LinkedIn's Anti-Abuse Team is tasked with detecting bad activity and building proactive solutions to keep it from happening in the first place. In this talk we'll explore various types of abuse we see at LinkedIn and discuss some of the solutions we've built to defend against them. We'll focus on perimeter defense: keeping bad guys from creating fake accounts at scale, from taking over real members' accounts, and from using bots to steal large amounts of data.

Most member-facing abuse is perpetrated by fake accounts; in order to stop abuse we thus want to catch fake accounts as soon as possible after they are created. In the first part of the talk we will describe a machine-learning system we have built that detects clusters of fake accounts based on patterns observed in the account profile data alone, allowing us to catch the accounts before they do any damage. This system has found and removed more than one million fake accounts from LinkedIn.

Login defense presents a challenge because passwords are known to have many weaknesses, but no alternative authentication mechanism has been successfully rolled out at scale. In the second part of the talk we will present a statistical login-scoring model we have developed that strengthens password-based authentication without changing the user experience.

Finally, we discuss the problem of stopping unauthorized bot access. The main challenges here are that we need to decide whether to serve the data based on a single request, and we need to make this decision quickly so as not to impact user experience. We will give an overview of the infrastructure we have developed to score requests and our modeling approach that attempts to funnel bots into paths already covered by our fake account models.
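The cluster-then-flag idea behind the fake-account detector can be illustrated with a toy sketch. This is not LinkedIn's actual system; the signature function, field names, and threshold below are invented for illustration of the general shape: group new signups by a coarse profile-pattern key and flag clusters that are suspiciously large.

```python
from collections import defaultdict
import re

def profile_signature(profile):
    """Coarse pattern key: normalize fields so near-duplicate
    profiles (e.g. "John Smith 123" vs "John Smith 456") collide."""
    name = re.sub(r"\d+", "#", profile["name"].lower())
    company = profile["company"].lower()
    domain = profile["email"].split("@")[-1].lower()
    return (name, company, domain)

def flag_fake_clusters(profiles, min_cluster_size=3):
    """Return profiles that belong to suspiciously large clusters
    of near-identical signups."""
    clusters = defaultdict(list)
    for p in profiles:
        clusters[profile_signature(p)].append(p)
    return [p
            for members in clusters.values()
            if len(members) >= min_cluster_size
            for p in members]
```

A production system would score many profile features with a learned model rather than a hand-written key; the sketch only shows why clustering on profile data alone can catch batches of fake accounts before they act.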
Bio: David Freeman leads Anti-Abuse and Anomaly Detection Relevance at LinkedIn. His team of machine learning engineers builds statistical models to detect fraud, abuse, and unusual activity across the LinkedIn site and ecosystem. He has a Ph.D. in mathematics from UC Berkeley and did postdoctoral research in cryptography and security at CWI and Stanford University.

April 26, 2017 Functional Encryption: Deterministic to Randomized Functions from Simple Assumptions

Speaker:  David Wu

Abstract:  Functional encryption (FE) enables fine-grained control of sensitive data by allowing users to compute only those functions for which they have a key. The vast majority of work in FE has focused on deterministic functions, but for many applications, the functionality of interest is more naturally captured by a randomized function. Recently, Goyal, Jain, Koppula, and Sahai (TCC 2015) initiated a formal study of FE for randomized functionalities with security against malicious encrypters, and gave a selectively secure construction from indistinguishability obfuscation. To date, this is the only construction of FE for randomized functionalities in the public-key setting. This stands in stark contrast to FE for deterministic functions, which has been realized from a variety of assumptions. In this talk, I will describe a generic transformation that converts any general-purpose, public-key FE scheme for deterministic functionalities into one that supports randomized functionalities. Our transformation can be instantiated using very standard number-theoretic assumptions. Applying our transformation to existing FE constructions, we obtain several adaptively secure, public-key functional encryption schemes for randomized functionalities with security against malicious encrypters, from many different assumptions: concrete assumptions on multilinear maps, indistinguishability obfuscation, and, in the bounded-collusion setting, the existence of public-key encryption together with standard number-theoretic assumptions. Joint work with Shashank Agrawal. To appear at Eurocrypt 2017.
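As background, the standard correctness requirements (textbook definitions, not specific to this work) show what changes when moving from deterministic to randomized functionalities:

```latex
% Deterministic FE: decrypting an encryption of x with a key for f
% yields f(x) exactly.
\[
  \mathsf{Dec}\big(\mathsf{sk}_f,\ \mathsf{Enc}(\mathsf{mpk}, x)\big) = f(x).
\]
% Randomized FE: decryption must instead be *distributed* like a
% fresh evaluation of f on x,
\[
  \mathsf{Dec}\big(\mathsf{sk}_f,\ \mathsf{Enc}(\mathsf{mpk}, x)\big)
  \approx f(x; r), \qquad r \xleftarrow{\$} \{0,1\}^{\rho},
\]
% with independent randomness across ciphertexts; the malicious-encrypter
% setting additionally demands this even when the ciphertext is adversarial.
```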

May 03, 2017 Quantum Operating Systems

Speaker:  Henry Corrigan-Gibbs

Abstract:  If large-scale quantum computers become commonplace, the operating system will have to provide new abstractions to capture the power of this bizarre new hardware. In this talk, we consider the systems-level issues that quantum computers would raise, and we demonstrate that quantum machines would offer surprising speed-ups for a number of everyday systems tasks, such as fuzzing, unit testing, CPU scheduling, and web prefetching. This is joint work with David J. Wu and Dan Boneh and is to appear at HotOS.
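A one-line gloss on where such speed-ups would presumably come from (my annotation, not a claim from the abstract): tasks like fuzzing and unit testing reduce to searching for a marked input among many candidates, where Grover-style search gives a quadratic advantage:

```latex
% Grover search over N candidate inputs (e.g. fuzz inputs or
% schedules) finds a marked one in
\[
  O\!\left(\sqrt{N}\right)
\]
% oracle queries, versus $\Theta(N)$ for exhaustive classical search.
```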

May 10, 2017 Efficient Quantum Resistant Confidential Transactions for Bitcoin

Speaker:  Benedikt Bünz

Abstract:  In popular media, one of the most covered and controversial features of Bitcoin is its supposed anonymity. However, in recent years researchers, companies, and law enforcement agencies have shown significant weaknesses in the privacy properties of Bitcoin. One proposal to increase privacy is a new transaction type called Confidential Transactions. In this talk, we present an improved implementation of Confidential Transactions that makes use of novel zero-knowledge proofs that a committed number lies in a small range. We present both highly efficient proofs and a proposal to make Confidential Transactions resilient against silent inflation by quantum adversaries. Joint work with Dan Boneh.
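For context: Confidential Transactions hide amounts inside additively homomorphic commitments (Pedersen commitments in existing proposals), and the range proofs mentioned above certify that each committed amount is small enough that sums cannot wrap around. A toy sketch of the homomorphism, with deliberately insecure demo parameters:

```python
# Toy Pedersen commitment: C(v, r) = g^v * h^r (mod p).
# Parameters are for illustration only; real systems use an
# elliptic-curve group and generators with no known relation.
P = (1 << 127) - 1   # a Mersenne prime
G, H = 5, 7          # toy generators (insecure choice)

def commit(value, blinding):
    """Commit to `value` using random `blinding` to hide it."""
    return (pow(G, value, P) * pow(H, blinding, P)) % P

# Additive homomorphism: the product of two commitments commits to
# the sum of the values (and the sum of the blindings). This lets a
# verifier check that inputs minus outputs of a transaction balance
# to zero without ever learning the amounts, provided range proofs
# rule out negative or overflowing values.
```

The homomorphism is why range proofs matter: without them, a committed "amount" could be negative modulo the group order, silently inflating the currency.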

May 17, 2017 Ensemble Adversarial Training

Speaker:  Florian Tramer

Abstract:  Many machine learning models are vulnerable to adversarial examples, maliciously perturbed inputs designed to mislead the model. These inputs often transfer between models, thus enabling black-box attacks against deployed ML systems. Adversarial training explicitly includes adversarial examples at training time in order to increase a model’s robustness to attacks. Although adversarial training substantially increases robustness to different white-box attacks (i.e., with knowledge of the model’s parameters), we show that, surprisingly, adversarially trained models on MNIST and ImageNet remain vulnerable to the same attacks in a black-box setting, where adversarial examples are transferred from a separate model.

To explain this discrepancy between white-box and black-box robustness, we show that the defended model’s decision surface exhibits sharp curvature in a very small neighborhood of the data points, thus spuriously hindering white-box attacks based on first-order approximations of the model’s output or loss.

We harness this observation in two ways. First, we propose a simple yet powerful novel attack that first applies a small random perturbation to an input, before finding the optimal perturbation under a first-order approximation. Our attack outperforms prior “single-step” attacks on models trained with or without adversarial training. Second, we propose Ensemble Adversarial Training, an extension of adversarial training that additionally augments training data with perturbed inputs transferred from other fixed pre-trained models. On ImageNet and MNIST, ensemble adversarial training vastly increases robustness to black-box attacks. This is joint work with Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick McDaniel.
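The two-step attack described above (a small random step, then a single gradient-sign step) can be sketched on a toy differentiable model. The logistic model and all parameter values here are stand-ins for illustration, not the talk's actual setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def random_step_fgsm(x, y, w, eps=0.3, alpha=0.15, seed=None):
    """Random-step + gradient-sign attack on a toy logistic model
    p(y=1|x) = sigmoid(w @ x) with cross-entropy loss. The total
    perturbation stays inside the eps L-infinity ball around x."""
    rng = np.random.default_rng(seed)
    # 1) step off the data point in a random direction, escaping the
    #    sharply curved region where first-order attacks stall
    x_prime = x + alpha * np.sign(rng.standard_normal(x.shape))
    # 2) single gradient-sign step from the perturbed point;
    #    for this model, d(loss)/dx = (p - y) * w
    p = sigmoid(w @ x_prime)
    grad = (p - y) * w
    x_adv = x_prime + (eps - alpha) * np.sign(grad)
    # keep the combined perturbation inside the eps ball
    return np.clip(x_adv, x - eps, x + eps)
```

Since each coordinate moves by at most alpha + (eps - alpha) = eps, the clip is a safeguard; the key design point is taking the gradient at the randomly shifted point rather than at x itself.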

May 24, 2017 No Lunch (IEEE Symposium on Security and Privacy, a.k.a. "Oakland")

May 31, 2017 No Lunch, but come to Yan's Defense

Speaker:  Yan Michalevsky

June 07, 2017 Decentralizing Policy-Hiding Attribute-Based Encryption

Speaker:  Yan Michalevsky

Abstract:  Attribute-based encryption (ABE) enables limiting access to encrypted data to users who possess certain attributes. Different aspects of ABE have been studied, such as the multi-authority setting (MA-ABE) and policy hiding, meaning the access policy is unknown to unauthorized parties, as in predicate encryption (PE). However, no practical scheme so far has provided both properties, which are often desirable in real-world applications: supporting decentralization while hiding the access policy. We present the first practical decentralized attribute-based encryption scheme that is policy-hiding. Our construction is based on a decentralized inner-product predicate encryption scheme, introduced in this work, which hides the encryption policy. It yields an ABE scheme supporting conjunctions, disjunctions, and threshold policies that protects the access policy from parties not authorized to decrypt the content. Further, we address the issue of receiver privacy: by using our scheme in combination with vector commitments, we hide the overall set of attributes possessed by the receiver from individual authorities, revealing only the attribute that each authority controls. Finally, we propose randomizing-polynomial encodings that immunize the scheme against corrupt authorities.
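For context, the standard inner-product predicate underlying such schemes (textbook background, not this paper's full construction): a ciphertext is tied to an attribute vector and a key to a policy vector, and decryption succeeds exactly when

```latex
% Decryption condition in inner-product predicate encryption:
\[
  \langle \vec{x}, \vec{v} \rangle = 0 .
\]
% Conjunctions, disjunctions, and threshold policies are obtained by
% encoding attributes and policies as vectors whose inner product
% vanishes precisely when the policy is satisfied, e.g. via polynomial
% evaluation as in Katz--Sahai--Waters (Eurocrypt 2008).
```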