Robust Machine Learning

Rearchitecting Classification Frameworks For Increased Robustness

We enforce invariances found in objects to improve the robustness-accuracy trade-off of Deep Neural Networks (DNNs). We evaluate our approach with multiple adversarial defenses across domain-specific tasks (traffic sign classification and speaker recognition) as well as the general case of image classification.

Find our paper at https://arxiv.org/abs/1905.10900
See our code at https://github.com/byron123t/3d-adv-pc and https://github.com/byron123t/YOPO-You-Only-Propagate-Once

Figure: High-level design of our hierarchical architecture, which enforces robust features (invariances).
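
As a rough illustration of the design (our sketch, not the paper's actual implementation; the function and group names below are hypothetical), the architecture can be thought of as a two-stage classifier: a coarse model first predicts an invariance-based group, such as a traffic sign's shape, and a group-specific model then predicts the final label.

    def hierarchical_predict(coarse_model, fine_models, x):
        """Two-stage hierarchical classification sketch.

        coarse_model: maps input x to a group id capturing a robust
                      feature or invariance (e.g., a sign's shape).
        fine_models:  dict mapping each group id to a classifier that
                      distinguishes the labels within that group.
        """
        group = coarse_model(x)       # stage 1: predict the invariant group
        return fine_models[group](x)  # stage 2: classify within the group

The intent is that the coarse stage relies on robust features, so any misclassification in the fine stage stays confined within the predicted invariance group.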

Theoretical Analysis of Randomized Smoothing

Recent advances in machine learning (ML) algorithms, especially deep neural networks (DNNs), have demonstrated remarkable success (sometimes exceeding human-level performance) on several tasks, including face and speech recognition. However, ML algorithms are vulnerable to adversarial attacks, such as test-time, training-time, and backdoor attacks. In test-time attacks, an adversary crafts adversarial examples: perturbations, imperceptible to humans, that force a machine learning model to misclassify an input example when added to it. Adversarial examples are a concern when deploying ML algorithms in critical contexts, such as information security and autonomous driving. Researchers have responded with a plethora of defenses. One promising defense is randomized smoothing, in which a classifier's prediction is smoothed by adding random noise to the input example we wish to classify.

In this paper, we theoretically and empirically explore randomized smoothing. We investigate the effect of randomized smoothing on the feasible hypothesis space, and show that for some noise levels the set of feasible hypotheses shrinks under smoothing, giving one reason why natural accuracy drops after smoothing. To perform our analysis, we introduce a model for randomized smoothing that abstracts away specifics, such as the exact distribution of the noise. We complement our theoretical results with extensive experiments.
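
To make the smoothing operation concrete, here is a minimal sketch (ours, not code from the paper; the Gaussian noise, sigma, and num_samples are illustrative choices, since our model abstracts away the exact noise distribution): the smoothed classifier predicts the majority vote of the base classifier over noisy copies of the input.

    import numpy as np

    def smoothed_predict(base_classifier, x, sigma=0.25, num_samples=1000, num_classes=10):
        """Majority-vote prediction of a noise-smoothed classifier.

        base_classifier: maps a single input array to a class index.
        sigma: standard deviation of the isotropic Gaussian noise.
        """
        counts = np.zeros(num_classes, dtype=int)
        for _ in range(num_samples):
            noisy_x = x + sigma * np.random.randn(*x.shape)  # add random noise
            counts[base_classifier(noisy_x)] += 1
        return int(np.argmax(counts))  # class most often predicted under noise

Intuitively, a hypothesis survives smoothing only if its predictions are stable under the added noise, which is one way to see why the feasible hypothesis space can shrink as the noise level grows.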

DARPA Guaranteeing AI Robustness Against Deception (GARD)

Military deception has been a subject of interest to the Department of Defense and its predecessors since before the dawn of the modern computer: Sun Tzu's The Art of War, which dates to before the Common Era, discusses deception as a military tactic. The Allies famously carried out large-scale military deception during World War II in the form of Operation Bodyguard. With the use of dummy military vehicles and simulated radio traffic, Operation Bodyguard succeeded in diverting the attention of Axis forces away from Normandy, the site of the D-Day landings, toward Pas-de-Calais, the department of France closest to Britain.

Modern reconnaissance techniques yield an abundance of data, far too much for human analysts to sort through in its entirety unaided. Computerized image recognition systems have the potential to reduce the burden on human intelligence analysts, but in their current state such systems are vulnerable to acts of deception similar to those undertaken by the Allies during Operation Bodyguard.

In this DARPA-funded project, we leverage our knowledge of adversarial attacks against computerized image recognition systems, and of defenses against such attacks, to design and experiment with systems that are more robust to military deception. With robust computerized image recognition systems at their disposal, human intelligence analysts can operate under far less strain.
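
For readers unfamiliar with such attacks, the fast gradient sign method (FGSM) of Goodfellow et al. is one of the simplest ways to craft an adversarial example; the PyTorch sketch below is our own illustration, not the project's actual attack suite.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, label, epsilon=0.03):
        """Craft an adversarial example with the fast gradient sign method."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Step in the direction that increases the loss, then clip to valid pixels.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()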

Fairness in ML

Section 1 of Amendment XIV to the Constitution of the United States of America, adopted during the Reconstruction era, contains the following text:

No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

This text has formed the foundation of some of the most consequential court decisions in American history, including Brown v. Board of Education, Roe v. Wade, and Bush v. Gore. These cases all called into question whether decisions made by humans follow the spirit of Amendment XIV.

As machine learning systems have become more advanced, they have also become more prevalent in society. Accordingly, machine learning systems have been the subject of increasing controversy, including recidivism-prediction software accused of racial discrimination and social media platforms accused of shadow banning. Therein lies our interest in fair machine learning: How do we verify that machine learning systems follow the spirit of Amendment XIV? How do we design machine learning systems that follow the spirit of Amendment XIV?
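
Verification questions like these are often operationalized through statistical fairness criteria. As one illustrative example (our sketch; demographic parity is only one of several competing criteria), the function below measures the gap in positive-prediction rates between two groups:

    import numpy as np

    def demographic_parity_gap(predictions, groups):
        """Absolute gap in positive-prediction rates across two groups.

        predictions: binary model outputs (0/1), one per individual.
        groups:      group membership (0/1), one per individual.
        A gap near 0 means the model treats the groups alike under
        the demographic parity criterion.
        """
        predictions = np.asarray(predictions)
        groups = np.asarray(groups)
        rate_a = predictions[groups == 0].mean()
        rate_b = predictions[groups == 1].mean()
        return abs(rate_a - rate_b)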