B.S. Electrical Engineering and Computer Sciences, 2017
University of California, Berkeley
The vast proliferation of face recognition systems has brought forth a myriad of privacy concerns. To mitigate these concerns, so-called face obfuscation systems [1] have been developed. Face obfuscation systems generate imperceptible perturbations that, when added to an image, cause the face recognition system to misidentify the user.
In this project, we show that these face obfuscation systems are demographically aware; this is a direct consequence of how face recognition systems are constructed. Furthermore, we show that faces from demographic groups that appear less frequently in the training set suffer reduced privacy utility.
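To make the perturbation-generation step concrete, the sketch below is a generic PGD-style attack in PyTorch, not the Face-Off implementation; the face-embedding network `embed` and the input tensor `image` are assumed to be provided. It searches for a small perturbation that pushes the image's embedding away from its original identity under an L-infinity budget.

    import torch

    def obfuscate(embed, image, eps=0.03, step=0.005, iters=40):
        # Embedding of the unmodified image: the identity we want to move away from.
        target = embed(image).detach()
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(iters):
            # Loss decreases as the perturbed embedding moves farther from the identity.
            loss = -torch.norm(embed(image + delta) - target)
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()   # gradient step that increases the distance
                delta.clamp_(-eps, eps)             # keep the perturbation imperceptibly small
                delta.grad.zero_()
        return (image + delta).detach()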
Randomized smoothing [2] is a defense against adversarial inputs to machine learning models. In this project, we study randomized smoothing through a statistical-learning-theoretic lens. We show that, under certain conditions, a model to which randomized smoothing is applied yields lower natural test accuracy than a model of the same architecture without smoothing. Extensive experiments are provided to support our conclusions.
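For reference, the smoothed classifier of [2] predicts the class the base classifier is most likely to output when the input is corrupted by Gaussian noise. The sketch below (assuming a base classifier `f` that maps a batch of inputs to class logits) estimates that prediction by Monte Carlo sampling; it omits the abstention and certification machinery of the original paper.

    import torch

    def smoothed_predict(f, x, sigma=0.25, n_samples=100):
        # Draw noisy copies of x and let the base classifier vote on each one.
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        votes = f(noisy).argmax(dim=1)
        # The smoothed classifier g(x) returns the most frequent class,
        # i.e. a Monte Carlo estimate of argmax_c P(f(x + noise) = c).
        return torch.bincount(votes).argmax().item()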
Among the most common characterizations of machine learning fairness are sample complexities for multicalibration error convergence. Multicalibration error is a notion of group fairness: for a given population group, it measures the discrepancy between a predicted value and the average of the realized labels among samples in the group that received that prediction.
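One common way to formalize this quantity (the exact definition used in the project may differ) is, for a predictor f, a group S, and a prediction level v:

    \mathrm{err}(f, S, v) \;=\; \bigl|\, \mathbb{E}[\, y \mid x \in S,\ f(x) = v \,] - v \,\bigr|

A predictor is then said to be alpha-multicalibrated with respect to a collection of groups if this gap is at most alpha for every group in the collection and every level v.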
Multicalibration error is advantageous to study because it constitutes a comprehensive framework for group fairness. As a measure of how well calibrated a model's predictions are, multicalibration error can be decoupled from training constraints and prediction accuracy.
In this project, we derive a bound on the number of samples an entire dataset must contain to guarantee that every demographic or protected group in the dataset achieves convergence of its calibration error. Our work shows that sample complexities for multicalibration error can be obtained by reparametrizing bounds for empirical risk minimization learning.
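As an illustration of the quantity whose convergence the bound governs, the sketch below (hypothetical names, NumPy) estimates the empirical calibration error of a single group by bucketing predictions into bins and reporting the worst gap between the mean prediction and the mean realized label within any bin.

    import numpy as np

    def group_calibration_error(preds, labels, group_mask, n_bins=10):
        # Restrict to members of the group.
        p, y = preds[group_mask], labels[group_mask]
        # Bucket predictions in [0, 1] into equal-width bins.
        bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
        worst_gap = 0.0
        for b in range(n_bins):
            in_bin = bins == b
            if not in_bin.any():
                continue
            # Gap between the average prediction and the average realized label in this bin.
            gap = abs(p[in_bin].mean() - y[in_bin].mean())
            worst_gap = max(worst_gap, gap)
        return worst_gap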
hrosenberg _AT_ ece.wisc.edu
KM6BJQ
[1] Chandrasekaran, Varun, Chuhan Gao, Brian Tang, Kassem Fawaz, Somesh Jha, and Suman Banerjee. “Face-Off: Adversarial Face Obfuscation.” Proceedings on Privacy Enhancing Technologies. 2021.
[2] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. “Certified Adversarial Robustness via Randomized Smoothing.” International Conference on Machine Learning. 2019.