B.S. Computer Science, 2020
University of Wisconsin - Madison
I’m a first-year Computer Science PhD student at the University of Michigan and a member of the Real-Time Computing Lab, advised by Prof. Kang Shin. My research focuses on the security, privacy, and ethics of machine learning systems. As an undergraduate, I was part of the Security and Privacy Research Group (Wisconsin-Privacy), working with Prof. Kassem Fawaz and Varun Chandrasekaran on machine learning security and privacy. I have previously interned as a software engineer at Roblox Corporation and Optum. Despite the challenges, I am having a wonderful time as a PhD student. For more information, you can visit my personal webpage.
As social robots become increasingly prevalent in day-to-day environments, they will need to participate in conversations and appropriately manage the information shared with them. However, little is known about how robots might discern the sensitivity of information, which has major implications for human-robot trust. To address this issue, we designed CONFIDANT, a privacy controller for conversational social robots that uses contextual metadata (e.g., sentiment, relationships, topic) from conversations to model privacy boundaries.
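To give a flavor of how such a controller might work, here is a minimal sketch in Python. The metadata fields mirror the examples above, but the weights, trust levels, and threshold are illustrative assumptions, not CONFIDANT's actual model:

```python
from dataclasses import dataclass

# Hypothetical contextual metadata extracted from one utterance; the fields
# mirror the blurb's examples (sentiment, relationships, topic), but the
# scoring below is an illustrative assumption, not CONFIDANT's code.
@dataclass
class UtteranceContext:
    sentiment: float   # -1.0 (negative) .. 1.0 (positive)
    relationship: str  # e.g., "stranger", "acquaintance", "family"
    topic: str         # e.g., "health", "finance", "weather"

# Assumed per-topic sensitivity and per-relationship trust weights.
TOPIC_WEIGHT = {"health": 0.9, "finance": 0.8, "weather": 0.1}
RELATIONSHIP_TRUST = {"family": 0.9, "acquaintance": 0.5, "stranger": 0.1}

def sensitivity_score(ctx: UtteranceContext) -> float:
    """Combine contextual metadata into a 0..1 sensitivity score."""
    topic = TOPIC_WEIGHT.get(ctx.topic, 0.5)
    distrust = 1.0 - RELATIONSHIP_TRUST.get(ctx.relationship, 0.5)
    negativity = max(0.0, -ctx.sentiment) * 0.2  # negative tone raises sensitivity
    return min(1.0, 0.6 * topic + 0.3 * distrust + negativity)

def should_share(ctx: UtteranceContext, threshold: float = 0.5) -> bool:
    """Privacy boundary: withhold information scored above the threshold."""
    return sensitivity_score(ctx) < threshold

# Example: health information requested by a stranger stays private.
ctx = UtteranceContext(sentiment=-0.4, relationship="stranger", topic="health")
print(sensitivity_score(ctx), should_share(ctx))
```

A real controller would learn such boundaries from data rather than hard-coding weights; the sketch only shows how heterogeneous metadata can collapse into a single share/withhold decision.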
The proliferation of automated facial recognition in various commercial and government sectors has caused significant privacy concerns for individuals. A recent and popular approach to address these privacy concerns is to employ evasion attacks against facial recognition systems. The key to these approaches is the generation of perturbations using a pre-trained metric embedding network. This dependence of face obfuscation on metric embedding networks, which are known to be unfair in the context of facial recognition, surfaces the question of demographic fairness: are there demographic disparities in the performance of face obfuscation systems?
Find our paper at https://arxiv.org/abs/2108.02707
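As a rough illustration of how such a disparity could be measured, here is an assumed evaluation harness in Python. It is not the paper's code: the fixed random projection stands in for a pre-trained metric embedding network, and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 128))  # toy stand-in for an embedding network

def embed(x):
    """Toy metric embedding: fixed linear map + L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def obfuscation_success(originals, perturbed, threshold=1.0):
    """Fraction of faces whose perturbed embedding moves past the matching
    threshold, i.e., successfully evades recognition."""
    d = np.linalg.norm(embed(originals) - embed(perturbed), axis=1)
    return float((d > threshold).mean())

# Synthetic "faces" for two demographic groups; a real audit would use
# obfuscated photos from a demographically labeled face dataset.
group_a = rng.standard_normal((100, 1024))
group_b = rng.standard_normal((100, 1024))
success_a = obfuscation_success(group_a, group_a + 0.30 * rng.standard_normal((100, 1024)))
success_b = obfuscation_success(group_b, group_b + 0.15 * rng.standard_normal((100, 1024)))
print(f"group A: {success_a:.2f}, group B: {success_b:.2f}, gap: {abs(success_a - success_b):.2f}")
```

A persistent gap between per-group success rates under matched perturbation budgets would indicate exactly the kind of demographic disparity the question above asks about.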
Advances in deep learning have made face recognition technologies pervasive. While useful to social media platforms and users, this technology poses significant privacy threats. Coupled with the abundant information service providers have about users, it lets them associate users with social interactions, visited places, activities, and preferences, some of which the user may not want to share. We propose Face-Off, a privacy-preserving framework that introduces strategic perturbations to the user’s face to prevent it from being correctly recognized.
Find our paper at https://arxiv.org/abs/2003.08861
See our web application at https://faceoff.xyz
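For intuition, here is a generic PGD-style sketch of the underlying idea: perturb the face, within a small L-infinity budget, to maximize its embedding distance from the original so a recognizer no longer matches it. This is a simplified stand-in, not Face-Off's actual attack; `model` can be any differentiable embedding network:

```python
import torch
import torch.nn.functional as F

def obfuscate(model, face, eps=8 / 255, alpha=1 / 255, steps=40):
    """Perturb `face` (values in [0, 1]) to push its embedding away from
    the original, staying within an L-infinity ball of radius eps."""
    with torch.no_grad():
        target = model(face)  # embedding to move away from
    adv = face.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Cosine distance between current and original embeddings.
        dist = 1 - F.cosine_similarity(model(adv), target).mean()
        dist.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()         # ascend on distance
            adv = face + (adv - face).clamp(-eps, eps)  # project to budget
            adv = adv.clamp(0, 1).detach()
    return adv

# Toy usage with a random "embedding network" (illustrative only):
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.Flatten(), torch.nn.LazyLinear(128)
)
img = torch.rand(1, 3, 32, 32)
adv_img = obfuscate(net, img)
print((adv_img - img).abs().max())  # stays within the eps budget
```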
While generalizing well over natural inputs, neural networks are vulnerable to adversarial inputs. Existing defenses against adversarial inputs have largely been detached from the real world, and they also come at a cost to accuracy. We find that applying invariants to the classification task makes robustness and accuracy feasible together. Two questions follow: how do we extract and model these invariances, and how do we design a classification paradigm that leverages them to improve the robustness-accuracy trade-off? The remainder of the paper discusses solutions to both questions.
Find our paper at https://arxiv.org/abs/1905.10900
See our code at https://github.com/byron123t/3d-adv-pc and https://github.com/byron123t/YOPO-You-Only-Propagate-Once
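As a concrete illustration of the second question, here is a toy sketch of invariance-constrained classification in Python. The shape invariant and class mapping are illustrative assumptions (road signs grouped by shape), not the paper's exact architecture:

```python
import numpy as np

# Hypothetical invariant: each fine-grained class has a coarse, more
# attack-robust attribute. The mapping below is illustrative.
CLASS_TO_SHAPE = {"stop": "octagon", "yield": "triangle",
                  "speed_30": "circle", "speed_50": "circle"}
CLASSES = list(CLASS_TO_SHAPE)

def constrained_predict(class_logits, predicted_shape):
    """Keep only classes consistent with the robustly-predicted invariant,
    then pick the highest-scoring survivor; abstain if none survive."""
    mask = np.array([CLASS_TO_SHAPE[c] == predicted_shape for c in CLASSES])
    if not mask.any():
        return None  # inconsistent input: abstain / flag as adversarial
    masked = np.where(mask, class_logits, -np.inf)
    return CLASSES[int(np.argmax(masked))]

# Example: the pixel classifier is fooled toward "speed_50", but a shape
# detector (assumed harder to attack) still sees an octagon, so the
# invariant overrules the fooled prediction.
logits = np.array([1.2, 0.3, 0.8, 2.5])  # fooled: favors "speed_50"
print(constrained_predict(logits, "octagon"))  # -> "stop"
```

The design intent: even when pixel-level logits are adversarially manipulated, the final prediction must stay consistent with an attribute that is much harder to attack, or the classifier abstains.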
Face-Off: Adversarial Face Obfuscation
Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, Somesh Jha, Suman Banerjee. PETS, 2021.
I’m always open to collaborating with people on research or startup ideas.
Please contact me via byron123t [at] gmail [dot] com or bjaytang [at] umich [dot] edu. I’m also happy to meet in person if you happen to be on campus or in the Chicago area.
You can also add me on LinkedIn: https://www.linkedin.com/in/btang12/