Colloquia Series: Yevgeniy Vorobeychik
https://engineering.wustl.edu/Events/Pages/CSE-Colloquia-Series-Yevgeniy-Vorobeychik-.aspx
March 5, 2018, 11:30 a.m., Jolley Hall, Room 309<div><p style="text-align: center;"><strong>Adversarial AI for Social Good</strong></p><p><strong>Abstract</strong></p><p>A major emerging research topic at the interdisciplinary interface of security, privacy, and AI is how to develop and deploy AI techniques in adversarial environments. My research in this area combines techniques from optimization, game theory, network science, machine learning, and systems security to address fundamental problems such as how to learn classifiers that are robust to evasion attacks, how to protect elections from malicious influence, and how to share high-quality data while minimizing privacy risk. In this talk, I will discuss our research on the latter two problems.<br/></p><p style="text-align: justify;">My research on protecting elections aims to develop effective methods for preserving the integrity of election results in the face of malicious attacks. I will describe a general framework for reasoning about protection decisions (such as auditing) using a game theoretic approach that combines large-scale optimization with social choice theory. I will then briefly mention several recent efforts at modeling how elections may be subverted through social influence (such as spreading fake news over social media), and how we can limit the diffusion of such malicious influence.</p><p style="text-align: justify;">Next, I will describe how we approach two problems in the context of privacy-preserving data sharing: sharing structured data (such as portions of the EMR that include demographics and diagnostic codes, as well as genomic summary statistics) and sanitizing clinical notes.
I will present a framework for modeling privacy risk from an adversarial perspective, and a game theoretic approach for balancing the utility of shared data against privacy risk. Finally, I will describe an approach for reasoning about the privacy risk associated with sanitizing clinical notes using machine learning techniques, and present a novel algorithm for this task that has provable guarantees about privacy risk (given our threat model) and preserves most of the original content.</p><p><strong>Biography</strong></p><p>Yevgeniy Vorobeychik is an Assistant Professor of Computer Science and Biomedical Informatics at Vanderbilt University. He received a Ph.D. (2008) in Computer Science and Engineering from the University of Michigan. His work focuses on game theoretic modeling of security and privacy, adversarial machine learning, algorithmic and behavioral game theory, optimization, and network science. Dr. Vorobeychik received an NSF CAREER award in 2017 and was an invited IJCAI-16 early career spotlight speaker. He is one of the team leads for the NIH-funded Center for Genetic Privacy and Identity in Community Settings at Vanderbilt, and directs the Computational Economics Research Lab. Dr. Vorobeychik was nominated for the 2008 ACM Doctoral Dissertation Award and received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award.</p></div>