Biography
I am an Assistant Professor of Computer Science and Engineering at the University of South Florida, where I lead the Quality of AI (QoAI) lab. My research focuses on holistically improving next-generation Artificial Intelligence models (such as LLMs) along multiple qualitative facets, such as utility, fairness, robustness, and security. I am especially interested in developing data-centric learning approaches that can benefit a large class of AI models across domains such as Natural Language Processing and Computer Vision. I also work to translate these ideas to real-world models deployed in production systems (such as social media AIs), which are closed-source by default. Prior to joining USF, I received my PhD in Computer Science from UC Davis.
Email: anshumanc[at]usf[dot]edu
Find me on: Github || Google Scholar || Twitter/X
I am looking to recruit exceptional PhD students for Spring/Fall 2025 to work on research relating to (1) data-centric methods for Generative AI, (2) multimodal AI, (3) AI/ML safety, and (4) AI/ML for mitigating social media harms. Please reach out by email and apply here if interested.
RECENT NEWS:
9/24/2024: Our submission (PQN) to the Prosocial Ranking Challenge organized by UC Berkeley (Berkeley Center for Human-Compatible AI) was selected as one of the top four rankers in the competition!
8/12/2024: Our paper "Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts" was accepted for publication in PNAS Nexus [pdf]
3/13/2024: Our paper "Revisiting Zero-Shot Abstractive Summarization in the Era of Large Language Models from the Perspective of Position Bias" was accepted at the NAACL 2024 Main Conference as an oral talk [pdf] [code]
1/16/2024: Our paper "What Data Benefits My Classifier? Enhancing Model Performance and Interpretability Through Influence-Based Data Selection" was accepted as an oral talk (top 1.2% of papers) at ICLR 2024 [pdf] [code]
12/1/2023: Invited to attend a research convening on LLMs and social media interventions at Google NYC, organized by Google/Jigsaw and the Prosocial Design Network [blog post]
11/29/2023: Our paper "Towards Fair Video Summarization" has been accepted for publication in Transactions on Machine Learning Research (TMLR) [pdf] [code]
9/21/2023: Our paper "Auditing YouTube’s Recommendation System for Ideologically Congenial, Extreme, and Problematic Recommendations" was accepted for publication in PNAS [pdf] [code]
1/20/2023: Our paper "Robust Fair Clustering: A Novel Fairness Attack and Defense Framework" was accepted at the ICLR 2023 Main Conference [pdf] [code] [poster]
PAST NEWS:
10/15/2022: I was invited to give a seminar talk on Robust Clustering at Brandeis University (Waltham, MA) by Prof. Hongfu Liu
9/14/2022: Our paper "On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses" was accepted at the NeurIPS 2022 Main Conference [pdf] [supplementary] [code] [poster]
6/15/2022: Our paper on "Updatable Clustering via Patches" was accepted as a poster at the Updatable Machine Learning (UpML) workshop @ ICML 2022 [pdf]
1/25/2022: Invited by Kyle Polich as a guest on the Data Skeptic podcast [Spotify link]
10/21/2021: Our paper on "Fair Clustering Using Antidote Data" was accepted at the AFCR workshop @ NeurIPS 2021 for a contributed talk (top 6 papers), and is published in PMLR [pdf] [supplementary]
10/21/2021: Our paper on "Fairness Degrading Adversarial Attacks Against Clustering" was accepted at the AFCR workshop @ NeurIPS 2021 as a poster [pdf] [supplementary] [code]
10/11/2021: Invited keynote at MTD workshop @ ACM CCS 2021 for our paper on MTD for adversarial machine learning (talk by Prof. Mohapatra) [pdf] [supplementary]
9/17/2021: Our survey paper on fairness in clustering was accepted for publication in IEEE Access [pdf]
1/10/2020: Our paper "Suspicion-Free Adversarial Attacks Against Clustering Algorithms" was accepted at the AAAI 2020 Main Technical Conference [pdf] [code] [poster]
1/25/2019: Invited to talk about our research on adversarial attacks against clustering at Uber AI in SF by Ryan Turner
12/1/2018: Presented our paper on the Tensorflex framework at the MLOSS workshop @ NeurIPS 2018 [pdf] [code]