Allen AI Young Investigators
About the Program
Allen AI Young Investigators is a postdoctoral program with a distinctive structure: it enables you to balance collaborative work on an AI2 project with the pursuit of your own independent research agenda.
- Duration: 1-3 years
- Start Date: Flexible (rolling application with no deadline)
- Candidates: Within one year of completing their PhD, or already holding a PhD
- Dedicated AI2 mentor: Mentorship in research, grant writing, and more
- 50% collaborative work on an AI2 project
- 50% work on your own projects
- Generous travel budget
- AI2 provides support for obtaining a visa through its immigration attorney and pays the necessary expenses
- Access to AI2’s data, AWS infrastructure, and other resources as needed
- No grant writing, teaching, or administrative responsibilities
- $100K research funding from AI2 after completion (based on proposal)
YI Program Alumni
Lucy Lu Wang
Lucy Lu Wang will join the University of Washington Information School as an Assistant Professor in Fall 2022. She completed her PhD in Biomedical Informatics and Medical Education at the University of Washington, and in 2019, joined the Semantic Scholar research team at AI2 as a Young Investigator. Her research interests include biomedical NLP, health informatics, and document understanding and accessibility. Her work on supplement interaction detection, gender trends in academic publishing, COVID-19 datasets, and document understanding has been featured in publications such as GeekWire, Boing Boing, Axios, VentureBeat, and The New York Times. Select publications while at AI2: SUPP.AI: finding evidence for supplement-drug interactions (ACL 2020; demo), S2ORC: The Semantic Scholar Open Research Corpus (ACL 2020), CORD-19: The Covid-19 Open Research Dataset (NLP-COVID at ACL 2020), SciA11y: Converting Scientific Papers to Accessible HTML (ASSETS 2021; demo), MS^2: Multi-Document Summarization of Medical Studies (EMNLP 2021).
Daniel Khashabi
Daniel Khashabi is an assistant professor in the Department of Computer Science and the Center for Language and Speech Processing (CLSP) at Johns Hopkins University. His work focuses on the computational foundations of intelligent behavior in various media of communication, such as natural language. This involves developing formalisms for natural language processing (NLP) systems that can understand and reason with (and about) an uncertain world, while remaining general enough to handle a broad space of contexts. Daniel obtained his PhD from the University of Pennsylvania in 2019 and a BSc from Amirkabir University of Technology (Tehran Polytechnic) in 2012. Before joining Johns Hopkins, Daniel spent two wonderful years as a postdoctoral fellow at the Allen Institute for AI (2019-2022), where he worked closely with Yejin Choi, Hanna Hajishirzi, and Ashish Sabharwal.
Maarten Sap
Maarten Sap is an assistant professor in Carnegie Mellon University’s Language Technologies Institute (CMU LTI). His research focuses on making NLP systems socially intelligent and on understanding social inequality and bias in language. He has presented his work at top-tier NLP and AI conferences, receiving a best short paper nomination at ACL 2019 and a best paper award at the WeCNLP 2020 summit. His research has been covered in The New York Times, Forbes, Fortune, and Vox. Additionally, he and his team won the inaugural 2017 Amazon Alexa Prize, a social chatbot competition. Before joining CMU, he was a postdoc/young investigator at the Allen Institute for AI (AI2) on project MOSAIC. He received his PhD from the University of Washington’s Paul G. Allen School of Computer Science & Engineering, where he was advised by Yejin Choi and Noah Smith. In the past, he interned at AI2 working on social commonsense reasoning, and at Microsoft Research working on deep learning models for understanding human cognition.
Swabha Swayamdipta
Swabha Swayamdipta is an Assistant Professor of Computer Science and a Gabilan Assistant Professor at the University of Southern California. Her research interests are in natural language processing and machine learning, with a primary focus on estimating dataset quality, semi-automatically collecting impactful data, and evaluating how human biases affect dataset construction and model decisions. At USC, Swabha leads the Data, Interpretability, Language and Learning (DILL) Lab. She received her PhD from Carnegie Mellon University, followed by a postdoc at the Allen Institute for AI. Her work has received outstanding paper awards at ICML 2022 and NeurIPS 2021, and an honorable mention for best paper at ACL 2020.
Katherine (Katie) Keith
Katherine (Katie) Keith will join Williams College as an Assistant Professor of Computer Science in Fall 2022. From 2021 to 2022, she was a Postdoctoral Young Investigator with the Semantic Scholar team at the Allen Institute for Artificial Intelligence. She graduated with a PhD from the College of Information and Computer Sciences at the University of Massachusetts Amherst, where she was advised by Brendan O’Connor. Her research interests are at the intersection of natural language processing, computational social science, and causal inference. She was a co-organizer of the First Workshop on Causal Inference and NLP, a host of the podcast “Diaries of Social Data Research,” a co-organizer of the “NLP+CSS 201 Online Tutorial Series,” and a recipient of a Bloomberg Data Science PhD fellowship.
Ana Marasović
Ana Marasović obtained her PhD in computational linguistics from Heidelberg University in 2019. She was a Young Investigator at AI2 and a postdoctoral researcher at the University of Washington. As of fall 2022, she is an Assistant Professor in the School of Computing at the University of Utah. Her primary research interests are at the confluence of natural language processing (NLP), multimodality, and explainable artificial intelligence (XAI), with a focus on building trustworthy and intuitive language technology. Her publications at AI2 include: “Few-Shot Self-Rationalization with Natural Language Prompts”, Findings of NAACL 2022, “Teach Me to Explain: A Review of Datasets for Explainable NLP”, NeurIPS 2021, “Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs”, Findings of EMNLP 2020, and “Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks”, ACL 2020.
Vered Shwartz
Vered Shwartz completed her PhD in computer science at Bar-Ilan University in 2019. She was a Young Investigator at AI2 and a postdoctoral researcher at the University of Washington. As of fall 2021, she is an Assistant Professor in the Computer Science department at the University of British Columbia. Her research interests include computational semantics and pragmatics, commonsense reasoning, and multiword expressions. Her publications at AI2 include: Unsupervised Commonsense Question Answering with Self-Talk (EMNLP 2020), Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning (EMNLP 2020), “You are grounded!”: Latent Name Artifacts in Pre-trained Language Models (EMNLP 2020), and Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision (AAAI 2021).
Antoine Bosselut
Antoine Bosselut completed his PhD at the University of Washington. He was a Young Investigator at the Allen Institute for AI and a postdoctoral researcher at Stanford University. He joined the faculty of the École Polytechnique Fédérale de Lausanne (EPFL) in Fall 2021. His research interests are in the integration of human knowledge with modern NLP systems, with a focus on commonsense representation and reasoning, and neuro-symbolic modeling. His publications at AI2 include COMET: Commonsense Transformers for Automatic Knowledge Graph Construction (ACL 2019), Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-shot Commonsense Question Answering (AAAI 2021), COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs (AAAI 2021), and “I’m Not Mad”: Commonsense Implications of Negation and Contradiction (NAACL 2021).
Jesse Dodge
Jesse Dodge completed his PhD at Carnegie Mellon’s Language Technologies Institute in 2020, spending much of it as a visiting student at the University of Washington. He joined the AllenNLP team at AI2 as a Young Investigator in October 2020 and became a full-time Research Scientist at AI2 in July 2021. He is interested in efficiency and reproducibility in natural language processing and machine learning. Selected publications with AI2 include Green AI (CACM 2020), Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus (EMNLP 2021), and Competency Problems: On Finding and Removing Artifacts in Language Data (EMNLP 2021). His work has been featured in The New York Times, Wired, and Slate.
Christopher Clark
Christopher Clark completed his PhD at the University of Washington in 2020, supervised by Luke Zettlemoyer. He joined AI2’s PRIOR computer vision team as a Young Investigator shortly after, and became a full-time research scientist on the team in 2021. As a Young Investigator, he worked on Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text (EMNLP 2021) and Webly Supervised Concept Expansion for General Purpose Vision Models (in submission).
Gabriel Stanovsky
Gabriel Stanovsky completed his PhD in computer science at Bar-Ilan University in Israel in 2018. He was then part of the Young Investigator program at AI2 and a postdoctoral researcher at the University of Washington. In the fall of 2020, he joined the faculty of the School of Computer Science and Engineering at the Hebrew University of Jerusalem. His research interests revolve around intermediate semantic representations and their application to real-world tasks such as machine translation, question answering, and information extraction. His publications with AI2 include DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs, Evaluating Gender Bias in Machine Translation, and Gender Trends in Computer Science Authorship.
Rachel Rudinger
Rachel Rudinger completed her PhD in computer science at Johns Hopkins University in 2019. Her research interests include natural language understanding, computational semantics, commonsense reasoning, and fairness in NLP. As of summer 2020, she is an assistant professor in the Computer Science department at the University of Maryland, College Park. Her papers with AI2 include Thinking Like a Skeptic: Defeasible Inference in Natural Language and “You are grounded!”: Latent Name Artifacts in Pre-trained Language Models.
Mark Yatskar
Mark Yatskar completed his PhD at the University of Washington in 2017. He will join the faculty of the University of Pennsylvania in the fall of 2020. His research interests are at the intersection of natural language processing and computer vision, as well as fairness in computing. His publications with AI2 include Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations (ICCV 2019), QuAC: Question Answering in Context (EMNLP 2018), and Neural Motifs: Scene Graph Parsing with Global Context (CVPR 2018).
Roy Schwartz
Roy Schwartz completed his PhD at the School of Computer Science and Engineering at the Hebrew University of Jerusalem in 2016. He is currently a postdoctoral researcher at the University of Washington and a Young Investigator at AI2. Roy will continue working with AI2 until 2020, at which time he will join the faculty of the School of Computer Science and Engineering at the Hebrew University of Jerusalem. His research interests are semantic representation and syntactic parsing. His publications with AI2 include SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference (EMNLP 2018), Rational Recurrences (EMNLP 2018), and LSTMs Exploit Linguistic Attributes of Data (ACL RepL4NLP Workshop 2018).
Mohit Iyyer
Mohit Iyyer completed his PhD at the University of Maryland, College Park. He is currently an assistant professor in computer science at UMass Amherst. His research interests lie broadly in natural language processing and machine learning. His publications with AI2 include Deep Contextualized Word Representations (NAACL 2018) and QuAC: Question Answering in Context (EMNLP 2018).