Videos

See AI2's full collection of videos on our YouTube channel.
  • Natural Language Understanding for Events and Participants in Text

    May 6, 2019  |  Rachel Rudinger
    Consider the difference between the two sentences “Pat didn’t remember to water the plants” and “Pat didn’t remember that she had watered the plants.” Fluent English speakers recognize that the former sentence implies that Pat did not water the plants, while the latter sentence implies she did. This distinction is crucial to understanding the meaning of these sentences, yet it is one that automated natural language processing (NLP) systems struggle to make. In this talk, I will discuss my work on developing state-of-the-art NLP models that make essential inferences about events (e.g., a “watering” event) and participants (e.g., “Pat” and “the plants”) in natural language sentences. In particular, I will focus on two supervised NLP tasks that serve as core tests of language understanding: Event Factuality Prediction and Semantic Proto-Role Labeling. I will also discuss my work on unsupervised acquisition of common-sense knowledge from large natural language text corpora, and the concomitant challenge of detecting problematic social biases in NLP models trained on such data.
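    As a rough illustration of the Event Factuality Prediction task, the opening example can be cast as scored sentence-event pairs. The pair format and the [-3, 3] factuality scale below are assumptions following common practice in this line of work, not details taken from the talk:

        # Illustrative sketch: given a sentence and an event trigger, predict
        # a factuality score; +3.0 = the event certainly happened,
        # -3.0 = it certainly did not.
        examples = [
            ("Pat didn't remember to water the plants", "water", -3.0),
            ("Pat didn't remember that she had watered the plants", "water", +3.0),
        ]
        for sentence, trigger, score in examples:
            print(f"{trigger!r} in {sentence!r}: factuality {score:+.1f}")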
  • Understanding question comprehension, and generalizability

    May 2, 2019  |  Pramod Kaushik Mudrakarta
    We present two results: 1) Analysis techniques for state-of-the-art question-answering models on images, tables, and passages of text. We show how these networks often ignore important question terms. Leveraging such non-robust behavior, we present a variety of adversarial examples derived by perturbing the questions. Our strongest attacks drop the accuracy of a visual question answering model from 61.1% to 19%, and that of a tabular question answering model from 33.5% to 3.3%. We demonstrate that attributions can augment standard measures of accuracy and empower investigation of model performance. When a model is accurate but for the wrong reasons, attributions can surface erroneous logic in the model that indicates inadequacies in the data. 2) Parameter-efficient transfer learning: We present a novel method for re-purposing pretrained neural networks to new tasks while keeping most of the weights intact. The basic approach is to learn a model patch, a small set of parameters that specializes to each task, instead of fine-tuning the last layer or the entire network. For instance, we show that learning a set of scales and biases is sufficient to convert a pretrained network to perform well on qualitatively different problems (e.g., converting a Single Shot MultiBox Detection (SSD) model into a 1000-class image classification model while reusing 98% of the parameters of the SSD feature extractor). Our approach allows both simultaneous (multi-task) and sequential transfer learning. In several multi-task learning problems, despite using far fewer parameters than traditional logits-only fine-tuning, we match single-task performance.
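    The "model patch" idea lends itself to a compact sketch. The following is a minimal, assumed PyTorch/torchvision version (model choice, names, and hyperparameters are illustrative, not the authors' code): freeze the pretrained weights and train only the normalization scales and biases plus a new task head.

        import torch
        from torchvision import models

        num_classes = 10  # assumed size of the target task

        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

        for p in model.parameters():      # freeze the pretrained backbone
            p.requires_grad = False
        for m in model.modules():         # ...except the scales and biases
            if isinstance(m, torch.nn.BatchNorm2d):
                for p in m.parameters():
                    p.requires_grad = True
        for p in model.fc.parameters():   # ...and the new task head
            p.requires_grad = True

        optimizer = torch.optim.SGD(
            [p for p in model.parameters() if p.requires_grad], lr=1e-3)

    Only the normalization affine parameters and the head receive gradients, so the bulk of the network is shared across tasks, mirroring the 98% reuse figure quoted above.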
  • Augmenting Collective Human Work

    May 1, 2019  |  Jonathan Bragg
    A longstanding goal of artificial intelligence (AI) is to develop agents that can assist or augment humans. Such agents have the potential to transform society. While AI agents can excel at well-defined tasks like games, far less progress has been made on real-world problems that involve interacting with humans, where data collection is costly, objectives are ill-defined, and safety is critical. In this talk, I will discuss how we can design agents to improve the efficiency and success of collective human work ("crowdsourcing") by leveraging techniques from AI, reinforcement learning, and optimization, together with structured contributions from human workers and task designers. This approach improves on current methods for designing such agents, which typically require large amounts of manual experimentation and costly data collection to get right. I will demonstrate the effectiveness of this approach on several crowdsourcing management problems, and also share recent work on how agents can make shared decisions with humans to achieve better outcomes.
  • Both Sides Now: Generating and Understanding Visually-Grounded Language

    April 24, 2019  |  Peter Anderson
    From robots and cars to virtual assistants and voice-controlled drones, computing devices are increasingly expected to communicate naturally with people and to understand the visual context in which they operate. In this talk, I will present our latest work on generating and comprehending visually grounded language. First, we will discuss the challenging task of describing an image (image captioning). I will introduce captioning models that leverage multiple data sources, including object detection datasets and unaligned text corpora, in order to learn about the long tail of visual concepts found in the real world. To support and encourage further efforts in this area, I will present the 'nocaps' benchmark for novel object captioning. In the second part of the talk, I will describe our recent work on developing agents that follow natural language instructions in reconstructed 3D environments, using the R2R dataset for vision-and-language navigation.
  • User-centric Recommendation Models and Systems

    April 12, 2019  |  Longqi Yang
    The daily actions and decisions of people are increasingly shaped by recommendation systems, from e-commerce and content platforms to education and wellness applications. These systems selectively suggest and present information items based on their characterization of user preferences. However, existing preference modeling methods are limited due to the incomplete and biased nature of the behavioral data that inform the models. As a result, recommendations can be narrow, skewed, homogeneous, and divergent from users’ aspirations. In this talk, I will introduce user-centric recommendation models and systems that address the incompleteness and bias of existing methods and increase systems’ utility for individuals. Specifically, I will present my work addressing two key research challenges: (1) inferring debiased preferences from biased behavioral data using counterfactual reasoning, and (2) eliciting unobservable current and aspirational preferences from users through interactive machine learning. I will conclude with a discussion of field experiments that demonstrate how user-centric systems can promote healthier diets and better content choices.
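    The counterfactual debiasing mentioned above is often realized with inverse propensity scoring (IPS), a standard technique in this space; the sketch below is generic, with all names assumed, and is not necessarily the speaker's method. Each observed interaction is reweighted by the inverse of the probability that it was observed at all:

        import numpy as np

        def ips_loss_estimate(losses, propensities, n_total):
            """Estimate the average loss over ALL (user, item) pairs from
            observed pairs only; propensities[i] is the probability that
            pair i was observed (e.g. higher for popular items)."""
            return np.sum(losses / propensities) / n_total

        # toy example: two observed pairs out of a 4x5 preference matrix
        losses = np.array([0.2, 0.9])
        propensities = np.array([0.5, 0.05])  # popular vs. niche item
        print(ips_loss_estimate(losses, propensities, n_total=20))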
  • Learning Challenges in Natural Language Processing

    April 8, 2019  |  Swabha Swayamdipta
    As the availability of data for language learning grows, the role of linguistic structure is under scrutiny. At the same time, it is imperative to closely inspect patterns in data that might present loopholes for models to obtain high performance on benchmarks. In a two-part talk, I will address each of these challenges. First, I will introduce the paradigm of scaffolded learning. Scaffolds enable us to leverage inductive biases from one structural source for prediction of a different, but related, structure, using only as much supervision as is necessary. We show that the resulting representations achieve improved performance across a range of tasks, indicating that linguistic structure remains beneficial even with powerful deep learning architectures. In the second part of the talk, I will showcase some of the properties exhibited by NLP models in large data regimes. Even as these models report excellent performance, sometimes claimed to beat humans, a closer look reveals that their predictions are not the result of complex reasoning, and the task is not being completed in a generalizable way. Instead, this success can be largely attributed to the exploitation of annotation artifacts in the datasets. I will discuss some questions our findings raise, as well as directions for future work.
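    At its core, a scaffold is a weighted multi-task objective over a shared encoder. A minimal sketch under assumed names and sizes follows; this is the general recipe, not the speaker's implementation:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ScaffoldedTagger(nn.Module):
            """Shared encoder with two heads: the main task and an
            auxiliary 'scaffold' task (e.g. syntactic labels) that is
            only used during training."""
            def __init__(self, vocab=10000, dim=128, n_main=20, n_scaffold=50):
                super().__init__()
                self.embed = nn.Embedding(vocab, dim)
                self.encoder = nn.LSTM(dim, dim, batch_first=True)
                self.main_head = nn.Linear(dim, n_main)
                self.scaffold_head = nn.Linear(dim, n_scaffold)

            def forward(self, tokens):
                h, _ = self.encoder(self.embed(tokens))  # shared representation
                return self.main_head(h), self.scaffold_head(h)

        def scaffolded_loss(main_logits, scaf_logits, main_y, scaf_y, delta=0.1):
            # Joint objective: main task plus a down-weighted scaffold task,
            # injecting the structural inductive bias into the shared encoder.
            return (F.cross_entropy(main_logits.transpose(1, 2), main_y)
                    + delta * F.cross_entropy(scaf_logits.transpose(1, 2), scaf_y))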
  • Learning Structured Information from Language

    April 3, 2019  |  Arzoo Katiyar
    Extracting information from text entails deriving a structured, and typically domain-specific, representation of entities and relations from unstructured text. The information thus extracted can potentially facilitate applications such as question answering, information retrieval, conversational dialogue, and opinion analysis. However, extracting information from text in a structured form is difficult: it requires understanding words and the relations that exist between them in the context of both the current sentence and the document as a whole. In this talk, I will present my research on neural models that learn structured output representations comprising the textual mentions of entities and the relations between them within a sentence. In particular, I will propose novel output representations that allow the neural models to learn better dependencies in the output structure and achieve state-of-the-art performance on both tasks, as well as on their nested variations. I will also describe our recent work on expanding the input context beyond sentences by incorporating coreference resolution to learn entity-level rather than mention-level representations, and show that these representations can capture information about the saliency of entities in the document.
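    To make the structured output concrete, here is an illustrative encoding of mention-level entity and relation annotations for one sentence; the span format, example, and label inventory are assumptions for this sketch:

        sentence = ["John", "Wilkes", "Booth", "shot", "Lincoln", "in", "Washington"]
        entities = {              # token span (start, end) -> entity type
            (0, 3): "PER",
            (4, 5): "PER",
            (6, 7): "LOC",
        }
        relations = [             # (head span, tail span, relation type)
            ((0, 3), (4, 5), "Kill"),
            ((4, 5), (6, 7), "Located-In"),
        ]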
  • Natural Language Understanding with Indirect Supervision

    March 29, 2019  |  Daniel Khashabi
    Can we solve language understanding tasks without relying on task-specific annotated data? This could be important in scenarios where the inputs range across various domains and it is expensive to create annotated data. I will discuss two language understanding problems (Question Answering and Entity Typing) which have traditionally relied on direct supervision. For these problems, I present two recent works where exploiting properties of the underlying representations and indirect signals helps us move beyond traditional paradigms. As a result, we observe better generalization across domains.
  • Spatiotemporal understanding of people using scenes, objects and poses

    March 11, 2019  |  Rohit Girdhar
    Humans are arguably among the most important entities that AI systems need to understand to be useful and ubiquitous. From autonomous cars observing pedestrians to assistive robots helping the elderly, a large part of this understanding is focused on recognizing human actions and, potentially, their intentions. Humans themselves are quite good at this task: we can look at a person and explain in great detail every action they are doing. Moreover, we can reason about those actions over time, and even predict what actions they may intend to do in the future. Computer vision algorithms, on the other hand, have lagged far behind on this task. In my research, I’ve explored techniques to improve human action understanding from visual input, with the key insight being that human actions depend on the state of the environment (parameterized by the scene and the objects in it) in addition to the actors' own state (parameterized by their pose). In this talk, I will cover three key ways I exploit this dependence: (1) learning to aggregate this contextual information to recognize human actions; (2) predicting a prior on human actions by learning about the affordances of the scenes and objects they interact with; and finally, (3) moving towards longer-term temporal reasoning through a new dataset and benchmark tasks.
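    One generic way to aggregate contextual information of this kind is attention-weighted pooling of scene and object region features, conditioned on the person being recognized. A minimal sketch with assumed names and shapes, not the speaker's exact model:

        import torch

        def attention_pool(person_feat, region_feats, W):
            """person_feat: (d,), region_feats: (n, d), W: (d, d) learned.
            Returns a (d,) context vector pooled over the n regions."""
            scores = region_feats @ (W @ person_feat)   # (n,) relevance to person
            alpha = torch.softmax(scores, dim=0)        # attention weights
            return (alpha.unsqueeze(1) * region_feats).sum(dim=0)

        # usage with random features: d-dim features, n context regions
        d, n = 64, 5
        pooled = attention_pool(torch.randn(d), torch.randn(n, d),
                                torch.randn(d, d))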
  • AI & Policy Workshop

    March 7, 2019
    "An Ethical Crisis in Computing?" Moshe Vardi | Karen Ostrum George Distinguished Professor, Computational Engineering, Rice University "Algorithmic Accountability: Designing for Safety" Ben Shneiderman | Distinguished Professor, Department of Computer Science, University of Maryland, College Park "AI Policy: What to Do Now, Soon, and One Day" Ryan Calo | Lane Powell & D. Wayne Gittinger Associate Professor of Law, University of Washington "Less Talk, More Do: Applied Ethics in AI" Tracy Kosa | Adjunct Professor, Faculty of Law and Albers School of Business, Seattle University Panel Q&A Oren Etzioni and speakers