See AI2’s full collection of videos on our YouTube channel.
    • May 6, 2019

      Rachel Rudinger

      Consider the difference between the two sentences “Pat didn’t remember to water the plants” and “Pat didn’t remember that she had watered the plants.” Fluent English speakers recognize that the former sentence implies that Pat did not water the plants, while the latter sentence implies she did. This distinction is crucial to understanding the meaning of these sentences, yet it is one that automated natural language processing (NLP) systems struggle to make. In this talk, I will discuss my work on developing state-of-the-art NLP models that make essential inferences about events (e.g., a “watering” event) and participants (e.g., “Pat” and “the plants”) in natural language sentences. In particular, I will focus on two supervised NLP tasks that serve as core tests of language understanding: Event Factuality Prediction and Semantic Proto-Role Labeling. I will also discuss my work on unsupervised acquisition of common-sense knowledge from large natural language text corpora, and the concomitant challenge of detecting problematic social biases in NLP models trained on such data.
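
      As a rough illustration of how Event Factuality Prediction is often framed, the sketch below treats it as regression from a sentence and a marked event token to a single factuality score. The encoder, the dimensions, and the [-3, 3] score range are assumptions made for this example, not a description of the speaker's actual models.

```python
# Illustrative sketch only: event factuality prediction framed as regression.
# The encoder, feature sizes, and the [-3, 3] score range are assumptions for
# this example, not a description of the speaker's exact models.
import torch
import torch.nn as nn

class FactualityRegressor(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids, event_index):
        # Encode the sentence and read out the hidden state at the event token
        # (e.g., "water" in "Pat didn't remember to water the plants").
        states, _ = self.encoder(self.embed(token_ids))
        event_state = states[torch.arange(states.size(0)), event_index]
        # One scalar per event: positive = it happened, negative = it did not.
        return self.scorer(event_state).squeeze(-1)

model = FactualityRegressor(vocab_size=10000)
toy_ids = torch.randint(0, 10000, (1, 8))       # one 8-token toy sentence
print(model(toy_ids, torch.tensor([5])))        # factuality score for token 5
```

      A trained scorer of this shape would ideally assign a negative score to the "watering" event in the first sentence and a positive score in the second.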

    • May 2, 2019

      Pramod Kaushik Mudrakarta

      We present two results: 1) Analysis techniques for state-of-the-art question-answering models on images, tables, and passages of text. We show how these networks often ignore important question terms. Leveraging such non-robust behavior, we present a variety of adversarial examples derived by perturbing the questions. Our strongest attacks drop the accuracy of a visual question answering model from 61.1% to 19%, and that of a tabular question answering model from 33.5% to 3.3%. We demonstrate that attributions can augment standard measures of accuracy and empower investigation of model performance. When a model is accurate but for the wrong reasons, attributions can surface erroneous logic in the model that indicates inadequacies in the data. 2) Parameter-efficient transfer learning: we present a novel method for re-purposing pretrained neural networks for new tasks while keeping most of the weights intact. The basic approach is to learn a model patch, a small set of parameters that specializes the network to each task, instead of fine-tuning the last layer or the entire network. For instance, we show that learning a set of scales and biases is sufficient to convert a pretrained network to perform well on qualitatively different problems (e.g., converting a Single Shot MultiBox Detection (SSD) model into a 1000-class image classification model while reusing 98% of the parameters of the SSD feature extractor). Our approach allows both simultaneous (multi-task) and sequential transfer learning. In several multi-task learning problems, despite using far fewer parameters than traditional logits-only fine-tuning, we match single-task performance.
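
      To make the "model patch" idea concrete, here is a minimal sketch under simplifying assumptions of my own: a torchvision ResNet-18 stands in for the pretrained network (the talk's SSD example differs), and only the BatchNorm scales and biases plus a new task head are left trainable.

```python
# Minimal sketch of the "model patch" idea: keep a pretrained network frozen and
# train only per-channel scales and biases (here, BatchNorm affine parameters)
# plus a small task head. ResNet-18 is a stand-in; the talk's SSD example differs.
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)

# Freeze everything, then re-enable only the BatchNorm scale/shift parameters.
for param in model.parameters():
    param.requires_grad = False
for module in model.modules():
    if isinstance(module, nn.BatchNorm2d):
        module.weight.requires_grad = True   # per-channel scale
        module.bias.requires_grad = True     # per-channel bias

# New task head for the target problem (e.g., a different label set).
model.fc = nn.Linear(model.fc.in_features, 10)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable}/{total} parameters")
```

      Printing the ratio makes the abstract's point tangible: the trainable patch is a small fraction of the full network.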

    • May 1, 2019

      Jonathan Bragg

      A longstanding goal of artificial intelligence (AI) is to develop agents that can assist or augment humans. Such agents have the potential to transform society. While AI agents can excel at well-defined tasks like games, progress has been far more limited on real-world problems like interacting with humans, where data collection is costly, objectives are ill-defined, and safety is critical.

      In this talk, I will discuss how we can design agents to improve the efficiency and success of collective human work ("crowdsourcing"), by leveraging techniques from AI, reinforcement learning, and optimization, together with structured contributions from human workers and task designers. This approach improves on current methods for designing such agents, which typically require large amounts of manual experimentation and costly data collection to get right. I will demonstrate the effectiveness of this approach on several crowdsourcing management problems, and also share recent work on how agents can make shared decisions with humans to achieve better outcomes.

    • April 24, 2019

      Peter Anderson

      From robots to cars, virtual assistants and voice-controlled drones, computing devices are increasingly expected to communicate naturally with people and to understand the visual context in which they operate. In this talk, I will present our latest work on generating and comprehending visually-grounded language. First, we will discuss the challenging task of describing an image (image captioning). I will introduce captioning models that leverage multiple data sources, including object detection datasets and unaligned text corpora, in order to learn about the long-tail of visual concepts found in the real world. To support and encourage further efforts in this area, I will present the 'nocaps' benchmark for novel object captioning. In the second part of the talk, I will describe our recent work on developing agents that follow natural language instructions in reconstructed 3D environments using the R2R dataset for vision-and-language navigation.

    • April 12, 2019

      Longqi Yang

      The daily actions and decisions of people are increasingly shaped by recommendation systems, from e-commerce and content platforms to education and wellness applications. These systems selectively suggest and present information items based on their characterization of user preferences. However, existing preference modeling methods are limited due to the incomplete and biased nature of the behavioral data that inform the models. As a result, recommendations can be narrow, skewed, homogeneous, and divergent from users’ aspirations.

      In this talk, I will introduce user-centric recommendation models and systems that address the incompleteness and bias of existing methods and increase systems’ utility for individuals. Specifically, I will present my work addressing two key research challenges: (1) inferring debiased preferences from biased behavioral data using counterfactual reasoning, and (2) eliciting unobservable current and aspirational preferences from users through interactive machine learning. I will conclude with discussion of field experiments that demonstrate how user-centric systems can promote healthier diets and better content choices.
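
      As one concrete, standard example of counterfactual reasoning over biased logs, the sketch below uses inverse propensity scoring (IPS) to recover item preferences from exposure-biased click data. The propensities and synthetic data are invented for illustration, and IPS is my stand-in here; the talk's actual models may differ.

```python
# Illustrative sketch: inverse propensity scoring (IPS), one standard
# counterfactual estimator for de-biasing logged feedback. The propensities and
# data are made up; the talk's specific models may differ.
import numpy as np

rng = np.random.default_rng(0)
true_pref = np.array([0.9, 0.7, 0.5, 0.3, 0.1])   # unobserved ground-truth appeal
exposure = np.array([0.8, 0.6, 0.4, 0.2, 0.05])   # how often each item was shown

shown = rng.random((10000, 5)) < exposure           # biased logging policy
clicked = shown & (rng.random((10000, 5)) < true_pref)

naive = clicked.sum(0) / clicked.sum()              # skewed toward over-exposed items
ips = (clicked / exposure).sum(0)                   # re-weight by 1 / propensity
ips = ips / ips.sum()

print("naive share of clicks:", naive.round(2))
print("IPS-corrected share:  ", ips.round(2))
print("true preference share:", (true_pref / true_pref.sum()).round(2))
```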

    • April 8, 2019

      Swabha Swayamdipta

      As the availability of data for language learning grows, the role of linguistic structure is under scrutiny. At the same time, it is imperative to closely inspect patterns in data which might present loopholes for models to obtain high performance on benchmarks. In a two-part talk, I will address each of these challenges.

      First, I will introduce the paradigm of scaffolded learning. Scaffolds enable us to leverage inductive biases from one structural source for prediction of a different, but related structure, using only as much supervision as is necessary. We show that the resulting representations achieve improved performance across a range of tasks, indicating that linguistic structure remains beneficial even with powerful deep learning architectures.

      In the second part of the talk, I will showcase some of the properties exhibited by NLP models in large-data regimes. Even as these models report excellent performance, sometimes claimed to beat humans, a closer look reveals that their predictions are not the result of complex reasoning and that the task is not being solved in a generalizable way. Instead, this success can largely be attributed to the exploitation of annotation artifacts in the datasets. I will discuss some questions our findings raise, as well as directions for future work.
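
      One widely used probe for such annotation artifacts is a partial-input baseline, sketched below for an NLI-style task: if a classifier that never sees the premise still predicts labels well above chance, surface cues in the hypotheses are leaking the answer. The toy data is invented purely so the example runs.

```python
# Sketch of a common artifact probe: train a classifier on only part of the input
# (an NLI "hypothesis-only" baseline). If it does well without the premise, the
# labels leak through surface cues. The toy data below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

hypotheses = [
    "Nobody is outside.", "The man is not awake.",     # negation ~ contradiction
    "No one is moving.", "The dog is not barking.",
    "A person is outdoors.", "Someone is moving.",     # generic ~ entailment
    "A woman is outside.", "People are active.",
]
labels = ["contradiction"] * 4 + ["entailment"] * 4

features = CountVectorizer().fit_transform(hypotheses)
clf = LogisticRegression().fit(features, labels)
print(clf.score(features, labels))   # high fit from hypothesis-only cues alone
```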

    • April 3, 2019

      Arzoo Katiyar

      Extracting information from text entails deriving a structured, and typically domain-specific, representation of entities and relations from unstructured text. The information thus extracted can potentially facilitate applications such as question answering, information retrieval, conversational dialogue and opinion analysis. However, extracting information from text in a structured form is difficult: it requires understanding words and the relations that exist between them in the context of both the current sentence and the document as a whole.

      In this talk, I will present my research on neural models that learn structured output representations composed of textual mentions of entities and relations within a sentence. In particular, I will propose novel output representations that allow the neural models to learn better dependencies in the output structure and achieve state-of-the-art performance on both tasks as well as on their nested variants. I will also describe our recent work on expanding the input context beyond single sentences by incorporating coreference resolution to learn entity-level rather than mention-level representations, and show that these representations can capture information about the saliency of entities in the document.

    • March 29, 2019

      Daniel Khashabi

      Can we solve language understanding tasks without relying on task-specific annotated data? This could be important in scenarios where the inputs range across various domains and it is expensive to create annotated data.

      I discuss two different language understanding problems (Question Answering and Entity Typing) that have traditionally relied on direct supervision. For these problems, I present two recent works in which exploiting properties of the underlying representations and indirect signals helps us move beyond traditional paradigms. As a result, we observe better generalization across domains.

    • March 11, 2019

      Rohit Girdhar

      Humans are arguably one of the most important entities that AI systems would need to understand to be useful and ubiquitous. From autonomous cars observing pedestrians, to assistive robots helping the elderly, a large part of this understanding is focused on recognizing human actions, and potentially, their intentions. Humans themselves are quite good at this task: we can look at a person and explain in great detail every action they are doing. Moreover, we can reason over those actions over time, and even predict what actions they may intend to do in the future. Computer vision algorithms, on the other hand, have lagged far behind on this task. In my research, I have explored techniques to improve human action understanding from visual input, with the key insight being that human actions depend on the state of the environment (parameterized by the scene and the objects in it) as well as on the humans' own state (parameterized by their pose). In this talk, I will cover three key ways I exploit this dependence: (1) learning to aggregate this contextual information to recognize human actions; (2) predicting a prior on human actions by learning about the affordances of the scenes and objects they interact with; and finally, (3) moving towards longer-term temporal reasoning through a new dataset and benchmark tasks.

    • March 7, 2019

      "An Ethical Crisis in Computing?" Moshe Vardi | Karen Ostrum George Distinguished Professor, Computational Engineering, Rice University

      "Algorithmic Accountability: Designing for Safety" Ben Shneiderman | Distinguished Professor, Department of Computer Science, University of Maryland, College Park

      "AI Policy: What to Do Now, Soon, and One Day" Ryan Calo | Lane Powell & D. Wayne Gittinger Associate Professor of Law, University of Washington

      "Less Talk, More Do: Applied Ethics in AI" Tracy Kosa | Adjunct Professor, Faculty of Law and Albers School of Business, Seattle University

      Panel Q&A with Oren Etzioni and the speakers

    • March 1, 2019

      Reut Tsarfaty

      Can we program computers in our native tongue? This idea, termed natural language programming (NLPRO), has attracted attention almost since the inception of computers themselves.

      From the point of view of software engineering (SE), efforts to program in natural language (NL) have relied thus far on controlled natural languages (CNL) -- small unambiguous fragments of English with restricted grammars and limited expressivity. Is it possible to replace these CNLs with truly natural, human language? From the point of view of natural language processing (NLP), current technology successfully extracts information from NL texts. However, the level of NL understanding required for programming in NL goes far beyond such information extraction. Is it possible to endow computers with a dynamic kind of NL understanding? In this talk I argue that the solutions to these seemingly separate challenges are actually closely intertwined, and that one community's challenge is the other community's stepping stone for a huge leap and vice versa.

      Specifically, in this talk I propose to view executable programs in SE as semantic structures in NLP, as the basis for broad-coverage semantic parsing. I present a feasibility study on the semantic parsing of requirements documents into executable scenarios, where the requirements are written in a restricted yet highly ambiguous fragment of English, and the target representation employs live sequence charts (LSC), a multi-modal executable programming language. The parsing architecture I propose jointly models sentence-level and discourse-level processing in a generative probabilistic framework. I empirically show that the discourse-based model consistently outperforms the sentence-based model, constructing a system that reflects both the static (entities, properties) and dynamic (behavioral scenarios) requirements in the input document.

    • February 5, 2019

      Julia Lane

      The social sciences are at a crossroads. The great challenges of our time are human in nature - terrorism, climate change, the use of natural resources, and the nature of work - and require robust social science to understand their sources and consequences. Yet the lack of reproducibility and replicability evident in many fields is even more acute in the study of human behavior, both because of the difficulty of sharing confidential data and because of the lack of scientific infrastructure. Much of the core infrastructure is manual and ad hoc in nature, threatening the legitimacy and utility of social science research.

      A major challenge is search and discovery. The vast majority of social science data and outputs cannot be easily discovered by other researchers, even when nominally deposited in the public domain. A new generation of automated search tools could help researchers discover how data are being used, in what research fields, with what methods, with what code, and with what findings. And automation can be used to reward researchers who validate the results and contribute additional information about use, fields, methods, code, and findings. In sum, the use of data depends critically on knowing how it has been produced and used before: what do the data measure, what research has been done with them, by which researchers, with what code, and with what results.

      In this presentation I describe the work that we are doing to build and develop automated tools to create the equivalent of an Amazon.com or TripAdvisor for the access and use of confidential microdata.

    • January 25, 2019

      Qiang Ning

      Time is an important dimension when we describe the world because the world is evolving over time and many facts are time-sensitive. Understanding time is thus an important aspect of natural language understanding and many applications may rely on it, e.g., information retrieval, summarization, causality, and question answering.

      In this talk, I will mainly focus on a key component of it, temporal relation extraction. The task has long been challenging because the actual timestamps of events are rarely expressed explicitly and their temporal order has to be inferred from lexical cues, between the lines, and often based on strong background knowledge. Additionally, collecting enough high-quality annotations to facilitate machine learning algorithms for this task is also difficult, which makes the task even more challenging to investigate. I have tackled this task from three perspectives (structured learning, common sense, and data collection) and have improved the state of the art by approximately 20% in absolute F1. My current system, CogCompTime, is available at this online demo: http://groupspaceuiuc.com/temporal/. In the future, I expect to expand my research in these directions to other core problems in AI such as incidental supervision, semantic parsing, and knowledge representation.
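
      A small, hedged illustration of why structured learning matters here: pairwise temporal predictions must respect global constraints such as transitivity. The sketch below closes a set of "before" relations and flags contradictions; it is not CogCompTime, just the constraint in miniature.

```python
# Tiny sketch of one structural constraint in temporal reasoning: transitivity
# of "before". Not CogCompTime, just an illustration of why pairwise predictions
# must be globally consistent.
from itertools import product

def transitive_closure(before):
    """before: set of (a, b) pairs meaning event a happened before event b."""
    closed = set(before)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closed), repeat=2):
            if b == c and (a, d) not in closed:
                closed.add((a, d))
                changed = True
    return closed

predicted = {("wake", "eat"), ("eat", "work"), ("work", "wake")}  # inconsistent cycle
closure = transitive_closure(predicted)
contradictions = sorted((a, b) for (a, b) in closure if a != b and (b, a) in closure)
print("contradictory pairs:", contradictions)
```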

    • January 11, 2019

      Rik Koncel-Kedziorski

      In this talk I will introduce a new model for encoding knowledge graphs and generating texts from them. Graphical knowledge representations are ubiquitous in computing, but pose a challenge for text generation techniques due to their non-hierarchical structure and collapsing of long-distance dependencies. Moreover, automatically extracted knowledge is noisy, and so requires text generation models to be robust. To address these issues, I introduce a novel attention-based encoder-decoder model for knowledge-graph-to-text generation. This model extends the popular Transformer for text encoding to function over graph-structured inputs. The result is a powerful, general model for graph encoding which can incorporate global structural information when contextualizing vertices in their local neighborhoods. Through detailed automatic and human evaluations, I demonstrate the value of conditioning text generation on graph-structured knowledge, as well as the superior performance of the proposed model compared to recent work.
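
      A minimal sketch of the central idea, under my own simplifying assumptions: self-attention whose scores are masked by the graph's adjacency, so each vertex is contextualized only by its neighbors. The real model is considerably richer (multi-head attention, edge labels, a decoder), and the sizes below are made up.

```python
# Minimal sketch of attention over a graph: scaled dot-product attention with
# scores masked so each vertex attends only to its neighbours (and itself).
# The sizes and adjacency are made up; the full model in the talk is richer.
import torch
import torch.nn.functional as F

def graph_attention(node_states, adjacency):
    # node_states: (num_nodes, dim); adjacency: (num_nodes, num_nodes) of 0/1
    dim = node_states.size(-1)
    scores = node_states @ node_states.t() / dim ** 0.5
    scores = scores.masked_fill(adjacency == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ node_states

nodes = torch.randn(4, 16)                    # 4 knowledge-graph vertices
adj = torch.eye(4)
adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = adj[2, 3] = adj[3, 2] = 1.0
print(graph_attention(nodes, adj).shape)      # torch.Size([4, 16])
```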

    • December 14, 2018

      Tal Linzen

      Recent technological advances have made it possible to train recurrent neural networks (RNNs) on a much larger scale than before. While these networks have proved effective in NLP applications, their limitations and the mechanisms by which they accomplish their goals are poorly understood. In this talk, I will show how methods from cognitive science can help elucidate and improve the syntactic representations employed by RNN language models. I will review evidence that RNN language models are able to process syntactic dependencies in typical sentences with considerable success across languages (Linzen et al 2016, TACL; Gulordava et al. 2018, NAACL). However, when evaluated on experimentally controlled materials, their error rate increases sharply; explicit syntactic supervision mitigates the drop in performance (Marvin & Linzen 2018, EMNLP). Finally, I will discuss how language model adaptation can provide a tool for probing RNN syntactic representations, following the inspiration of the syntactic priming paradigm from psycholinguistics (van Schijndel & Linzen 2018, EMNLP).
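
      The agreement evaluations mentioned above boil down to minimal-pair comparisons like the sketch below: a language model should score the grammatical verb form higher than the ungrammatical one. The `lm_score` function here is a placeholder I introduce for illustration, not an actual RNN language model.

```python
# Sketch of the agreement evaluation paradigm: give a language model controlled
# minimal pairs and check whether it prefers the grammatical verb form.
# `lm_score` is a stand-in so the example runs; a real evaluation would plug in,
# e.g., the average log-probability an RNN LM assigns to the sentence.
def lm_score(sentence):
    return -len(sentence)   # placeholder; returns meaningless scores

minimal_pairs = [
    ("The keys to the cabinet are on the table.",
     "The keys to the cabinet is on the table."),
    ("The author that the guards like laughs.",
     "The author that the guards like laugh."),
]

correct = sum(lm_score(good) > lm_score(bad) for good, bad in minimal_pairs)
print(f"agreement accuracy: {correct}/{len(minimal_pairs)}")
```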

    • December 12, 2018

      Panupong (Ice) Pasupat

      Natural language understanding models have achieved good enough performance for commercial products such as virtual assistants. However, their scope is still mostly limited to preselected domains and relatively simple sentences. I will present my line of work, which extends natural language understanding along two frontiers: handling open-domain environments such as the Web (breadth) and handling complex sentences (depth).

      The presentation will focus on the task of answering complex questions on semi-structured Web tables using question-answer pairs as supervision. Within the framework of semantic parsing, in which sentences are parsed into executable logical forms, I will explain our proposed methods to (1) flexibly handle lexical and syntactic mismatches between the questions and logical forms, (2) filter misleading logical forms that sometimes give correct answers, and (3) reuse parts of good logical forms to make training more efficient. I will also briefly mention how these ideas can be applied to several other natural language understanding tasks for Web interaction.
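
      To ground what "executable logical forms" means here, the toy sketch below runs a hand-written filter-and-argmax program against a small table. The mini formalism and the table are invented for illustration and are far simpler than the actual logical-form language used in this work.

```python
# Toy sketch of what a semantic parser targets: an executable "logical form"
# run against a semi-structured table. The formalism and data are invented.
table = [
    {"city": "Seattle",   "country": "USA",    "population": 725_000},
    {"city": "Vancouver", "country": "Canada", "population": 631_000},
    {"city": "Portland",  "country": "USA",    "population": 653_000},
]

# Question: "Which US city in the table has the largest population?"
def execute(rows):
    us_rows = [r for r in rows if r["country"] == "USA"]          # filter(country = USA)
    return max(us_rows, key=lambda r: r["population"])["city"]    # argmax(population), project(city)

print(execute(table))   # -> Seattle
```

      A semantic parser's job is to map the natural-language question to a program of this kind automatically, using only question-answer pairs as training signal.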

    • December 11, 2018

      Abhishek Das

      Building intelligent agents that possess the ability to perceive the rich visual environment around us, communicate this understanding in natural language to humans and other agents, and execute actions in a physical environment, is a long-term goal of Artificial Intelligence. In this talk, I will present some of my recent work at various points on this spectrum in connecting vision and language to actions; from Visual Dialog (CVPR17, ICCV17, HCOMP17) -- where we develop models capable of holding free-form visually-grounded natural language conversation towards a downstream goal and ways to evaluate them -- to Embodied Question Answering (CVPR18, CoRL18) -- where we augment these models to actively navigate in simulated environments and gather visual information necessary for answering questions.

    • December 6, 2018

      Oren Etzioni

      Dr. Oren Etzioni, Chief Executive Officer of the Allen Institute for Artificial Intelligence and professor of computer science at the University of Washington, addresses one of the Holy Grails of AI, the acquisition, representation, and use of common-sense knowledge, during a distinguished lecture series held at the Office of Naval Research.

    • November 16, 2018

      Shyam Upadhyay

      Lack of annotated data is a constant obstacle in developing machine learning models, especially for natural language processing (NLP) tasks. In this talk, I explore this problem in the realm of multilingual NLP, where the challenges become more acute because annotation efforts in the NLP community have been aimed predominantly at English.

      In particular, I will discuss two techniques for overcoming the lack of annotation in multilingual settings. I focus on two information extraction tasks --- cross-lingual entity linking and name transliteration to English --- for which traditional approaches rely on generous amounts of supervision in the language of interest. In the first part of the talk, I show how we can perform cross-lingual entity linking by sharing supervision across languages through a shared multilingual feature space. This approach enables us to complement the supervision in a low-resource language with supervision from a high-resource language. In the second part, I show how we use freely available knowledge and unlabeled data to substitute for the lack of supervision in the transliteration task. Key to the approach is a constrained bootstrapping algorithm that mines new example pairs for improving the transliteration model. Results on both tasks show the effectiveness of these approaches, and pave the way for future work involving the three-way interaction of text, knowledge, and reasoning in a multilingual setting.
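
      A schematic of the bootstrapping loop described above, with heavy simplifications of my own: a placeholder character-similarity scorer stands in for the transliteration model, and the "constraint" is reduced to a confidence threshold. The real algorithm and its constraints are more involved.

```python
# Schematic bootstrapping loop for mining transliteration pairs: score candidate
# pairs, keep only high-confidence ones, grow the lexicon, repeat. The scorer is
# a stand-in; the actual system and its constraints are more involved.
from difflib import SequenceMatcher

seed_pairs = [("london", "london"), ("amsterdam", "amsterdam")]   # toy seed lexicon
candidates = [("berlin", "berlin"), ("berlin", "tokyo"), ("madrid", "madrid")]

def confidence(lexicon, src, tgt):
    # Stand-in scorer: pure character similarity. A real model would be
    # retrained on `lexicon` and rescored each round.
    return SequenceMatcher(None, src, tgt).ratio()

mined = list(seed_pairs)
for _ in range(3):                                   # bootstrapping rounds
    new = [(s, t) for s, t in candidates
           if (s, t) not in mined and confidence(mined, s, t) > 0.8]
    if not new:
        break
    mined.extend(new)                                # grow the training lexicon
print(mined)
```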

    • November 12, 2018

      Kevin Jamieson

      In many science and industry applications, data-driven discovery is limited by the rate of data collection: the time it takes skilled labor to operate a pipette, or the cost of expensive reagents and experimental apparatus. When measurement budgets are necessarily small, adaptive data collection, which uses previously collected data to inform future data collection in a closed loop, can make the difference between inferring a phenomenon or not. While methods like multi-armed bandits have, over the last several years, provided great insight into optimal ways of collecting data, these algorithms require a number of measurements that scales linearly with the total number of possible actions or measurements, even if discovering just one among possibly many true positives is desired. For example, if many of our 20,000 genes are critical for cell growth and a measurement corresponds to knocking out just one gene and measuring a noisy phenotype signal, one might expect to find a single influential gene with far fewer than 20,000 total measurements. In this talk I will ground this intuition in a theoretical framework and describe several applications where I have applied this perspective and new algorithms, including crowd-sourced preference learning, multiple testing with false discovery control, hyperparameter tuning, and crowdfunding.
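
      To make the adaptivity intuition concrete, here is a small sketch of successive elimination (one classic adaptive strategy, chosen by me for illustration): arms that are clearly not the best stop being measured, so effort concentrates on the promising ones. The means, noise level, and confidence radius below are invented.

```python
# Sketch of successive elimination: measure only arms that are still plausibly
# the best. Means, noise, and the confidence radius are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
means = np.zeros(200)
means[17] = 3.0                        # one truly influential "gene"

active = np.arange(len(means))
estimates = np.zeros(len(means))
counts = np.zeros(len(means))

for round_ in range(1, 13):
    for arm in active:                 # measure only the surviving arms
        sample = rng.normal(means[arm], 1.0)
        estimates[arm] += (sample - estimates[arm]) / round_   # running mean
        counts[arm] += 1
    width = 3.0 / np.sqrt(round_)      # crude confidence radius
    best = estimates[active].max()
    active = active[estimates[active] >= best - 2 * width]     # drop clear losers

print("surviving arms:", active)
print("total measurements:", int(counts.sum()), "vs.", 12 * len(means), "non-adaptive")
```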
