    • February 5, 2019

      Julia Lane

      Abstract: The social sciences are at a crossroads. The great challenges of our time are human in nature - terrorism, climate change, the use of natural resources, and the nature of work - and require robust social science to understand their sources and consequences. Yet the lack of reproducibility and replicability evident in many fields is even more acute in the study of human behavior, both because of the difficulty of sharing confidential data and because of the lack of scientific infrastructure. Much of the core infrastructure is manual and ad hoc in nature, threatening the legitimacy and utility of social science research.

      A major challenge is search and discovery. The vast majority of social science data and outputs cannot be easily discovered by other researchers, even when nominally deposited in the public domain. A new generation of automated search tools could help researchers discover how data are being used, in what research fields, with what methods, with what code, and with what findings. And automation can be used to reward researchers who validate the results and contribute additional information about use, fields, methods, code, and findings. In sum, the use of data depends critically on knowing how it has been produced and used before; the required elements are: what the data measure, what research has been done by which researchers, with what code, and with what results.

      In this presentation I describe the work that we are doing to build and develop automated tools to create the equivalent of an Amazon.com or TripAdvisor for the access and use of confidential microdata.

    • January 25, 2019

      Qiang Ning

      Time is an important dimension when we describe the world because the world is evolving over time and many facts are time-sensitive. Understanding time is thus an important aspect of natural language understanding and many applications may rely on it, e.g., information retrieval, summarization, causality, and question answering.

      In this talk, I will mainly focus on a key component of this problem: temporal relation extraction. The task has long been challenging because the actual timestamps of events are rarely expressed explicitly; their temporal order has to be inferred from lexical cues, between the lines, and often based on strong background knowledge. Additionally, collecting enough high-quality annotations to facilitate machine learning algorithms for this task is difficult, which makes the task even more challenging to investigate. I tackled this task from three perspectives - structured learning, common sense, and data collection - and have improved the state of the art by approximately 20% in absolute F1. My current system, CogCompTime, is available at this online demo: http://groupspaceuiuc.com/temporal/. In the future, I expect to expand my research in these directions to other core problems in AI such as incidental supervision, semantic parsing, and knowledge representation.
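
      The structured-learning angle can be made concrete with a toy inference step: choose the highest-scoring joint assignment of pairwise relations that respects transitivity. Below is a minimal sketch; the events, labels, and scores are illustrative placeholders, not CogCompTime's actual model.

      ```python
      from itertools import product

      # Hypothetical local classifier scores: log-probability of each
      # relation (BEFORE or AFTER) for each ordered event pair.
      scores = {
          ("e1", "e2"): {"BEFORE": -0.2, "AFTER": -1.8},
          ("e1", "e3"): {"BEFORE": -1.1, "AFTER": -0.5},
          ("e2", "e3"): {"BEFORE": -1.6, "AFTER": -0.4},
      }
      pairs = list(scores)

      def consistent(rel):
          # Transitivity: e1<e2 and e2<e3 force e1<e3 (same for AFTER).
          for r in ("BEFORE", "AFTER"):
              if (rel[("e1", "e2")] == r and rel[("e2", "e3")] == r
                      and rel[("e1", "e3")] != r):
                  return False
          return True

      assignments = (dict(zip(pairs, labels))
                     for labels in product(["BEFORE", "AFTER"], repeat=len(pairs)))
      best = max((a for a in assignments if consistent(a)),
                 key=lambda a: sum(scores[p][r] for p, r in a.items()))
      print(best)
      ```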

    • January 11, 2019

      Rik Koncel-Kedziorski

      In this talk I will introduce a new model for encoding knowledge graphs and generating text from them. Graphical knowledge representations are ubiquitous in computing, but pose a challenge for text generation techniques due to their non-hierarchical structure and collapsing of long-distance dependencies. Moreover, automatically extracted knowledge is noisy, so a text generation model must be robust to it. To address these issues, I introduce a novel attention-based encoder-decoder model for knowledge-graph-to-text generation. This model extends the popular Transformer for text encoding to function over graph-structured inputs. The result is a powerful, general model for graph encoding which can incorporate global structural information when contextualizing vertices in their local neighborhoods. Through detailed automatic and human evaluations, I demonstrate the value of conditioning text generation on graph-structured knowledge, as well as the superior performance of the proposed model compared to recent work.
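
      As a rough illustration of the core idea, attention restricted to graph neighborhoods, here is a minimal single-head sketch in numpy; the masking scheme is a simplification of the model described in the talk.

      ```python
      import numpy as np

      def graph_attention(X, adj):
          """One self-attention head whose attention is masked to graph edges."""
          d = X.shape[1]
          logits = X @ X.T / np.sqrt(d)              # pairwise attention logits
          logits = np.where(adj > 0, logits, -1e9)   # attend only to neighbors
          weights = np.exp(logits - logits.max(axis=1, keepdims=True))
          weights /= weights.sum(axis=1, keepdims=True)
          return weights @ X                         # contextualized vertex vectors

      adj = np.array([[1, 1, 0],   # toy graph: edges 0-1 and 1-2, plus self-loops
                      [1, 1, 1],
                      [0, 1, 1]])
      X = np.random.default_rng(0).normal(size=(3, 8))
      print(graph_attention(X, adj).shape)           # (3, 8)
      ```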

    • December 14, 2018

      Tal Linzen

      Recent technological advances have made it possible to train recurrent neural networks (RNNs) on a much larger scale than before. While these networks have proved effective in NLP applications, their limitations and the mechanisms by which they accomplish their goals are poorly understood. In this talk, I will show how methods from cognitive science can help elucidate and improve the syntactic representations employed by RNN language models. I will review evidence that RNN language models are able to process syntactic dependencies in typical sentences with considerable success across languages (Linzen et al. 2016, TACL; Gulordava et al. 2018, NAACL). However, when evaluated on experimentally controlled materials, their error rate increases sharply; explicit syntactic supervision mitigates the drop in performance (Marvin & Linzen 2018, EMNLP). Finally, I will discuss how language model adaptation can provide a tool for probing RNN syntactic representations, following the inspiration of the syntactic priming paradigm from psycholinguistics (van Schijndel & Linzen 2018, EMNLP).
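
      The controlled-evaluation methodology can be sketched as scoring minimal pairs: a language model passes an item if it assigns higher probability to the grammatical variant. In the sketch below, `lm_logprob` is a stand-in for querying any trained RNN LM, and the pair and scores are made up.

      ```python
      import math

      def lm_logprob(sentence):
          # Placeholder scores; a real evaluation queries a trained RNN LM.
          toy = {"The keys to the cabinet are here.": -10.2,
                 "The keys to the cabinet is here.": -11.7}
          return toy.get(sentence, -math.inf)

      minimal_pairs = [("The keys to the cabinet are here.",
                        "The keys to the cabinet is here.")]
      accuracy = sum(lm_logprob(good) > lm_logprob(bad)
                     for good, bad in minimal_pairs) / len(minimal_pairs)
      print(f"agreement accuracy: {accuracy:.2f}")
      ```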

    • December 12, 2018

      Panupong (Ice) Pasupat

      Natural language understanding models have achieved good enough performance for commercial products such as virtual assistants. However, their scope is still mostly limited to preselected domains or simpler sentences. I will present my line of work, which extends natural language understanding along two frontiers: handling open-domain environments such as the Web (breadth) and handling complex sentences (depth).

      The presentation will focus on the task of answering complex questions over semi-structured Web tables using question-answer pairs as supervision. Within the framework of semantic parsing, which learns to parse sentences into executable logical forms, I will explain our proposed methods to (1) flexibly handle lexical and syntactic mismatches between questions and logical forms, (2) filter misleading logical forms that sometimes give correct answers, and (3) reuse parts of good logical forms to make training more efficient. I will also briefly mention how these ideas can be applied to several other natural language understanding tasks for Web interaction.
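
      The consistency check behind (2) can be illustrated as learning from denotations: execute each candidate logical form against the table and keep only those whose result matches the gold answer. The toy table, executor, and candidates below are illustrative, not the logical-form language used in the work.

      ```python
      table = {"Greece": 2004, "Spain": 2008, "Germany": 1996}

      def execute(lf):
          op, arg = lf
          if op == "lookup_year":
              return table.get(arg)
          if op == "max_year":
              return max(table.values())

      # Candidates for "In what year did Spain win?", gold answer 2008.
      candidates = [("lookup_year", "Spain"), ("max_year", None),
                    ("lookup_year", "Greece")]
      consistent = [lf for lf in candidates if execute(lf) == 2008]
      print(consistent)
      # Both ('lookup_year', 'Spain') and ('max_year', None) survive; the
      # second is spurious (right answer for the wrong reason), the kind of
      # misleading form that method (2) aims to filter out.
      ```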

    • December 11, 2018

      Abhishek Das

      Building intelligent agents that possess the ability to perceive the rich visual environment around us, communicate this understanding in natural language to humans and other agents, and execute actions in a physical environment is a long-term goal of Artificial Intelligence. In this talk, I will present some of my recent work at various points on this spectrum of connecting vision and language to actions: from Visual Dialog (CVPR17, ICCV17, HCOMP17), where we develop models capable of holding free-form visually-grounded natural language conversation towards a downstream goal and ways to evaluate them, to Embodied Question Answering (CVPR18, CoRL18), where we augment these models to actively navigate in simulated environments and gather visual information necessary for answering questions.

    • December 6, 2018

      Oren Etzioni

      Dr. Oren Etzioni, Chief Executive Officer of the Allen Institute for Artificial Intelligence and professor of computer science at the University of Washington, addresses one of the Holy Grails of AI - acquiring, representing, and utilizing common-sense knowledge - during a distinguished lecture series held at the Office of Naval Research.

    • November 16, 2018

      Shyam Upadhyay

      Lack of annotated data is a constant obstacle in developing machine learning models, especially for natural language processing (NLP) tasks. In this talk, I explore this problem in the realm of multilingual NLP, where the challenges become more acute because annotation efforts in the NLP community have predominantly been aimed at English.

      In particular, I will discuss two techniques for overcoming the lack of annotation in multilingual settings. I focus on two information extraction tasks --- cross-lingual entity linking and name transliteration to English --- for which traditional approaches rely on generous amounts of supervision in the language of interest. In the first part of the talk, I show how we can perform cross-lingual entity linking by sharing supervision across languages through a shared multilingual feature space. This approach enables us to complement the supervision in a low-resource language with supervision from a high resource language. In the second part, I show how we use freely available knowledge and unlabeled data to substitute for lack of supervision for the transliteration task. Key to the approach is a constrained bootstrapping algorithm that mines new example pairs for improving the transliteration model. Results on both tasks show the effectiveness of these approaches, and pave the way for future tasks involving the 3-way interaction of text, knowledge, and reasoning, in a multilingual setting.
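
      The constrained bootstrapping loop can be sketched as: train on seed pairs, mine high-confidence new pairs that also satisfy hard constraints, and retrain. Everything below (the scorer, threshold, and constraint) is an illustrative stand-in rather than the actual transliteration model.

      ```python
      def train(pairs):
          # Stand-in for fitting a character-level transliteration model.
          return lambda s, t: 0.95 if abs(len(s) - len(t)) <= 1 else 0.3

      seed = [("kiev", "kyiv")]
      unlabeled = [("moskva", "moscow"), ("x", "alexandria")]

      for _ in range(3):                       # bootstrapping iterations
          score = train(seed)
          mined = [(s, t) for s, t in unlabeled
                   if score(s, t) > 0.9        # confidence threshold
                   and s[0] == t[0]]           # toy constraint: matching initials
          seed = list(dict.fromkeys(seed + mined))   # dedupe, keep order
      print(seed)
      ```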

    • November 12, 2018

      Kevin Jamieson

      In many science and industry applications, data-driven discovery is limited by the rate of data collection: the time it takes skilled labor to operate a pipette, the cost of expensive reagents, or access to experimental apparatus. When measurement budgets are necessarily small, adaptive data collection that uses previously collected data to inform future data collection in a closed loop can make the difference between inferring a phenomenon or not. While methods like multi-armed bandits have provided great insight into optimal means of collecting data over the last several years, these algorithms require a number of measurements that scales linearly with the total number of possible actions or measurements that can be made, even if discovering just one among possibly many true positives is desired. For example, if many of our 20,000 genes are critical for cell growth, and a measurement corresponds to knocking out just one gene and measuring a noisy phenotype signal, one may expect that we can find a single influential gene with far fewer than 20,000 total measurements. In this talk I will ground this intuition in a theoretical framework and describe several applications where I have applied this perspective and new algorithms, including crowd-sourcing preferences, multiple testing with false discovery control, hyperparameter tuning, and crowdfunding.
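
      The intuition above can be sketched with successive elimination, a classic adaptive allocation scheme that stops measuring arms that are clearly not the best; the arm means, noise model, and confidence bounds below are simulated placeholders.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200                                  # possible measurements ("arms")
      means = np.zeros(n); means[7] = 1.0      # one influential arm

      active = np.arange(n)
      sums = np.zeros(n); counts = np.zeros(n); total = 0
      while len(active) > 1:
          sums[active] += means[active] + rng.normal(size=len(active))
          counts[active] += 1; total += len(active)
          m = sums[active] / counts[active]
          b = np.sqrt(2 * np.log(4 * total**2) / counts[active])
          active = active[m + b >= (m - b).max()]   # drop clearly worse arms
      print("found arm", active[0], "after", total, "adaptive measurements")
      ```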

    • October 26, 2018

      Sam Thomson

      Is there a class of models that perform competitively with LSTMs, yet are interpretable, parallelizable, data-efficient, and whose mathematical properties are already well-studied? I will present a recent line of work where we show that weighted finite-state automata (WFSAs) can be made unreasonably effective sequence encoders by letting their transition weights be calculated by neural nets.

      First, we introduce a specific architecture, Soft Patterns (SoPa), which generalizes convolutional neural networks (CNNs), capturing fixed-length but gappy patterns. We show that SoPa is competitive with LSTMs at text classification, and even outperforms LSTMs in small data regimes.

      Next, we explore the limits of this general approach. We show that several existing recurrent neural networks (RNNs) are in fact WFSAs in disguise, including quasi-recurrent neural networks, simple recurrent units, input switched affine networks, and more. These networks are already in popular use, showing strong performance on a variety of tasks. We formally define and characterize this class of RNNs, which includes CNNs but not arbitrary RNNs, dubbing them "rational recurrences."
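
      A minimal sketch of the underlying construction: a WFSA whose transition matrix at each step is computed from the token vector by a small network, so that encoding reduces to a linear recurrence over state scores. The dimensions and the sigmoid parameterization below are illustrative choices, not the paper's exact architecture.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n_states, d = 3, 5
      W = rng.normal(size=(n_states, n_states, d))    # one weight vector per arc

      def encode(token_vecs):
          alpha = np.zeros(n_states); alpha[0] = 1.0  # start in state 0
          for x in token_vecs:
              T = 1 / (1 + np.exp(-(W @ x)))   # input-dependent transition weights
              alpha = T.T @ alpha              # linear recurrence = WFSA forward pass
          return alpha                         # accumulated path weights per state

      print(encode([rng.normal(size=d) for _ in range(4)]))
      ```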

    • October 22, 2018

      Chelsea Finn

      Machine learning excels primarily in settings where an engineer can first reduce the problem to a particular function, and collect a substantial amount of labeled input-output pairs for that function. In drastic contrast, humans are capable of learning a range of versatile behaviors from streams of raw sensory data with minimal external instruction. How can we develop machines that learn more like the latter? In this talk, I will discuss recent work on learning versatile behaviors from raw sensory observations with minimal human supervision. In particular, I will show how we can use meta-learning to infer goals and intentions from humans with only a few positive examples, how robots can leverage large amounts of unlabeled experience to develop and plan with visual predictive models of the world, and how we can combine elements of meta-learning and unsupervised learning to develop agents that propose their own goals and learn to achieve them.

    • October 17, 2018

      Rishabh Iyer

      Visual data in the form of images and videos has been growing at an unprecedented rate in the last few years. While this massive data is a blessing to data science, helping improve predictive accuracy, it is also a curse, since humans are unable to consume it all. Moreover, machine-generated videos (via drones, dash-cams, body-cams, security cameras, etc.) are now produced at a rate higher than what we as humans can process, and the majority of this data is plagued with redundancy. In this talk, I will present a unified framework for submodular optimization which provides an end-to-end solution to these problems. We first show that submodular functions naturally model notions of diversity, coverage, representation, and information. Moreover, they lend themselves to practical and provably near-optimal algorithms for optimization, thereby providing practical data summarization strategies. Along the way, we will highlight several implementation aspects of submodular optimization, including memoization tricks useful in building real-world summarization systems.

      We also show how we can efficiently learn submodular functions for different domains and tasks. We demonstrate the utility of this in summarization tasks related to visual data: image collection summarization and domain-specific video summarization. What comprises a good visual summary depends on the domain at hand - creating a video summary of a soccer game involves very different modeling characteristics than a surveillance video. We take a principled approach towards domain-specific video summarization and argue how we can efficiently learn the right weights for the different model families, pointing out several interesting observations and insights learnt from this characterization. Towards the end of the talk, we extend this work to training data subset selection, where we show how our summarization framework can be used to reduce training complexity, enable quick turn-around times for hyperparameter tuning, and support diversified active learning.
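
      As a concrete instance of the framework, here is the classic greedy maximizer for a facility-location submodular function, which is provably near-optimal (within a factor of 1 - 1/e) for monotone submodular objectives. The features and budget are toy values, and a memoized "lazy greedy" variant would avoid the repeated re-evaluation shown here.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      F = rng.normal(size=(50, 16))                  # item features
      F /= np.linalg.norm(F, axis=1, keepdims=True)
      sim = F @ F.T                                  # item-item similarity

      def coverage(S):                               # facility-location objective
          return sim[:, S].max(axis=1).sum() if S else 0.0

      S, budget = [], 5
      for _ in range(budget):
          rest = [j for j in range(len(sim)) if j not in S]
          gains = [coverage(S + [j]) - coverage(S) for j in rest]
          S.append(rest[int(np.argmax(gains))])      # pick the max marginal gain
      print("summary indices:", S)
      ```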

    • October 10, 2018

      Lucy Wang

      Human interpretability is essential in biomedicine because information flow between computational platforms and human stakeholders is crucial to the proper management and care of disease. Biomedical data is abundant, but does not lend itself to easy summary and interpretation. Luckily, there are many structured biomedical knowledge resources that can be used to assist in the analysis of all these data. How best to integrate ontological data with contemporary machine learning techniques is one of my main research interests; the other is applying these integrated techniques to enhance our understanding of specific human diseases.

      My research can be summarized into two themes: 1) the development of tools for modeling biomedical knowledge, and 2) the application of biomedical knowledge and natural language processing techniques to understanding biomedical and clinical texts. In this talk, I will describe a few of my projects and propose ways to extend some of these research ideas in the future.

    • October 1, 2018

      Ana Marasovic

      Abstract Anaphora Resolution (AAR) is the challenging task of finding a (typically) non-nominal antecedent for pronouns and noun phrases that refer to abstract objects like facts, events, actions, or situations in the (typically) preceding discourse.

      Our intuition is that we can learn the correct antecedent for a given abstract anaphor by learning attributes of the relation that holds between the sentence containing the abstract anaphor and its antecedent. We propose a siamese-LSTM mention-ranking model to learn what characterizes such relations [1].

      Although current resources for AAR are scarce, we can train our models on many instances of antecedent-anaphoric sentence pairs. Such pairs can be automatically extracted from parsed corpora by searching for constructions with embedded sentences, applying a simple transformation that replaces the embedded sentence with an abstract anaphor, and using the cut-off embedded sentence as the antecedent [1].
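
      A toy sketch of that transformation (the real pipeline operates on parsed corpora; the example sentence and anaphor choice here are illustrative):

      ```python
      def make_training_pair(sentence, anaphor="this"):
          # Split off an embedded "that"-clause, replace it with an abstract
          # anaphor, and keep the cut-off clause as the antecedent.
          prefix, _, embedded = sentence.partition(" that ")
          return f"{prefix} {anaphor}.", "that " + embedded.rstrip(".")

      pair = make_training_pair("The board confirmed that the merger will close in May.")
      print(pair)
      # ('The board confirmed this.', 'that the merger will close in May')
      ```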

      I will show results of the mention-ranking model trained for shell noun resolution [2] and results on an abstract anaphora subset of the ARRAU corpus [3]. Finally, I will discuss ideas on how the training data extraction method and the mention-ranking model could be further improved for the challenges ahead. In particular, I will talk about:

      (i) the quality of harvested training data, to answer whether nominal and pronominal anaphors can be learned independently, (ii) selecting antecedents from a wider preceding window, (iii) addressing differences between anaphora types with multi-task learning, (iv) addressing differences between harvested and natural data with adversarial training, and (v) utilizing pretrained language models.

    • September 27, 2018

      Nicolas Fiorini

      PubMed is a free search engine for the biomedical literature, accessed by millions of users around the world each day. With the rapid growth of biomedical literature, finding and retrieving the most relevant papers for a given query is increasingly challenging. I will introduce Best Match, the new relevance search algorithm for PubMed that leverages click logs and learning-to-rank. The Best Match algorithm is trained on past user searches with dozens of relevance-ranking signals (factors), the most important being the past usage of an article, publication date, BM25 score, and the type of article. This new algorithm demonstrated state-of-the-art retrieval performance in benchmarking experiments, as well as an improved user experience in real-world testing.
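
      As a rough illustration, relevance in such a system is a learned weighted combination of ranking signals; the documents, signals, and weights below are illustrative stand-ins, not PubMed's actual model.

      ```python
      # Toy re-ranking with a weighted combination of ranking signals.
      docs = [
          {"id": 1, "bm25": 7.2, "usage": 0.9, "year": 2017, "is_review": 1},
          {"id": 2, "bm25": 8.1, "usage": 0.1, "year": 2005, "is_review": 0},
          {"id": 3, "bm25": 6.5, "usage": 0.7, "year": 2018, "is_review": 0},
      ]
      weights = {"bm25": 1.0, "usage": 3.0, "recency": 0.1, "is_review": 0.5}

      def score(d):
          recency = d["year"] - 2000
          return (weights["bm25"] * d["bm25"] + weights["usage"] * d["usage"]
                  + weights["recency"] * recency
                  + weights["is_review"] * d["is_review"])

      for d in sorted(docs, key=score, reverse=True):
          print(d["id"], round(score(d), 2))
      ```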

    • September 19, 2018

      Kevin Gimpel

      A key challenge in natural language understanding is recognizing when two sentences have the same meaning. I'll discuss our work on this problem over the past few years, including the exploration of compositional functional architectures, learning criteria, and naturally-occurring sources of training data. The result is a single sentence embedding model that outperforms all systems from the 2012-2016 SemEval semantic textual similarity competitions without training on any of the annotated data from those tasks.

      As a by-product, we developed a large dataset of automatically-generated paraphrase pairs by using parallel text and neural machine translation. We've since used the dataset, which we call ParaNMT-50M, to impart a notion of meaning equivalence to controlled text generation tasks, including syntactically-controlled paraphrasing and textual style transfer.
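
      A minimal sketch of the kind of compositional encoder this line of work found surprisingly strong: average the word vectors of a sentence and compare sentences by cosine similarity. The vectors here are random stand-ins for embeddings trained on paraphrase pairs.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      words = "a man is playing guitar the person plays an instrument".split()
      vocab = {w: rng.normal(size=50) for w in words}  # stand-in embeddings

      def embed(sentence):
          vecs = [vocab[w] for w in sentence.split() if w in vocab]
          return np.mean(vecs, axis=0)                 # word averaging

      def cosine(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      print(cosine(embed("a man is playing guitar"),
                   embed("the person plays an instrument")))
      ```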

    • August 29, 2018

      Robin Jia

      Reading comprehension systems that answer questions over a context passage can often achieve high test accuracy, but they are frustratingly brittle: they often rely heavily on superficial cues, and therefore struggle on out-of-domain inputs. In this talk, I will describe our work on understanding and challenging these systems. First, I will show how to craft adversarial reading comprehension examples by adding irrelevant distracting text to the context passage. Next, I will present the newest version of the SQuAD dataset, SQuAD 2.0, which tests whether models can distinguish answerable questions from similar but unanswerable ones. Finally, I will propose a new way of evaluating reading comprehension systems by measuring their zero-shot performance on other NLP tasks, such as relation extraction or semantic parsing, that have been converted to textual question answering problems.
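
      The first attack can be sketched as appending a distractor built by mutating the question's entities and attaching a fake answer; the passage, substitutions, and fake answer below are hand-picked illustrations of what the paper generates automatically.

      ```python
      passage = "Tesla moved to the city of Chicago in 1880."
      question = "What city did Tesla move to in 1880?"

      # Mutate entities so the distractor mimics the question but cannot
      # answer it; "Denver" plays the role of the fake answer.
      swaps = {"Tesla": "Edison", "1880": "1904", "Chicago": "Denver"}
      distractor = passage
      for old, new in swaps.items():
          distractor = distractor.replace(old, new)

      adversarial_passage = passage + " " + distractor
      print(adversarial_passage)
      # The human answer (Chicago) is unchanged, yet brittle models are
      # often distracted by the superficially similar appended sentence.
      ```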

    • August 28, 2018

      Dan Weld

      Since AI software uses techniques like deep lookahead search and stochastic optimization of huge neural networks, it often results in complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. To trust their behavior, we must make AI intelligible, either by using inherently interpretable models or by developing new methods for explaining and adjusting otherwise overwhelmingly complex decisions using local approximation, vocabulary alignment, and interactive explanation. This talk argues that intelligibility is essential, surveys recent work on building such systems, and highlights key directions for research.
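
      One of the ingredients mentioned above, local approximation, can be sketched as fitting a locally weighted linear surrogate to a black-box model around a single input (in the spirit of LIME); the black-box function and kernel width are toy choices.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def black_box(X):                          # stand-in for a complex model
          return np.sin(X[:, 0]) + X[:, 1] ** 2

      x0 = np.array([0.5, -1.0])                 # the decision to explain
      X = x0 + 0.1 * rng.normal(size=(200, 2))   # perturb around x0
      y = black_box(X)
      w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.01)   # locality kernel

      A = np.hstack([X, np.ones((200, 1))])      # linear surrogate with bias
      sw = np.sqrt(w)[:, None]
      coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
      print("local feature weights:", coef[:2])  # close to [cos(0.5), -2.0]
      ```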

    • August 24, 2018

      Sebastian Ruder

      Deep neural networks excel at learning from labeled data. In contrast, learning from unlabeled data, especially under domain shift, which is common in many real-world applications, remains a challenge. In this talk, I will touch on three aspects of learning under domain shift. First, I will discuss an approach for selecting relevant data for domain adaptation in order to minimize negative transfer. Second, I will show how classic bootstrapping algorithms can be applied to neural networks, and that they make for strong baselines in this challenging setting. Finally, I will describe new methods for using language models for semi-supervised learning.
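
      The bootstrapping point can be sketched as classic self-training: pseudo-label unlabeled target-domain data where the current model is confident, then retrain. The nearest-centroid model, toy data, and confidence rule below are stand-ins for the neural models discussed in the talk.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      # Labeled source data and unlabeled target data from a shifted domain.
      Xs = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
      ys = np.array([0] * 50 + [1] * 50)
      Xt = np.vstack([rng.normal(-1.5, 1, (50, 2)), rng.normal(2.5, 1, (50, 2))])

      def fit(X, y):                             # nearest-centroid stand-in model
          c = np.stack([X[y == k].mean(0) for k in (0, 1)])
          return lambda Z: np.stack(
              [-np.linalg.norm(Z - ck, axis=1) for ck in c]).T

      X, y = Xs, ys
      for _ in range(3):                         # self-training rounds
          scores = fit(X, y)(Xt)
          conf = np.abs(scores[:, 0] - scores[:, 1])
          keep = conf > np.quantile(conf, 0.7)   # pseudo-label confident points
          X = np.vstack([Xs, Xt[keep]])
          y = np.concatenate([ys, scores[keep].argmax(1)])
      print("pseudo-labeled target points:", int(keep.sum()))
      ```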

    • August 21, 2018

      Chen Liang

      Learning to generate programs from natural language can support a wide range of applications, including question answering, virtual assistants, and AutoML. It is natural to apply reinforcement learning to directly optimize the task reward, and generalization to new, unseen inputs is crucial. However, three challenges need to be addressed: (1) how to model the structures in the programs; (2) how to learn efficiently from sparse rewards; (3) how to explore a large search space. In this talk, I will present (1) Neural Symbolic Machines (NSM), a hybrid framework that integrates a neural "programmer" with a symbolic "computer" to generate programs for multi-step reasoning; and (2) Memory Augmented Policy Optimization (MAPO), a novel policy optimization formulation that incorporates a memory buffer of promising trajectories to reduce the variance of policy gradient estimates, especially given sparse rewards. NSM with MAPO is the first end-to-end model trained with RL to achieve a new state of the art on weakly supervised semantic parsing, evaluated on three well-established benchmarks: WebQuestionsSP, WikiTableQuestions, and WikiSQL.
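
      The variance-reduction idea behind MAPO can be sketched as splitting the policy-gradient expectation into an exact sum over the memory buffer of promising programs plus a sampled estimate outside it; all probabilities, rewards, and gradients below are toy placeholders.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      # Programs in the memory buffer, with current policy probabilities,
      # rewards, and per-program gradients of log pi (toy values).
      buf_probs = np.array([0.10, 0.05])
      buf_rewards = np.array([1.0, 1.0])
      buf_grads = rng.normal(size=(2, 4))

      pi_B = buf_probs.sum()                     # policy mass inside the buffer

      # Exact expectation over the buffer (enumeration, zero variance):
      inside = (buf_probs[:, None] * buf_rewards[:, None] * buf_grads).sum(0)

      # Monte Carlo estimate from samples drawn outside the buffer:
      sample_rewards = np.array([0.0, 1.0, 0.0])
      sample_grads = rng.normal(size=(3, 4))
      outside = (1 - pi_B) * (sample_rewards[:, None] * sample_grads).mean(0)

      grad_estimate = inside + outside           # lower-variance combination
      print(grad_estimate)
      ```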
