Videos

See AI2's full collection of videos on our YouTube channel.
Viewing 31-40 of 164 videos
  • AI & Policy Workshop

    March 7, 2019
    "An Ethical Crisis in Computing?" Moshe Vardi | Karen Ostrum George Distinguished Professor, Computational Engineering, Rice University
    "Algorithmic Accountability: Designing for Safety" Ben Shneiderman | Distinguished Professor, Department of Computer Science, University of Maryland, College Park
    "AI Policy: What to Do Now, Soon, and One Day" Ryan Calo | Lane Powell & D. Wayne Gittinger Associate Professor of Law, University of Washington
    "Less Talk, More Do: Applied Ethics in AI" Tracy Kosa | Adjunct Professor, Faculty of Law and Albers School of Business, Seattle University
    Panel Q&A | Oren Etzioni and speakers
  • Natural Language Programming (NLPRO): Turning Texts into Executable Code

    March 1, 2019  |  Reut Tsarfaty
    Can we program computers in our native tongue? This idea, termed natural language programming (NLPRO), has attracted attention almost since the inception of computers themselves. From the point of view of software engineering (SE), efforts to program in natural language (NL) have relied thus far on controlled natural languages (CNL) -- small unambiguous fragments of English with restricted grammars and limited expressivity. Is it possible to replace these CNLs with truly natural, human language? From the point of view of natural language processing (NLP), current technology successfully extracts information from NL texts. However, the level of NL understanding required for programming in NL goes far beyond such information extraction. Is it possible to endow computers with a dynamic kind of NL understanding? In this talk I argue that the solutions to these seemingly separate challenges are actually closely intertwined, and that one community's challenge is the other community's stepping stone for a huge leap and vice versa. Specifically, in this talk I propose to view executable programs in SE as semantic structures in NLP, as the basis for broad-coverage semantic parsing. I present a feasibility study on the semantic parsing of requirements documents into executable scenarios, where the requirements are written in a restricted yet highly ambiguous fragment of English, and the target representation employs live sequence charts (LSC), a multi-modal executable programming language. The parsing architecture I propose jointly models sentence-level and discourse-level processing in a generative probabilistic framework. I empirically show that the discourse-based model consistently outperforms the sentence-based model, constructing a system that reflects both the static (entities, properties) and dynamic (behavioral scenarios) requirements in the input document.
  • Where’s the Data: A new approach to social science data search and discovery

    February 5, 2019  |  Julia Lane
    The social sciences are at a crossroads. The great challenges of our time are human in nature - terrorism, climate change, the use of natural resources, and the nature of work - and require robust social science to understand their sources and consequences. Yet the lack of reproducibility and replicability evident in many fields is even more acute in the study of human behavior, both because of the difficulty of sharing confidential data and because of the lack of scientific infrastructure. Much of the core infrastructure is manual and ad hoc in nature, threatening the legitimacy and utility of social science research. A major challenge is search and discovery. The vast majority of social science data and outputs cannot be easily discovered by other researchers, even when nominally deposited in the public domain. A new generation of automated search tools could help researchers discover how data are being used, in what research fields, with what methods, with what code, and with what findings. And automation can be used to reward researchers who validate the results and contribute additional information about use, fields, methods, code, and findings. In sum, the use of data depends critically on knowing how it has been produced and used before; the required elements are: what do the data measure, what research has been done by which researchers, with what code, and with what results. In this presentation I describe the work that we are doing to build and develop automated tools to create the equivalent of an Amazon.com or TripAdvisor for the access and use of confidential microdata.
  • Understanding Time In Natural Language

    January 25, 2019  |  Qiang Ning
    Time is an important dimension when we describe the world because the world evolves over time and many facts are time-sensitive. Understanding time is thus an important aspect of natural language understanding, and many applications may rely on it, e.g., information retrieval, summarization, causality, and question answering. In this talk, I will mainly focus on a key component of it, temporal relation extraction. The task has long been challenging because the actual timestamps of events are rarely expressed explicitly; their temporal order has to be inferred from lexical cues, between the lines, and often based on strong background knowledge. Additionally, collecting enough high-quality annotations to facilitate machine learning algorithms for this task is also difficult, which makes the task even more challenging to investigate. I tackled this task from three perspectives (structured learning, common sense, and data collection) and have improved the state of the art by approximately 20% in absolute F1. My current system, CogCompTime, is available at this online demo: http://groupspaceuiuc.com/temporal/. In the future, I expect to expand my research in these directions to other core problems in AI such as incidental supervision, semantic parsing, and knowledge representation.
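The structured-learning angle mentioned in the abstract can be illustrated with a toy global-inference step: pairwise classifier scores may be locally inconsistent, and choosing the total order that best agrees with them restores transitivity. This is a brute-force sketch of the idea only; the event names and scores are invented, and real systems such as CogCompTime use proper structured inference rather than enumeration.

```python
from itertools import permutations

# Toy pairwise scores: (e1, e2) -> model probability that e1 is BEFORE e2.
# The third score is locally inconsistent with the first two.
scores = {
    ("wake", "eat"): 0.9,
    ("eat", "work"): 0.8,
    ("wake", "work"): 0.3,
}

def best_consistent_order(events, scores):
    """Brute-force global inference: return the total order maximizing
    the summed pairwise BEFORE scores (unscored pairs default to 0.5)."""
    def total(order):
        return sum(scores.get((order[i], order[j]), 0.5)
                   for i in range(len(order))
                   for j in range(i + 1, len(order)))
    return max(permutations(events), key=total)

order = best_consistent_order(["wake", "eat", "work"], scores)
print(order)  # ('wake', 'eat', 'work'): global consistency overrides the weak pairwise score
```

Enumeration is exponential in the number of events; the point is only that a transitively consistent order can contradict one noisy local decision.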
  • Text Generation from Knowledge Graphs

    January 11, 2019  |  Rik Koncel-Kedziorski
    In this talk I will introduce a new model for encoding knowledge graphs and generating texts from them. Graphical knowledge representations are ubiquitous in computing, but pose a challenge for text generation techniques due to their non-hierarchical structure and collapsing of long-distance dependencies. Moreover, automatically extracted knowledge is noisy, and so requires a text generation model to be robust. To address these issues, I introduce a novel attention-based encoder-decoder model for knowledge-graph-to-text generation. This model extends the popular Transformer for text encoding to function over graph-structured inputs. The result is a powerful, general model for graph encoding which can incorporate global structural information when contextualizing vertices in their local neighborhoods. Through detailed automatic and human evaluations I demonstrate the value of conditioning text generation on graph-structured knowledge, as well as the superior performance of the proposed model compared to recent work.
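As a rough illustration of the core idea (not the talk's actual architecture), self-attention can be restricted to graph neighbors with an adjacency mask. This toy single-head version uses identity query/key/value projections and numpy only:

```python
import numpy as np

def graph_attention(x, adj):
    """Single-head self-attention restricted to graph neighbors: a minimal
    stand-in for a graph-structured Transformer encoder layer.
    x: (n, d) vertex features; adj: (n, n) 0/1 matrix, 1 = may attend."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)              # queries/keys = identity projections
    scores = np.where(adj > 0, scores, -1e9)   # mask out non-neighbors
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                         # vertices contextualized by neighbors

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
adj = np.eye(4, dtype=int)                     # self-loops for every vertex
adj[0, 1] = adj[1, 0] = 1                      # one edge between vertices 0 and 1
out = graph_attention(x, adj)
print(out.shape)  # (4, 8); isolated vertices keep their own features unchanged
```

In a real model the projections would be learned and the attention multi-headed; the adjacency mask is what lets standard Transformer machinery contextualize vertices within their local neighborhoods.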
  • Using cognitive science to evaluate and interpret neural language models

    December 14, 2018  |  Tal Linzen
    Recent technological advances have made it possible to train recurrent neural networks (RNNs) on a much larger scale than before. While these networks have proved effective in NLP applications, their limitations and the mechanisms by which they accomplish their goals are poorly understood. In this talk, I will show how methods from cognitive science can help elucidate and improve the syntactic representations employed by RNN language models. I will review evidence that RNN language models are able to process syntactic dependencies in typical sentences with considerable success across languages (Linzen et al 2016, TACL; Gulordava et al. 2018, NAACL). However, when evaluated on experimentally controlled materials, their error rate increases sharply; explicit syntactic supervision mitigates the drop in performance (Marvin & Linzen 2018, EMNLP). Finally, I will discuss how language model adaptation can provide a tool for probing RNN syntactic representations, following the inspiration of the syntactic priming paradigm from psycholinguistics (van Schijndel & Linzen 2018, EMNLP).
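The evaluation paradigm behind these studies can be sketched as minimal-pair accuracy: a model passes an item if it scores the grammatical variant above its ungrammatical counterpart. The scorer below is a crude stand-in heuristic, not an actual RNN language model; the two items are classic agreement examples from this literature.

```python
def evaluate_agreement(score, minimal_pairs):
    """Fraction of minimal pairs where the model scores the grammatical
    sentence above its ungrammatical counterpart (ties count as failures)."""
    correct = sum(score(good) > score(bad) for good, bad in minimal_pairs)
    return correct / len(minimal_pairs)

pairs = [
    ("the keys to the cabinet are on the table",
     "the keys to the cabinet is on the table"),
    ("the author that the guards like laughs",
     "the author that the guards like laugh"),
]

# Stand-in scorer: a crude heuristic where a real LM's log-probability would go.
def toy_score(sentence):
    return 1.0 if " are " in sentence or sentence.endswith("laughs") else 0.0

acc = evaluate_agreement(toy_score, pairs)
print(acc)  # 1.0 for this toy scorer on these two items
```

Swapping in a trained language model's sentence log-probability as `score` gives the targeted syntactic evaluation the talk describes.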
  • Natural Language Interface for Web Interaction via Compositional Generation

    December 12, 2018  |  Panupong (Ice) Pasupat
    Natural language understanding models have achieved good enough performance for commercial products such as virtual assistants. However, their scope is still mostly limited to preselected domains or simple sentences. I will present my line of work, which extends natural language understanding along two frontiers: handling open-domain environments such as the Web (breadth) and handling complex sentences (depth). The presentation will focus on the task of answering complex questions on semi-structured Web tables using question-answer pairs as supervision. Within the framework of semantic parsing, i.e., learning to parse sentences into executable logical forms, I will explain our proposed methods to (1) flexibly handle lexical and syntactic mismatches between questions and logical forms, (2) filter out misleading logical forms that happen to give correct answers, and (3) reuse parts of good logical forms to make training more efficient. I will also briefly mention how these ideas can be applied to several other natural language understanding tasks for Web interaction.
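A minimal sketch of learning from question-answer pairs: candidate logical forms are executed against the table, and those matching the gold answer are kept as training signal. The table and the candidate "programs" here are invented for illustration and are not from the actual system.

```python
# Toy semi-structured table and candidate "logical forms"
# (simple executable programs over the table).
table = [
    {"city": "Seattle", "population": 744955},
    {"city": "Portland", "population": 654741},
]

candidates = [
    ("argmax_population_city", lambda t: max(t, key=lambda r: r["population"])["city"]),
    ("argmin_population_city", lambda t: min(t, key=lambda r: r["population"])["city"]),
    ("first_row_city",         lambda t: t[0]["city"]),
]

gold = "Seattle"  # answer to: "which city has the larger population?"

# Denotation-guided filtering: keep candidates whose execution matches the answer.
consistent = [name for name, program in candidates if program(table) == gold]
print(consistent)  # ['argmax_population_city', 'first_row_city']
```

Note that `first_row_city` survives the filter despite being spurious: it returns the right answer for the wrong reason, which is exactly the kind of misleading logical form that point (2) in the abstract aims to handle.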
  • Towards Agents that can See, Talk, and Act

    December 11, 2018  |  Abhishek Das
    Building intelligent agents that possess the ability to perceive the rich visual environment around us, communicate this understanding in natural language to humans and other agents, and execute actions in a physical environment, is a long-term goal of Artificial Intelligence. In this talk, I will present some of my recent work at various points on this spectrum of connecting vision and language to actions: from Visual Dialog (CVPR17, ICCV17, HCOMP17) -- where we develop models capable of holding free-form visually-grounded natural language conversation towards a downstream goal and ways to evaluate them -- to Embodied Question Answering (CVPR18, CoRL18) -- where we augment these models to actively navigate in simulated environments and gather visual information necessary for answering questions.
  • Learning Common Sense: A Grand Challenge for Academic AI Research

    December 6, 2018  |  Oren Etzioni
    Dr. Oren Etzioni, Chief Executive Officer of the Allen Institute for AI and professor of computer science at the University of Washington, addresses one of the Holy Grails of AI: acquiring, representing and utilizing common-sense knowledge, during a distinguished lecture series held at the Office of Naval Research.
  • Learning with Less Supervision in a Multilingual World

    November 16, 2018  |  Shyam Upadhyay
    Lack of annotated data is a constant obstacle in developing machine learning models, especially for natural language processing (NLP) tasks. In this talk, I explore this problem in the realm of multilingual NLP, where the challenges become more acute because most annotation efforts in the NLP community have been aimed predominantly at English. In particular, I will discuss two techniques for overcoming the lack of annotation in multilingual settings. I focus on two information extraction tasks --- cross-lingual entity linking and name transliteration to English --- for which traditional approaches rely on generous amounts of supervision in the language of interest. In the first part of the talk, I show how we can perform cross-lingual entity linking by sharing supervision across languages through a shared multilingual feature space. This approach enables us to complement the supervision in a low-resource language with supervision from a high-resource language. In the second part, I show how we use freely available knowledge and unlabeled data to substitute for the lack of supervision for the transliteration task. Key to the approach is a constrained bootstrapping algorithm that mines new example pairs for improving the transliteration model. Results on both tasks show the effectiveness of these approaches, and pave the way for future tasks involving the 3-way interaction of text, knowledge, and reasoning, in a multilingual setting.
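The constrained bootstrapping loop can be sketched generically: retrain a model on the current pairs, mine only candidates scoring above a confidence threshold, and repeat. Everything below (the character co-occurrence "model", the names, the data) is an illustrative skeleton, not the actual transliteration system.

```python
from collections import Counter

def train_model(pairs):
    """Toy model: character co-occurrence counts over aligned pairs."""
    counts = Counter()
    for src, tgt in pairs:
        for a in src:
            for b in tgt:
                counts[(a, b)] += 1
    return counts

def score(model, src, tgt):
    """Average co-occurrence weight between source and target characters."""
    total = sum(model[(a, b)] for a in src for b in tgt)
    return total / (len(src) * len(tgt) or 1)

def bootstrap(seed_pairs, candidate_pairs, threshold, rounds=2):
    """Constrained bootstrapping: retrain on current pairs, then add only
    candidates whose confidence clears the threshold."""
    train = list(seed_pairs)
    for _ in range(rounds):
        model = train_model(train)
        mined = [(s, t) for s, t in candidate_pairs
                 if (s, t) not in train and score(model, s, t) >= threshold]
        train.extend(mined)
    return train

mined = bootstrap([("anna", "anna")], [("ana", "ana"), ("ana", "xyz")], threshold=1.0)
print(mined)  # the high-scoring candidate is mined; the unrelated one is rejected
```

The threshold is the "constraint" here; a real system would add further constraints (e.g., phonetic plausibility) to keep the self-training loop from drifting.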