Videos

See AI2's full collection of videos on our YouTube channel.
  • Event Discovery, Content Models, and Relevance

December 4, 2014  |  Aria Haghighi
    I discuss three problems in applied natural language processing and machine learning: event discovery from distributed discourse, document content models for information extraction, and relevance engineering for a large-scale personalization engine. The first two are information extraction problems over social media which attempt to utilize richer structure and context for decision making; these sections reflect work from the tail end of my purely academic work. The relevance section will discuss work done while at my former startup Prismatic and will focus on issues arising from productionizing real-time machine learning. Along the way, I'll share my thoughts and experience around productizing research and interesting future directions.
  • Toward Scene Understanding

    December 3, 2014  |  Roozbeh Mottaghi
    Scene understanding is one of the holy grails of computer vision, and despite decades of research, it is still considered an unsolved problem. In this talk, I will present a number of methods, which help us take a step further towards the ultimate goal of holistic scene understanding. In particular, I will talk about our work on object detection, 3D pose estimation, and contextual reasoning, and show that modeling these tasks jointly enables better understanding of scenes. At the end of the talk, I will describe our recent work on providing richer descriptions for objects in terms of their viewpoint and sub-category information.
  • Open and Exploratory Extraction of Relations (and Common Sense) from Large Text Corpora

    November 10, 2014  |  Alan Akbik
The use of deep syntactic information such as typed dependencies has been shown to be very effective in Information Extraction (IE). Despite this potential, the process of manually creating rule-based information extractors that operate on dependency trees is not intuitive for persons without an extensive NLP background. In this talk, I present an approach and a graphical tool that allow even novice users to quickly and easily define extraction patterns over dependency trees and execute them directly on a very large text corpus. This enables users to explore a corpus for structured information of interest in a highly interactive and data-guided fashion, and allows them to create extractors for the semantic relations they find interesting. I then present a project in which we use Information Extraction to automatically construct a very large common sense knowledge base. This knowledge base, dubbed "The Weltmodell", contains common sense facts that pertain to common noun concepts; an example is the concept "coffee", for which we know that it is typically drunk by a person or brought by a waiter. I show how we mine such information from very large amounts of text, how we quantify notions such as typicality and similarity, and discuss some ideas on how such world knowledge can be used to address reasoning tasks.
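    The graphical tool from the talk isn't reproduced here, but the core idea (a declarative pattern matched against dependency trees) can be sketched with spaCy's DependencyMatcher. The "drink" relation and the example sentence below are invented for illustration, not taken from the talk.

```python
# A minimal sketch of rule-based extraction over dependency trees, using
# spaCy's DependencyMatcher in place of the tool described in the talk.
# The "drink" relation and the example sentence are illustrative only.
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.load("en_core_web_sm")
matcher = DependencyMatcher(nlp.vocab)

# Pattern: a form of "drink" with a nominal subject and a direct object,
# mirroring a fact like "coffee is typically drunk by a person".
pattern = [
    {"RIGHT_ID": "verb", "RIGHT_ATTRS": {"LEMMA": "drink"}},
    {"LEFT_ID": "verb", "REL_OP": ">", "RIGHT_ID": "subj",
     "RIGHT_ATTRS": {"DEP": "nsubj"}},
    {"LEFT_ID": "verb", "REL_OP": ">", "RIGHT_ID": "obj",
     "RIGHT_ATTRS": {"DEP": "dobj"}},
]
matcher.add("drinks_relation", [pattern])

doc = nlp("A person drinks coffee every morning.")
for _, token_ids in matcher(doc):
    # token_ids follow the order of the pattern: verb, subj, obj
    verb, subj, obj = (doc[i] for i in token_ids)
    print(f"drinks({subj.text}, {obj.text})")  # drinks(person, coffee)
```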
  • Deep Natural Language Semantics by Combining Logical and Distributional Methods using Probabilistic Logic

    November 4, 2014  |  Raymond Mooney
Traditional logical approaches to semantics and newer distributional or vector space approaches have complementary strengths and weaknesses. We have developed methods that integrate logical and distributional models by using a CCG-based parser to produce a detailed logical form for each sentence, and combining the result with soft inference rules derived from distributional semantics that connect the meanings of their component words and phrases. For recognizing textual entailment (RTE) we use Markov Logic Networks (MLNs) to combine these representations, and for Semantic Textual Similarity (STS) we use Probabilistic Soft Logic (PSL). We present experimental results on standard benchmark datasets for these problems and emphasize the advantages of combining the logical structure of sentences with statistical knowledge mined from large corpora.
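    One ingredient of this combination can be sketched concretely: a soft inference rule whose weight comes from distributional similarity. The toy vectors and rule syntax below are illustrative; the actual system derives logical forms with a CCG parser and performs inference with MLNs or PSL.

```python
# A minimal sketch of deriving a *soft* inference rule from distributional
# similarity. The embeddings and the rule format are invented stand-ins,
# not the system's actual representation.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for embeddings learned from a large corpus.
vectors = {
    "man": np.array([0.9, 0.1, 0.3]),
    "guy": np.array([0.85, 0.15, 0.35]),
    "car": np.array([0.1, 0.9, 0.2]),
}

def soft_rule(lhs, rhs):
    """Emit a weighted first-order rule connecting two word predicates."""
    w = cosine(vectors[lhs], vectors[rhs])
    return f"{w:.3f} : {lhs}(x) -> {rhs}(x)"

print(soft_rule("man", "guy"))  # high weight: near-synonyms
print(soft_rule("man", "car"))  # low weight: unrelated concepts
```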
  • Large-Scale Paraphrasing for Natural Language Generation

    October 1, 2014  |  Chris Callison-Burch
I will present my method for learning paraphrases (pairs of English expressions with equivalent meaning) from bilingual parallel corpora, which are more commonly used to train statistical machine translation systems. My method equates pairs of English phrases like "thrown into jail" and "imprisoned" when they share an aligned foreign phrase like "festgenommen". Because bitexts are large and because a phrase can be aligned to many different foreign phrases, including phrases in multiple foreign languages, the method extracts a diverse set of paraphrases. For "thrown into jail", we learn not only "imprisoned" but also "arrested", "detained", "incarcerated", "jailed", "locked up", "taken into custody", and "thrown into prison", along with a set of incorrect or noisy paraphrases. I'll show a number of methods for filtering out the poor paraphrases: defining a paraphrase probability calculated from translation model probabilities, and re-ranking the candidate paraphrases using monolingual distributional similarity measures.
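    In the bilingual pivoting formulation, the paraphrase probability the abstract mentions is a sum over shared foreign phrases: p(e2 | e1) = Σ_f p(e2 | f) · p(f | e1). A small sketch of that computation follows; the phrase-table probabilities are made up for illustration.

```python
# A toy rendering of paraphrase scoring by pivoting through foreign
# phrases. The probabilities below are invented, not from a real bitext.
from collections import defaultdict

# p(f | e): English phrase -> {foreign phrase: prob}
e2f = {
    "thrown into jail": {"festgenommen": 0.6, "eingesperrt": 0.4},
}
# p(e | f): foreign phrase -> {English phrase: prob}
f2e = {
    "festgenommen": {"thrown into jail": 0.3, "imprisoned": 0.4, "arrested": 0.3},
    "eingesperrt": {"thrown into jail": 0.2, "imprisoned": 0.5, "jailed": 0.3},
}

def paraphrase_probs(e1):
    """Score candidate paraphrases of e1 by summing over pivot phrases."""
    scores = defaultdict(float)
    for f, p_f_given_e1 in e2f[e1].items():
        for e2, p_e2_given_f in f2e[f].items():
            if e2 != e1:
                scores[e2] += p_e2_given_f * p_f_given_e1
    return dict(scores)

print(paraphrase_probs("thrown into jail"))
# roughly {'imprisoned': 0.44, 'arrested': 0.18, 'jailed': 0.12}
```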
  • Modeling Biological Processes for Reading Comprehension

    August 5, 2014  |  Jonathan Berant
    Machine reading calls for programs that read and understand text, but most current work only attempts to extract facts from redundant web-scale corpora. In this talk, I will focus on a new reading comprehension task that requires complex reasoning over a single document. The input is a paragraph describing a biological process, and the goal is to answer questions that require an understanding of the relations between entities and events in the process. To answer the questions, we first predict a rich structure representing the process in the paragraph. Then, we map the question to a formal query, which is executed against the predicted structure. We demonstrate that answering questions via predicted structures substantially improves accuracy over baselines that use shallower representations.
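    As a rough illustration of the final step, here is a toy process structure and a query executed against it. The schema, the example process, and the relation names are invented for illustration, not the paper's actual representation.

```python
# A toy "predicted structure" for a process paragraph: events as nodes,
# labeled edges as relations between them. Everything here is invented
# to illustrate querying a structure, not the paper's formalism.
events = ["absorb_light", "split_water", "release_oxygen"]
relations = [
    ("absorb_light", "enables", "split_water"),
    ("split_water", "causes", "release_oxygen"),
]

def query(relation, target):
    """Which events stand in `relation` to `target`?"""
    return [src for src, rel, dst in relations
            if rel == relation and dst == target]

# "What causes oxygen to be released?" mapped to a formal query:
print(query("causes", "release_oxygen"))  # ['split_water']
```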
  • Extracting Knowledge from Text with Tractable Markov Logic and Symmetry-Based Semantic Parsing

    July 25, 2014  |  Pedro Domingos
    Building very large commonsense knowledge bases and reasoning with them is a long-standing dream of AI. Today that knowledge is available in text; all we have to do is extract it. Text, however, is extremely messy, noisy, ambiguous, incomplete, and variable. A formal representation of it needs to be both probabilistic and relational, either of which leads to intractable inference and therefore poor scalability. In the first part of this talk I will describe tractable Markov logic, a language that is restricted enough to be tractable yet expressive enough to represent much of the commonsense knowledge contained in text. Even then, transforming text into a formal representation of its meaning remains a difficult problem. There is no agreement on what the representation primitives should be, and labeled data in the form of sentence-meaning pairs for training a semantic parser is very hard to come by. In the second part of the talk I will propose a solution to both these problems, based on concepts from symmetry group theory. A symmetry of a sentence is a syntactic transformation that does not change its meaning. Learning a semantic parser for a language is discovering its symmetry group, and the meaning of a sentence is its orbit under the group (i.e., the set of all sentences it can be mapped to by composing symmetries). Preliminary experiments indicate that tractable Markov logic and symmetry-based semantic parsing can be powerful tools for scalably extracting knowledge from text.
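    The orbit idea can be made concrete with a toy example. Here the "symmetry group" is just two hand-written meaning-preserving transformations (the approach described in the talk learns these from data), and the orbit is computed by closing a sentence under their composition.

```python
# A toy rendering of a sentence's orbit under meaning-preserving
# transformations. Both rules below are hypothetical hand-written
# examples; the talk's approach learns the symmetry group instead.

def passivize(s):
    # Hypothetical rule: "X bought Y" <-> "Y was bought by X"
    if " bought " in s and " was bought by " not in s:
        x, y = s.split(" bought ", 1)
        return f"{y} was bought by {x}"
    return s

def synonym(s):
    # Hypothetical lexical substitution rule.
    return s.replace("bought", "purchased")

TRANSFORMS = [passivize, synonym]

def orbit(sentence):
    """Close a sentence under composition of the transformations (BFS)."""
    seen, frontier = {sentence}, [sentence]
    while frontier:
        s = frontier.pop()
        for t in TRANSFORMS:
            s2 = t(s)
            if s2 not in seen:
                seen.add(s2)
                frontier.append(s2)
    return seen

# All four variants share one orbit, i.e. one "meaning".
for s in sorted(orbit("Alice bought a car")):
    print(s)
```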
  • Paul Allen Discusses AI2 and the Future of AI (Discussion of AI2 begins at 17:30)

    June 4, 2014  |  Paul Allen
Paul Allen discusses his vision for the future of AI and AI2 in this fireside chat, moderated by Gary Marcus of New York University, at the Allen Institute for Brain Science's 10th Anniversary Symposium. The AI2-related discussion begins at 17:30.
  • Crowdsourcing Insights into Problem Structure for Scientific Discovery

    May 13, 2014  |  Bart Selman
    In recent years, there has been tremendous progress in solving large-scale reasoning and optimization problems. Central to this progress has been the ability to automatically uncover hidden problem structure. Nevertheless, for the very hardest computational tasks, human ingenuity still appears indispensable. We show that automated reasoning strategies and human insights can effectively complement each other, leading to hybrid human-computer solution strategies that outperform other methods by orders of magnitude. We illustrate our approach with challenges in scientific discovery in the areas of finite mathematics and materials science.
  • Learning and Inference for Natural Language Understanding

    March 31, 2014  |  Dan Roth
Machine learning and inference methods have become ubiquitous and have had a broad impact on a range of scientific advances and technologies, and on our ability to make sense of large amounts of data. Research in Natural Language Processing has both benefited from and contributed to advancements in these methods, and it provides an excellent example of some of the challenges we face moving forward. I will describe some of our research in developing learning and inference methods in pursuit of natural language understanding. In particular, I will address what I view as some of the key challenges, including (i) learning models from natural interactions, without direct supervision, (ii) knowledge acquisition and the development of inference models capable of incorporating knowledge and reasoning with it, and (iii) scalability and adaptation: learning to accelerate inference during the lifetime of a learning system.