Videos

See AI2's full collection of videos on our YouTube channel.
  • Spectral Probabilistic Modeling and Applications to Natural Language Processing

    March 3, 2015  |  Ankur Parikh
    Being able to effectively model latent structure in data is a key challenge in modern AI research, particularly in Natural Language Processing (NLP), where it is crucial to discover and leverage syntactic and semantic relationships that may not be explicitly annotated in the training set. Unfortunately, while incorporating latent variables to represent hidden structure can substantially increase representation power, the key problems of model design and learning become significantly more complicated. For example, unlike fully observed models, latent variable models can suffer from non-identifiability, making it difficult to distinguish the desired latent structure from spurious alternatives. Moreover, learning is usually formulated as a non-convex optimization problem, leading to the use of local search heuristics that may become trapped in local optima.
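    To make the local-optima concern concrete, here is a minimal sketch (my illustration, not material from the talk): fitting the same Gaussian mixture with EM from different random initializations can land in different local optima, which is exactly the failure mode that spectral, method-of-moments estimators are designed to avoid. It assumes scikit-learn is installed.

    ```python
    # Minimal sketch: EM on a Gaussian mixture can converge to different
    # local optima depending on initialization -- one motivation for
    # spectral / method-of-moments estimators.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Toy data: three well-separated 1-D clusters.
    X = np.concatenate([rng.normal(-5, 1, 300),
                        rng.normal(0, 1, 300),
                        rng.normal(6, 1, 300)]).reshape(-1, 1)

    # Fit the same 3-component mixture from several random initializations.
    for seed in range(5):
        gm = GaussianMixture(n_components=3, n_init=1, init_params="random",
                             random_state=seed).fit(X)
        print(f"seed={seed}  avg log-likelihood={gm.score(X):.4f}")
    # Differing log-likelihoods across seeds indicate distinct local optima.
    ```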
  • Multimodal Science Learning

    February 26, 2015  |  Ken Forbus
    Creating systems that can work with people as apprentices, using natural modalities, is a key step towards human-level AI. This talk will describe how my group is combining research on sketch understanding, natural language understanding, and analogical learning within the Companion cognitive architecture to create systems that can reason and learn about science by working with people. Some promising results will be described (e.g. solving conceptual physics problems involving sketches, modeling conceptual change, learning by reading) as well as work in progress (e.g. interactive knowledge capture via analogy).
  • Semi-Supervised Learning In Realistic Settings

    February 5, 2015  |  Bhavana Dalvi
    Semi-supervised learning (SSL) has been widely used for over a decade for various tasks -- including knowledge acquisition -- that lack large amounts of training data. My research proposes a novel learning scenario in which the system knows a few categories in advance, but the remaining categories are unanticipated and must be discovered from the unlabeled data. With enormous unlabeled datasets available at low cost, and given the difficulty of collecting labeled data for all possible categories, it becomes even more important to adapt traditional semi-supervised learning techniques to such realistic settings.
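    One way to picture the scenario (a sketch under my own assumptions, not the talk's actual algorithm): a nearest-centroid learner that assigns unlabeled points to known categories, but spawns a new category whenever a point is far from every known centroid.

    ```python
    # Sketch: semi-supervised assignment that can discover unanticipated
    # categories. The threshold and data are illustrative choices.
    import numpy as np

    def exploratory_assign(centroids, x, new_class_threshold=4.0):
        """Index of the closest centroid, or -1 meaning 'new class'."""
        dists = np.linalg.norm(centroids - x, axis=1)
        best = int(np.argmin(dists))
        return best if dists[best] < new_class_threshold else -1

    # Two seed classes, plus a surprise cluster around (10, 10).
    centroids = np.array([[0.0, 0.0], [5.0, 0.0]])
    rng = np.random.default_rng(1)
    unlabeled = np.vstack([rng.normal([0, 0], 0.5, (20, 2)),
                           rng.normal([10, 10], 0.5, (20, 2))])

    for x in unlabeled:
        if exploratory_assign(centroids, x) == -1:
            centroids = np.vstack([centroids, x])  # seed a new category
        # (a full system would also re-estimate centroids, EM-style)
    print(f"{len(centroids) - 2} new category(ies) discovered")
    ```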
  • Bayesian Case Model — Generative Approach for Case-based Reasoning and Prototype

    January 7, 2015  |  Been Kim
    I will present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the "quintessential" observations that best represent clusters in a data set, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art.
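    As a toy illustration of the prototype-plus-subspace idea (my sketch, not BCM's actual joint inference), one can pick each cluster's prototype as the real observation that sits closest to the cluster's center when distance is measured only on that cluster's important features:

    ```python
    # Sketch: a prototype is an actual member observation, chosen on the
    # cluster's "important" feature subspace; the other features are noise.
    import numpy as np

    def prototype(points, important_features):
        """Index of the member nearest the mean on the given subspace."""
        sub = points[:, important_features]
        center = sub.mean(axis=0)
        return int(np.argmin(np.linalg.norm(sub - center, axis=1)))

    rng = np.random.default_rng(0)
    # Features 0 and 1 characterize the cluster; 2 and 3 are pure noise.
    cluster = rng.normal([2.0, -1.0, 0.0, 0.0], [0.3, 0.3, 5.0, 5.0], (50, 4))
    idx = prototype(cluster, important_features=[0, 1])
    print("prototype observation:", cluster[idx])
    ```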
  • Event Discovery, Content Models, and Relevance

    December 4, 2014  |  Aria Haghighi
    I discuss three problems in applied natural language processing and machine learning: event discovery from distributed discourse, document content models for information extraction, and relevance engineering for a large-scale personalization engine. The first two are information extraction problems over social media that attempt to exploit richer structure and context for decision making; these sections reflect the tail end of my purely academic work. The relevance section will discuss work done at my former startup Prismatic and will focus on issues arising from productionizing real-time machine learning. Along the way, I'll share my thoughts and experience around productizing research, as well as interesting future directions.
  • Toward Scene Understanding

    December 3, 2014  |  Roozbeh Mottaghi
    Scene understanding is one of the holy grails of computer vision, and despite decades of research, it is still considered an unsolved problem. In this talk, I will present a number of methods that help us take a step further towards the ultimate goal of holistic scene understanding. In particular, I will talk about our work on object detection, 3D pose estimation, and contextual reasoning, and show that modeling these tasks jointly enables better understanding of scenes. At the end of the talk, I will describe our recent work on providing richer descriptions of objects in terms of their viewpoint and sub-category information.
  • Open and Exploratory Extraction of Relations (and Common Sense) from Large Text Corpora

    November 10, 2014  |  Alan Akbik
    The use of deep syntactic information such as typed dependencies has been shown to be very effective in Information Extraction (IE). Despite this potential, the process of manually creating rule-based information extractors that operate on dependency trees is not intuitive for people without an extensive NLP background. In this talk, I present an approach and a graphical tool that allow even novice users to quickly and easily define extraction patterns over dependency trees and directly execute them on a very large text corpus. This enables users to explore a corpus for structured information of interest in a highly interactive and data-guided fashion, and allows them to create extractors for those semantic relations they find interesting. I then present a project in which we use Information Extraction to automatically construct a very large common sense knowledge base. This knowledge base - dubbed "The Weltmodell" - contains common sense facts that pertain to common noun concepts; an example of this is the concept "coffee", for which we know that it is typically drunk by a person or brought by a waiter. I show how we mine such information from very large amounts of text, how we quantify notions such as typicality and similarity, and discuss some ideas about how such world knowledge can be used to address reasoning tasks.
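    For a flavor of what a declarative pattern over a dependency tree looks like (my sketch using spaCy's DependencyMatcher, not the tool from the talk; assumes spaCy v3 and its en_core_web_sm model are installed):

    ```python
    # Sketch: extract (subject, drink, object) triples by matching a verb
    # with an nsubj and a dobj dependent in the parse tree.
    import spacy
    from spacy.matcher import DependencyMatcher

    nlp = spacy.load("en_core_web_sm")
    matcher = DependencyMatcher(nlp.vocab)
    pattern = [
        {"RIGHT_ID": "verb", "RIGHT_ATTRS": {"LEMMA": "drink"}},
        {"LEFT_ID": "verb", "REL_OP": ">",
         "RIGHT_ID": "drinker", "RIGHT_ATTRS": {"DEP": "nsubj"}},
        {"LEFT_ID": "verb", "REL_OP": ">",
         "RIGHT_ID": "beverage", "RIGHT_ATTRS": {"DEP": "dobj"}},
    ]
    matcher.add("DRINKS", [pattern])

    doc = nlp("Every morning the customer drinks strong coffee.")
    for _, token_ids in matcher(doc):
        verb, subj, obj = (doc[i] for i in token_ids)
        print(f"({subj.text}, {verb.lemma_}, {obj.text})")
    # -> (customer, drink, coffee)
    ```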
  • Deep Natural Language Semantics by Combining Logical and Distributional Methods using Probabilistic Logic

    November 4, 2014  |  Raymond Mooney
    Traditional logical approaches to semantics and newer distributional or vector space approaches have complementary strengths and weaknesses. We have developed methods that integrate logical and distributional models by using a CCG-based parser to produce a detailed logical form for each sentence, and combining the result with soft inference rules derived from distributional semantics that connect the meanings of the sentences' component words and phrases. For recognizing textual entailment (RTE) we use Markov Logic Networks (MLNs) to combine these representations, and for Semantic Textual Similarity (STS) we use Probabilistic Soft Logic (PSL). We present experimental results on standard benchmark datasets for these problems and emphasize the advantages of combining the logical structure of sentences with statistical knowledge mined from large corpora.
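    One small piece of this pipeline lends itself to a sketch (illustrative only; the toy vectors below are invented, not corpus-derived): the weight of a soft inference rule such as man(x) => person(x) can be set from the distributional similarity of the two predicates.

    ```python
    # Sketch: cosine similarity between word vectors as the weight of a
    # soft inference rule handed to the MLN / PSL inference step.
    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    embeddings = {                  # toy stand-ins for corpus-derived vectors
        "man":    np.array([0.9, 0.1, 0.3]),
        "person": np.array([0.8, 0.2, 0.35]),
        "rock":   np.array([0.1, 0.9, 0.0]),
    }

    for lhs, rhs in [("man", "person"), ("rock", "person")]:
        w = cosine(embeddings[lhs], embeddings[rhs])
        print(f"{lhs}(x) => {rhs}(x)   weight = {w:.2f}")
    # High-weight rules act as soft constraints; low-weight ones barely fire.
    ```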
  • Large-Scale Paraphrasing for Natural Language Generation

    October 1, 2014  |  Chris Callison-Burch
    I will present my method for learning paraphrases - pairs of English expressions with equivalent meaning - from bilingual parallel corpora, which are more commonly used to train statistical machine translation systems. My method equates pairs of English phrases like "thrown into jail" and "imprisoned" when they share an aligned foreign phrase like "festgenommen". Because bitexts are large, and because a phrase can be aligned to many different foreign phrases, including phrases in multiple foreign languages, the method extracts a diverse set of paraphrases. For "thrown into jail", we not only learn "imprisoned", but also "arrested", "detained", "incarcerated", "jailed", "locked up", "taken into custody", and "thrown into prison", along with a set of incorrect/noisy paraphrases. I'll show a number of methods for filtering out the poor paraphrases: defining a paraphrase probability calculated from translation model probabilities, and re-ranking the candidate paraphrases using monolingual distributional similarity measures.
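    The pivot computation is compact enough to sketch (with invented probabilities; the real tables come from a trained translation model): the paraphrase probability marginalizes over shared foreign translations, p(e2|e1) = sum_f p(e2|f) * p(f|e1).

    ```python
    # Sketch: pivot-based paraphrase probability from toy translation tables.
    p_f_given_e = {("festgenommen", "thrown into jail"): 0.4,
                   ("inhaftiert",   "thrown into jail"): 0.3}
    p_e_given_f = {("imprisoned", "festgenommen"): 0.5,
                   ("arrested",   "festgenommen"): 0.4,
                   ("imprisoned", "inhaftiert"):   0.7}

    def paraphrase_prob(e2, e1):
        return sum(p_e_given_f.get((e2, f), 0.0) * pf
                   for (f, src), pf in p_f_given_e.items() if src == e1)

    for cand in ["imprisoned", "arrested"]:
        print(cand, round(paraphrase_prob(cand, "thrown into jail"), 3))
    # imprisoned 0.41, arrested 0.16 -- candidates can then be re-ranked
    # with monolingual distributional similarity to drop noisy paraphrases.
    ```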
  • Modeling Biological Processes for Reading Comprehension

    August 5, 2014  |  Jonathan Berant
    Machine reading calls for programs that read and understand text, but most current work only attempts to extract facts from redundant web-scale corpora. In this talk, I will focus on a new reading comprehension task that requires complex reasoning over a single document. The input is a paragraph describing a biological process, and the goal is to answer questions that require an understanding of the relations between entities and events in the process. To answer the questions, we first predict a rich structure representing the process in the paragraph. Then, we map the question to a formal query, which is executed against the predicted structure. We demonstrate that answering questions via predicted structures substantially improves accuracy over baselines that use shallower representations.
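    The query-against-predicted-structure step can be pictured with a toy example (my illustration; the event names and relation inventory are made up, not the paper's schema):

    ```python
    # Sketch: a predicted process structure as a small graph of events and
    # relations, and a question answered by a formal query over that graph.
    process = {
        "events": ["light absorption", "electron excitation", "ATP synthesis"],
        "relations": [("light absorption", "causes", "electron excitation"),
                      ("electron excitation", "enables", "ATP synthesis")],
    }

    def query(structure, relation, target):
        """Which events stand in `relation` to `target`?"""
        return [src for src, rel, dst in structure["relations"]
                if rel == relation and dst == target]

    # "What causes electron excitation?" becomes:
    print(query(process, "causes", "electron excitation"))
    # -> ['light absorption']
    ```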