See AI2’s full collection of videos on our YouTube channel.
    • November 10, 2014

      Alan Akbik

      The use of deep syntactic information such as typed dependencies has been shown to be very effective in Information Extraction (IE). Despite this potential, the process of manually creating rule-based information extractors that operate on dependency trees is not intuitive for persons without an extensive NLP background. In this talk, I present an approach and a graphical tool that allow even novice users to quickly and easily define extraction patterns over dependency trees and directly execute them on a very large text corpus. This enables users to explore a corpus for structured information of interest in a highly interactive and data-guided fashion, and allows them to create extractors for those semantic relations they find interesting. I then present a project in which we use Information Extraction to automatically construct a very large common sense knowledge base. This knowledge base - dubbed "The Weltmodell" - contains common sense facts that pertain to common noun concepts; an example of this is the concept "coffee", for which we know that it is typically drunk by a person or brought by a waiter. I show how we mine such information from very large amounts of text, how we quantify notions such as typicality and similarity, and discuss some ideas on how such world knowledge can be used to address reasoning tasks.
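
      As a rough illustration of the kind of pattern-over-dependency-tree extraction described above, the sketch below matches subject and object dependents of the verb "drink" to harvest tuples like drink(person, coffee). It uses spaCy purely for illustration; the graphical tool and corpus infrastructure from the talk are not reproduced, and the pattern itself is hypothetical.

      ```python
      # Minimal sketch: harvest drink(subject, object) tuples by matching a
      # hand-written pattern over dependency parses. spaCy is an assumption
      # here, not the parser or tool described in the talk.
      import spacy

      nlp = spacy.load("en_core_web_sm")

      def extract_drink_facts(text):
          """Yield (subject, object) lemma pairs for the verb 'drink'."""
          doc = nlp(text)
          for token in doc:
              if token.pos_ == "VERB" and token.lemma_ == "drink":
                  subjects = [c for c in token.children if c.dep_ == "nsubj"]
                  objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
                  for subj in subjects:
                      for obj in objects:
                          yield (subj.lemma_, obj.lemma_)

      print(list(extract_drink_facts("The customer drank a strong coffee.")))
      # -> [('customer', 'coffee')]
      # Aggregating such tuples over a very large corpus gives the counts
      # behind notions like typicality ("coffee is typically drunk by a person").
      ```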

    • November 4, 2014

      Raymond Mooney

      Traditional logical approaches to semantics and newer distributional or vector space approaches have complementary strengths and weaknesses. We have developed methods that integrate logical and distributional models by using a CCG-based parser to produce a detailed logical form for each sentence, and combining the result with soft inference rules derived from distributional semantics that connect the meanings of their component words and phrases. For recognizing textual entailment (RTE) we use Markov Logic Networks (MLNs) to combine these representations, and for Semantic Textual Similarity (STS) we use Probabilistic Soft Logic (PSL). We present experimental results on standard benchmark datasets for these problems and emphasize the advantages of combining the logical structure of sentences with statistical knowledge mined from large corpora.
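
      As a hedged illustration of how the distributional side can feed the logical side, one common recipe (the exact formulation used in these systems may differ) turns a pair of distributionally similar predicates into a soft, weighted inference rule whose weight is derived from vector-space similarity:

      ```latex
      % Illustrative soft lexical rule in a weighted logic (weighting scheme assumed):
      w : \forall x.\; \mathit{slay}(x) \rightarrow \mathit{kill}(x),
      \qquad w = f\big(\cos(\vec{v}_{\mathrm{slay}}, \vec{v}_{\mathrm{kill}})\big)
      ```

      An MLN or PSL program can then weigh such soft rules against the hard logical forms produced by the CCG parser when scoring an entailment or similarity judgment.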

    • October 1, 2014

      Chris Callison-Burch

      I will present my method for learning paraphrases - pairs of English expressions with equivalent meaning - from bilingual parallel corpora, which are more commonly used to train statistical machine translation systems. My method equates pairs of English phrases like "thrown into jail" and "imprisoned" when they share an aligned foreign phrase like "festgenommen". Because bitexts are large and because a phrase can be aligned to many different foreign phrases, including phrases in multiple foreign languages, the method extracts a diverse set of paraphrases. For "thrown into jail", we not only learn "imprisoned", but also "arrested", "detained", "incarcerated", "jailed", "locked up", "taken into custody", and "thrown into prison", along with a set of incorrect/noisy paraphrases. I'll show a number of methods for filtering out the poor paraphrases, by defining a paraphrase probability calculated from translation model probabilities, and by re-ranking the candidate paraphrases using monolingual distributional similarity measures.
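
      The paraphrase probability referred to above comes from pivoting through the shared foreign phrases: marginalizing over every foreign phrase f that aligns to both English phrases combines the two translation model probabilities (this is the standard bilingual-pivoting formulation):

      ```latex
      % Paraphrase probability via bilingual pivoting:
      p(e_2 \mid e_1) \;=\; \sum_{f} p(e_2 \mid f)\, p(f \mid e_1)
      ```

      Candidates scored this way can then be re-ranked with monolingual distributional similarity to filter out noisy pairs.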

    • August 5, 2014

      Jonathan Berant

      Machine reading calls for programs that read and understand text, but most current work only attempts to extract facts from redundant web-scale corpora. In this talk, I will focus on a new reading comprehension task that requires complex reasoning over a single document. The input is a paragraph describing a biological process, and the goal is to answer questions that require an understanding of the relations between entities and events in the process. To answer the questions, we first predict a rich structure representing the process in the paragraph. Then, we map the question to a formal query, which is executed against the predicted structure. We demonstrate that answering questions via predicted structures substantially improves accuracy over baselines that use shallower representations.
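
      A toy sketch of the final step, answering a question by executing a query against the predicted process structure; the representation and query below are invented for illustration and are much simpler than the structures described in the talk.

      ```python
      # Toy process structure: events with a trigger word, plus ordering
      # relations between events. Invented for illustration only.
      process = {
          "events": {
              "e1": {"trigger": "absorb"},
              "e2": {"trigger": "transport"},
          },
          "relations": [("e1", "before", "e2")],
      }

      def which_happens_first(proc, trigger_a, trigger_b):
          """Answer 'which happens first, A or B?' against the structure."""
          by_trigger = {ev["trigger"]: eid for eid, ev in proc["events"].items()}
          a, b = by_trigger[trigger_a], by_trigger[trigger_b]
          if (a, "before", b) in proc["relations"]:
              return trigger_a
          if (b, "before", a) in proc["relations"]:
              return trigger_b
          return "unknown"

      print(which_happens_first(process, "transport", "absorb"))  # -> "absorb"
      ```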

    • July 25, 2014

      Pedro Domingos

      Building very large commonsense knowledge bases and reasoning with them is a long-standing dream of AI. Today that knowledge is available in text; all we have to do is extract it. Text, however, is extremely messy, noisy, ambiguous, incomplete, and variable. A formal representation of it needs to be both probabilistic and relational, either of which leads to intractable inference and therefore poor scalability. In the first part of this talk I will describe tractable Markov logic, a language that is restricted enough to be tractable yet expressive enough to represent much of the commonsense knowledge contained in text. Even then, transforming text into a formal representation of its meaning remains a difficult problem. There is no agreement on what the representation primitives should be, and labeled data in the form of sentence-meaning pairs for training a semantic parser is very hard to come by. In the second part of the talk I will propose a solution to both these problems, based on concepts from symmetry group theory. A symmetry of a sentence is a syntactic transformation that does not change its meaning. Learning a semantic parser for a language is discovering its symmetry group, and the meaning of a sentence is its orbit under the group (i.e., the set of all sentences it can be mapped to by composing symmetries). Preliminary experiments indicate that tractable Markov logic and symmetry-based semantic parsing can be powerful tools for scalably extracting knowledge from text.
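
      For reference, Markov logic attaches weights to first-order formulas and defines a distribution over possible worlds x; tractable Markov logic restricts the allowed formulas so that the partition function Z can be computed efficiently. The equation below is the standard Markov logic definition rather than anything specific to this talk:

      ```latex
      % Markov logic distribution over possible worlds x, where w_i is the
      % weight of formula i and n_i(x) is its number of true groundings in x:
      P(X = x) \;=\; \frac{1}{Z} \exp\!\Big( \sum_i w_i \, n_i(x) \Big),
      \qquad Z \;=\; \sum_{x'} \exp\!\Big( \sum_i w_i \, n_i(x') \Big)
      ```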

    • June 4, 2014

      Paul Allen

      Paul Allen discusses his vision for the future of AI and AI2 in this fireside chat moderated by Gary Marcus of New York University at the 10th Anniversary Symposium - Allen Institute for Brain Science. AI2-related discussion begins at 17:30.

    • May 13, 2014

      Bart Selman

      In recent years, there has been tremendous progress in solving large-scale reasoning and optimization problems. Central to this progress has been the ability to automatically uncover hidden problem structure. Nevertheless, for the very hardest computational tasks, human ingenuity still appears indispensable. We show that automated reasoning strategies and human insights can effectively complement each other, leading to hybrid human-computer solution strategies that outperform other methods by orders of magnitude. We illustrate our approach with challenges in scientific discovery in the areas of finite mathematics and materials science.

    • March 31, 2014

      Dan Roth

      Machine Learning and Inference methods have become ubiquitous and have had a broad impact on a range of scientific advances and technologies and on our ability to make sense of large amounts of data. Research in Natural Language Processing has both benefited from and contributed to advancements in these methods and provides an excellent example of some of the challenges we face moving forward. I will describe some of our research in developing learning and inference methods in pursuit of natural language understanding. In particular, I will address what I view as some of the key challenges, including (i) learning models from natural interactions, without direct supervision, (ii) knowledge acquisition and the development of inference models capable of incorporating knowledge and reasoning, and (iii) scalability and adaptation, that is, learning to accelerate inference during the lifetime of a learning system.

    • February 26, 2014

      Dafna Shahaf

      The amount of data in the world is increasing at incredible rates. Large-scale data has the potential to transform almost every aspect of our world, from science to business; for this potential to be realized, we must turn data into insight. In this talk, I will describe two of my efforts to address this problem computationally. The first project, Metro Maps of Information, aims to help people understand the underlying structure of complex topics, such as news stories or research areas. Metro Maps are structured summaries that can help us understand the information landscape, connect the dots between pieces of information, and uncover the big picture. The second project proposes a framework for automatic discovery of insightful connections in data. In particular, we focus on identifying gaps in medical knowledge: our system recommends directions of research that are both novel and promising.

    • February 26, 2014

      Brendan O'Connor

      What can text analysis tell us about society? Corpora of news, books, and social media encode human beliefs and culture. But it is impossible for a researcher to read all of today's rapidly growing text archives. My research develops statistical text analysis methods that measure social phenomena from textual content, especially in news and social media data. For example: How do changes to public opinion appear in microblogs? What topics get censored in the Chinese Internet? What character archetypes recur in movie plots? How do geography and ethnicity affect the diffusion of new language?

    • January 23, 2014

      Gary Marcus

      For nearly half a century, artificial intelligence has always seemed as if it were just beyond reach, rarely more, and rarely less, than two decades away. Between Watson, Deep Blue, and Siri, there can be little doubt that progress in AI has been immense, yet "strong AI" in some ways still seems elusive. In this talk, I will give a cognitive scientist's perspective on AI. What have we learned, and what are we still struggling with? Is there anything that programmers of AI can still learn from studying the science of human cognition?

    • November 5, 2013

      David Ferrucci

      Artificial Intelligence started with small data and rich semantic theories. The goal was to build systems that could reason over logical models of how the world worked; systems that could answer questions and provide intuitive, cognitively accessible explanations for their results. There was a tremendous focus on domain theory construction, formal deductive logics and efficient theorem proving. We had expert systems, rule-bases, forward chaining, backward chaining, modal logics, naïve physics, lisp, prolog, macro theories, micro theories, etc. The problem, of course, was the knowledge acquisition bottleneck; it was too difficult, slow and costly to render all common sense knowledge into an integrated, formal representation that automated reasoning engines could digest.

      In the meantime, huge volumes of unstructured data became available, compute power became ever cheaper and statistical methods flourished. AI evolved from being predominantly theory-driven to predominantly data-driven. Automated systems generated output using inductive techniques. Training over massive data produced flexible and capable control systems and powerful predictive engines in domains ranging from language translation to pattern recognition, from medicine to economics. Coming from a background in formal knowledge representation and automated reasoning, I could see the writing on the wall: big data and statistical machine learning were changing the face of AI, and quickly.

      From the very inception of Watson, I put a stake in the ground: we would not even attempt to build rich semantic models of the domain. I imagined it would take 3 years just to come to consensus on the common ontology to cover such a broad domain. Rather, we would use a diversity of shallow text analytics and leverage loose and fuzzy interpretations of unstructured information. We would allow many researchers to build largely independent NLP components and rely on machine learning techniques to balance and combine these loosely federated algorithms to evaluate answers in the context of passages. The approach, with a heck of a lot of good engineering, worked. Watson was arguably the best factoid question-answering system in the world, and Watson Paths could connect questions to answers over multiple steps, offering passage-based "inference chains" from question to answer without a single "if-then rule". But could it explain why an answer is right or wrong? Could it reason over a logical understanding of the domain? Could it automatically learn from language and build the logical or cognitive structures that enable and precede language itself? Could it understand and learn the way we do? No. No. No. No.

      This talk draws an arc from Theory-Driven AI to Data-Driven AI and positions Watson along that trajectory. It proposes that to advance AI to where we all know it must go, we need to discover how to efficiently combine human cognition, massive data and logical theory formation. We need to bootstrap a fluent collaboration between human and machine that engages logic, language and learning to enable machines to learn how to learn and ultimately deliver on the promise of AI.
