Videos

See AI2's full collection of videos on our YouTube channel.
  • Statistical Text Analysis for Social Science: Learning to Extract International Relations from the News
    February 26, 2014  |  Brendan O'Connor
    What can text analysis tell us about society? Corpora of news, books, and social media encode human beliefs and culture. But it is impossible for a researcher to read all of today's rapidly growing text archives. My research develops statistical text analysis methods that measure social phenomena from textual content, especially in news and social media data. For example: How do changes to public opinion appear in microblogs? What topics get censored in the Chinese Internet? What character archetypes recur in movie plots? How do geography and ethnicity affect the diffusion of new language?
  • Smart Machines, and What They Can Still Learn From People
    January 23, 2014  |  Gary Marcus
    For nearly half a century, artificial intelligence has always seemed as if it were just beyond reach, rarely more, and rarely less, than two decades away. Between Watson, Deep Blue, and Siri, there can be little doubt that progress in AI has been immense, yet "strong AI" in some ways still seems elusive. In this talk, I will give a cognitive scientist's perspective on AI. What have we learned, and what are we still struggling with? Is there anything that programmers of AI can still learn from studying the science of human cognition?
  • AI: A Return to Meaning
    November 5, 2013  |  David Ferrucci
    Artificial Intelligence started with small data and rich semantic theories. The goal was to build systems that could reason over logical models of how the world worked; systems that could answer questions and provide intuitive, cognitively accessible explanations for their results. There was a tremendous focus on domain theory construction, formal deductive logics and efficient theorem proving. We had expert systems, rule bases, forward chaining, backward chaining, modal logics, naïve physics, Lisp, Prolog, macro theories, micro theories, etc. The problem, of course, was the knowledge acquisition bottleneck; it was too difficult, slow and costly to render all common sense knowledge into an integrated, formal representation that automated reasoning engines could digest.

    In the meantime, huge volumes of unstructured data became available, compute power became ever cheaper and statistical methods flourished. AI evolved from being predominantly theory-driven to predominantly data-driven. Automated systems generated output using inductive techniques. Training over massive data produced flexible and capable control systems and powerful predictive engines in domains ranging from language translation to pattern recognition, from medicine to economics.

    Coming from a background in formal knowledge representation and automated reasoning, I could see the writing on the wall: big data and statistical machine learning were changing the face of AI, and quickly. From the very inception of Watson, I put a stake in the ground: we would not even attempt to build rich semantic models of the domain. I imagined it would take 3 years just to come to consensus on the common ontology to cover such a broad domain. Rather, we would use a diversity of shallow text analytics and leverage loose and fuzzy interpretations of unstructured information. We would allow many researchers to build largely independent NLP components and rely on machine learning techniques to balance and combine these loosely federated algorithms to evaluate answers in the context of passages.

    The approach, with a heck of a lot of good engineering, worked. Watson was arguably the best factoid question-answering system in the world, and Watson Paths could connect questions to answers over multiple steps, offering passage-based "inference chains" from question to answer without a single "if-then" rule. But could it explain why an answer is right or wrong? Could it reason over a logical understanding of the domain? Could it automatically learn from language and build the logical or cognitive structures that enable and precede language itself? Could it understand and learn the way we do? No. No. No. No.

    This talk draws an arc from theory-driven AI to data-driven AI and positions Watson along that trajectory. It proposes that to advance AI to where we all know it must go, we need to discover how to efficiently combine human cognition, massive data and logical theory formation. We need to bootstrap a fluent collaboration between human and machine that engages logic, language and learning to enable machines to learn how to learn and ultimately deliver on the promise of AI.
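
The loosely federated architecture Ferrucci describes above -- many largely independent shallow analytics whose outputs are balanced and combined by machine learning to score candidate answers against passages -- can be illustrated with a short sketch. Everything below is hypothetical: the scorer functions, feature weights, and logistic combiner are invented for this example and are not Watson's actual components.

    # Sketch of a loosely federated answer scorer: several independent,
    # shallow analytics each emit one feature for a candidate answer,
    # and a learned combiner turns the features into a confidence.
    # All names and weights here are illustrative, not Watson's.
    import math
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        question: str
        passage: str   # supporting passage retrieved for the answer
        answer: str    # candidate answer string to be scored

    def term_overlap(c: Candidate) -> float:
        """Fraction of question terms that also appear in the passage."""
        q = set(c.question.lower().split())
        p = set(c.passage.lower().split())
        return len(q & p) / max(len(q), 1)

    def answer_in_passage(c: Candidate) -> float:
        """1.0 if the candidate answer string occurs in the passage."""
        return 1.0 if c.answer.lower() in c.passage.lower() else 0.0

    def brevity(c: Candidate) -> float:
        """Mild preference for shorter, more focused passages."""
        return 1.0 / (1.0 + math.log1p(len(c.passage.split())))

    # Each scorer is built independently; only the combiner relates them.
    SCORERS = [term_overlap, answer_in_passage, brevity]

    # In a real system these weights would come from training a classifier
    # (e.g. logistic regression) on labeled question/answer pairs.
    WEIGHTS = [2.0, 3.5, 0.5]
    BIAS = -2.0

    def confidence(c: Candidate) -> float:
        """Combine the independent scorer outputs into one confidence."""
        z = BIAS + sum(w * f(c) for w, f in zip(WEIGHTS, SCORERS))
        return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

    cand = Candidate(
        question="Who developed the theory of general relativity?",
        passage="General relativity was developed by Albert Einstein.",
        answer="Albert Einstein",
    )
    print(f"confidence: {confidence(cand):.2f}")

The point of the sketch is the loose federation itself: each scorer can be developed and swapped independently, and only the learned combiner decides how much weight each one carries.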