Videos

See AI2's full collection of videos on our YouTube channel.
    Brain-Body Co-Optimization of Embodied Machines | Embodied AI Lecture Series at AI2

    July 9, 2021  |  Nick Cheney
    Embodied Cognition posits that the body of an agent is not only a vessel to contain the mind, but meaningfully influences the agent's brain and contributes to its intelligent behavior through morphological computation. In this talk, I'll introduce a system for studying the role of complex brains and bodies in soft robotics, demonstrate how this system may exhibit morphological computation, and describe a particular challenge that occurs when attempting to employ machine learning to optimize embodied machines and their behavior. I'll argue that simply considering and accounting for the co-dependencies suggested by embodied cognition can help us to overcome this challenge, and suggest that this approach may be helpful to the optimization of structure and function in machine learning domains outside of soft robotics.
    Oren Etzioni - The case against (worrying about) existential risk from AI

    June 16, 2021  |  Towards Data Science Podcast
    Host Jeremie Harris talks AI catastrophic risk skepticism with Oren Etzioni, CEO of the Allen Institute for AI, a world-leading AI research lab that's developed many well-known projects, including the popular AllenNLP library, and Semantic Scholar.
    Towards Human-Level Spatial Perception: 3D Dynamic Scene Graphs and Certifiable Algorithms for Robot Perception

    June 11, 2021  |  Luca Carlone
    Spatial perception, the robot's ability to sense and understand its surrounding environment, is a key enabler for autonomous systems operating in complex environments, including self-driving cars and unmanned aerial vehicles. Recent advances in perception algorithms and systems have enabled robots to detect objects and create large-scale maps of unknown environments, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, researchers and practitioners are well aware of the brittleness of existing perception systems, and a large gap still separates robot and human perception. This talk discusses two efforts targeted at bridging this gap. The first effort targets high-level understanding. While humans quickly grasp the geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, which extend the traditional notions of mapping and SLAM and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction. The second effort focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance.
Certifiable algorithms and real-time high-level understanding are key enablers for the next generation of autonomous systems that are trustworthy, understand and execute high-level human instructions, and operate in large dynamic environments over extended periods of time.
    Self-Adaptive Manipulation: Rethinking Embodiments in Embodied Intelligence | Embodied AI Lecture Series at AI2

    June 3, 2021  |  Shuran Song
    Recent advancements in embodied intelligence have shown exciting results in adapting to diverse and complex external environments. However, much of this work overlooks the agent's internal hardware (i.e., its embodiment), which often plays a critical role in determining the system's overall functionality and performance. In this talk, we revisit the role of “embodiment” in embodied intelligence, specifically in the context of robotic manipulation. The key idea behind “self-adaptive manipulation” is to treat a robot's hardware as an integral part of its behavior: the learned manipulation policies should be conditioned on the hardware and should also inform how the hardware can be improved. I will use two of our recent works to illustrate both aspects: AdaGrasp, which learns a unified policy for using different and novel gripper hardware, and Fit2Form, which generates new gripper hardware designs optimized for a target task.
    Do Blind AI Navigation Agents Build Maps? | Embodied AI Lecture Series at AI2

    May 14, 2021  |  Dhruv Batra
    The embodiment hypothesis is the idea that “intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity”. Imagine walking up to a home robot and asking, “Hey robot – can you go check if my laptop is on my desk? And if so, bring it to me.” Or asking an egocentric AI assistant (operating on your smart glasses), “Hey – where did I last see my keys?” To be successful, such an embodied agent would need a range of skills: visual perception (to recognize and map scenes and objects), language understanding (to translate questions and instructions into actions), and action (to move and find things in a changing environment). I will first give an overview of work happening at Georgia Tech and FAIR building up to this grand goal of embodied AI. Next, I will dive into a recent project where we asked whether machines – specifically, navigation agents – build cognitive maps. We train ‘blind’ AI agents – with sensing limited to only egomotion – to perform PointGoal navigation (‘go to delta-x, delta-y relative to start’) via reinforcement learning. We find that blind AI agents are surprisingly effective navigators in unseen environments (~95% success). Further, we find that (1) these blind agents utilize memory over long horizons (remembering ~1,000 steps of past experience in an episode); (2) this memory enables them to take shortcuts, i.e., efficiently travel through previously unexplored parts of the environment; (3) maps emerge in this memory, i.e., a detailed occupancy grid of the environment can be decoded from the agent's memory; and (4) the emergent maps are selective and task-dependent: the agent forgets unnecessary excursions and remembers only the end points of such detours.
Overall, our experiments and analysis show that blind AI agents take shortcuts and build cognitive maps purely from learning to navigate, suggesting that cognitive maps may be a natural solution to the problem of navigation and shedding light on the internal workings of AI navigation agents.
    Sensory and ecological bases of plan-based action selection | Embodied AI Lecture Series at AI2

    April 16, 2021  |  Malcolm A. MacIver
    Current evidence for the ability of some animals to plan—imagining some future set of possibilities and picking the one assessed to have the highest value—is restricted to birds and mammals. Nonetheless, all animals have had just as long to evolve what seems to be a useful capacity. In this talk, I review some work we have done to get at the question of why planning may be useless to many animals, but useful to a select few. We use a variety of algorithms for this work, from reinforcement learning-based methods to POMDPs, and now are testing predictions using live mammals in complex reprogrammable habitats with a robot predator.
    IFCC Invited Talk: Accelerating Scientific Exploration by Mining and Visualizing COVID-19 Literature

    February 15, 2021  |  Tom Hope
    Presented at the IFCC Virtual Conference on the Critical Role of Clinical Laboratories in the COVID-19 Pandemic.
    Natural Perturbation for Robust Question Answering - EMNLP 2020

    October 28, 2020  |  Daniel Khashabi
    A study of the cost-efficiency of local perturbations for model training.
    Heroes of NLP: Oren Etzioni

    October 13, 2020  |  DeepLearning.AI
    Heroes of NLP is a video interview series featuring Andrew Ng, the founder of DeepLearning.AI, in conversation with thought leaders in NLP. Watch Andrew lead an enlightening discourse around how these industry and academic experts started in AI, their previous and current research projects, how their understanding of AI has changed through the decades, and what advice they can provide for learners of NLP. This is an interview featuring Andrew Ng and Oren Etzioni, CEO of the Allen Institute for AI.
    Is GPT-3 Intelligent? A Directors' Conversation with Oren Etzioni

    October 1, 2020  |  Stanford HAI
    In this latest Directors’ Conversation, HAI Denning Family Co-director John Etchemendy’s guest is Oren Etzioni, Allen Institute for Artificial Intelligence CEO, company founder, and professor of computer science. Here the two discuss language prediction model GPT-3, a better approach to an AI Turing test, and the real signs that we’re approaching AGI.