Viewing 2 videos from 2019 in the Distinguished Lecture Series. See AI2's full collection of videos on our YouTube channel.
    • July 23, 2019

      Jeff Hammerbacher

      Many promising cancer immunotherapy treatment protocols rely on efficient and increasingly extensive methods for manipulating human immune cells. T cells are a frequent target of the laboratory and clinical research driving the development of such protocols, as they are most often the effectors of the cytotoxic activity that makes these treatments so potent. However, the cytokine signaling network that drives the differentiation and function of such cells is complex and difficult to replicate at scale in model biological systems. Abridged versions of these networks have been established over decades of research, but it remains challenging to define their global structure: the classification of the T cell subtypes operating in these networks, the mechanics of their formation, and the purpose of the signaling molecules they secrete are all controversial, with a slowly expanding understanding emerging in the literature over time.

      To aid in the quantification of this understanding, we are developing a methodology for identifying references to well-known cytokines, transcription factors, and T cell types in the literature, as well as for classifying the relationships among the three, in an attempt to determine which cytokines initiate the transcription programs that lead to various cell states, along with the secretion profiles associated with those states. Entity recognition for this task is performed using SciSpacy, and classification of the relations between these entities is based on an LSTM trained using Snorkel, where weak supervision is established through a variety of classification heuristics and distant supervision is provided via previously published immunology databases.
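      The weak-supervision setup described above can be sketched with a few toy labeling functions. This is a minimal, hypothetical illustration, not the speaker's actual pipeline: the function names, regex patterns, and database entries are invented, and a simple majority vote stands in for Snorkel's generative label model.

      ```python
      import re

      # Label values: 1 = "induces" relation, 0 = no relation, -1 = abstain
      INDUCES, NO_REL, ABSTAIN = 1, 0, -1

      def lf_trigger_verb(sentence):
          # Heuristic: a causal verb in the sentence suggests an "induces" relation.
          if re.search(r"\b(induce[sd]?|drive[sn]?|promote[sd]?)\b", sentence):
              return INDUCES
          return ABSTAIN

      def lf_negation(sentence):
          # Heuristic: explicit negation suggests no relation.
          if re.search(r"\b(not|fails? to|no effect)\b", sentence):
              return NO_REL
          return ABSTAIN

      def lf_distant_supervision(pair, known_pairs):
          # Distant supervision: entity pairs found in a previously published
          # database (a stand-in set here) are labeled as positive.
          return INDUCES if pair in known_pairs else ABSTAIN

      def majority_vote(votes):
          # Simplified stand-in for Snorkel's label model: take the majority
          # of the non-abstaining labeling-function votes.
          counted = [v for v in votes if v != ABSTAIN]
          if not counted:
              return ABSTAIN
          return max(set(counted), key=counted.count)

      # Hypothetical example: a cytokine -> transcription factor pair.
      known = {("IL-12", "T-bet")}
      sent = "IL-12 induces T-bet expression in naive CD4+ T cells."
      votes = [lf_trigger_verb(sent),
               lf_negation(sent),
               lf_distant_supervision(("IL-12", "T-bet"), known)]
      label = majority_vote(votes)  # noisy label used to train the downstream LSTM
      ```

      In the actual approach, the noisy labels produced this way would be denoised by Snorkel's label model and used as training targets for the LSTM relation classifier, with SciSpacy supplying the entity mentions.
      
      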

    • April 24, 2019

      Peter Anderson

      From robots and cars to virtual assistants and voice-controlled drones, computing devices are increasingly expected to communicate naturally with people and to understand the visual context in which they operate. In this talk, I will present our latest work on generating and comprehending visually-grounded language. First, we will discuss the challenging task of describing an image (image captioning). I will introduce captioning models that leverage multiple data sources, including object detection datasets and unaligned text corpora, in order to learn about the long tail of visual concepts found in the real world. To support and encourage further efforts in this area, I will present the 'nocaps' benchmark for novel object captioning. In the second part of the talk, I will describe our recent work on developing agents that follow natural language instructions in reconstructed 3D environments, using the R2R dataset for vision-and-language navigation.
