Distinguished Lecture Series: 3 videos from 2019. See AI2's full collection of videos on our YouTube channel.
    • November 12, 2019

      Dr. Asma Ben Abacha

      Consumer health questions pose specific challenges to automated answering. Two salient aspects are their higher linguistic and semantic complexity compared to open-domain questions, and the more pronounced need for reliable information. In this talk I will present two main approaches to dealing with this increased complexity, recognizing question entailment and question summarization, recently published in BMC Bioinformatics and at ACL 2019, respectively. In particular, our question entailment approach to question answering (QA) showed that restricting the answer sources to only reliable resources improved QA performance (a minimal sketch of the idea follows below), and our summarization experiments showed the relevance of data augmentation methods for abstractive question summarization. I'll also talk about the MEDIQA shared task on question entailment, textual inference, and medical question answering that we recently organized at ACL-BioNLP. In the second part of the talk, I will address questions about medications more specifically and present our latest study and dataset on medication QA. Finally, I'll describe our recent endeavors in visual question answering (VQA) from radiology images and the medical VQA challenge (VQA-Med) editions for 2019 and 2020 that we organize as part of ImageCLEF.

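      A minimal sketch of the retrieval-by-entailment idea: a consumer question is answered by reusing the answer of a trusted, already-answered question it entails. The FAQ pairs below are invented, and TF-IDF cosine similarity is only a stand-in for a trained recognizing-question-entailment (RQE) model like the one described in the talk.

      ```python
      # Sketch: answer a consumer health question by finding an already-answered
      # question from a trusted source that it entails, then reusing that answer.
      # The FAQ pairs are invented, and TF-IDF cosine similarity stands in for a
      # trained recognizing-question-entailment (RQE) model.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Hypothetical FAQ restricted to reliable resources (e.g., NIH pages).
      FAQ = [
          ("What are the side effects of metformin?",
           "Common side effects of metformin include nausea and diarrhea..."),
          ("How is type 2 diabetes treated?",
           "Treatment usually combines diet, exercise, and medication..."),
      ]

      def answer(consumer_question, faq, threshold=0.3):
          """Return the answer of the best-matching FAQ question, if any."""
          questions = [q for q, _ in faq]
          vec = TfidfVectorizer().fit(questions + [consumer_question])
          sims = cosine_similarity(vec.transform([consumer_question]),
                                   vec.transform(questions))[0]
          best = sims.argmax()
          return faq[best][1] if sims[best] >= threshold else None

      print(answer("What side effects does metformin have?", FAQ))
      ```
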
    • July 23, 2019

      Jeff Hammerbacher

      Many promising cancer immunotherapy treatment protocols rely on efficient and increasingly extensive methods for manipulating human immune cells. T cells are a frequent target of the laboratory and clinical research driving the development of such protocols, as they are most often the effectors of the cytotoxic activity that makes these treatments so potent. However, the cytokine signaling network that drives the differentiation and function of such cells is complex and difficult to replicate on a large scale in model biological systems. Abridged versions of these networks have been established over decades of research, but it remains challenging to define their global structure: the classification of T cell subtypes operating in these networks, the mechanics of their formation, and the purpose of the signaling molecules they secrete are all controversial, with a slowly expanding understanding emerging in the literature over time.

      To aid in the quantification of this understanding, we are developing a methodology for identifying references to well-known cytokines, transcription factors, and T cell types in the literature, as well as classifying the relationships between the three, in an attempt to determine which cytokines initiate the transcription programs that lead to various cell states, along with the secretion profiles associated with those states. Entity recognition for this task is performed using SciSpacy, and classification of the relations between these entities is based on an LSTM trained using Snorkel, where weak supervision is established through a variety of classification heuristics and distant supervision is provided via previously published immunology databases (see the sketch below).

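      A minimal sketch of the labeling stage of this pipeline, assuming the scispaCy `en_ner_jnlpba_md` model and Snorkel's labeling-function API. The example sentence, label scheme, heuristics, and "known pairs" database are invented, and the label-model aggregation plus the LSTM trained on the resulting weak labels are omitted.

      ```python
      # Sketch of the labeling stage: scispaCy finds biomedical entity mentions,
      # and Snorkel labeling functions weakly label candidate cytokine ->
      # transcription-factor relations. The sentence, label scheme, heuristics,
      # and "known pairs" are all illustrative, not the study's actual resources.
      import pandas as pd
      import spacy
      from snorkel.labeling import labeling_function, PandasLFApplier

      # scispaCy NER model with PROTEIN / CELL_TYPE labels (must be installed).
      nlp = spacy.load("en_ner_jnlpba_md")
      doc = nlp("IL-6 induces STAT3, driving differentiation of Th17 cells.")
      print([(ent.text, ent.label_) for ent in doc.ents])

      INDUCES, ABSTAIN = 1, -1

      @labeling_function()
      def lf_causal_verb(x):
          # Weak heuristic: a causal verb between the mentions suggests INDUCES.
          return INDUCES if "induces" in x.between else ABSTAIN

      @labeling_function()
      def lf_known_pairs(x):
          # Distant supervision from a (hypothetical) curated immunology database.
          known = {("IL-6", "STAT3"), ("IL-12", "STAT4")}
          return INDUCES if (x.cytokine, x.factor) in known else ABSTAIN

      candidates = pd.DataFrame(
          [{"cytokine": "IL-6", "factor": "STAT3", "between": " induces "}]
      )
      L = PandasLFApplier([lf_causal_verb, lf_known_pairs]).apply(candidates)
      print(L)  # one weak-label column per labeling function; feeds the LSTM stage
      ```
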
    • April 24, 2019

      Peter Anderson

      From robots to cars, virtual assistants, and voice-controlled drones, computing devices are increasingly expected to communicate naturally with people and to understand the visual context in which they operate. In this talk, I will present our latest work on generating and comprehending visually-grounded language. First, we will discuss the challenging task of describing an image (image captioning). I will introduce captioning models that leverage multiple data sources, including object detection datasets and unaligned text corpora, in order to learn about the long tail of visual concepts found in the real world (a sketch of the underlying attention mechanism follows below). To support and encourage further efforts in this area, I will present the 'nocaps' benchmark for novel object captioning. In the second part of the talk, I will describe our recent work on developing agents that follow natural language instructions in reconstructed 3D environments, using the R2R dataset for vision-and-language navigation.

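      A minimal sketch of attention over pre-extracted object-detection region features, the mechanism at the core of captioning models like those discussed above (in the spirit of bottom-up attention). All dimensions, names, and the random inputs are illustrative assumptions, not the talk's actual architecture.

      ```python
      # Sketch: one step of attention over detector region features, as used
      # inside a captioning decoder. Dimensions and inputs are illustrative.
      import torch
      import torch.nn as nn

      class RegionAttention(nn.Module):
          def __init__(self, feat_dim=2048, hid_dim=512):
              super().__init__()
              self.proj_feat = nn.Linear(feat_dim, hid_dim)
              self.proj_hid = nn.Linear(hid_dim, hid_dim)
              self.score = nn.Linear(hid_dim, 1)

          def forward(self, regions, hidden):
              # regions: (batch, num_regions, feat_dim); hidden: (batch, hid_dim)
              e = self.score(torch.tanh(
                  self.proj_feat(regions) + self.proj_hid(hidden).unsqueeze(1)
              )).squeeze(-1)                    # (batch, num_regions)
              alpha = torch.softmax(e, dim=-1)  # attention over detected objects
              return (alpha.unsqueeze(-1) * regions).sum(1), alpha

      attend = RegionAttention()
      regions = torch.randn(2, 36, 2048)  # e.g., 36 detector regions per image
      hidden = torch.randn(2, 512)        # decoder LSTM state
      context, alpha = attend(regions, hidden)
      print(context.shape, alpha.shape)   # (2, 2048) and (2, 36)
      ```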