Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Large language models (LLMs) have been shown to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption,…
Exploring Team-Sourced Hyperlinks to Address Navigation Challenges for Low-Vision Readers of Scientific Papers
Reading academic papers is a fundamental part of higher education and research, but navigating these information-dense texts can be challenging. In particular, low-vision readers using magnification…
FeedLens: Polymorphic Lenses for Personalizing Exploratory Search over Knowledge Graphs
The vast scale and open-ended nature of knowledge graphs (KGs) make exploratory search over them cognitively demanding for users. We introduce a new technique, polymorphic lenses, that improves…
Threddy: An Interactive System for Personalized Thread-based Exploration and Organization of Scientific Literature
Reviewing the literature to understand relevant threads of past work is a critical part of research and a vehicle for learning. However, as the scientific literature grows, the challenges for users to…
SciFact-Open: Towards open-domain scientific claim verification
While research on scientific claim verification has led to the development of powerful systems that appear to approach human performance, these approaches have yet to be tested in a realistic…
A Dataset of Alt Texts from HCI Publications
Figures in scientific publications contain important information and results, and alt text is needed for blind and low vision readers to engage with their content. We conduct a study to characterize…
Multi-Scale Contrastive Co-Training for Event Temporal Relation Extraction
Extracting temporal relationships between pairs of events in texts is a crucial yet challenging problem for natural language understanding. Depending on the distance between the events, models must…
Few-Shot Self-Rationalization with Natural Language Prompts
Self-rationalization models that predict task labels and generate free-text elaborations for their predictions could enable more intuitive interaction with NLP systems. These models are, however,…
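A hypothetical prompt template (not necessarily the paper's exact wording) can illustrate the format self-rationalization prompts target: the model is asked for a task label and a free-text explanation in a single response. The NLI-style fields and phrasing below are illustrative assumptions.

```python
# Minimal sketch of a self-rationalization prompt: the model is prompted to
# produce both a label and a free-text rationale. Template text is hypothetical.
def build_prompt(premise: str, hypothesis: str) -> str:
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Question: Does the premise entail the hypothesis? "
        "Answer yes, no, or maybe, and explain why.\n"
        "Answer:"
    )

print(build_prompt(
    "A dog is running through a field.",
    "An animal is outdoors.",
))
```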
Literature-Augmented Clinical Outcome Prediction
We present BEEP (Biomedical Evidence-Enhanced Predictions), a novel approach for clinical outcome prediction that retrieves patient-specific medical literature and incorporates it into predictive…
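As a rough illustration of the retrieve-and-incorporate idea (not the BEEP implementation itself), the sketch below retrieves the abstracts most similar to a patient note with TF-IDF and appends them to the note before training a standard classifier. The toy data, the TF-IDF retriever, and the logistic regression model are all assumptions made for illustration.

```python
# Minimal sketch of literature-augmented prediction (not the BEEP method):
# retrieve similar abstracts for a patient note, append them, then classify.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

# Toy "medical literature" abstracts (hypothetical).
abstracts = [
    "Early mobilization after surgery reduces length of stay.",
    "Sepsis patients with elevated lactate show higher mortality.",
    "Beta blockers improve outcomes in heart failure cohorts.",
]

# Toy patient notes with binary outcome labels (hypothetical).
notes = [
    "Patient admitted with sepsis, lactate rising overnight.",
    "Post-operative patient, ambulating well, stable vitals.",
]
labels = [1, 0]

# One shared TF-IDF space so note and abstract vectors are comparable.
vectorizer = TfidfVectorizer()
vectorizer.fit(abstracts + notes)
abstract_vecs = vectorizer.transform(abstracts)

def augment(note: str, k: int = 1) -> str:
    """Append the k most similar abstracts to the note."""
    sims = cosine_similarity(vectorizer.transform([note]), abstract_vecs)[0]
    top = sims.argsort()[::-1][:k]
    return note + " " + " ".join(abstracts[i] for i in top)

# Train an outcome classifier on the literature-augmented notes.
X = vectorizer.transform([augment(n) for n in notes])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform([augment("New sepsis admission, lactate high.")])))
```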
Long Context Question Answering via Supervised Contrastive Learning
Long-context question answering (QA) tasks require reasoning over a long document or multiple documents. Addressing these tasks often benefits from identifying a set of evidence spans (e.g.,…
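A minimal sketch of a supervised contrastive objective over span embeddings, in the spirit of using contrastive learning to separate evidence spans from non-evidence spans. The loss form (a standard supervised contrastive loss), the tensor shapes, and the temperature value are assumptions rather than the paper's exact formulation.

```python
# Sketch of a supervised contrastive loss over span embeddings: spans with the
# same label (evidence vs. not) are pulled together, others pushed apart.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, d) span representations; labels: (N,) 1 = evidence, 0 = not."""
    z = F.normalize(embeddings, dim=1)               # unit-norm span embeddings
    sim = z @ z.t() / temperature                    # pairwise scaled similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # Positives for each anchor: other spans with the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1)
    pos_sum = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    has_pos = pos_counts > 0
    # Negative mean log-probability of positives, averaged over valid anchors.
    return -(pos_sum[has_pos] / pos_counts[has_pos]).mean()

# Toy usage: four random span embeddings; the first two are evidence spans.
spans = torch.randn(4, 8)
span_labels = torch.tensor([1, 1, 0, 0])
print(supervised_contrastive_loss(spans, span_labels))
```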