Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


A Controllable QA-based Framework for Decontextualization

Benjamin Newman, Luca Soldaini, Raymond Fok, Kyle Lo
2023
arXiv

Many real-world applications require surfacing extracted snippets to users, whether motivated by assistive tools for literature surveys or document cross-referencing, or by the need to mitigate and…

Aligning Language Models to User Opinions

EunJeong Hwang, Bodhisattwa Prasad Majumder, Niket Tandon
2023
arXiv

An important aspect of developing LLMs that interact with humans is to align models' behavior to their users. It is possible to prompt an LLM into behaving as a certain persona, especially a user… 

Anthropomorphization of AI: Opportunities and Risks

A. Deshpande, Tanmay Rajpurohit, Karthik Narasimhan, A. Kalyan
2023
arXiv

Anthropomorphization is the tendency to attribute human-like traits to non-human entities. It is prevalent in many social contexts -- children anthropomorphize toys, adults do so with brands, and it… 

Complex Mathematical Symbol Definition Structures: A Dataset and Model for Coordination Resolution in Definition Extraction

Anna Martin-Boyle, Andrew Head, Kyle Lo, Dongyeop Kang
2023
arXiv

Mathematical symbol definition extraction is important for improving scholarly reading interfaces and scholarly information extraction (IE). However, the task poses several challenges: math symbols… 

CSTS: Conditional Semantic Textual Similarity

A. Deshpande, Carlos E. Jimenez, Howard Chen, Karthik Narasimhan
2023
arXiv

Semantic textual similarity (STS) has been a cornerstone task in NLP that measures the degree of similarity between a pair of sentences, with applications in information retrieval, question… 

Decomposing Complex Queries for Tip-of-the-tongue Retrieval

Kevin Lin, Kyle Lo, Joseph E. Gonzalez, Dan Klein
2023
arXiv

When re-finding items, users who forget or are uncertain about identifying details often rely on creative strategies for expressing their information needs -- complex queries that describe content… 

Just CHOP: Embarrassingly Simple LLM Compression

Ananya Harsh Jha, Tom Sherborne, Evan Pete Walsh, Iz Beltagy
2023
arXiv

Large language models (LLMs) enable unparalleled few- and zero-shot reasoning capabilities, but at the cost of a high computational footprint. A growing assortment of methods for compression promises to reduce…

OpenPI2.0: An Improved Dataset for Entity Tracking in Texts

Li Zhang, Hai Xu, Abhinav Kommula, Chris Callison-Burch
2023
arXiv

Representing texts as information about entities has long been deemed effective in event reasoning. We propose OpenPI2.0, an improved dataset for tracking entity states in procedural texts.… 

Improving Language Models via Plug-and-Play Retrieval Feedback

Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Ashish Sabharwal
2023
arXiv

Large language models (LLMs) exhibit remarkable performance across various NLP tasks. However, they often generate incorrect or hallucinated information, which hinders their practical applicability… 

Learning to Generate Novel Scientific Directions with Contextualized Literature-based Discovery

Qingyun Wang, Doug Downey, Heng Ji, Tom Hope
2023
arXiv

Literature-Based Discovery (LBD) aims to discover new scientific knowledge by mining papers and generating hypotheses. Standard LBD is limited to predicting pairwise relations between discrete…