Viewing 2 videos from 2018 in the Distinguished Lecture Series.
A key challenge in natural language understanding is recognizing when two sentences have the same meaning. I'll discuss our work on this problem over the past few years, including the exploration of compositional functional architectures, learning criteria, and naturally-occurring sources of training data. The result is a single sentence embedding model that outperforms all systems from the 2012-2016 SemEval semantic textual similarity competitions without training on any of the annotated data from those tasks.
As a by-product, we developed a large dataset of automatically-generated paraphrase pairs by using parallel text and neural machine translation. We've since used the dataset, which we call ParaNMT-50M, to impart a notion of meaning equivalence to controlled text generation tasks, including syntactically-controlled paraphrasing and textual style transfer.
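The embedding approach the abstract alludes to can be illustrated with a minimal sketch: represent a sentence as the average of its word vectors and score a pair by cosine similarity. The toy vectors below are made-up values for illustration only; the actual models are trained on large paraphrase corpora.

```python
import numpy as np

# Toy word vectors standing in for learned embeddings (hypothetical values).
VECS = {
    "a":       np.array([0.1, 0.2, 0.1]),
    "man":     np.array([0.9, 0.1, 0.3]),
    "guy":     np.array([0.8, 0.2, 0.3]),
    "is":      np.array([0.2, 0.2, 0.2]),
    "runs":    np.array([0.1, 0.9, 0.5]),
    "running": np.array([0.2, 0.8, 0.5]),
}

def embed(sentence):
    """Embed a sentence as the average of its word vectors."""
    vs = [VECS[w] for w in sentence.lower().split() if w in VECS]
    return np.mean(vs, axis=0)

def similarity(s1, s2):
    """Cosine similarity between two sentence embeddings."""
    a, b = embed(s1), embed(s2)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The paraphrase pair scores higher than the non-paraphrase pair.
print(similarity("a man is running", "a guy runs"))
print(similarity("a man is running", "a man is a man"))
```

Despite its simplicity, this averaging composition is the kind of model the talk describes as competitive on semantic textual similarity benchmarks; the interesting part is how the vectors are trained, not the composition itself.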
Since AI software uses techniques like deep lookahead search and stochastic optimization of huge neural networks, it often exhibits complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. To trust their behavior, we must make AI intelligible, either by using inherently interpretable models or by developing new methods for explaining and adjusting otherwise overwhelmingly complex decisions using local approximation, vocabulary alignment, and interactive explanation. This talk argues that intelligibility is essential, surveys recent work on building such systems, and highlights key directions for research.
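"Local approximation" here refers to explaining one prediction of a complex model by fitting a simple surrogate around it, as in LIME-style methods. The sketch below is a minimal illustration under assumed details (the `black_box` function, sampling radius, and Gaussian proximity weights are all choices made for this example, not the speaker's actual method): sample perturbations near a point, weight them by proximity, and fit a weighted linear model whose coefficients act as local feature importances.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "black-box" model standing in for a complex system (hypothetical).
def black_box(x):
    return np.sin(3 * x[..., 0]) + x[..., 1] ** 2

def local_linear_explanation(x0, radius=0.1, n_samples=500):
    """Fit a proximity-weighted linear surrogate to black_box near x0."""
    X = x0 + rng.normal(scale=radius, size=(n_samples, x0.size))
    y = black_box(X)
    # Weight each sample by its closeness to the point being explained.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * radius ** 2))
    # Weighted least squares with an intercept term.
    A = np.hstack([X - x0, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    coefs, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coefs[:-1]  # local feature weights (gradient-like)

x0 = np.array([0.5, 1.0])
weights = local_linear_explanation(x0)
# For this black box the analytic gradient at x0 is (3*cos(1.5), 2.0),
# so the recovered weights should be close to those values.
print(weights)
```

The recovered coefficients approximate the model's local gradient, which is what makes a simple linear explanation faithful in a small neighborhood even when the global model is highly nonlinear.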