Viewing 7 papers from 2017 in Semantic Scholar
    • ACL 2017
      Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power
      Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context-sensitive representations is trained on relatively…
    • WWW 2017
      Chenyan Xiong, Russell Power, and Jamie Callan
      This paper introduces Explicit Semantic Ranking (ESR), a new ranking technique that leverages knowledge graph embedding. Analysis of the query log from our academic search engine, SemanticScholar.org, reveals that a major error source is its inability to understand the meaning of research concepts…
    • SemEval 2017
      Waleed Ammar, Matthew E. Peters, Chandra Bhagavatula, and Russell Power
      This paper describes our submission for the ScienceIE shared task (SemEval-2017 Task 10) on entity and relation extraction from scientific papers. Our model is based on the end-to-end relation extraction model of Miwa and Bansal (2016) with several enhancements such as semi-supervised learning via…
    • JCDL 2017
      Luca Weihs and Oren Etzioni
      Citations implicitly encode a community's judgment of a paper's importance and thus provide a unique signal by which to study scientific impact. Efforts in understanding and refining this signal are reflected in the probabilistic modeling of citation networks and the proliferation of citation-based…
    • SIGIR 2017
      Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power
      This paper proposes K-NRM, a kernel based neural model for document ranking. Given a query and a set of documents, K-NRM uses a translation matrix that models word-level similarities via word embeddings, a new kernel-pooling technique that uses kernels to extract multi-level soft match features…
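      The kernel-pooling step described in this abstract can be sketched as follows. This is a minimal NumPy illustration of the idea, not the paper's implementation: it assumes a precomputed cosine-similarity translation matrix and a set of hypothetical RBF kernel means, whereas K-NRM learns the embeddings end-to-end and uses its own kernel settings.

```python
import numpy as np

def kernel_pooling(sim_matrix, mus, sigma=0.1):
    """Soft-match features from a query-by-document similarity matrix.

    sim_matrix: (n_query_terms, n_doc_terms) cosine similarities
    mus: kernel means spread over [-1, 1], one per soft-match level
    Returns a feature vector with one entry per kernel.
    """
    feats = []
    for mu in mus:
        # RBF kernel applied elementwise, then summed over document terms:
        # each query term gets a soft count of doc terms near similarity mu
        k = np.exp(-((sim_matrix - mu) ** 2) / (2 * sigma ** 2)).sum(axis=1)
        # log-normalize and sum over query terms to get one scalar per kernel
        feats.append(np.log(np.clip(k, 1e-10, None)).sum())
    return np.array(feats)

# Toy example: 2 query terms, 3 document terms
sim = np.array([[1.0, 0.3, -0.2],
                [0.5, 0.9,  0.1]])
mus = np.linspace(-0.9, 0.9, 10)  # hypothetical kernel means
features = kernel_pooling(sim, mus)
print(features.shape)  # (10,)
```

      Each kernel acts as a soft bin over similarity values, so the pooled vector summarizes how many query-document term pairs match at each similarity level; a ranking layer can then score documents from these features.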
    • Nature 2017
      Oren Etzioni
      The number of times a paper is cited is a poor proxy for its impact (see P. Stephan et al. Nature 544, 411–412; 2017). I suggest relying instead on a new metric that uses artificial intelligence (AI) to capture the subset of an author's or a paper's essential and therefore most highly influential…
    • ACL 2017
      Pradeep Dasigi, Waleed Ammar, Chris Dyer, and Eduard Hovy
      Type-level word embeddings use the same set of parameters to represent all instances of a word regardless of its context, ignoring the inherent lexical ambiguity in language. Instead, we embed semantic concepts (or synsets) as defined in WordNet and represent a word token in a particular context by…