Call for Short Papers

Deep neural networks have been revolutionizing several application domains in artificial intelligence: computer vision, speech recognition, and natural language processing. Concurrent with the recent progress in deep learning, significant advances have been made in virtual reality, augmented reality, and smart wearable devices. These advances create unprecedented opportunities for researchers to tackle the fundamental challenges of deploying deep learning systems on portable devices with limited resources (e.g. memory, compute, energy, bandwidth). Efficient deep learning methods can have a crucial impact on the use of distributed systems, embedded devices, and FPGAs for a wide range of AI tasks. Achieving these goals calls for ground-breaking innovations on many fronts: learning, optimization, computer architecture, data compression, indexing, and hardware design.

We invite submissions of short papers related to the following topics in the context of efficient methods in deep learning:

  • Network compression
  • Quantized neural networks (e.g. binary neural networks)
  • Hardware accelerators for neural networks
  • Training and inference with low-precision operations
  • Real-time applications of deep neural networks (e.g. object detection, image segmentation, online language translation)
  • Distributed training/inference of deep neural networks
  • Fast optimization methods for neural networks

Papers should be no more than 4 pages (excluding references) in NIPS 2016 format. Accepted papers will be presented as posters and in spotlight presentations. A list of all accepted papers will be published on the workshop website, and the papers will be hosted on arXiv.org. All submissions will undergo double-blind review; in the case of previously published work, the review will be single-blind.