Call for Contributions


Important Dates


Deadline for submissions: October 20, 2010
Notification of acceptance: October 27, 2010

Submission Instructions


We invite submission of extended abstracts to the workshop. Extended abstracts should be 2-4 pages long and adhere to the NIPS style (http://nips.cc/PaperInformation/StyleFiles). Submissions should include the title, the authors' names, institutions, and email addresses, and should be sent as a PDF or PS file by email to gentrans-nips2010@cs.toronto.edu.

Submissions will be reviewed by the organizing committee on the basis of relevance, significance, technical quality, and clarity. Selected submissions may be accepted either as an oral presentation or as a poster presentation; the number of oral presentations will be limited.

We encourage submissions with a particular emphasis on:

  • Learning structured representations: How can machines extract invariant representations from a large supply of high-dimensional, highly structured unlabeled data? How can these representations be used to learn many different concepts (e.g., visual object categories) and expand on them without disrupting previously learned concepts? How can these representations be reused across multiple applications?
  • Transfer learning: How can previously learned representations help in learning new tasks so that less labeled supervision is needed? How can this facilitate knowledge representation for transfer learning tasks?
  • One-shot learning: Can we develop rich generative models that are capable of efficiently leveraging background knowledge in order to learn novel categories from a single or a few training examples? Are there models suitable for deep transfer, or generalizing across domains, when presented with only a few examples?
  • Deep learning: Recently, there has been notable progress in learning deep probabilistic generative models that contain many layers of hidden variables, including Deep Belief Networks, Deep Boltzmann Machines, and deep nonparametric Bayesian models. Can these models be extended to transfer learning tasks, as well as to learning new concepts from only one or a few examples? Can we use the representations learned by deep models as input to more structured hierarchical Bayesian models?
  • Scalability and success in real-world applications: How well do existing transfer learning models scale to large-scale problems, including problems in computer vision, natural language processing, and speech perception? How well do these algorithms perform when applied to modeling high-dimensional real-world distributions (e.g., the distribution of natural images)?