Introducing the 1st International Conference on Learning Representations (ICLR2013)


Held in conjunction with AISTATS2013, Scottsdale, Arizona, May 2nd-4th 2013

Submission deadline: January 15th 2013

It is well understood that the performance of machine learning methods
is heavily dependent on the choice of data representation (or
features) to which they are applied. The rapidly developing field of
representation learning is concerned with questions surrounding how we
can best learn meaningful and useful representations of data. We take
a broad view of the field, and include in it topics such as deep
learning and feature learning, metric learning, kernel learning,
compositional models, non-linear structured prediction, and issues
regarding non-convex optimization.

Despite the importance of representation learning to machine learning
and to application areas such as vision, speech, audio, and NLP, there
is currently no common venue for researchers who share an interest in
this topic. The goal of ICLR is to help fill this void.

A non-exhaustive list of relevant topics:
– unsupervised representation learning
– supervised representation learning
– metric learning and kernel learning
– dimensionality expansion, sparse modeling
– hierarchical models
– optimization for representation learning
– implementation issues, parallelization, software platforms, hardware
– applications in vision, audio, speech, and natural language processing
– other applications
