
Deep Learning, Spring 2014

This is a graduate course on deep learning, one of the hottest topics in machine learning and AI at the moment.

In the last two or three years, deep learning has revolutionized speech recognition and image recognition. It is widely deployed by companies such as Google, Facebook, Microsoft, IBM, Baidu, and Apple for audio/speech, image, video, and natural language processing.

Information

  • Spring 2014 instructor: Yann LeCun, 715 Broadway, Room 1220, 212-998-3283, yann [ at ] cs.nyu.edu
  • Teaching Assistant: Liu Hao, haoliu [ at ] nyu.edu
  • Classes: Mondays 5:10 to 7:00 PM. Location: Cantor, room 101
  • Lab Sessions: Wednesdays 5:10 to 6:00 PM. Location: Warren Weaver Hall, room 109.
  • Office Hours for Prof. LeCun: Wednesdays 3:00-5:00 and 6:00-7:00 PM. Please send an email to Prof. LeCun prior to an office hour visit.

News

  • 2014-01-29: first lab session (WWH 109, 5:10-6:00 PM): A gentle introduction to Lua and Torch. Instructor: Roy Lowrance.
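
For students who have never used Torch, a minimal first taste might look like the snippet below. This is a sketch only, not official course material; it assumes a working Torch7 install with the nn package, and the layer sizes are arbitrary.

  require 'torch'
  require 'nn'

  local x = torch.rand(4)            -- a random 4-dimensional input tensor
  local model = nn.Sequential()      -- container that chains modules in order
  model:add(nn.Linear(4, 3))         -- fully connected layer: 4 inputs -> 3 outputs
  model:add(nn.Tanh())               -- pointwise nonlinearity
  local y = model:forward(x)         -- forward pass
  print(y)                           -- a 3-dimensional output tensor

Running this with the th interpreter that ships with Torch7 prints the three output activations.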

Course Material

Course Description

The course covers a wide variety of topics in deep learning, feature learning, and neural computation. It covers the mathematical methods and theoretical aspects as well as algorithmic and practical issues. Deep learning is at the core of many recent advances in AI, particularly in audio, image, video, and language analysis and understanding.

Who Can Take This Course?

This course is primarily designed for students in the Data Science programs, but any student who is familiar with the basics of machine learning can take it.

The only formal pre-requisite is successful completion of “Intro to Data Science” or any basic course on machine learning. Familiarity with computer programming is assumed. The course relies heavily on mathematical tools such as linear algebra, probability and statistics, multivariate calculus, and function optimization. The basic mathematical concepts will be introduced when needed, but students will be expected to assimilate a non-trivial amount of mathematical material in a fairly short time.

Familiarity with basic ML/stats concepts such as multinomial linear regression, logistic regression, K-means clustering, Principal Components Analysis, and simple regularization is assumed.

Topics

The topics studied in the course include:

  • learning representations of data
  • the energy-based view of model estimation
  • basis function expansion
  • supervised learning in multilayer architectures; backpropagation (a minimal Torch sketch follows this list)
  • optimization issues in deep learning
  • heterogeneous learning systems; the modular approach to learning
  • convolutional nets
  • applications to image recognition
  • structured prediction, factor graphs, and deep architectures
  • applications to speech recognition
  • learning embeddings, metric learning
  • recurrent nets: learning dynamical systems
  • recursive nets: algebra on representations
  • the basics of unsupervised learning
  • the energy-based view of unsupervised learning
  • energy-shaping methods for unsupervised learning
  • decoder-only models: K-means, sparse coding, convolutional sparse coding
  • encoder-only models: ICA, Product of Experts, Fields of Experts
  • the encoder-decoder architecture
  • sparse auto-encoders
  • denoising, contracting, and saturating auto-encoders
  • restricted Boltzmann machines; contrastive divergence
  • learning invariant features: group sparsity
  • feature factorization
  • the scattering transform
  • software implementation issues; GPU implementations
  • parallelizing deep learning
  • theoretical questions
  • open questions
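
As a rough illustration of the supervised learning and backpropagation topics above, one gradient-descent step in Torch7 looks roughly like the following. This is a hedged sketch, not course-provided code: it assumes Torch7 with the nn package, and the network shape, toy data, and learning rate are arbitrary choices for illustration.

  require 'torch'
  require 'nn'

  -- a small multilayer network: 10 inputs -> 5 hidden units -> 1 output
  local model = nn.Sequential()
  model:add(nn.Linear(10, 5))
  model:add(nn.Tanh())
  model:add(nn.Linear(5, 1))

  local criterion = nn.MSECriterion()   -- squared-error loss
  local input  = torch.rand(10)         -- toy input
  local target = torch.Tensor{0.5}      -- toy target

  -- forward pass through the network, then the loss
  local output = model:forward(input)
  local loss = criterion:forward(output, target)

  -- backpropagation: gradients of the loss w.r.t. the parameters
  model:zeroGradParameters()
  model:backward(input, criterion:backward(output, target))

  -- one step of gradient descent with learning rate 0.01
  model:updateParameters(0.01)
  print(loss)

Repeating the forward/backward/update cycle over a training set is the basic training loop; the course covers the optimization issues that arise when doing this at scale.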
 
 