Deep Learning: Models and Optimization
Teacher:
ECTS: 3
Course Hours: 12
Tutorials Hours: 6
Language: French
Examination Modality: Written exam + continuous assessment
References
- Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12:2481-2495, December 2017.
- Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
- Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2:295-307, February 2016.
- John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, vol. 12:2121-2159, July 2011.
- Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
- Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, 2013.
- Geoffrey Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, vol. 14, no. 8:1771-1800, August 2002.
- Geoffrey Hinton. A practical guide to training restricted Boltzmann machines. Technical report, University of Toronto, August 2010.
- Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, vol. 9, no. 8:1735-1780, 1997.
- Diederik P. Kingma and Jimmy Lei Ba. Adam: a method for stochastic optimization. In ICLR, 2015.
- Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
- Honglak Lee, Peter Pham, Yan Largman, and Andrew Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In NIPS, 2009.
- Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In CVPR Workshops, 2017.
- Pauline Luc, Natalia Neverova, Camille Couprie, Jakob Verbeek, and Yann LeCun. Predicting deeper into the future of semantic segmentation. In ICCV, 2017.
- Abdel-rahman Mohamed, George E. Dahl, and Geoffrey Hinton. Acoustic modeling using deep belief networks. IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, issue 1:14-22, January 2012.
- Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. Technical report, Université de Montréal, 2012.
- Ning Qian. On the momentum in gradient descent learning algorithms. Neural Networks, vol. 12, issue 1:145-151, January 1999.
- Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
- David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, vol. 323:533-536, October 1986.
- Ruslan Salakhutdinov and Geoffrey E. Hinton. Semantic hashing. In SIGIR, 2007.
- Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
- Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In NIPS, 2016.
- Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.