Advanced Machine Learning
Teacher: STROMME Austin
Department: Statistics
ECTS: 4
Course Hours: 21
Tutorial Hours: 12
Language: English
Examination Modality: written exam + continuous assessment
Objective
The objective of this course is to present the theoretical foundations of statistical learning, focusing mainly on binary classification.
The statistical (and algorithmic) complexity of this problem will be studied first through the analysis of Empirical Risk Minimization, and then through other general classes of algorithms: SVM, Neural Nets, Boosting and Random Forests (if time permits). Their statistical properties will be discussed and compared.
These algorithms will be applied to real data during the lab sessions.
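As an illustration of the kind of lab exercise this suggests, here is a minimal sketch of (convexified) empirical risk minimization with a linear SVM on synthetic data; the choice of Python, scikit-learn, and synthetic data is an assumption made for illustration, not something specified by the course description.

# Assumed setup (Python + scikit-learn), for illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Synthetic binary classification data standing in for the real data used in the labs.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# LinearSVC minimizes a regularized convex surrogate of the 0-1 risk over linear
# classifiers, i.e. a convexified empirical risk minimization problem.
clf = LinearSVC(C=1.0).fit(X_train, y_train)

# The empirical 0-1 risk on held-out data estimates the true risk of the learned classifier.
test_error = 1.0 - clf.score(X_test, y_test)
print(f"held-out 0-1 risk: {test_error:.3f}")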
Planning
- The framework and examples
- Empirical Risk Minimization
- VC dimension and the VC inequality (a standard formulation is sketched after this list)
- Convexification and general losses
- Chaining
- Support Vector Machines
- Boosting
- Feed-Forward Neural Nets
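For the first items of this outline, a standard formulation in the usual notation for binary classification (not taken verbatim from the course materials) is, in LaTeX: the risk and empirical risk of a classifier are

R(f) = \mathbb{P}\big(f(X) \neq Y\big),
\qquad
\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{f(X_i) \neq Y_i\},
\qquad
\hat{f}_n \in \operatorname*{arg\,min}_{f \in \mathcal{F}} \hat{R}_n(f),

where \hat{f}_n is the Empirical Risk Minimizer over the class \mathcal{F}, and a VC-type inequality (one version of which is proved via symmetrization and chaining) states that, for a universal constant C,

\mathbb{E} \sup_{f \in \mathcal{F}} \big| \hat{R}_n(f) - R(f) \big| \le C \sqrt{\frac{V_{\mathcal{F}}}{n}},

with V_{\mathcal{F}} the VC dimension of \mathcal{F}; the exact constants and statements depend on the version used in the course.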
References
Y. Mansour, Machine Learning: Foundations, Tel-Aviv University, 2013.
P. Rigollet, Mathematics of Machine Learning, MIT, 2015.
A. Ng, Machine Learning, Stanford, 2015.
S. Kakade and A. Tewari, Topics in Artificial Intelligence, TTIC, 2008.
L. Devroye, L. Györfi and G. Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, 1996.
T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer, 2009.