Online learning and aggregation
Instructor: GAUCHER Solenne
Department: Statistics
ECTS credits: 3
Lecture hours: 15
Tutorial hours: 6
Language: French
Examination: written
Objective
The aim of online learning is to provide efficient recursive prediction algorithms for data that arrive sequentially as a stream, rather than as a dataset given once and for all. Whereas statistical learning deals with independent, identically distributed data, online learning emphasizes the adversarial setting, in which the data are of arbitrary nature and satisfy only mild conditions. In this setting, one of the key ideas is to make, at each time step, a suitable randomized choice from a given set of candidate predictors. Analogous techniques can also be applied to the problem of aggregation, that is, to obtain procedures that predict almost as well as the best estimator in a given set. This course provides an introduction to online learning and aggregation, focusing on theoretical aspects.
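As a rough illustration of this randomized choice, here is a minimal Python sketch of exponential weighting for prediction with expert advice; the function name, the assumption that losses lie in [0, 1], the fixed random seed, and the suggested learning rate are illustrative choices, not part of the course material.

```python
import numpy as np

def exponential_weights(expert_losses, eta):
    """Sketch of prediction with expert advice via exponential weighting.

    expert_losses: (T, K) array; expert_losses[t, k] is the loss of expert k
    at round t, assumed to lie in [0, 1] (illustrative assumption).
    eta: learning rate, e.g. of order sqrt(log(K) / T).
    Returns the chosen experts and the cumulative loss of the algorithm.
    """
    rng = np.random.default_rng(0)
    T, K = expert_losses.shape
    cumulative = np.zeros(K)        # cumulative losses of the experts
    total_loss = 0.0
    choices = []
    for t in range(T):
        # Weights proportional to exp(-eta * cumulative loss);
        # subtracting the minimum is only for numerical stability.
        w = np.exp(-eta * (cumulative - cumulative.min()))
        p = w / w.sum()
        k = rng.choice(K, p=p)      # randomized choice of a candidate predictor
        choices.append(k)
        total_loss += expert_losses[t, k]
        cumulative += expert_losses[t]   # full-information feedback
    return choices, total_loss
```

With a learning rate of order sqrt(log(K) / T), the expected regret of this scheme with respect to the best expert is of order sqrt(T log K); guarantees of this type are among the theoretical results studied in the course.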
Plan
- Online classification in the realizable case; the halving algorithm.
- Online gradient descent for convex and strongly convex losses. Online-to-batch conversion. Online linear regression (see the sketch after this plan).
- Randomization by exponential weighting. Prediction with expert advice.
- Adversarial multi-armed bandit problem.
- Aggregation of estimators.
- Gradient-free online learning. Continuous bandit problem.
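To make the second item above concrete, the following minimal sketch shows projected online gradient descent with a constant step size, taking the decision set to be a Euclidean ball; the function name and its arguments are illustrative assumptions rather than the course's notation.

```python
import numpy as np

def online_gradient_descent(grad_fns, x0, eta, radius):
    """Sketch of projected online gradient descent.

    grad_fns: list of callables; grad_fns[t](x) returns a subgradient of
    the convex loss revealed at round t, evaluated at x.
    eta: constant step size; radius: the decision set is taken to be the
    Euclidean ball of this radius (illustrative choice of constraint set).
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for grad in grad_fns:
        x = x - eta * grad(x)              # gradient step on the current loss
        norm = np.linalg.norm(x)
        if norm > radius:                  # Euclidean projection onto the ball
            x *= radius / norm
        iterates.append(x.copy())
    return iterates
```

For online linear regression with squared loss (y_t - a_t·x)^2, each grad_fns[t] would return 2 (a_t·x - y_t) a_t; with a step size of order 1/sqrt(T), the regret against the best fixed vector in the ball grows as sqrt(T), which is the kind of bound established in the lectures.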
References
- Shalev-Shwartz, S. (2011). Online Learning and Online Convex Optimization. Foundations and Trends in Machine Learning, vol. 4, pages 107-194.
- Tsybakov, A. (2020). Online Learning and Aggregation. Lecture notes (detailed lecture notes are provided).