| old_uid | 5664 |
|---|
| title | Talk on the article by Emmanuel Candès and Yaniv Plan, "Near-ideal model selection by L1 minimization" |
|---|
| start_date | 2008/11/24 |
|---|
| schedule | 13:30 |
|---|
| online | no |
|---|
| summary | We consider the fundamental problem of estimating the mean of a vector y = Xβ + z, where
X is an n × p design matrix in which one can have far more variables than observations and z is
a stochastic error term, the so-called ‘p > n’ setup. When β is sparse, or more generally, when
there is a sparse subset of covariates providing a close approximation to the unknown mean
vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally
tractable algorithm.
We show that in a surprisingly wide range of situations, the lasso happens to nearly select
the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic
program achieves a squared error within a logarithmic factor of the ideal mean squared error
one would achieve with an oracle supplying perfect information about which variables should
be included in the model and which variables should not. Interestingly, our results describe the
average performance of the lasso; that is, the performance one can expect in a vast majority
of cases where Xβ is a sparse or nearly sparse superposition of variables, but not in all cases.
Our results are nonasymptotic and widely applicable since they simply require that pairs of
predictor variables are not too collinear. (An illustrative lasso-versus-oracle sketch follows the table below.) |
|---|
| responsibles | Biau, Stoltz, Massart |
|---|
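As a rough illustration of the setup described in the summary, the sketch below simulates a sparse ‘p > n’ regression problem and compares the squared error of the lasso fit of Xβ with that of an oracle least-squares fit restricted to the true support. This is only a minimal sketch, not code from the paper: the random design, the sparsity level, the coefficient amplitude, and the penalty λ = 2σ√(2 log p) are illustrative choices, and scikit-learn's `Lasso` (whose objective divides the quadratic term by n) stands in for the "simple quadratic program" mentioned in the abstract.

```python
# Minimal sketch: lasso versus an oracle that knows the true support.
# All problem sizes and constants below are illustrative choices, not the paper's.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, sigma = 100, 500, 10, 1.0            # far more variables (p) than observations (n)
X = rng.standard_normal((n, p)) / np.sqrt(n)  # columns roughly unit-norm, weakly collinear
beta = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
beta[support] = 20.0 * rng.choice([-1.0, 1.0], size=s)  # sparse coefficient vector
y = X @ beta + sigma * rng.standard_normal(n)

# Lasso with a log(p)-scaled penalty; sklearn's objective divides the quadratic
# term by n, so alpha = lam / n matches the penalty lam in (1/2)||y - Xb||^2 + lam*||b||_1.
lam = 2.0 * sigma * np.sqrt(2.0 * np.log(p))
lasso = Lasso(alpha=lam / n, fit_intercept=False, max_iter=10_000).fit(X, y)
lasso_err = np.sum((X @ lasso.coef_ - X @ beta) ** 2)

# Oracle: least squares restricted to the true support (perfect variable selection).
Xs = X[:, support]
beta_oracle, *_ = np.linalg.lstsq(Xs, y, rcond=None)
oracle_err = np.sum((Xs @ beta_oracle - X @ beta) ** 2)

print(f"lasso squared error : {lasso_err:.2f}")
print(f"oracle squared error: {oracle_err:.2f} (roughly s * sigma^2 = {s * sigma ** 2:.1f})")
```

Note that the oracle fit is only computable here because the simulation knows the true support; in practice it is unavailable, which is what makes an error bound of the lasso relative to this ideal benchmark interesting.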