| old_uid | 9244 |
|---|---|
| title | Efficient learning of sparse Conditional Random Fields |
| start_date | 2010/11/15 |
| schedule | 14h-16h |
| online | no |
| summary | Conditional Random Fields (CRFs) are an efficient approach to supervised sequence labeling. CRFs, a generalization of logistic regression, take structural dependencies into account and are therefore used for structured output prediction. We address the problem of model selection for CRFs by imposing sparsity through an "elastic net" penalty term. Our contribution is twofold. First, we introduce coordinate descent parameter update schemes for CRFs, since the penalty term is not differentiable at zero and gradient methods cannot be applied directly. Second, we show how the sparsity of the parameter set can be exploited to speed up the training procedure and hence potentially handle very high-dimensional models. We compare the proposed approach with state-of-the-art CRF training strategies. |
| responsibles | <not specified> |
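The elastic net penalty and coordinate-wise updates mentioned in the summary can be sketched on the unstructured special case that the summary itself points to: logistic regression. The sketch below is purely illustrative, not the talk's actual CRF algorithm; all function names and data are hypothetical. It uses the standard quadratic upper bound on the logistic loss (the sigmoid's derivative is at most 1/4) and a soft-thresholding step, which is the usual closed-form coordinate update when the L1 part of the penalty is not differentiable at zero.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_threshold(w, t):
    # Closed-form prox of the L1 term: exact zero when |w| <= t.
    return np.sign(w) * max(abs(w) - t, 0.0)

def elastic_net_logreg_cd(X, y, rho1=0.1, rho2=0.1, n_sweeps=100):
    """Coordinate descent for logistic regression with an elastic net
    penalty rho1 * ||theta||_1 + (rho2 / 2) * ||theta||_2^2 (illustrative)."""
    n, d = X.shape
    theta = np.zeros(d)
    # Per-coordinate curvature bound for the logistic loss (sigmoid' <= 1/4).
    L = 0.25 * (X ** 2).sum(axis=0)
    margin = X @ theta  # kept up to date incrementally
    for _ in range(n_sweeps):
        for j in range(d):
            g_j = X[:, j] @ (sigmoid(margin) - y)  # partial derivative
            # Minimize the quadratic bound plus the penalty along coordinate j.
            new_j = soft_threshold(L[j] * theta[j] - g_j, rho1) / (L[j] + rho2)
            if new_j != theta[j]:
                margin += X[:, j] * (new_j - theta[j])
                theta[j] = new_j
    return theta
```

Because the soft-thresholding step sets coordinates exactly to zero, the parameter vector becomes genuinely sparse during training, which is what makes the speedup the summary alludes to possible: zero coordinates can be skipped or handled cheaply in subsequent sweeps.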
|