| old_uid | 11228 |
|---|
| title | Next generation facial expression recognition systems |
|---|
| start_date | 2012/04/03 |
|---|
| schedule | 14h-15h |
|---|
| online | no |
|---|
| summary | The past decade has seen a large number of publications on Automatic Facial Expression Recognition Systems (AFERS). The first AFERS programmes are now publicly available, either free of charge from academics or for sale by companies, giving an indication of what currently works and what doesn't. The first Facial Expression Recognition Challenge (FERA2011) sheds further light on the efforts in this field, comparing many state-of-the-art approaches on the same challenging dataset. What we now see is the advent of a second generation of AFERS. Building upon the successes and learning from the failures of the first generation, these new systems attempt to tackle the open challenges in this field by combining different approaches, as well as by integrating sources of information other than the face (e.g. head actions). In this talk I will describe two recent contributions towards such a second generation of AFERS, namely a novel facial point detection algorithm, Local Evidence Aggregation Regressors (LEAR), and a novel dynamic appearance descriptor called LPQ-TOP.
Bio:
Dr. Michel F. Valstar (http://www.cs.nott.ac.uk/~mfv) is a lecturer at the University of Nottingham. He was a Visiting Researcher at MIT's Media Lab and a Research Associate in the intelligent Behaviour Understanding Group (iBUG) at Imperial College London. He received his master's degree in Electrical Engineering from Delft University of Technology in 2005 and his PhD in Computer Science from Imperial College London in 2008. He currently works in the fields of computer vision and pattern recognition, where his main interest is the automatic recognition of human behaviour, specialising in the analysis of facial expressions. In 2011 he was the main organiser of the first Facial Expression Recognition Challenge, FERA2011, and of the first Audio-Visual Emotion Recognition Challenge, AVEC2011. In 2007 he won the BCS British Machine Intelligence Prize for part of his PhD work. He has published technical papers in authoritative journals and at conferences including SMC-B, TAC, CVPR and ICCV, and his work has received popular press coverage in New Scientist and on BBC Radio. |
|---|
| responsibles | <not specified> |
|---|