| title | Using hyper realistic voice and face transformation filters for the study of human social interactions |
|---|---|
| start_date | 2024/11/15 |
| schedule | 12h30–14h |
| online | no |
| location_info | salle Langevin |
| summary | Social interaction research lacks an experimental paradigm that enables researchers to understand how specific social signals (e.g. vocal/facial expressions) or physical attributes (beauty, age, gender) causally influence social interactions. To go beyond these limitations, we built DuckSoup, a video-conferencing experimental platform that lets researchers transform participants' voices and faces with transformation filters in real time during interactions. During this talk, I will show how we used DuckSoup to align (or misalign) the smiles of dating participants, revealing how smile alignment causally influences liking. I will also present a pre-registered replication of these findings in a "meet-up" context. I will conclude by presenting my vision for the future of social interaction research: one where we are able to (1) explicitly test social cognition theories, such as emotion contagion or social bias theory, in social interactions by transforming participants' social signals and physical attributes in real time, (2) scale social interaction research to test theories across cultures, and (3) reveal the emergent multimodal mechanisms (e.g. emotional and physiological synchronisation) causally triggered by specific signals and physical attributes. I will finish with an ethical note about transformation filters, which are progressively becoming a widespread societal phenomenon with the potential to influence our social cognition. |
| responsibles | NC |
Workflow history

| from state | to state | comment | date |
|---|---|---|---|
| submitted | published | | 2024/11/12 14:37 UTC |