Perceptually Valid Facial Expressions for Character-Based Applications
This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well...
| Main Authors: | Ali Arya, Steve DiPaola, Avi Parush |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2009-01-01 |
| Series: | International Journal of Computer Games Technology |
| Online Access: | http://dx.doi.org/10.1155/2009/462315 |
| _version_ | 1849434649622740992 |
|---|---|
| author | Ali Arya, Steve DiPaola, Avi Parush |
| author_facet | Ali Arya, Steve DiPaola, Avi Parush |
| author_sort | Ali Arya |
| collection | DOAJ |
| description | This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.” |
| format | Article |
| id | doaj-art-e9ce59bd0b8a446fae22ff7c6b720bc0 |
| institution | Kabale University |
| issn | 1687-7047 1687-7055 |
| language | English |
| publishDate | 2009-01-01 |
| publisher | Wiley |
| record_format | Article |
| series | International Journal of Computer Games Technology |
| spelling | doaj-art-e9ce59bd0b8a446fae22ff7c6b720bc0; 2025-08-20T03:26:34Z; eng; Wiley; International Journal of Computer Games Technology; 1687-7047; 1687-7055; 2009-01-01; 2009; 10.1155/2009/462315; 462315; Perceptually Valid Facial Expressions for Character-Based Applications; Ali Arya (School of Information Technology, Carleton University, Ottawa, ON, K1S 5B6, Canada); Steve DiPaola (School of Interactive Arts and Technology, Simon Fraser University, Surrey, BC, V3T 0A3, Canada); Avi Parush (Department of Psychology, Carleton University, Ottawa, ON, K1S 5B6, Canada); [abstract as in the description field]; http://dx.doi.org/10.1155/2009/462315 |
| spellingShingle | Ali Arya Steve DiPaola Avi Parush Perceptually Valid Facial Expressions for Character-Based Applications International Journal of Computer Games Technology |
| title | Perceptually Valid Facial Expressions for Character-Based Applications |
| title_full | Perceptually Valid Facial Expressions for Character-Based Applications |
| title_fullStr | Perceptually Valid Facial Expressions for Character-Based Applications |
| title_full_unstemmed | Perceptually Valid Facial Expressions for Character-Based Applications |
| title_short | Perceptually Valid Facial Expressions for Character-Based Applications |
| title_sort | perceptually valid facial expressions for character based applications |
| url | http://dx.doi.org/10.1155/2009/462315 |
| work_keys_str_mv | AT aliarya perceptuallyvalidfacialexpressionsforcharacterbasedapplications AT stevedipaola perceptuallyvalidfacialexpressionsforcharacterbasedapplications AT aviparush perceptuallyvalidfacialexpressionsforcharacterbasedapplications |
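
The description above outlines the paper's core idea: rather than averaging the facial-action sets of two universal emotions, facial actions are grouped into "facial expression units" tied to regions of a three-dimensional emotion space, and a mixed emotion activates the units whose regions contain its location in that space. The minimal Python sketch below illustrates that contrast under assumed conventions; the axis names (valence, arousal, dominance), the facial actions, intensities, and region boundaries are invented for illustration and are not taken from the paper.

```python
"""Illustrative sketch: region-based selection of "facial expression units"
from a point in a 3D emotion space, contrasted with naive averaging of two
universal-emotion expressions. All actions, values, and regions are assumed."""

from dataclasses import dataclass


@dataclass
class ExpressionUnit:
    """A regionalized group of facial actions, active inside an axis-aligned
    box of the emotion space (hypothetical boundaries)."""
    name: str
    actions: dict    # facial action -> intensity in [0, 1]
    lo: tuple        # lower corner (valence, arousal, dominance)
    hi: tuple        # upper corner

    def contains(self, point):
        return all(l <= p <= h for p, l, h in zip(point, self.lo, self.hi))


# Hypothetical catalogue of facial expression units.
UNITS = [
    ExpressionUnit("smile",      {"lip_corner_pull": 0.8},  (0.2, -1.0, -1.0), (1.0, 1.0, 1.0)),
    ExpressionUnit("brow_raise", {"inner_brow_raise": 0.6}, (-1.0, 0.3, -1.0), (1.0, 1.0, 1.0)),
    ExpressionUnit("brow_lower", {"brow_lowerer": 0.7},     (-1.0, -1.0, 0.4), (0.0, 1.0, 1.0)),
    ExpressionUnit("lip_press",  {"lip_pressor": 0.5},      (-1.0, -1.0, -1.0), (-0.2, 0.0, 1.0)),
]


def expression_for(point):
    """Activate every unit whose region contains the target emotion's location,
    keeping the strongest intensity where units overlap."""
    combined = {}
    for unit in UNITS:
        if unit.contains(point):
            for action, intensity in unit.actions.items():
                combined[action] = max(combined.get(action, 0.0), intensity)
    return combined


def naive_average(expr_a, expr_b):
    """Baseline the paper critiques: average the action vectors of two emotions."""
    actions = set(expr_a) | set(expr_b)
    return {a: (expr_a.get(a, 0.0) + expr_b.get(a, 0.0)) / 2 for a in actions}


if __name__ == "__main__":
    # A hypothetical "pleasantly surprised" mix: positive valence, high arousal.
    mixed = (0.5, 0.7, 0.1)
    print("region-based:", expression_for(mixed))

    happiness = {"lip_corner_pull": 1.0}
    surprise = {"inner_brow_raise": 1.0, "jaw_drop": 0.8}
    print("naive average:", naive_average(happiness, surprise))
```

The point of the contrast: the region-based result only includes actions that viewers associate with that part of the emotion space, whereas averaging blends every action of both source emotions at half strength regardless of how the mixture is actually perceived.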