Regression-based intensity estimation of facial action units
Published in: Image and Vision Computing, 2012-10, Vol. 30 (10), p. 774-784
Main Authors: Savran, Arman; Sankur, Bulent; Taha Bilge, M.
Format: Article
Language: English
creator | Savran, Arman; Sankur, Bulent; Taha Bilge, M. |
description | The Facial Action Coding System (FACS) is the de facto standard for the analysis of facial expressions. FACS describes expressions in terms of the configuration and strength of atomic units called Action Units (AUs). FACS defines 44 AUs, and each AU's intensity is graded on a nonlinear five-point scale. There has been significant progress in the literature on the detection of AUs; however, the companion problem of estimating AU strengths has received much less attention. In this work we propose a novel AU intensity estimation scheme applied to 2D luminance and/or 3D surface geometry images. Our scheme is based on regression over selected image features. These features are either non-specific, i.e., inherited from the AU detection algorithm, or specific, i.e., selected solely for the purpose of intensity estimation. For thoroughness, various types of local 3D shape indicators are considered, such as mean curvature, Gaussian curvature, shape index and curvedness, as well as their fusion. Feature selection from the initial plethora of Gabor moments is instrumented via a regression that optimizes the AU intensity predictions. Our AU intensity estimator is person-independent, and when tested on 25 AUs that appear singly or in various combinations, it performs significantly better than the state-of-the-art method based on the margins of SVMs designed for AU detection. A comparative evaluation shows that the 2D and 3D modalities have relative merits for upper-face and lower-face AUs, respectively, and that fusing the 2D and 3D intensity estimates yields an overall improvement.
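The four local 3D shape indicators the abstract names (mean curvature, Gaussian curvature, shape index, curvedness) have standard definitions in terms of the principal curvatures. A minimal sketch using the common Koenderink-style formulas follows; this is not code from the paper, and the sign convention for the shape index varies between papers.

```python
import numpy as np

def shape_descriptors(k1, k2):
    """Standard local 3D shape indicators from principal curvatures k1 >= k2.

    These are the four descriptors named in the abstract; the shape-index
    sign convention here is one common choice and differs across papers.
    """
    k1 = np.asarray(k1, dtype=float)
    k2 = np.asarray(k2, dtype=float)
    H = (k1 + k2) / 2.0                     # mean curvature
    K = k1 * k2                             # Gaussian curvature
    C = np.sqrt((k1**2 + k2**2) / 2.0)      # curvedness (Koenderink)
    # shape index in [-1, 1]; arctan2 handles the umbilic case k1 == k2
    S = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    return H, K, S, C
```

For an umbilic convex point (k1 = k2 > 0) this gives S = 1; a symmetric saddle (k1 = -k2) gives S = 0, with the curvedness C carrying the magnitude of the bending.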
► Regression for person-independent estimation of facial action unit intensities
► Mean curvature, Gaussian curvature, shape index, curvedness for 3D estimation
► Fusing curvature, Gaussian curvature, and curvedness achieves the best 3D estimation
► SVM regression of AdaBoost/AdaBoost.RT-selected features is superior to SVM margins
► Modality fusion overcomes deficiencies of 3D in upper face and 2D in lower face |
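The select-then-regress pipeline the highlights describe (boosting-driven feature selection feeding SVM regression) can be sketched with scikit-learn. The synthetic data, `AdaBoostRegressor`, and the `SelectFromModel` step below are illustrative stand-ins, not the authors' AdaBoost.RT-on-Gabor-moments implementation.

```python
# Illustrative sketch of a select-then-regress pipeline: a boosted regressor
# ranks candidate features, the most important ones are kept, and an SVR
# predicts a continuous intensity from them. Synthetic data stands in for
# the paper's Gabor-moment/curvature features.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))             # 200 samples, 50 candidate features
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)  # intensity proxy

model = make_pipeline(
    SelectFromModel(AdaBoostRegressor(n_estimators=50, random_state=0)),
    SVR(kernel="rbf", C=1.0),
)
model.fit(X, y)
pred = model.predict(X)
```

Only the features whose boosted importance exceeds the selector's threshold reach the SVR, which mirrors the paper's motivation: intensity regression on a small, task-selected subset rather than the full feature plethora.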
doi_str_mv | 10.1016/j.imavis.2011.11.008 |
format | article |
identifier | ISSN: 0262-8856 |
ispartof | Image and vision computing, 2012-10, Vol.30 (10), p.774-784 |
issn | 0262-8856 (print); 1872-8138 (electronic) |
language | eng |
recordid | cdi_proquest_miscellaneous_1136572596 |
source | ScienceDirect Journals |
subjects | 3D facial expression recognition; Action unit intensity estimation; AdaBoost.RT; Curvature; Facial; Facial Action Coding System; Feature selection; Mathematical analysis; Regression; Regression analysis; SVM regression; Three dimensional; Two dimensional |
title | Regression-based intensity estimation of facial action units |