
Detecting user attention to video segments using interval EEG features

Highlights:
• A method of detecting the top 20% of viewer attention to video segments is proposed.
• This is the first study of detecting viewer attention during video viewing.
• All subject-independent models unbiased to specific genres are evaluated.
• The all-14-channel, single-channel, and selected multi-channel models are included.
• The interval band ratio features are the most suitable for all the types of models.


Bibliographic Details
Published in: Expert Systems with Applications, 2019-01, Vol. 115, p. 578-592
Main Authors: Moon, Jinyoung; Kwon, Yongjin; Park, Jongyoul; Yoon, Wan Chul
Format: Article
Language:English
Description:
To manage voluminous viewed videos, which US adults watch at a rate of more than five hours per day on average, an automatic method of detecting highly attended video segments during video viewing is required to access them for fine-grained sharing and rewatching. Most electroencephalography (EEG)-based studies of user state analysis have addressed the recognition of attention-related states in a specific task condition, such as drowsiness during driving, attention during learning, and mental fatigue during task execution. In contrast to attention in a specific task condition, both inattention and normal attention are meaningless to viewers in terms of managing viewed videos, while detecting high attention paid to video segments would make a valuable contribution to an automatic management system of viewed videos based on viewer attention. To the best of our knowledge, this is the first EEG-based study of detecting viewer attention paid to video segments.

This study describes how to collect video-induced EEG and attention data for video segments from viewers without bias to specific genres, and how to construct a subject-independent detection model for the top 20% of viewer attention. The attention detection model using the proposed interval EEG features from 14 channels achieved the best average F1 score of 39.79% with an average accuracy of 52.96%. Additionally, this paper proposes a channel-based feature selection method that considers both the performances of single-channel models and their physical locations for investigating the group of channels relevant to attention detection. The attention detection models using the interval EEG features from all four or some of the channels located in the fronto-central, parietal, temporal, and occipital lobes of the left hemisphere achieved the best F1 score of 39.60% with an average accuracy of 48.70%. These models outperform models using the features from all four or some of their symmetric channels in the right hemisphere, as well as models using the features from six channels located in the anterior-frontal and frontal lobes of the left and right hemispheres.

This paper shows the feasibility of subject-independent and genre-independent attention detection models using a wireless EEG headset with optimized channels; these models can be applied to an intelligent video management system based on viewer attention in real-world scenarios.
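The "interval band ratio" features are defined precisely in the paper itself, which this record only summarizes. As an illustrative sketch only, the following stdlib-only Python computes per-band spectral power for one channel over one interval and normalizes it to ratios; the band names and edges are conventional EEG choices, not taken from this record.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in [f_lo, f_hi) Hz via a plain O(n^2) DFT (stdlib only)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):  # skip DC; positive frequencies only
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

# Conventional EEG bands in Hz -- illustrative, not the paper's definition.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_ratio_features(signal, fs):
    """Relative power per band for one channel over one interval."""
    powers = {name: band_power(signal, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
    total = sum(powers.values()) or 1.0  # guard against a flat signal
    return {name: p / total for name, p in powers.items()}

# A 10 Hz sine sampled at 128 Hz concentrates nearly all relative power in alpha.
sig = [math.sin(2 * math.pi * 10 * i / 128) for i in range(128)]
print(band_ratio_features(sig, 128))
```

In practice one would estimate the power spectrum with Welch's method (e.g., `scipy.signal.welch`) rather than a naive DFT; the plain-DFT version simply keeps the sketch dependency-free.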
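The abstract reports F1 alongside accuracy because the positive class (the top 20% of attention) is imbalanced, so accuracy alone would be misleading. A stdlib-only sketch with hypothetical confusion-matrix counts (not the paper's data) shows how F1 can sit well below accuracy in such a setup:

```python
def accuracy_and_f1(tp, fp, fn, tn):
    """Accuracy and F1 from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

# Hypothetical counts for 100 segments, 20 of them true positives (top-20% labels).
acc, f1 = accuracy_and_f1(tp=14, fp=42, fn=6, tn=38)
print(acc, f1)  # accuracy 0.52, F1 ~0.37: moderate accuracy, much lower F1
```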
DOI: 10.1016/j.eswa.2018.08.016
ISSN: 0957-4174
EISSN: 1873-6793
Source: ScienceDirect Freedom Collection
Subjects:
Adults
Automatic text analysis
Channels
Detection
Electroencephalography
Expert systems
Hemispheres
Interval EEG features
Occipital lobes
Segments
User attention
Video segments
Video viewing