Evaluation of the reliability and validity of computerized tests of attention

Bibliographic Details
Published in: PloS one, 2023-01, Vol. 18 (1), p. e0281196
Main Authors: Langner, Robert, Scharnowski, Frank, Ionta, Silvio, G Salmon, Carlos E, Piper, Brian J, Pamplona, Gustavo S P
Format: Article
Language: English
Description: Different aspects of attention can be assessed through psychological tests to identify stable individual or group differences as well as alterations after interventions. Aiming for a wide applicability of attentional assessments, Psychology Experiment Building Language (PEBL) is an open-source software system for designing and running computerized tasks that tax various attentional functions. Here, we evaluated the reliability and validity of computerized attention tasks as provided with the PEBL package: Continuous Performance Task (CPT), Switcher task, Psychomotor Vigilance Task (PVT), Mental Rotation task, and Attentional Network Test. For all tasks, we evaluated test-retest reliability using the intraclass correlation coefficient (ICC), as well as internal consistency through within-test correlations and split-half ICC. Across tasks, response time scores showed adequate reliability, whereas scores of performance accuracy, variability, and deterioration over time did not. Stability across application sites was observed for the CPT and Switcher task, but practice effects were observed for all tasks except the PVT. We substantiate convergent and discriminant validity for several task scores using between-task correlations and provide further evidence for construct validity via associations of task scores with attentional and motivational assessments. Taken together, our results provide necessary information to help design and interpret studies involving attention assessments.
DOI: 10.1371/journal.pone.0281196
Publisher: Public Library of Science (United States)
Published: 2023-01-27
PMID: 36706136
Contributor: De La Torre, Gabriel G.
ORCID: 0000-0002-0278-203X; 0000-0002-3237-001X
Notes: Competing Interests: The authors have declared that no competing interests exist.
ISSN: 1932-6203
Source: Publicly Available Content Database; PubMed Central
Subjects:
Analysis
Assessments
Attention
Biology and Life Sciences
Cognitive tasks
Consent
Correlation
Correlation coefficient
Correlation coefficients
Examinations
Experiments
Hypothesis testing
Medicine and Health Sciences
Mental task performance
Network reliability
Neuropsychological Tests
Physical Sciences
Psychological assessment
Psychological tests
Psychology
Psychometrics
Public software
Quantitative psychology
Reaction Time
Real property
Reliability analysis
Reliability aspects
Reproducibility of Results
Research and Analysis Methods
Response time
Social Sciences
Software
Technology application
Validity
Valuation
Vigilance
Wakefulness