
Learning to Abstract Visuomotor Mappings using Meta-Reinforcement Learning

We investigated the human capacity to acquire multiple visuomotor mappings for de novo skills. Using a grid navigation paradigm, we tested whether contextual cues, implemented as different "grid worlds", allow participants to learn two distinct key-mappings more efficiently. Our results indicate that when contextual information is provided, task performance is significantly better. The same held true for meta-reinforcement learning agents that differed in whether or not they received contextual information when performing the task. We evaluated their accuracy in predicting human performance in the task and analyzed their internal representations. The results indicate that contextual cues allow the formation of separate representations in space and time when different visuomotor mappings are used, whereas their absence favors sharing one representation. While both strategies can support learning of multiple visuomotor mappings, we showed that contextual cues provide a computational advantage in terms of how many mappings can be learned.
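The paradigm described in the abstract can be made concrete with a minimal sketch: an agent moves on a grid using four keys, and a contextual cue (the "grid world") selects which of two arbitrary key-to-direction mappings is in effect. This is an illustrative reconstruction, not the authors' code; the grid size, key labels, and the two specific mappings are assumptions.

```python
# Minimal sketch of a grid-navigation task with context-dependent
# key-mappings (illustrative; all specifics are assumptions).

GRID_SIZE = 5

# Two distinct key-mappings; the contextual cue selects one.
# Each key maps to a (row, column) displacement.
MAPPINGS = {
    "world_A": {"w": (-1, 0), "s": (1, 0), "a": (0, -1), "d": (0, 1)},
    "world_B": {"w": (0, 1), "s": (0, -1), "a": (-1, 0), "d": (1, 0)},
}

def step(pos, key, context):
    """Apply one key press under the mapping selected by the context cue."""
    dr, dc = MAPPINGS[context][key]
    r = min(max(pos[0] + dr, 0), GRID_SIZE - 1)  # clamp to grid bounds
    c = min(max(pos[1] + dc, 0), GRID_SIZE - 1)
    return (r, c)

# The same key sequence reaches different squares under different contexts,
# so the cue disambiguates which mapping the learner should apply.
pos_a = pos_b = (2, 2)
for key in ["w", "w", "d"]:
    pos_a = step(pos_a, key, "world_A")
    pos_b = step(pos_b, key, "world_B")
print(pos_a)  # (0, 3): under world_A, "w" moves up and "d" moves right
print(pos_b)  # (3, 4): under world_B, "w" moves right and "d" moves down
```

A context-blind agent sees only the key presses and resulting positions, so it must infer the active mapping from experience; providing the cue as an extra observation is what lets the meta-RL agents in the study form separate internal representations per mapping.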

Bibliographic Details
Published in: arXiv.org, 2024-02
Main Authors: Velazquez-Vargas, Carlos A; Isaac Ray Christian; Taylor, Jordan A; Kumar, Sreejan
Format: Article
Language:English
Subjects: Human performance; Performance prediction; Representations
Online Access: Get full text
EISSN: 2331-8422
Source: Publicly Available Content Database