Deep reinforcement learning methods for structure-guided processing path optimization

Bibliographic Details
Published in: Journal of intelligent manufacturing, 2022-01, Vol. 33 (1), p. 333-352
Main Authors: Dornheim, Johannes, Morand, Lukas, Zeitvogel, Samuel, Iraki, Tarek, Link, Norbert, Helm, Dirk
Format: Article
Language:English
Description: A major goal of materials design is to find material structures with desired properties and in a second step to find a processing path to reach one of these structures. In this paper, we propose and investigate a deep reinforcement learning approach for the optimization of processing paths. The goal is to find optimal processing paths in the material structure space that lead to target structures, which have been identified beforehand to result in desired material properties. There exists a target set containing one or multiple different structures, bearing the desired properties. Our proposed methods can find an optimal path from a start structure to a single target structure, or optimize the processing paths to one of the equivalent target structures in the set. In the latter case, the algorithm learns during processing to simultaneously identify the best reachable target structure and the optimal path to it. The proposed methods belong to the family of model-free deep reinforcement learning algorithms. They are guided by structure representations as features of the process state and by a reward signal, which is formulated based on a distance function in the structure space. Model-free reinforcement learning algorithms learn through trial and error while interacting with the process. Thereby, they are not restricted to information from a priori sampled processing data and are able to adapt to the specific process. The optimization itself is model-free and does not require any prior knowledge about the process itself. We instantiate and evaluate the proposed methods by optimizing paths of a generic metal forming process. We show the ability of both methods to find processing paths leading close to target structures and the ability of the extended method to identify target structures that can be reached effectively and efficiently and to focus on these targets for sample-efficient processing path optimization.
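The reward signal described above, based on a distance function in the structure space over a set of target structures, could be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the Euclidean metric, and the threshold and bonus values are assumptions.

```python
import math


def structure_distance(s, t):
    # Assumed metric: Euclidean distance between two structure
    # feature vectors of equal length.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s, t)))


def reward(state, targets, reached_threshold=0.05):
    # Distance-based reward over a target set: the negative distance
    # to the *nearest* target structure, so the agent is free to learn
    # which equivalent target is best reachable. A terminal bonus is
    # granted when the state falls within the threshold of any target.
    # Threshold and bonus magnitude are hypothetical choices.
    d = min(structure_distance(state, t) for t in targets)
    bonus = 100.0 if d < reached_threshold else 0.0
    return bonus - d
```

With a single-element target set this reduces to the single-target case; with several equivalent targets, the `min` lets the learned value function implicitly select among them.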
DOI: 10.1007/s10845-021-01805-z
ISSN: 0956-5515
EISSN: 1572-8145
Subjects:
Algorithms
Business and Management
Control
Control, Robotics, Mechatronics
Data processing
Deep learning
Identification methods
Machine learning
Machines
Manufacturing
Manufacturing, Machines, Tools, Processes
Material properties
Mechatronics
Metal forming
Optimization
Processes
Production
Robotics
Signal processing
Target recognition