Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack problem
As an extension of the classical 0-1 knapsack problem (0-1 KP), the discounted {0-1} knapsack problem (DKP) is proposed based on the concept of discounts in the commercial world. The DKP contains a set of item groups, where each group includes three items and no more than one item from each group can be packed into the knapsack, which makes it more complex and challenging than the 0-1 KP. At present, the two main classes of algorithms for solving the DKP are exact algorithms and approximate algorithms. However, some issues need further study, such as improving solution quality. In this paper, a novel multi-strategy monarch butterfly optimization (MMBO) algorithm for the DKP is proposed. In MMBO, two effective strategies, neighborhood mutation with crowding and Gaussian perturbation, are introduced. Experimental analyses show that the first strategy enhances the global search ability, while the second strategy strengthens the local search ability and prevents premature convergence during the evolution process. On this basis, MBO is combined with each strategy individually, denoted as NCMBO and GMMBO, respectively. We compared MMBO with six other methods, including NCMBO, GMMBO, MBO, FirEGA, SecEGA and elephant herding optimization. The experimental results on three types of large-scale DKP instances show that NCMBO, GMMBO and MMBO are all suitable for solving the DKP. In addition, MMBO outperforms the six other methods and achieves a good approximate solution, with its approximation ratio close to 1 on almost all DKP instances.
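The group constraint described in the abstract (each group holds three items, at most one of which may be packed) can be illustrated with a small exact dynamic-programming sketch. This is not the paper's MMBO algorithm; the `solve_dkp` function, the group data, and the capacity below are hypothetical, and serve only to show the problem structure that distinguishes the DKP from the plain 0-1 KP.

```python
# Illustrative exact solver for the DKP structure: items come in groups,
# and at most one item per group may enter the knapsack.
# NOT the paper's MMBO metaheuristic -- just a problem-definition sketch.

def solve_dkp(groups, capacity):
    """groups: list of groups, each a list of (value, weight) tuples
    (three per group in the DKP); returns the maximum total value."""
    dp = [0] * (capacity + 1)      # dp[c] = best value using capacity c
    for group in groups:
        new_dp = dp[:]             # copy enforces "at most one item per group"
        for value, weight in group:
            for c in range(capacity, weight - 1, -1):
                new_dp[c] = max(new_dp[c], dp[c - weight] + value)
        dp = new_dp
    return dp[capacity]

# Hypothetical instance: two groups of three (value, weight) items each.
groups = [[(6, 2), (10, 4), (12, 6)],
          [(7, 3), (9, 5), (13, 7)]]
print(solve_dkp(groups, capacity=10))  # -> 19, e.g. (12, 6) + (7, 3)
```

In the actual DKP, the third item of each group represents the discounted combination of the first two; the sketch above does not enforce that pricing relationship, only the one-item-per-group packing rule.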
Published in: | Neural computing & applications 2018-11, Vol.30 (10), p.3019-3036 |
---|---|
Main Authors: | Feng, Yanhong; Wang, Gai-Ge; Li, Wenbin; Li, Ning |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Approximation; Artificial Intelligence; Knapsack problem; Optimization; Perturbation methods; Strategy |
cited_by | cdi_FETCH-LOGICAL-c364t-5a3d4297254d99a7565a8fcd23480ec688eef9f3aef7ec857c83882290f317b93 |
---|---|
cites | cdi_FETCH-LOGICAL-c364t-5a3d4297254d99a7565a8fcd23480ec688eef9f3aef7ec857c83882290f317b93 |
container_end_page | 3036 |
container_issue | 10 |
container_start_page | 3019 |
container_title | Neural computing & applications |
container_volume | 30 |
creator | Feng, Yanhong; Wang, Gai-Ge; Li, Wenbin; Li, Ning |
description | As an extension of the classical 0-1 knapsack problem (0-1 KP), the discounted {0-1} knapsack problem (DKP) is proposed based on the concept of discounts in the commercial world. The DKP contains a set of item groups, where each group includes three items and no more than one item from each group can be packed into the knapsack, which makes it more complex and challenging than the 0-1 KP. At present, the two main classes of algorithms for solving the DKP are exact algorithms and approximate algorithms. However, some issues need further study, such as improving solution quality. In this paper, a novel multi-strategy monarch butterfly optimization (MMBO) algorithm for the DKP is proposed. In MMBO, two effective strategies, neighborhood mutation with crowding and Gaussian perturbation, are introduced. Experimental analyses show that the first strategy enhances the global search ability, while the second strategy strengthens the local search ability and prevents premature convergence during the evolution process. On this basis, MBO is combined with each strategy individually, denoted as NCMBO and GMMBO, respectively. We compared MMBO with six other methods, including NCMBO, GMMBO, MBO, FirEGA, SecEGA and elephant herding optimization. The experimental results on three types of large-scale DKP instances show that NCMBO, GMMBO and MMBO are all suitable for solving the DKP. In addition, MMBO outperforms the six other methods and achieves a good approximate solution, with its approximation ratio close to 1 on almost all DKP instances. |
doi_str_mv | 10.1007/s00521-017-2903-1 |
format | article |
fullrecord | <record><control><sourceid>proquest_cross</sourceid><recordid>TN_cdi_proquest_journals_2130837737</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2130837737</sourcerecordid><originalsourceid>FETCH-LOGICAL-c364t-5a3d4297254d99a7565a8fcd23480ec688eef9f3aef7ec857c83882290f317b93</originalsourceid><addsrcrecordid>eNp1kLtOwzAUhi0EEqXwAGyWmA2-xs6IKm5SEQuMyHIdu02bxMF2hoJ4d1IFiYnpDP9N5wPgkuBrgrG8SRgLShAmEtESM0SOwIxwxhDDQh2DGS75qBacnYKzlLYYY14oMQPvz0OTa5RyNNmt97ANnYl2A1dDzi76Zg9Dn-u2_jS5Dh00zTrEOm9a6EOEVZ1sGLrsKviFEfmGu870ydgd7GNYNa49ByfeNMld_N45eLu_e108ouXLw9PidoksK3hGwrCK01JSwauyNFIUwihvK8q4ws4WSjnnS8-M89JZJaRVTCk6PuoZkauSzcHV1DvufgwuZb0NQ-zGSU0Jw4pJyeToIpPLxpBSdF73sW5N3GuC9YGinijqkaI-UNRkzNApk0Zvt3bxr_n_0A_MtHXL</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2130837737</pqid></control><display><type>article</type><title>Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack problem</title><source>Springer Link</source><creator>Feng, Yanhong ; Wang, Gai-Ge ; Li, Wenbin ; Li, Ning</creator><creatorcontrib>Feng, Yanhong ; Wang, Gai-Ge ; Li, Wenbin ; Li, Ning</creatorcontrib><description>As an expanded classical 0-1 knapsack problem (0-1 KP), the discounted {0-1} knapsack problem (DKP) is proposed based on the concept of discount in the commercial world. The DKP contains a set of item groups where each group includes three items, whereas no more than one item in each group can be packed in the knapsack, which makes it more complex and challenging than 0-1 KP. At present, the main two algorithms for solving the DKP include exact algorithms and approximate algorithms. However, there are some topics which need to be further discussed, i.e., the improvement of the solution quality. In this paper, a novel multi-strategy monarch butterfly optimization (MMBO) algorithm for DKP is proposed. 
In MMBO, two effective strategies, neighborhood mutation with crowding and Gaussian perturbation, are introduced into MMBO. Experimental analyses show that the first strategy can enhance the global search ability, while the second strategy can strengthen local search ability and prevent premature convergence during the evolution process. Based on this, MBO is combined with each strategy, denoted as NCMBO and GMMBO, respectively. We compared MMBO with other six methods, including NCMBO, GMMBO, MBO, FirEGA, SecEGA and elephant herding optimization. The experimental results on three types of large-scale DKP instances show that NCMBO, GMMBO and MMBO are all suitable for solving DKP. In addition, MMBO outperforms other six methods and can achieve a good approximate solution with its approximation ratio close to 1 on almost all the DKP instances.</description><identifier>ISSN: 0941-0643</identifier><identifier>EISSN: 1433-3058</identifier><identifier>DOI: 10.1007/s00521-017-2903-1</identifier><language>eng</language><publisher>London: Springer London</publisher><subject>Algorithms ; Approximation ; Artificial Intelligence ; Computational Biology/Bioinformatics ; Computational Science and Engineering ; Computer Science ; Data Mining and Knowledge Discovery ; Image Processing and Computer Vision ; Knapsack problem ; Optimization ; Original Article ; Perturbation methods ; Probability and Statistics in Computer Science ; Strategy</subject><ispartof>Neural computing & applications, 2018-11, Vol.30 (10), p.3019-3036</ispartof><rights>The Natural Computing Applications Forum 2017</rights><rights>Copyright Springer Science & Business Media 
2018</rights><lds50>peer_reviewed</lds50><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c364t-5a3d4297254d99a7565a8fcd23480ec688eef9f3aef7ec857c83882290f317b93</citedby><cites>FETCH-LOGICAL-c364t-5a3d4297254d99a7565a8fcd23480ec688eef9f3aef7ec857c83882290f317b93</cites><orcidid>0000-0002-3295-8972</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>315,786,790,27957,27958</link.rule.ids></links><search><creatorcontrib>Feng, Yanhong</creatorcontrib><creatorcontrib>Wang, Gai-Ge</creatorcontrib><creatorcontrib>Li, Wenbin</creatorcontrib><creatorcontrib>Li, Ning</creatorcontrib><title>Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack problem</title><title>Neural computing & applications</title><addtitle>Neural Comput & Applic</addtitle><description>As an expanded classical 0-1 knapsack problem (0-1 KP), the discounted {0-1} knapsack problem (DKP) is proposed based on the concept of discount in the commercial world. The DKP contains a set of item groups where each group includes three items, whereas no more than one item in each group can be packed in the knapsack, which makes it more complex and challenging than 0-1 KP. At present, the main two algorithms for solving the DKP include exact algorithms and approximate algorithms. However, there are some topics which need to be further discussed, i.e., the improvement of the solution quality. In this paper, a novel multi-strategy monarch butterfly optimization (MMBO) algorithm for DKP is proposed. In MMBO, two effective strategies, neighborhood mutation with crowding and Gaussian perturbation, are introduced into MMBO. 
Experimental analyses show that the first strategy can enhance the global search ability, while the second strategy can strengthen local search ability and prevent premature convergence during the evolution process. Based on this, MBO is combined with each strategy, denoted as NCMBO and GMMBO, respectively. We compared MMBO with other six methods, including NCMBO, GMMBO, MBO, FirEGA, SecEGA and elephant herding optimization. The experimental results on three types of large-scale DKP instances show that NCMBO, GMMBO and MMBO are all suitable for solving DKP. In addition, MMBO outperforms other six methods and can achieve a good approximate solution with its approximation ratio close to 1 on almost all the DKP instances.</description><subject>Algorithms</subject><subject>Approximation</subject><subject>Artificial Intelligence</subject><subject>Computational Biology/Bioinformatics</subject><subject>Computational Science and Engineering</subject><subject>Computer Science</subject><subject>Data Mining and Knowledge Discovery</subject><subject>Image Processing and Computer Vision</subject><subject>Knapsack problem</subject><subject>Optimization</subject><subject>Original Article</subject><subject>Perturbation methods</subject><subject>Probability and Statistics in Computer Science</subject><subject>Strategy</subject><issn>0941-0643</issn><issn>1433-3058</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2018</creationdate><recordtype>article</recordtype><recordid>eNp1kLtOwzAUhi0EEqXwAGyWmA2-xs6IKm5SEQuMyHIdu02bxMF2hoJ4d1IFiYnpDP9N5wPgkuBrgrG8SRgLShAmEtESM0SOwIxwxhDDQh2DGS75qBacnYKzlLYYY14oMQPvz0OTa5RyNNmt97ANnYl2A1dDzi76Zg9Dn-u2_jS5Dh00zTrEOm9a6EOEVZ1sGLrsKviFEfmGu870ydgd7GNYNa49ByfeNMld_N45eLu_e108ouXLw9PidoksK3hGwrCK01JSwauyNFIUwihvK8q4ws4WSjnnS8-M89JZJaRVTCk6PuoZkauSzcHV1DvufgwuZb0NQ-zGSU0Jw4pJyeToIpPLxpBSdF73sW5N3GuC9YGinijqkaI-UNRkzNApk0Zvt3bxr_n_0A_MtHXL</recordid><startdate>20181101</startdate><enddate>20181101</enddate><creator>Feng, 
Yanhong</creator><creator>Wang, Gai-Ge</creator><creator>Li, Wenbin</creator><creator>Li, Ning</creator><general>Springer London</general><general>Springer Nature B.V</general><scope>AAYXX</scope><scope>CITATION</scope><orcidid>https://orcid.org/0000-0002-3295-8972</orcidid></search><sort><creationdate>20181101</creationdate><title>Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack problem</title><author>Feng, Yanhong ; Wang, Gai-Ge ; Li, Wenbin ; Li, Ning</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c364t-5a3d4297254d99a7565a8fcd23480ec688eef9f3aef7ec857c83882290f317b93</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2018</creationdate><topic>Algorithms</topic><topic>Approximation</topic><topic>Artificial Intelligence</topic><topic>Computational Biology/Bioinformatics</topic><topic>Computational Science and Engineering</topic><topic>Computer Science</topic><topic>Data Mining and Knowledge Discovery</topic><topic>Image Processing and Computer Vision</topic><topic>Knapsack problem</topic><topic>Optimization</topic><topic>Original Article</topic><topic>Perturbation methods</topic><topic>Probability and Statistics in Computer Science</topic><topic>Strategy</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Feng, Yanhong</creatorcontrib><creatorcontrib>Wang, Gai-Ge</creatorcontrib><creatorcontrib>Li, Wenbin</creatorcontrib><creatorcontrib>Li, Ning</creatorcontrib><collection>CrossRef</collection><jtitle>Neural computing & applications</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Feng, Yanhong</au><au>Wang, Gai-Ge</au><au>Li, Wenbin</au><au>Li, Ning</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack 
problem</atitle><jtitle>Neural computing & applications</jtitle><stitle>Neural Comput & Applic</stitle><date>2018-11-01</date><risdate>2018</risdate><volume>30</volume><issue>10</issue><spage>3019</spage><epage>3036</epage><pages>3019-3036</pages><issn>0941-0643</issn><eissn>1433-3058</eissn><abstract>As an expanded classical 0-1 knapsack problem (0-1 KP), the discounted {0-1} knapsack problem (DKP) is proposed based on the concept of discount in the commercial world. The DKP contains a set of item groups where each group includes three items, whereas no more than one item in each group can be packed in the knapsack, which makes it more complex and challenging than 0-1 KP. At present, the main two algorithms for solving the DKP include exact algorithms and approximate algorithms. However, there are some topics which need to be further discussed, i.e., the improvement of the solution quality. In this paper, a novel multi-strategy monarch butterfly optimization (MMBO) algorithm for DKP is proposed. In MMBO, two effective strategies, neighborhood mutation with crowding and Gaussian perturbation, are introduced into MMBO. Experimental analyses show that the first strategy can enhance the global search ability, while the second strategy can strengthen local search ability and prevent premature convergence during the evolution process. Based on this, MBO is combined with each strategy, denoted as NCMBO and GMMBO, respectively. We compared MMBO with other six methods, including NCMBO, GMMBO, MBO, FirEGA, SecEGA and elephant herding optimization. The experimental results on three types of large-scale DKP instances show that NCMBO, GMMBO and MMBO are all suitable for solving DKP. 
In addition, MMBO outperforms other six methods and can achieve a good approximate solution with its approximation ratio close to 1 on almost all the DKP instances.</abstract><cop>London</cop><pub>Springer London</pub><doi>10.1007/s00521-017-2903-1</doi><tpages>18</tpages><orcidid>https://orcid.org/0000-0002-3295-8972</orcidid></addata></record> |
fulltext | fulltext |
identifier | ISSN: 0941-0643 |
ispartof | Neural computing & applications, 2018-11, Vol.30 (10), p.3019-3036 |
issn | 0941-0643 1433-3058 |
language | eng |
recordid | cdi_proquest_journals_2130837737 |
source | Springer Link |
subjects | Algorithms; Approximation; Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Data Mining and Knowledge Discovery; Image Processing and Computer Vision; Knapsack problem; Optimization; Original Article; Perturbation methods; Probability and Statistics in Computer Science; Strategy |
title | Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack problem |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-09-21T20%3A37%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multi-strategy%20monarch%20butterfly%20optimization%20algorithm%20for%20discounted%20%7B0-1%7D%20knapsack%20problem&rft.jtitle=Neural%20computing%20&%20applications&rft.au=Feng,%20Yanhong&rft.date=2018-11-01&rft.volume=30&rft.issue=10&rft.spage=3019&rft.epage=3036&rft.pages=3019-3036&rft.issn=0941-0643&rft.eissn=1433-3058&rft_id=info:doi/10.1007/s00521-017-2903-1&rft_dat=%3Cproquest_cross%3E2130837737%3C/proquest_cross%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c364t-5a3d4297254d99a7565a8fcd23480ec688eef9f3aef7ec857c83882290f317b93%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2130837737&rft_id=info:pmid/&rfr_iscdi=true |