Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery

Intra-operative target pose estimation is fundamental in minimally invasive surgery (MIS) for guiding surgical robots. This task can be fulfilled by 2-D/3-D rigid registration, which aligns the anatomical structures between intra-operative 2-D fluoroscopy and the pre-operative 3-D computed tomography (CT) with annotated target information.


Bibliographic Details
Published in: Micromachines (Basel) 2021-07, Vol.12 (7), p.844
Main Authors: An, Zhou, Ma, Honghai, Liu, Lilu, Wang, Yue, Lu, Haojian, Zhou, Chunlin, Xiong, Rong, Hu, Jian
Format: Article
Language: eng
Subjects:
subjects 2-D/3-D registration
Accuracy
Algorithms
Computed tomography
Datasets
deep learning
Efficiency
Failure rates
Fluoroscopy
Image reconstruction
Laparoscopy
Machine learning
Methods
multi-view
Optimization
reconstruction
Registration
rigid
Robotic surgery
Three dimensional imaging
ispartof Micromachines (Basel), 2021-07, Vol.12 (7), p.844
description Intra-operative target pose estimation is fundamental in minimally invasive surgery (MIS) for guiding surgical robots. This task can be fulfilled by 2-D/3-D rigid registration, which aligns the anatomical structures between intra-operative 2-D fluoroscopy and the pre-operative 3-D computed tomography (CT) with annotated target information. Although this technique has been researched for decades, it is still challenging to achieve accuracy, robustness and efficiency simultaneously. In this paper, a novel orthogonal-view 2-D/3-D rigid registration framework is proposed which combines deep-learning-based dense reconstruction with GPU-accelerated 3-D/3-D rigid registration. First, we employ X2CT-GAN to reconstruct a target CT from two orthogonal fluoroscopy images. After that, the generated target CT and the pre-operative CT are input into the 3-D/3-D rigid registration stage, which needs only a few iterations to converge to the global optimum. For further efficiency improvement, we parallelize the 3-D/3-D registration algorithm and apply a GPU to accelerate this stage. For evaluation, a novel tool is employed to preprocess the public head CT dataset CQ500, and a CT-DRR dataset is presented as the benchmark. The proposed method achieves 1.65 ± 1.41 mm in mean target registration error (mTRE), 20% in gross failure rate (GFR) and 1.8 s in running time. Our method outperforms the state-of-the-art methods in most test cases. It is promising to apply the proposed method to the localization and nano-manipulation of micro-surgical robots for highly precise MIS.
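The evaluation metrics quoted in the description have standard definitions in the 2-D/3-D registration literature: mTRE averages the distance between target points mapped by the ground-truth and the estimated rigid transforms, and GFR is the fraction of trials whose mTRE exceeds a failure threshold. A minimal sketch of both (the 10 mm threshold and the function names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mtre(points, T_gt, T_est):
    """Mean target registration error: average distance (in the units of
    `points`, e.g. mm) between target points mapped by the ground-truth
    and the estimated 4x4 homogeneous rigid transforms."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # (N, 4) homogeneous
    diff = (pts_h @ T_gt.T)[:, :3] - (pts_h @ T_est.T)[:, :3]
    return np.linalg.norm(diff, axis=1).mean()

def gross_failure_rate(mtre_values, threshold_mm=10.0):
    """Fraction of registration trials whose mTRE exceeds the threshold."""
    mtre_values = np.asarray(mtre_values)
    return float((mtre_values > threshold_mm).mean())
```

For a perfect estimate the two transforms coincide and mTRE is zero; a pure 1 mm translation error in the estimate yields an mTRE of exactly 1 mm regardless of the target points chosen.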
language eng
source Open Access: PubMed Central; Publicly Available Content Database
identifier ISSN: 2072-666X
issn 2072-666X
eissn 2072-666X
container_title Micromachines (Basel)
container_volume 12
container_issue 7
container_start_page 844
container_end_page
publisher Basel: MDPI AG
date 2021-07-20
doi 10.3390/mi12070844
pmid 34357254
orcid 0000-0002-1393-3040; 0000-0003-2937-0702
rights 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).