
Lens-to-Lens Bokeh Effect Transformation. NTIRE 2023 Challenge Report

We present the new Bokeh Effect Transformation Dataset (BETD) and review the solutions proposed for this novel task at the NTIRE 2023 Bokeh Effect Transformation Challenge. Recent advances in mobile photography aim to reach the visual quality of full-frame cameras. A current goal in computational photography is to optimize the Bokeh effect itself, i.e., the aesthetic quality of the blur in out-of-focus areas of an image; photographers create this effect by exploiting the optical properties of the lens. The aim of this work is to design a neural network capable of converting the Bokeh effect of one lens into that of another lens without harming the sharp foreground regions of the image. For a given input image and a known target lens type, we render or transform the Bokeh effect according to the properties of that lens. We build the BETD using two full-frame Sony cameras and diverse lens setups. To the best of our knowledge, this is the first attempt to solve this task, and we provide the first dataset and benchmark for it. The challenge had 99 registered participants, and the submitted methods gauge the state of the art in Bokeh effect rendering and transformation.
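
As a purely illustrative aside (the record above does not describe the authors' architecture), the task can be read as a lens-conditioned image-to-image transformation: a network receives the photograph together with source and target lens identifiers and re-renders only the out-of-focus blur. The minimal PyTorch sketch below, with hypothetical lens IDs, layer widths, and class names, shows one way such conditioning might be wired; it is an assumption-laden sketch, not the challenge baseline.

import torch
import torch.nn as nn

class LensBokehTransformer(nn.Module):
    """Hypothetical lens-conditioned bokeh transformer (illustration only)."""
    def __init__(self, num_lenses=4, embed_dim=16, width=64):
        super().__init__()
        # One learned embedding per lens; source and target embeddings are
        # broadcast over the image as extra conditioning channels.
        self.lens_embed = nn.Embedding(num_lenses, embed_dim)
        self.body = nn.Sequential(
            nn.Conv2d(3 + 2 * embed_dim, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, kernel_size=3, padding=1),
        )

    def forward(self, image, src_lens, tgt_lens):
        b, _, h, w = image.shape
        cond = torch.cat([self.lens_embed(src_lens), self.lens_embed(tgt_lens)], dim=1)
        cond = cond[:, :, None, None].expand(b, cond.shape[1], h, w)
        # Predict a residual so in-focus foreground pixels can pass through unchanged.
        return image + self.body(torch.cat([image, cond], dim=1))

# Toy usage: map a batch from lens style 0 to lens style 1.
model = LensBokehTransformer()
out = model(torch.rand(2, 3, 128, 128), torch.tensor([0, 0]), torch.tensor([1, 1]))
print(out.shape)  # torch.Size([2, 3, 128, 128])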


Bibliographic Details
Main Authors: Conde, Marcos V., Kolmet, Manuel, Seizinger, Tim, Bishop, Tom E., Timofte, Radu, Kong, Xiangyu, Zhang, Dafeng, Wu, Jinlong, Wang, Fan, Peng, Juewen, Pan, Zhiyu, Liu, Chengxin, Luo, Xianrui, Sun, Huiqiang, Shen, Liao, Cao, Zhiguo, Xian, Ke, Liu, Chaowei, Chen, Zigeng, Yang, Xingyi, Liu, Songhua, Jing, Yongcheng, Mi, Michael Bi, Wang, Xinchao, Yang, Zhihao, Lian, Wenyi, Lai, Siyuan, Zhang, Haichuan, Hoang, Trung, Yazdani, Amirsaeed, Monga, Vishal, Luo, Ziwei, Gustafsson, Fredrik K., Zhao, Zheng, Sjolund, Jens, Schon, Thomas B., Zhao, Yuxuan, Chen, Baoliang, Xu, Yiqing, JiXiangNiu
Format: Conference Proceeding
Language: English
Subjects: Benchmark testing; Neural networks; Photography; Rendering (computer graphics); Training; Transforms; Visualization
Published in: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023, pp. 1643-1659
Publisher: IEEE
DOI: 10.1109/CVPRW59228.2023.00166
EISSN: 2160-7516
EISBN: 9798350302493
Source: IEEE Xplore All Conference Series
Online Access: Request full text