License plate detection and recognition based on YOLOv3 and ILPRNET

Bibliographic Details
Published in:Signal, Image and Video Processing, 2022-03, Vol.16 (2), p.473-480
Main Authors: Zou, Yongjie, Zhang, Yongjun, Yan, Jun, Jiang, Xiaoxu, Huang, Tengjie, Fan, Haisheng, Cui, Zhongwei
Format: Article
Language:English
Description
Summary:This paper is concerned with the detection and recognition of Chinese license plates in complex backgrounds. Most existing applications focus on favorable imaging conditions. In complex natural scenes, such as the CCPD-DB, CCPD-FN, CCPD-Rotate, CCPD-Tilt, CCPD-Weather, and CCPD-Challenge subsets of the Chinese City Parking Dataset (CCPD), existing license plate methods suffer from inaccurate localization and poor character recognition accuracy. Therefore, this paper proposes a two-stage license plate recognition algorithm based on YOLOv3 and the Improved License Plate Recognition Net (ILPRNET). In the first stage, YOLOv3 detects the position of the license plate, which is then extracted. In the second stage, the ILPRNET recognition network localizes the license plate characters, and a 2D attention-based recognizer with a CNN encoder recognizes the plate accurately. The test results indicate that the proposed algorithm performs well in a variety of complex scenarios. In particular, on the CCPD-Base, CCPD-DB, CCPD-FN, CCPD-Weather, and CCPD-Challenge sub-datasets, it achieves recognition accuracies of 99.2%, 98.1%, 98.5%, 97.8%, and 86.2%, respectively.
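The two-stage structure described in the abstract can be sketched in a few lines of Python. This is only an illustrative outline under assumed interfaces: the paper's actual models (YOLOv3 for detection, ILPRNET for recognition) are replaced here by hypothetical stub functions, and the toy "image" is invented for the demo; none of the names below come from the paper.

```python
# Sketch of the two-stage pipeline from the abstract:
#   stage 1: localize the plate (YOLOv3 in the paper; stubbed here)
#   stage 2: recognize characters from the crop (ILPRNET in the paper; stubbed here)
# All function names and data are hypothetical placeholders.

def detect_plate(image):
    """Stage-1 stand-in: return an (x, y, w, h) bounding box for the plate."""
    # A real system would run YOLOv3 inference here; we return a fixed box.
    return (1, 1, 3, 1)

def crop(image, box):
    """Extract the detected plate region from the image."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def recognize_plate(region):
    """Stage-2 stand-in: map the cropped region to a character string."""
    # A real system would run the 2D attention-based recognizer here.
    return "".join("".join(row) for row in region)

def read_plate(image):
    box = detect_plate(image)                  # stage 1: localization
    return recognize_plate(crop(image, box))   # stage 2: recognition

# Toy 3x5 "image" whose plate region holds the characters A, B, C.
image = [
    [".", ".", ".", ".", "."],
    [".", "A", "B", "C", "."],
    [".", ".", ".", ".", "."],
]
print(read_plate(image))  # -> ABC
```

Decoupling the two stages, as the paper does, lets each component be trained and swapped independently: any detector that yields a plate bounding box can feed any recognizer that consumes a cropped plate.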
ISSN:1863-1703
1863-1711
DOI:10.1007/s11760-021-01981-8