
Task-Specific Pruning: Efficient Parameter Reduction in Multi-task Object Detection Models

Bibliographic Details
Main Authors: Ke, Wei-Hsun, Tseng, Yu-Wen, Cheng, Wen-Huang
Format: Conference Proceeding
Language: English
Description
Summary: This study explores the optimization effect of task-specific pruning in object detection models. Object detection models must handle both classification and regression, two fundamentally distinct tasks, yet the relationship between them, particularly in loss calculation, remains an unresolved issue in both model training and pruning. To address this, we propose a task-specific pruning method that combines the advantages of pruning-during-training with established sparsity learning techniques. Our experiments demonstrate that, across different configurations, task-specific pruning is the most effective approach for reducing parameter count while maintaining high accuracy in multi-task deep neural networks. Specifically, when applied to the Scaled-Yolov4-p5 model on the MS COCO 2017 dataset, our approach achieves a target sparsity of 90% with only a marginal 1% drop in mAP.
ISSN: 2640-0103
DOI: 10.1109/APSIPAASC58517.2023.10317436
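The record does not include implementation details beyond the summary above, so the following is only a minimal sketch of what task-specific pruning can look like in practice: a toy detection head whose classification and regression branches are pruned to different sparsity levels using PyTorch's magnitude-pruning utilities, with the masks kept attached during training (a pruning-during-training setup). The module names, sparsity values, and the L1 criterion are illustrative assumptions, not the authors' method.

# Hypothetical sketch: per-task magnitude pruning of a detection head.
# The paper's exact criterion and schedule are not described in this record;
# this only shows how classification and regression branches could be
# pruned to different sparsity targets with torch.nn.utils.prune.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class TinyDetectionHead(nn.Module):
    """Toy head with separate classification and box-regression branches."""

    def __init__(self, in_ch: int = 256, num_classes: int = 80, num_anchors: int = 3):
        super().__init__()
        self.cls_branch = nn.Conv2d(in_ch, num_anchors * num_classes, kernel_size=1)
        self.reg_branch = nn.Conv2d(in_ch, num_anchors * 4, kernel_size=1)

    def forward(self, x: torch.Tensor):
        return self.cls_branch(x), self.reg_branch(x)


def apply_task_specific_pruning(head: TinyDetectionHead,
                                cls_sparsity: float = 0.9,
                                reg_sparsity: float = 0.7) -> None:
    """Prune each task branch to its own sparsity level (L1 magnitude).

    The per-task amounts here are assumptions chosen for illustration.
    """
    prune.l1_unstructured(head.cls_branch, name="weight", amount=cls_sparsity)
    prune.l1_unstructured(head.reg_branch, name="weight", amount=reg_sparsity)


if __name__ == "__main__":
    head = TinyDetectionHead()
    apply_task_specific_pruning(head)

    # The masks are reapplied in every forward pass, so training continues
    # with the pruned weights held at zero (pruning-during-training style).
    x = torch.randn(1, 256, 8, 8)
    cls_out, reg_out = head(x)

    # Make the masks permanent once training is finished.
    prune.remove(head.cls_branch, "weight")
    prune.remove(head.reg_branch, "weight")

    for name, module in [("cls", head.cls_branch), ("reg", head.reg_branch)]:
        w = module.weight
        sparsity = float((w == 0).sum()) / w.numel()
        print(f"{name} branch sparsity: {sparsity:.2f}")

Treating the two branches separately is the point of the illustration: because classification and regression losses behave differently, a single global sparsity target may not suit both, which is the motivation the summary attributes to task-specific pruning.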