
Hand Gesture Recognition Based on Various Deep Learning YOLO Models

Bibliographic Details
Published in: International Journal of Advanced Computer Science & Applications, 2023, Vol. 14 (4)
Main Authors: Mesbahi, Soukaina Chraa; Mahraz, Mohamed Adnane; Riffi, Jamal; Tairi, Hamid
Format: Article
Language: English
Description
Summary: Deaf and hard-of-hearing people worldwide use various sign languages to interact with others more effectively, so the automatic translation of sign language is a valuable and important task. Computer vision has improved significantly in recent years, notably in deep-learning-based object detection, where state-of-the-art one-stage detectors locate objects in images or videos with exceptional accuracy. Building on messaging and video calling, this study proposes a technique to overcome these communication barriers and improve interaction for such users, regardless of their disability. To recognize gestures and their classes, we provide an enhanced model based on YOLO (You Only Look Once) V3, V4, V4-tiny, and V5. The proposed algorithm clusters the dataset so that only a reduced number of classes requires manual annotation, and it analyzes the resulting patterns to aid target prediction. According to the experimental results, the proposed method outperforms existing YOLO-based object detection approaches.
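
As a rough illustration of the one-stage detection step the summary refers to, the following Python sketch loads a YOLOv5 detector fine-tuned on a hand-gesture dataset and prints its predicted boxes and classes. The weight file, image name, and the use of the Ultralytics YOLOv5 hub interface are assumptions made for this example; they do not reproduce the authors' exact pipeline or their V3/V4/V4-tiny variants.

import torch

# Minimal sketch of one-stage gesture detection (assumed setup, not the paper's code).
# "gesture_yolov5.pt" and "gesture_frame.jpg" are hypothetical file names.
model = torch.hub.load("ultralytics/yolov5", "custom", path="gesture_yolov5.pt")

# Run the detector on a single frame containing a hand gesture.
results = model("gesture_frame.jpg")

# Each row of results.xyxy[0] is [x1, y1, x2, y2, confidence, class index].
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"gesture class {int(cls)} at {box} (confidence {conf:.2f})")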
ISSN: 2158-107X, 2156-5570
DOI: 10.14569/IJACSA.2023.0140435