ISSN 2394-5125
 


    Revolutionizing Sign Language Interpretation with CNN Technology (2020)


    Dr. R. Swathi
    JCR. 2020: 3138-3147

    Abstract

    Sign Language Recognition (SLR) is a crucial technology aimed at bridging the communication gap between deaf-mute individuals and those who can hear and speak. However, because hand gestures in sign language are complex and highly varied, existing SLR methods rely on hand-crafted features to describe sign motion, and such features are difficult to adapt to the full range of gestures. To address this challenge, we propose a novel convolutional neural network (CNN) that automatically extracts discriminative spatial-temporal features from raw video streams, reducing the need for manual feature engineering. To enhance performance, our approach utilizes multiple video streams carrying color and depth information. The CNN takes as input a combination of color, depth, and trajectory data, including cues and body joint locations. We evaluate our model on a real dataset collected with Microsoft Kinect and demonstrate its superior performance compared to traditional methods that rely on manually designed features.
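    The core idea of the abstract — a CNN that extracts spatial-temporal features from multiple aligned video streams (color and depth) rather than relying on hand-crafted descriptors — can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's architecture: it applies a naive 3D convolution (the basic spatio-temporal operation of a video CNN) to hypothetical color and depth clips and fuses the resulting feature maps by stacking them along a channel axis. The clip sizes, kernel shapes, and random "learned" filters are all illustrative stand-ins.

    ```python
    import numpy as np

    def conv3d_valid(volume, kernel):
        """Naive 'valid' 3D cross-correlation over a (time, height, width)
        volume -- the spatio-temporal filtering one layer of a video CNN
        performs on a clip."""
        t, h, w = volume.shape
        kt, kh, kw = kernel.shape
        out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                for k in range(out.shape[2]):
                    out[i, j, k] = np.sum(
                        volume[i:i + kt, j:j + kh, k:k + kw] * kernel)
        return out

    # Hypothetical inputs: 8-frame color (grayscale here) and depth clips,
    # 16x16 pixels each, standing in for aligned Kinect streams.
    rng = np.random.default_rng(0)
    color_clip = rng.random((8, 16, 16))
    depth_clip = rng.random((8, 16, 16))

    # One 3x3x3 filter per stream; random values stand in for trained weights.
    k_color = rng.standard_normal((3, 3, 3))
    k_depth = rng.standard_normal((3, 3, 3))

    # Each stream is filtered separately, then the feature maps are fused
    # (here by simple stacking along a channel axis).
    feat = np.stack([conv3d_valid(color_clip, k_color),
                     conv3d_valid(depth_clip, k_depth)], axis=0)
    print(feat.shape)  # (2, 6, 14, 14): 2 streams, time and space shrunk by kernel
    ```

    In a real network these filters are learned by backpropagation and followed by pooling and further convolutional layers, which is what lets the model discover discriminative gesture features without manual engineering.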

    Volume & Issue

    Volume 7, Issue 6
