ISSN 2394-5125
 


    Speech and Facial Expression Based Emotion Detection using A Deep Learning Model (2019)


    A Rajini, Sathish Parvatham, Konatham Priyanka
    JCR. 2019: 298-308

    Abstract

    Over recent years, considerable advances have been made in artificial intelligence, machine learning, and human-machine interaction. Interacting with a machine by voice, or commanding it to perform a specific task, is increasingly popular, and many consumer electronics devices integrate assistants such as Siri, Alexa, Cortana, and Google Assistant. However, machines remain limited in that they cannot interact with a person like a human conversational partner: they cannot recognize human emotions and react to them. Emotion recognition from speech is a cutting-edge research topic in the field of human-machine interaction. Because machines are indispensable to our lives, there is a demand for a more robust man-machine communication system, and many researchers are currently working on speech emotion recognition (SER) to improve man-machine interaction. To achieve this goal, a computer should be able to recognize emotional states and react to them the same way humans do. The effectiveness of an SER system depends on the quality of the extracted features and the type of classifier used. In this work we identify four basic emotions from speech: anger, sadness, neutral, and happiness. A convolutional neural network (CNN) classifies the emotions, using Mel Frequency Cepstral Coefficients (MFCC) as the feature extraction technique. Finally, the simulations revealed that the proposed MFCC-CNN achieved superior performance compared to existing models.
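    The front end of the pipeline described above, extracting MFCC features from a speech waveform for a CNN to classify, can be sketched as below. The abstract does not give parameter values, so the frame length, hop size, filter count, and coefficient count here are common defaults, not the authors' settings; this is a minimal NumPy illustration, not the paper's implementation.

    ```python
    import numpy as np

    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
             n_mels=26, n_ceps=13):
        """Return an (n_frames, n_ceps) MFCC matrix for a 1-D signal.

        Typical defaults: 25 ms frames (400 samples at 16 kHz),
        10 ms hop, 26 mel filters, 13 cepstral coefficients.
        """
        # 1. Slice the signal into overlapping frames and apply a Hamming window.
        n_frames = 1 + (len(signal) - frame_len) // hop
        idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
        frames = signal[idx] * np.hamming(frame_len)

        # 2. Power spectrum of each frame.
        spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

        # 3. Triangular mel filterbank, spaced evenly on the mel scale.
        mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
        bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
        fbank = np.zeros((n_mels, n_fft // 2 + 1))
        for i in range(n_mels):
            left, center, right = bins[i], bins[i + 1], bins[i + 2]
            fbank[i, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
            fbank[i, center:right] = (right - np.arange(center, right)) / max(right - center, 1)

        # 4. Log of the mel-filtered energies (small epsilon avoids log(0)).
        logmel = np.log(spec @ fbank.T + 1e-10)

        # 5. DCT-II decorrelates the log energies into cepstral coefficients.
        n = np.arange(n_mels)
        dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
        return logmel @ dct.T

    # One second of a 440 Hz tone stands in for a speech utterance.
    sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
    feats = mfcc(sig)
    print(feats.shape)  # (98, 13): 98 frames x 13 coefficients
    ```

    The resulting (frames x coefficients) matrix is what a CNN classifier would consume as a 2-D input "image", with the final dense layer producing scores over the four emotion classes (anger, sadness, neutral, happiness).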


    Volume & Issue

    Volume 6, Issue 7

    Keywords