Fusing Handcrafted Features with Convolutional Neural Network Features for American Sign Language Classification
Keywords:
Sign language, gesture recognition, uncontrolled conditions, hand-crafted features, comparative tests
Abstract
Sign language is the essential medium of communication for deaf students, yet most hearing people do not learn it, so an interpreter is needed to convey the meaning of signs. The effectiveness of hand gesture recognition is still limited by several unresolved issues, such as uncontrolled signing conditions, variations in viewpoint and lighting, and partial occlusion. This paper proposes a hybrid framework for sign language recognition that combines deep convolutional neural network (CNN) features with hand-crafted texture features derived from the local binary pattern (LBP) and the grey-level co-occurrence matrix (GLCM). In this framework, the DeepLabv3 semantic segmentation network is used for segmentation. Serial feature fusion and entropy-based feature selection are then applied to retain the most discriminative features. Comparative tests demonstrate that fusing both sets of features yields a significant increase in accuracy. Finally, the selected features are passed to an SVM classifier to classify the signs. The proposed model achieved 99.16% accuracy.
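To illustrate the pipeline outlined above, the following is a minimal sketch of handcrafted-plus-CNN feature fusion followed by selection and SVM classification. It is not the paper's implementation: scikit-image and scikit-learn are assumed, the CNN features are stood in by random vectors in place of a pretrained backbone, and mutual information is used as a proxy for the entropy-based selection criterion.

```python
# Minimal sketch of the fused-feature pipeline (assumptions noted above).
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

def handcrafted_features(gray):
    """LBP histogram + GLCM texture properties for one 8-bit grayscale image."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).ravel()
             for p in ("contrast", "correlation", "energy", "homogeneity")]
    return np.concatenate([hist] + props)

rng = np.random.default_rng(0)
n_samples, n_classes = 60, 3
images = rng.integers(0, 256, size=(n_samples, 64, 64), dtype=np.uint8)
labels = rng.integers(0, n_classes, size=n_samples)

# Placeholder for deep features (in the paper these come from a CNN).
cnn_feats = rng.normal(size=(n_samples, 128))

hc_feats = np.stack([handcrafted_features(img) for img in images])

# Serial fusion: concatenate handcrafted and CNN feature vectors.
fused = np.hstack([hc_feats, cnn_feats])

# Entropy-based selection (mutual-information proxy) keeps the top-k features.
selector = SelectKBest(mutual_info_classif, k=50)
selected = selector.fit_transform(fused, labels)

# SVM classifier on the selected, fused features.
clf = SVC(kernel="rbf").fit(selected, labels)
print("training accuracy:", clf.score(selected, labels))
```

Segmentation (DeepLabv3 in the paper) would precede feature extraction, cropping each frame to the signing hand before the texture and CNN descriptors are computed.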