Modeling of Human Upper Body for Sign Language Recognition

Abstract
Sign Language Recognition systems require classification not only of the hand motion trajectory but also of facial features, the Human Upper Body (HUB), and the hand position with respect to other HUB parts. The head, face, forehead, shoulders, and chest are crucial parts that carry substantial positioning information about hand gestures for gesture classification. In this paper, as the main contribution, a fast and robust search algorithm for HUB parts based on head size is introduced for real-time implementations. Scaling of the extracted parts during changes in body orientation is attained using partial estimation of the face size. Tracking of the extracted parts for the front and side views is achieved using CAMSHIFT [24]. The performance of the system makes it applicable to real-time applications such as Sign Language Recognition (SLR) systems.
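The tracking step the abstract refers to, CAMSHIFT (Continuously Adaptive Mean Shift), iterates mean-shift over a probability (back-projection) map and re-sizes the search window from the zeroth image moment. The sketch below is a minimal NumPy illustration of that core idea, not the authors' implementation: the function name, the synthetic probability map in the usage example, and the window-resizing constant are assumptions. Production code would typically use OpenCV's `cv2.CamShift` on a hue-histogram back-projection.

```python
import numpy as np

def camshift(prob, window, iters=20, eps=1.0):
    """Minimal CAMSHIFT sketch (an illustrative assumption, not the paper's code).

    prob   -- 2D array of per-pixel match probabilities (back-projection)
    window -- initial search window as (x, y, w, h)
    Returns the converged window (x, y, w, h).
    """
    x, y, w, h = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()                      # zeroth moment: total probability mass
        if m00 <= 0:
            break                            # empty window; nothing to track
        ys, xs = np.mgrid[y:y + h, x:x + w]
        cx = (xs * roi).sum() / m00          # mean-shift: centroid inside window
        cy = (ys * roi).sum() / m00
        # CAMSHIFT step: adapt window size from the zeroth moment
        # (constant 2.0 assumes probabilities in [0, 1])
        s = int(round(2.0 * np.sqrt(m00)))
        w = h = max(s, 4)
        # Re-center the window on the centroid, clamped to the image
        nx = min(max(int(round(cx - w / 2.0)), 0), prob.shape[1] - w)
        ny = min(max(int(round(cy - h / 2.0)), 0), prob.shape[0] - h)
        if abs(nx - x) < eps and abs(ny - y) < eps:
            x, y = nx, ny
            break
        x, y = nx, ny
    return x, y, w, h

# Usage on a synthetic back-projection: a disk of high probability at (120, 80)
prob = np.zeros((200, 200))
yy, xx = np.mgrid[0:200, 0:200]
prob[(xx - 120) ** 2 + (yy - 80) ** 2 <= 100] = 1.0
x, y, w, h = camshift(prob, (80, 50, 60, 60))
# The window center converges to the blob center and the window shrinks
# to roughly the blob's size.
```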
Keywords
Human upper body detection, Scaling, Tracking using CAMSHIFT, Sign Language Recognition
Citation
Bilal, S., Akmeliawati, R., Shafie, A. A., & El Salami, M. J. (2011, December). Modeling of Human Upper Body for Sign Language Recognition. In The 5th International Conference on Automation, Robotics and Applications (pp. 104-108). IEEE.