Title: Modeling of Human Upper Body for Sign Language Recognition
Authors: Bilal, Sara; Akmeliawati, Rini; Shafie, Amir A.; Salami, Momoh-Jimoh E.
Date Issued: 2011-12-06
Date Available: 2019-08-14
Citation: Bilal, S., Akmeliawati, R., Shafie, A. A., & El Salami, M. J. (2011, December). Modeling of Human Upper Body for Sign Language Recognition. In The 5th International Conference on Automation, Robotics and Applications (pp. 104-108). IEEE.
URI: http://repository.elizadeuniversity.edu.ng/handle/20.500.12398/509
Language: en
Type: Article
Keywords: Human upper body detection; Scaling; Tracking using CAMSHIFT; Sign Language Recognition

Abstract: Sign Language Recognition systems must classify not only the hand motion trajectory but also facial features, the Human Upper Body (HUB), and the position of the hands relative to other HUB parts. The head, face, forehead, shoulders and chest are crucial parts that carry substantial positioning information about hand gestures for gesture classification. As its main contribution, this paper introduces a fast and robust search algorithm for HUB parts, based on head size, suited to real-time implementation. Scaling of the extracted parts under body orientation changes is achieved using a partial estimate of face size. Tracking of the extracted parts in both front and side views is performed using CAMSHIFT [24]. The performance of the system makes it applicable to real-time applications such as Sign Language Recognition (SLR) systems.
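The CAMSHIFT tracker cited in the abstract is built on iterative mean-shift over a back-projected probability image: the search window repeatedly re-centers on the centroid of the probability mass it covers. A minimal pure-Python sketch of that mean-shift step (the grid, window coordinates, and function name are illustrative assumptions, not the paper's implementation, and the full CAMSHIFT algorithm additionally adapts the window size from the zeroth moment):

```python
def mean_shift(prob, window, max_iter=10):
    """Shift a rectangular window toward the centroid of a 2D
    probability grid (e.g. a color-histogram back-projection).

    prob   -- 2D list of non-negative scores, prob[row][col]
    window -- (x, y, w, h) with x = column, y = row of top-left corner
    """
    x, y, w, h = window
    rows, cols = len(prob), len(prob[0])
    for _ in range(max_iter):
        # Zeroth and first image moments over the current window.
        m00 = m10 = m01 = 0.0
        for j in range(max(0, y), min(rows, y + h)):
            for i in range(max(0, x), min(cols, x + w)):
                p = prob[j][i]
                m00 += p
                m10 += i * p
                m01 += j * p
        if m00 == 0.0:
            break  # no probability mass under the window
        # Centroid of the mass; re-center the window on it.
        cx, cy = m10 / m00, m01 / m00
        nx = int(cx - w / 2 + 0.5)
        ny = int(cy - h / 2 + 0.5)
        if (nx, ny) == (x, y):
            break  # converged
        x, y = nx, ny
    return (x, y, w, h)
```

In OpenCV this step (plus the adaptive window sizing and orientation estimate) is provided ready-made as `cv2.CamShift`, which would be the practical choice for a real-time system.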