Browsing by Author "Bilal, Sara"
Now showing 1 - 6 of 6
Item: Dynamic approach for real-time skin detection
(Springer Berlin Heidelberg, 2015-06-01) Bilal, Sara; Akmeliawati, Rini; Salami, Momoh-Jimoh E.; Shafie, Amir A.
Human face and hand detection, recognition and tracking are important research areas for many computer interaction applications. Face and hand are considered as human skin blobs, which fall in a compact region of colour spaces. Limitations arise from the fact that human skin has common properties and can be defined in various colour spaces after applying colour normalization. The model, therefore, has to accept a wide range of colours, making it more susceptible to noise. We have addressed this problem and propose that the skin colour could be defined separately for every person. This is expected to reduce the errors. To detect human skin colour pixels and to decrease the number of false alarms, a prior face or hand detection model has been developed using Haar-like features and the AdaBoost technique. To decrease the computational cost, a fast search algorithm for skin detection is proposed. The level of performance reached in terms of detection accuracy and processing time allows this approach to be an adequate choice for real-time skin blob tracking.

Item: Hidden Markov model for human to computer interaction: a study on human hand gesture recognition
(Springer Netherlands, 2013-12-01) Bilal, Sara; Akmeliawati, Rini; Shafie, Amir A.; Salami, Momoh-Jimoh E.
Human hand recognition plays an important role in a wide range of applications ranging from sign language translators, gesture recognition, augmented reality, surveillance and medical image processing to various Human Computer Interaction (HCI) domains. The human hand is a complex articulated object consisting of many connected parts and joints. Therefore, for applications that involve HCI one can find many challenges in establishing a system with high detection and recognition accuracy for hand posture and/or gesture. Hand posture is defined as a static hand configuration without any movement involved. Meanwhile, hand gesture is a sequence of hand postures connected by continuous motions. During the past decades, many approaches have been presented for hand posture and/or gesture recognition. In this paper, we provide a survey on approaches based on Hidden Markov Models (HMM) for hand posture and gesture recognition for HCI applications.
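The HMM-based recognition approaches surveyed in the item above rest on evaluating how likely an observation sequence is under each gesture's model, then picking the model with the highest likelihood. A minimal sketch of the forward algorithm in plain Python; the state and symbol names below are illustrative assumptions, not taken from the paper:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: likelihood of an observation sequence under an HMM.

    obs: list of observed symbols (e.g. quantized hand-posture codes)
    states: list of hidden states
    start_p[s]: initial probability of state s
    trans_p[s][t]: transition probability from state s to state t
    emit_p[s][o]: probability of emitting symbol o while in state s
    """
    # alpha[s] = P(obs[0..t] observed so far, current state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Sum over all predecessor states, then weight by the emission.
        alpha = {t: sum(alpha[s] * trans_p[s][t] for s in states) * emit_p[t][o]
                 for t in states}
    return sum(alpha.values())

# Toy two-state gesture model (hypothetical numbers for illustration only).
states = ["up", "down"]
start_p = {"up": 0.6, "down": 0.4}
trans_p = {"up": {"up": 0.7, "down": 0.3}, "down": {"up": 0.4, "down": 0.6}}
emit_p = {"up": {"A": 0.9, "B": 0.1}, "down": {"A": 0.2, "B": 0.8}}
likelihood = forward(["A", "B"], states, start_p, trans_p, emit_p)  # ≈ 0.209
```

In a recognizer of the kind the survey covers, one HMM is trained per gesture class and an incoming posture sequence is assigned to the class whose model yields the highest forward likelihood.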
Item: Human Upper Body Pose Region Estimation
(Springer, Berlin, Heidelberg, 2013) Bilal, Sara; Akmeliawati, Rini; Shafie, Amir A.; Salami, Momoh-Jimoh E.
The objective of this chapter is to estimate 2D human pose for action recognition, and especially for sign language recognition systems, which require not only the hand motion trajectory to be classified but also facial features, the Human Upper Body (HUB) and the hand position with respect to other HUB parts. We propose an approach that progressively reduces the search space for body parts and can greatly improve the chance of estimating the HUB pose. This involves two contributions: (a) a fast and robust search algorithm for HUB parts based on head size, introduced for real-time implementations; (b) scaling of the extracted parts during body orientation, attained using partial estimation of face size. The outcome of the system makes it applicable to real-time applications such as sign language recognition systems. The method is fully automatic and self-initializing using a Haar-like face region. Tracking of the HUB pose is based on the face detection algorithm. Our evaluation was done mainly using 50 images from the INRIA Person Dataset.

Item: Modeling of Human Upper Body for Sign Language Recognition
(IEEE, 2011-12-06) Bilal, Sara; Akmeliawati, Rini; Shafie, Amir A.; Salami, Momoh-Jimoh E.
Sign Language Recognition systems require not only the hand motion trajectory to be classified but also facial features, the Human Upper Body (HUB) and the hand position with respect to other HUB parts. The head, face, forehead, shoulders and chest are crucial parts that can carry a lot of positioning information about hand gestures in gesture classification.
In this paper, as the main contribution, a fast and robust search algorithm for HUB parts based on head size is introduced for real-time implementations. Scaling of the extracted parts during body orientation was attained using partial estimation of face size. Tracking of the extracted parts for front and side views was achieved using CAMSHIFT [24]. The outcome of the system makes it applicable to real-time applications such as Sign Language Recognition (SLR) systems.

Item: Vision-based hand posture detection and recognition for Sign Language—A study
(IEEE, 2011-05-17) Bilal, Sara; Akmeliawati, Rini; Salami, Momoh-Jimoh E.; Shafie, Amir A.
Unlike general gestures, Sign Languages (SLs) are highly structured, so they provide an appealing test bed for understanding more general principles of hand shape, location and motion trajectory. Detection and recognition of hand posture shape, in other words of static gestures, is crucial in SLs and plays an important role within the duration of the motion trajectory. Vision-based hand shape recognition can be accomplished using three approaches: 3D hand modelling, appearance-based methods and hand shape analysis. In this survey paper, we show that extracting features from hand shape is essential during the recognition stage for applications such as SL translators.
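The per-person skin-colour idea in the first item of this listing can be sketched as sampling chroma values inside a detected face region and classifying other pixels against the resulting range. A minimal sketch in plain Python; the YCbCr chroma representation, the margin parameter, and all function names are illustrative assumptions, not the paper's actual model:

```python
def skin_model_from_face(face_pixels, margin=15):
    """Build a per-person skin-colour range from sampled face pixels.

    face_pixels: list of (Cb, Cr) chroma pairs sampled inside a detected
    face region (e.g. from a Haar-cascade face detection).
    Returns (cb_min, cb_max, cr_min, cr_max), widened by `margin` so the
    model tolerates pixels slightly outside the sampled range.
    """
    cbs = [p[0] for p in face_pixels]
    crs = [p[1] for p in face_pixels]
    return (min(cbs) - margin, max(cbs) + margin,
            min(crs) - margin, max(crs) + margin)

def is_skin(pixel, model):
    """Classify one (Cb, Cr) pixel against a per-person skin model."""
    cb_min, cb_max, cr_min, cr_max = model
    cb, cr = pixel
    return cb_min <= cb <= cb_max and cr_min <= cr <= cr_max

# Hypothetical face samples; a real system would sample many face pixels.
model = skin_model_from_face([(110, 150), (115, 155)], margin=5)
skin_hit = is_skin((112, 152), model)    # inside the widened range -> True
skin_miss = is_skin((80, 200), model)    # far outside the range -> False
```

Tailoring the range to one person, as the skin-detection item proposes, keeps the accepted colour region narrow and so reduces the false alarms that a generic multi-person skin model tends to produce.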