Accurate and Efficient Gesture Spotting via Pruning and Subgesture Reasoning
Gesture spotting is the challenging task of locating the start and end frames of a video stream that correspond to a gesture of interest, while at the same time rejecting non-gesture motion patterns. This paper proposes a new gesture spotting and recognition algorithm that is based on the continuous dynamic programming (CDP) algorithm and runs in real time. To make gesture spotting efficient, a pruning method is proposed that allows the system to evaluate a relatively small number of hypotheses compared to CDP. Pruning is implemented by a set of model-dependent classifiers that are learned from training examples. To make gesture spotting more accurate, a subgesture reasoning process is proposed that models the fact that some gesture models can falsely match parts of other, longer gestures. In our experiments, the proposed method with pruning and subgesture modeling is an order of magnitude faster and 18% more accurate than the original CDP algorithm.
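The core idea of CDP-based spotting can be sketched in code: a dynamic-programming match that may begin at any stream frame, with low-cost end frames reported as gesture detections. The sketch below is illustrative only; it uses 1-D feature sequences and a fixed cost threshold as a crude stand-in for the paper's learned, model-dependent pruning classifiers, and all names and values are assumptions, not the authors' implementation.

```python
import numpy as np

def cdp_spot(stream, template, prune_thresh=2.0):
    """Toy continuous dynamic programming (CDP) gesture spotter.

    `stream` and `template` are 1-D feature sequences. Returns the
    stream frames at which a complete template match ends with
    normalized cost below `prune_thresh` (an illustrative threshold,
    standing in for the paper's learned pruning classifiers).
    """
    m, n = len(template), len(stream)
    INF = float("inf")
    # D[i, j]: best cost of matching template[:i+1] so that the match
    # ends at stream frame j; the first row is re-initialized at every
    # frame, so a match hypothesis may start anywhere in the stream.
    D = np.full((m, n), INF)
    for j in range(n):
        D[0, j] = abs(template[0] - stream[j])
    for i in range(1, m):
        for j in range(1, n):
            best_prev = min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
            # Pruning step: abandon hypotheses whose partial cost per
            # matched template frame is already above the threshold.
            if best_prev / (i + 1) > prune_thresh:
                continue
            D[i, j] = best_prev + abs(template[i] - stream[j])
    costs = D[m - 1] / m  # normalized cost of a full match ending at j
    return [j for j in range(n) if costs[j] < prune_thresh]
```

Pruning shows up as the `continue` branch: hypotheses whose partial cost is already hopeless are never extended, which is why the method evaluates far fewer cells than plain CDP. The paper's subgesture reasoning (suppressing short gestures that match inside longer ones) is not modeled in this sketch.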
Keywords: Gesture Recognition · Hand Gesture · Hand Gesture Recognition · False Match · Input Frame