- Konstantinos G. Derpanis
- Richard P. Wildes
- John K. Tsotsos
A common approach to modeling hand gestures (e.g., American Sign Language) in the computer vision literature is to build an explicit model for each gesture in the lexicon. A limitation of this approach is that it does not scale to large languages. For example, a recent dictionary of American Sign Language (ASL) documented over 4500 signs. It turns out that, like speech, ASL can be linguistically described in terms of a small number of basic parts, termed phonemes. The parts that comprise ASL gestures can be broadly categorized as: location (``Where on the body is the gesture made?''), handshape (``How are the hand(s) articulated?'') and movement (``How do the hand(s) move?''). Basing a recognition system on a phonemic decomposition provides a powerful paradigm, since the number of phonemes to be modeled is small relative to the number of gestures at the lexical level. Our recent efforts have concentrated on modeling and recognizing the phonemic movements of ASL. Our most recent approach extracts kinematic features from the apparent motion as observed from a single camera and combines them to yield distinctive signatures for 14 single-handed rigid phonemic movements of ASL. The approach has been instantiated in software and evaluated on a database of 592 gesture sequences, with an overall recognition rate of 97.13%.
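As an illustration of what "kinematic features of the apparent motion" can mean, the sketch below computes standard first-order kinematic quantities (divergence, curl and the two deformation components) from a dense optical flow field via finite differences. Note this is a minimal, assumed example of such features, not the specific feature set or signature construction used in the papers cited below.

```python
import numpy as np

def kinematic_features(u, v):
    """First-order kinematic features of a dense flow field.

    u, v: 2D arrays holding the horizontal and vertical flow components
    at each pixel. Returns divergence, curl and two deformation (shear)
    components, each as a 2D array of the same shape.
    """
    # np.gradient returns derivatives along axis 0 (rows, y) then axis 1 (cols, x).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    divergence = du_dx + dv_dy  # isotropic expansion / contraction
    curl = dv_dx - du_dy        # rotation about the viewing axis
    shear1 = du_dx - dv_dy      # deformation (stretching) terms
    shear2 = du_dy + dv_dx
    return divergence, curl, shear1, shear2

# Sanity check on a pure rotation field u = -y, v = x:
y, x = np.mgrid[-5:6, -5:6].astype(float)
div, curl, s1, s2 = kinematic_features(-y, x)
# For this field the curl is 2 everywhere and the divergence is 0.
```

Features like these are attractive for recognition because they are invariant to image translation and summarize the qualitative character of the motion (rotating, expanding, shearing) rather than its pixel-level details.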
- K.G. Derpanis, R.P. Wildes and J.K. Tsotsos, Hand Gesture Recognition within a Linguistics-Based Framework, European Conference on Computer Vision (ECCV), 2004, pp. 282-296
- K.G. Derpanis, R.P. Wildes and J.K. Tsotsos, Vision-Based Gesture Recognition within a Linguistics-Based Framework, York University Technical Report CS-2004-02, July 12, 2004
- K.G. Derpanis, R.P. Wildes and J.K. Tsotsos, Vision-Based Gesture Recognition within a Linguistics-Based Framework, MSc Thesis, York University, 2003