"A meek endeavor to the triumph" by Sampath Jayarathna

Saturday, September 04, 2010

Reading #1: Gesture Recognition

Comments on others:
Francisco Vides  
Jianjie (JJ) Zhang

             This paper starts by explaining the fundamental difference between a sketch and a gesture, and the usefulness of gesture recognition in recognizing a sketch. The author advises against using gesture-based techniques in sketching, however, because they may constrain users during brainstorming and design. Furthermore, the author states that gesture recognition works well in concert with a sketch recognition system as editing or action commands.
           The author mainly discusses Dean Rubine's 1991 gesture recognition method for sketches, Long's method, which extended Rubine's features to 22, and Wobbrock's $1 recognizer. The author distinguishes Rubine's work as the first and most recognized gesture recognition method for sketches, using 13 stroke features and a linear classifier for classification. The author then describes Christopher Long's work on gesture recognition, a system called Quill (1996), which is similar to Rubine's but slightly modified to rely less on time. Finally, the author describes Jacob Wobbrock's 2007 work implementing a simple template matcher that recognizes gestures slightly better than the feature-based methods; it is called the $1 recognizer because it is cheap and easy to implement.
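To make the Rubine-style approach concrete, here is a minimal Python sketch: it computes a few of Rubine's stroke features (initial-angle cosine/sine, bounding-box diagonal, total stroke length; the other features are omitted for brevity) and applies a linear classifier over them. The weight values and class names are illustrative assumptions, not Rubine's trained weights.

```python
import math

def rubine_features(points):
    """Compute a handful of Rubine-style stroke features from (x, y) points.
    Only a few of the 13 features are shown, for illustration."""
    (x0, y0), (x2, y2) = points[0], points[2]
    d0 = math.hypot(x2 - x0, y2 - y0) or 1e-9
    f1 = (x2 - x0) / d0              # cosine of the initial stroke angle
    f2 = (y2 - y0) / d0              # sine of the initial stroke angle
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    f5 = math.hypot(max(xs) - min(xs), max(ys) - min(ys))  # bounding-box diagonal
    f8 = sum(math.hypot(b[0] - a[0], b[1] - a[1])          # total stroke length
             for a, b in zip(points, points[1:]))
    return [f1, f2, f5, f8]

def classify(features, weights):
    """Rubine's linear classifier: pick the class c maximizing
    w_c0 + sum_i(w_ci * f_i), where the weights come from training."""
    best_class, best_score = None, -float("inf")
    for cls, w in weights.items():
        score = w[0] + sum(wi * fi for wi, fi in zip(w[1:], features))
        if score > best_score:
            best_class, best_score = cls, score
    return best_class
```

For example, with hand-set (hypothetical) weights, a flat horizontal stroke like `[(0,0),(1,0),(2,0),(3,0)]` can be scored against each class and assigned to the one with the highest linear score.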

            The author presents a number of gesture recognition methods under a single roof in this survey. Much of the author's attention is on Rubine's work, giving a clearer explanation of the meaning and importance of each feature in the set than Rubine's own paper does (I find Rubine's paper a bit more complicated to understand than the author's explanation; I guess that was the author's intention in writing this as a survey paper).
              In my opinion, I would like to see the features mapped to a common ground, scaled to the same range according to their characteristics, say a fuzziness value between 0 and 1: if the stroke is a line, how slanted is it, how straight is it, and so on. I would also like to see some sort of sensitivity analysis to determine which features contribute most to recognition accuracy; a ranking order of the identified features might be a good idea.
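The two ideas above can be sketched quickly. This is my own illustration, not anything from the paper: min-max scaling maps each feature column onto [0, 1], and a leave-one-out loop ranks features by how much accuracy drops when each is removed. The `accuracy_fn` callback is a hypothetical evaluation routine standing in for a real train/test cycle.

```python
def min_max_normalize(feature_vectors):
    """Scale every feature column to [0, 1] (the 'fuzziness 0-1' idea).
    feature_vectors: list of equal-length feature lists."""
    n = len(feature_vectors[0])
    lows = [min(v[i] for v in feature_vectors) for i in range(n)]
    highs = [max(v[i] for v in feature_vectors) for i in range(n)]
    return [
        [(v[i] - lows[i]) / (highs[i] - lows[i]) if highs[i] > lows[i] else 0.0
         for i in range(n)]
        for v in feature_vectors
    ]

def rank_features_by_sensitivity(accuracy_fn, n_features):
    """Rank feature indices by the accuracy drop observed when each one
    is left out. accuracy_fn(indices) is assumed to train and test a
    recognizer on the given feature subset and return its accuracy."""
    full = accuracy_fn(list(range(n_features)))
    drops = []
    for i in range(n_features):
        subset = [j for j in range(n_features) if j != i]
        drops.append((full - accuracy_fn(subset), i))
    # Biggest accuracy drop first: those features matter most.
    return [i for _, i in sorted(drops, reverse=True)]
```

A ranking like this would directly answer the question of which of Rubine's 13 features carry the recognition accuracy, at the cost of retraining once per feature.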

Find the paper here.

1 comment:

liwenzhe said...

When Rubine was training the classifier, he actually used the idea of which features were more important than others for a given gesture class. But anyway, I think figuring out which features make gesture recognition more accurate is very important, especially when we design the feature set for a certain domain. I am always confused about this issue when I have to choose features from the many available feature sets; how to test this also confuses me.