Comments on Others:
Summary
The paper describes a novel gesture recognizer, Protractor, that uses a nearest-neighbor approach (I assume a form of KNN) to recognize an unknown gesture (a testing or runtime sample) based on its similarity to known gestures (training samples). The author argues that simple, template-based recognizers are more practical than their sophisticated gesture-recognition peers in situations where personalized gesture interaction is required.
Protractor's preprocessing is said to transform the 2D trajectory of a gesture into a uniform vector representation, and is claimed to be applicable to both rotation-invariant and orientation-sensitive gesture sets with different aspect ratios. The paper further describes Protractor's design, which shares several steps with the $1 recognizer, including resampling (with N=16 points as opposed to N=64), a step that also removes noise. However, the author states that Protractor does not rescale gestures as the $1 recognizer does, which allows it to recognize even narrow gestures such as horizontal or vertical lines.
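To make the preprocessing steps above concrete, here is a minimal Python sketch. This is my own reconstruction from the paper's description, not the authors' code; the function names, the rounding fallback in the resampler, and the exact handling of the orientation-sensitive case are my assumptions. It resamples a stroke to N=16 equidistant points, translates them to the centroid, rotates by the indicative angle (or snaps to the nearest 45° multiple when orientation matters), and L2-normalizes — with no rescaling step, unlike $1.

```python
import math

def resample(points, n=16):
    """Resample a stroke to n equidistant points (as in $1/Protractor)."""
    path_len = sum(math.dist(points[i - 1], points[i])
                   for i in range(1, len(points)))
    interval = path_len / (n - 1)
    pts = list(points)
    new_pts = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if d + seg >= interval and seg > 0:
            # Interpolate a new point at the required distance.
            t = (interval - d) / seg
            qx = pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0])
            qy = pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])
            new_pts.append((qx, qy))
            pts.insert(i, (qx, qy))  # continue measuring from the new point
            d = 0.0
        else:
            d += seg
        i += 1
    if len(new_pts) < n:  # floating-point rounding can drop the last point
        new_pts.append(pts[-1])
    return new_pts[:n]

def vectorize(points, orientation_sensitive=False):
    """Translate to centroid, rotate by the indicative angle (or align it to
    the nearest 45-degree multiple when orientation matters), and
    L2-normalize. Note: no rescaling, unlike the $1 recognizer."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    angle = math.atan2(pts[0][1], pts[0][0])  # indicative angle
    if orientation_sensitive:
        base = (math.pi / 4) * round(angle / (math.pi / 4))
        delta = base - angle
    else:
        delta = -angle
    vec = []
    for x, y in pts:
        vec.append(x * math.cos(delta) - y * math.sin(delta))
        vec.append(x * math.sin(delta) + y * math.cos(delta))
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]
```

The output is a single normalized vector of 2N values (interleaved x, y), which is what makes the closed-form similarity search later in the paper possible.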
Discussion
The paper discusses a simple but effective way of developing a gesture recognizer from an available classification algorithm, nearest neighbor. Even though the paper claims Protractor's superiority over other gesture recognizers, in my opinion more performance evaluation is needed to put that claim on firm ground (comparing it only to the $1 recognizer is simply not enough to claim superiority over other algorithms). That said, I am somewhat biased and like the author's idea of comparing it to the $1 recognizer, as I mentioned this as an advantage in my discussion of the $1 recognizer paper. This may be a good point to consider when creating gesture recognizers: compare them to both Protractor and the $1 recognizer (both claim to be simple to implement). I also agree that the nearest-neighbor algorithm is quite simple to implement using the Euclidean distance between the testing and training samples. But, based on my experience, KNN tends to give mixed results, and its accuracy depends heavily on how the feature sets for the training samples are chosen and on the quality of those features.
The paper describes a closed-form calculation of the optimal angular distance between two gesture vectors; the nearest-neighbor search uses it to find the template with maximum similarity, taken as the inverse of the minimum cosine distance.
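That closed form can be sketched as follows. This is a hedged reconstruction from the paper's description, not the published code; the clamping against floating-point error and the guard for a near-zero distance are my own additions. For two preprocessed vectors (interleaved x, y as above), the rotation angle that maximizes cosine similarity has a closed-form solution, and the recognizer scores each template by the inverse of the resulting angular distance.

```python
import math

def optimal_cosine_distance(t, g):
    """Angular distance between two L2-normalized gesture vectors t and g
    (interleaved x0, y0, x1, y1, ...) under the similarity-maximizing
    rotation, following the closed form described in the Protractor paper."""
    a = sum(t[i] * g[i] + t[i + 1] * g[i + 1] for i in range(0, len(t), 2))
    b = sum(t[i] * g[i + 1] - t[i + 1] * g[i] for i in range(0, len(t), 2))
    angle = math.atan2(b, a)  # rotation that maximizes cosine similarity
    cos_sim = a * math.cos(angle) + b * math.sin(angle)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, cos_sim)))

def recognize(gesture_vec, templates):
    """1-NN search: return the template label with the maximum similarity,
    i.e. the inverse of the minimum angular distance."""
    best_label, best_score = None, -1.0
    for label, tvec in templates:
        d = optimal_cosine_distance(tvec, gesture_vec)
        score = 1.0 / d if d > 1e-9 else float("inf")
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Because the optimal rotation is solved analytically rather than by iterative search, the comparison against each template costs a single pass over the 2N values, which is what makes Protractor fast.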
The paper discusses the nearest-neighbor approach with k=1, but does not explore what happens as k changes (say, 3, 5, or 7). For example, when Protractor is given several templates, it assigns the unknown gesture the label of the single most similar template; taking a vote among the 3 (or 5, or 7) most similar templates is not discussed. In my personal experience, k=3 tends to work better than k=1, since the decision rests on three known samples rather than one (though this probably depends heavily on the application area). Also missing is a description of how Protractor's performance changes as the number of templates increases.
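For comparison, the k>1 variant I have in mind could look like the sketch below. This is my own illustration, not anything from the paper, and the tie-breaking convention (smallest summed distance among tied labels) is one reasonable assumption among several: take the k templates with the smallest distances and let them vote.

```python
from collections import Counter

def knn_classify(distances, k=3):
    """distances: list of (label, distance) pairs, one per template.
    Returns the majority label among the k nearest templates; ties are
    broken by the smallest summed distance (my convention, not the paper's)."""
    nearest = sorted(distances, key=lambda p: p[1])[:k]
    votes = Counter(label for label, _ in nearest)
    top = max(votes.values())
    tied = [label for label, count in votes.items() if count == top]
    if len(tied) == 1:
        return tied[0]
    sums = {label: sum(d for lab, d in nearest if lab == label)
            for label in tied}
    return min(sums, key=sums.get)
```

With k=1 this reduces to Protractor's behavior; with k=3 a single noisy template is less likely to flip the result, which is the trade-off I would have liked to see evaluated.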
Find the paper here.
1 comment:
I agree somewhat. Protractor could have been tested against more recognizers. However, since Protractor is an improvement of $1, and the results of the $1 comparison to Rubine and DTW were addressed in Wobbrock's paper, perhaps Li determined that to be sufficient evidence to support his argument.