"A meek endeavor to the triumph" by Sampath Jayarathna

Monday, September 20, 2010

Sokushinbutsu (Self-Mummification) - Buddhism, 1/1000 ways to die………….

I thought this self-immolation video was as emotional as it gets (it was done as a protest, to bring the persecution of Buddhists to the world's attention); how WRONG was I?

Sokushinbutsu were Buddhist monks or priests who caused their own deaths in a way that resulted in their mummification. This practice reportedly took place almost exclusively in northern Japan around the Yamagata Prefecture. It is believed that many hundreds of monks tried, but only between 16 and 24 such mummifications have been discovered to date. The practice is not advocated or practiced today by any Buddhist sect. The practice was thought to be extinct in modern Japan, but a recent example was discovered in Tokyo in July 2010.


For 1,000 days (a little less than three years) the priests would eat a special diet consisting only of nuts and seeds, while taking part in a regimen of rigorous physical activity that stripped them of their body fat. They then ate only bark and roots for another thousand days and began drinking a poisonous tea made from the sap of the Urushi tree, normally used to lacquer bowls.

This caused vomiting and a rapid loss of bodily fluids, and most importantly, it made the body too poisonous to be eaten by maggots. Finally, a self-mummifying monk would lock himself in a stone tomb barely larger than his body, where he would not move from the lotus position. His only connection to the outside world was an air tube and a bell. Each day he rang a bell to let those outside know that he was still alive.

When the bell stopped ringing, the tube was removed and the tomb sealed. After the tomb was sealed, the other monks in the temple would wait another 1,000 days, and open the tomb to see if the mummification was successful. 

If the monk had been successfully mummified, he was immediately seen as a Buddha and placed in the temple for viewing. Usually, though, there was just a decomposed body. Monks who weren't mummified weren't viewed as a true Buddha, but they were still admired and revered for their dedication and spirit.






Saturday, September 11, 2010

Reading #8. A Lightweight Multistroke Recognizer for User Interface Prototypes

Comments on Others:

Youyou Wang 

Summary
The paper describes the $N recognizer, a lightweight, concise multi-stroke recognizer that is a significant extension of the $1 uni-stroke recognizer. $N is said to be capable of recognizing user-defined complex gestures, supporting customization, and operating at speeds that allow fluid interaction (not sure what that means, though). $N extends $1 to overcome limitations of that earlier recognizer, such as its uni-stroke nature, its failure to recognize 1D gestures, and its rotation invariance. More specifically, the $N recognizer is aimed at rapid prototyping, as a way of quickly incorporating gesture recognition into user interfaces with a small amount of code.
The intuitive idea behind $N is to support rapid prototyping by eliminating the need to enter every permutation of a multistroke gesture: the developer enters only a single version of the gesture and the recognizer uses it for recognition. This is done by creating unistroke permutations of the multistroke at definition time and then using those for comparison at run time. $N automatically differentiates 1D from 2D gestures using a threshold, so that 1D gestures can preserve their aspect ratio.
The paper also describes $N's limitations, such as its handling of scale and position, and its decision not to use gesture features.
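To make the permutation idea concrete, here is a minimal Python sketch (my own illustration, not the authors' code) of how a single multistroke template could be expanded into unistroke permutations at definition time: every ordering of the strokes, with each stroke traced in either direction.

from itertools import permutations, product

def unistroke_permutations(strokes):
    # Expand one multistroke template (a list of strokes, each a list of
    # (x, y) points) into unistroke variants: every stroke order, with each
    # stroke traced forwards or backwards. This mirrors the idea summarized
    # above; it is not the published implementation.
    variants = []
    for order in permutations(range(len(strokes))):
        for directions in product((False, True), repeat=len(strokes)):
            unistroke = []
            for idx, reverse in zip(order, directions):
                pts = strokes[idx][::-1] if reverse else strokes[idx]
                unistroke.extend(pts)  # concatenate into one connected path
            variants.append(unistroke)
    return variants

# A two-stroke "X" yields 2! * 2^2 = 8 unistroke permutations.
x_template = [[(0, 0), (10, 10)], [(10, 0), (0, 10)]]
print(len(unistroke_permutations(x_template)))  # 8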

Discussion
First, the choice of the name $N is kind of amusing; maybe the authors took the name $1 literally and replaced the 1 with an N to represent the multi-stroke capability of their recognition algorithm, compared to the uni-stroke nature of the $1 recognizer. But I guess the $1 authors used that name to express how simple it is: inexpensive (shorter implementation time, small size) and uni-stroke. When it comes to $N, the idea that comes to a reader's mind is that this thing is N times as expensive to implement (that much more complex) than the $1 recognizer (don't laugh, that's how I feel).
One of the best goals of $N, as I see it, is the ability to perform recognition with minimal input (just a single multistroke entry) compared to other available recognizers, including $1. The use of just the Euclidean distance for comparison between the candidate stroke and the unistroke permutations is questionable, and in my opinion not the right technique. Maybe an RMSE value between the candidate and each unistroke permutation would be a better way to verify accuracy and perhaps to improve performance.
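To be concrete about the RMSE suggestion, here is a small Python sketch (my own, with made-up point lists) comparing the plain average point-to-point distance with an RMSE-style score; both assume the candidate and the unistroke permutation have already been resampled to the same number of points.

import math

def avg_distance(a, b):
    # Mean Euclidean distance between corresponding resampled points.
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def rmse_distance(a, b):
    # Root-mean-square of the point-to-point distances; large local
    # deviations are penalized more heavily than in the plain average.
    return math.sqrt(sum(math.dist(p, q) ** 2 for p, q in zip(a, b)) / len(a))

candidate = [(0, 0), (1, 1), (2, 2), (3, 3)]
template = [(0, 0), (1, 2), (2, 2), (3, 4)]
print(avg_distance(candidate, template), rmse_distance(candidate, template))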
$N requires a separate step to distinguish 1D from 2D gestures, and in my opinion this could be avoided by using a size-invariant algorithmic design, maybe a comparison based on the basic segment structure (for example: a square consists of 2 horizontal and 2 vertical lines of similar length, while a triangle consists of 2 slanted lines and 1 horizontal line).

Find the paper here. 

Reading #7. Sketch Based Interfaces: Early Processing for Sketch Understanding

Comments on Others:

Chris Aikens

Summary
The paper discusses a novel approach to developing an interface that feels as natural as paper but is smart enough to understand the user's intention and interpret sketches as geometric shapes. More specifically, the work is intended to provide natural sketching and have the sketches understood. The authors also state their intention to give the user a system with unrestricted drawing, such as drawing a rectangle clockwise or counter-clockwise, or drawing it with multiple strokes, and still have the computer identify it the way people perceive things in the real world: as the geometric shape, rather than by how it was drawn. Also note that the authors chose mechanical engineering design as their domain, which adds difficulty and complexity to sketching. The paper therefore argues that preprocessing steps like finding corners and fitting both lines and curves are particularly important.
The system design consists of approximation, beautification, and basic recognition in the preprocessing stage. According to the paper, approximation fits pixels to lines and curves; beautification modifies the output of approximation to make it visually more appealing; and basic recognition interprets the strokes, for example a sequence of 4 lines as a rectangle or a square. Subsequent recognition into more complex structures is handled elsewhere in the design. The approximation step includes vertex detection, which uses average-based filtering to find extrema corresponding to vertices while avoiding those due to noise.
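As a rough illustration of the average-based filtering described above (a simplified sketch of my own, not the authors' implementation), the idea is to threshold the curvature and speed signals at their means, keeping curvature values above the mean and speed values below it as candidate vertices:

import math

def candidate_vertices(points, times):
    # Very simplified average-based filtering: mark a point as a candidate
    # vertex if its turn angle is above the mean turn angle or its speed is
    # below the mean speed. A real system would also merge nearby candidates
    # and combine the two cues more carefully.
    n = len(points)
    curvature = [0.0] * n
    speed = [0.0] * n
    for i in range(1, n - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        curvature[i] = abs(math.remainder(a2 - a1, 2 * math.pi))  # turn angle
        dt = (times[i + 1] - times[i - 1]) or 1e-6
        speed[i] = math.dist(points[i + 1], points[i - 1]) / dt
    mean_curvature = sum(curvature) / n
    mean_speed = sum(speed) / n
    return [i for i in range(1, n - 1)
            if curvature[i] > mean_curvature or speed[i] < mean_speed]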

Discussion
This paper particularly grabs my interest because it regards sketches as graphic objects and does preprocessing before recognition to break them into basic objects (lines, curves, etc.). I consider this technique one of the best ways of tackling sketch recognition problems (just my personal opinion).
The paper discusses vertex detection by average-based filtering, which thresholds the curvature graph and the speed graph. I'm wondering whether there are more sophisticated ways to find the vertices than this particular technique. In my experience, examining the pixel distribution (neighboring pixels) is a hard problem; I'm interested in knowing something simpler (Dr. Hammond mentioned tools to do this, am I missing something here?). I'm also skeptical about using just the mean of the curvature or speed data; a single large noise spike may ruin the whole detection process. Also, the beautification is based on adjusting slopes, but maybe thickening the line and then skeletonizing it would be a better choice, because slope adjustments may weaken the intended stroke appearance.
In my personal opinion, the paper lacks a description of the required recognition design, which leaves the so-called “higher level recognizer” rather abstract to readers.

Find the paper here

Wednesday, September 08, 2010

Reading #6: Protractor: A Fast and Accurate Gesture Recognizer

Comments on Others:

Chris Aikens

Summary
The paper describes a novel gesture recognizer that uses a nearest-neighbor approach (I assume k-NN) to recognize an unknown gesture (a testing or runtime sample) based on its features (or similarity) relative to known gestures (training samples). The author states that simple, template-based recognizers are much more usable (advantageous) than their peers (sophisticated gesture recognition algorithms) in situations where personalized gesture interaction is required.
Protractor's preprocessing is said to transform the 2D trajectory of a gesture into a uniform vector representation, and is claimed to be applicable to both orientation-invariant and orientation-sensitive gesture sets with different aspect ratios. The paper further describes the Protractor design, which shares several steps with the $1 recognizer, including resampling, here with N=16 points as opposed to N=64, and noise removal through that resampling. But the author states that Protractor does not rescale the way the $1 recognizer does, which lets it recognize even narrow gestures such as horizontal or vertical lines.
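To sketch what that vector representation and angular comparison might look like (my own simplification; the details in the paper may differ), each gesture is resampled to a fixed number of points, translated to its centroid, and flattened into a unit-length vector, and the similarity of two gestures is then the cosine of the angle between their vectors:

import math

N = 16  # Protractor reportedly resamples to 16 points (vs. 64 in $1)

def to_vector(points):
    # Assumes `points` has already been resampled to N points. Translate
    # to the centroid and flatten into a normalized (x1, y1, x2, y2, ...)
    # vector; orientation handling is omitted here for brevity.
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    vec = []
    for x, y in points:
        vec.extend((x - cx, y - cy))
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine_similarity(u, v):
    # Higher is more similar; Protractor's score is based on the angle
    # between the two gesture vectors.
    return sum(a * b for a, b in zip(u, v))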

Discussion
The paper discusses a simple but effective way of developing a gesture recognizer using an existing classification approach, nearest neighbor. Even though Protractor's superiority over other gesture recognizers is claimed, in my opinion more performance evaluation is needed to put that claim on firm ground (comparing it just to the $1 recognizer is simply not enough to claim superiority over other algorithms). But I am a bit biased and like the authors' idea of comparing it to the $1 recognizer, since I mentioned that as an advantage in my discussion of the $1 recognizer paper. This may be a good point to consider when creating gesture recognizers: compare them to both Protractor and the $1 recognizer (both claim to be simple to implement). I also agree that the nearest-neighbor algorithm is pretty simple to implement using the Euclidean distance between testing and training samples. But, in my experience, k-NN tends to give mixed results, and the outcome mostly depends on how you choose the feature sets for the training samples and on the quality of those features.
The paper describes the closed-form optimal angular distance calculation, which is used within the nearest-neighbor search to obtain the maximum similarity (the inverse of the minimum cosine distance).
The paper discusses the nearest-neighbor approach with k=1, but it does not cover what happens when the k value changes (say 3, 5, 7, etc.). For example, the case where Protractor is given 3 templates and the unknown gesture is assigned the label with the highest similarity among those 3 is not discussed, nor are the cases of 5 or 7 or more samples (a small sketch of the voting idea follows below). In my personal experience, k=3 gives a better solution than k=1, where the comparison happens between 3 known samples and 1 unknown sample (though this probably depends heavily on the content area). Also missing is a description of how Protractor's performance changes as the number of templates increases.
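As a sketch of the k > 1 idea raised above (my own illustration, not part of the paper), a simple majority vote over the k most similar templates could look like this, reusing the cosine_similarity helper from the sketch in the summary:

from collections import Counter

def classify_knn(candidate_vec, templates, k=3):
    # `templates` is a list of (label, vector) pairs. Rank them by cosine
    # similarity to the candidate and let the top k vote on the label.
    ranked = sorted(templates,
                    key=lambda t: cosine_similarity(candidate_vec, t[1]),
                    reverse=True)
    votes = Counter(label for label, _ in ranked[:k])
    return votes.most_common(1)[0][0]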

Find the paper here

Tuesday, September 07, 2010

Reading #5: Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes

Comments on Others:

Wenzhe Li

Summary
The paper states that the “$1 recognizer” (I guess $1 represents how easy, cheap, and usable it is, in few lines of code) was developed to enable novice programmers to incorporate gestures into their UI prototypes. The paper includes detailed pseudocode, so that programmers can easily test the gesture recognizer in their interface designs. The authors state that gesture recognition has mostly been a topic of interest to experts in AI, not to experts in HCI, who work primarily at the interactive level (I do not completely agree!). The authors assume this has perhaps limited the opportunity to slip gesture recognition into UI design. The paper also explains why ad-hoc recognizers limit applications, due to the inability to define new gestures at run time and problems when the gesture set is very large.
The $1 recognizer is essentially limited to uni-strokes because of the authors' interest in recognizing paths delineated by users interactively. The authors argue that, comparatively, the $1 recognizer is better because sophisticated methods like HMMs, ANNs, and statistical classifiers require extensive training before practical use and are difficult to program and debug. The authors note that even the popular Rubine linear classifier requires up-front computation before it can be used.
The paper also states 8 criteria defined for the recognizer to keep it simple (these criteria are also treated as goals of the methodology). The paper describes its algorithm as a 4-step process. As limitations, the authors note that the $1 algorithm is rotation, scale, and position invariant (a circle and an oval look alike), and that gestures cannot be differentiated based on time. The paper describes the testing procedure, with comparisons to the Rubine classifier and Dynamic Time Warping (DTW).
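For readers who want to see the flavor of the 4-step process in code, here is my own condensed Python sketch of that kind of pipeline (resample, rotate by the indicative angle, scale, translate, then compare by path distance); it omits details such as the golden-section search over candidate rotations, so treat it as an outline rather than the published algorithm.

import math

def resample(points, n=64):
    # Resample the stroke to n roughly equidistant points.
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval, acc, out = total / (n - 1), 0.0, [points[0]]
    pts = list(points)
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(points[-1])
    return out

def normalize(points, size=250.0):
    # Rotate so the indicative angle (centroid to first point) is zero,
    # then scale to a reference square; points end up centered on the origin.
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    theta = math.atan2(points[0][1] - cy, points[0][0] - cx)
    c, s = math.cos(-theta), math.sin(-theta)
    rotated = [((x - cx) * c - (y - cy) * s, (x - cx) * s + (y - cy) * c)
               for x, y in points]
    xs, ys = [p[0] for p in rotated], [p[1] for p in rotated]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    return [(x * size / w, y * size / h) for x, y in rotated]

def path_distance(a, b):
    # Average point-to-point distance; the best template is the one with
    # the smallest distance to the normalized candidate.
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)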

Discussion
This paper explains a simple gesture recognizer that can be easily programmed and plugged into a UI design to support gestures through the UI. The authors believe that the interests of HCI experts have perhaps limited the incorporation of gesture recognition into UI designs. In my opinion, I do not completely agree; maybe limitations of the existing technology hindered the opportunities to do more research on HMMs, ANNs, etc. and to apply that knowledge to UI usability and development research.
I'm not sure why the authors state that HMMs require extensive sample training and complex coding and debugging. From my personal experience developing an HMM-based eye movement detection algorithm, I feel the authors were a bit misled by the literature rather than by applying HMMs to real-world applications (an HMM is reasonably fast in online applications, though it does require some training beforehand). I'm also wondering about the use of N=64 for the resampling. Maybe a digitization would work instead (trace the path and set adjacent pixels to 1 or 0 depending on the next pixel and its direction).
Overall, the paper presents an easy-to-read, easy-to-understand take on a simple gesture recognizer, and I believe it is a plus for the gesture/sketch recognition community to implement it and compare it with their own algorithms (because the authors suggest that their work is competitive, in terms of results, with previously used complex algorithms, Rubine and DTW).

Find the paper here.

Reading #4: Sketchpad: A Man-Machine Graphical Communication System

Comments on Others:

Wenzhe Li

Summary
This paper describes the earliest (more importantly, the first) man-machine interaction (MMI) technology, called “Sketchpad,” by Ivan E. Sutherland (according to Dr. Hammond, Sutherland is recognized as the father of sketch recognition). Sketchpad was capable of drawing sketches according to the movement of a light pen (for position tracing), with push buttons to select specific functions (turning switches on and off) to execute. In this way the Sketchpad system is said to have eliminated typed statements (we are still typing today, but that's OK, the paper dates back to 1963 with the original idea) in favor of line drawings.
The author demonstrates the intended use of the light pen with the novel idea of a pseudo pen location, used to determine where the pen is actually intended to be aiming. The paper explains that the displays use straight line segments, circular arcs, and single points as the basic (or fundamental) drawing elements. The author also discusses the use of display abstraction to support drawing properties, like similar length, similar size, etc., using constraints.

Discussion

Sketchpad is an important finding for the sketch recognition domain (I mean the idea of sketch/gesture recognition), and I believe it opened a whole new era of human-computer interaction. To see how important this area of research is going to be for our day-to-day lives in the near future, readers can look in their own pockets (touch phones, touch pads, touch media players, etc.). At the moment we use some form of gesture input for these devices, and we can expect whole new ways of interacting (you can watch the “Microsoft's Vision of the Future” video just to get a clue).
It is an interesting idea to have a pseudo pen location for the segment a light pen is aiming at. But in my opinion, when it comes to heavily multi-stroked gestures/sketches, the system would tend to fail due to cross-mapping of the pseudo pen location (the pen location could be mapped to the wrong place because of the multiple strokes already sketched). I'm not clear how the author solved this problem in the related work (any thoughts?).
Most of the future work mentioned in the paper has led to successful outcomes (in my personal opinion), and the concept of sketching with an input device for creating an electrical circuit or a complex drawing is outdated; there are already a number of software products to do all this drawing work (like SmartDraw).

Find the paper here

Monday, September 06, 2010

Reading #3: “Those Look Similar!” Issues in Automating Gesture Design Advice

Comments on others:

Wenzhe Li

Summary

This paper describes an interface design tool that uses unsolicited advice to help designers of pen-based user interfaces create pen gestures that are dissimilar. The tool is said to be designed in such a way that it warns designers when their gestures will be perceived as similar and advises them on how to make their gestures less similar. The authors present their gesture design tool, “quill,” which advises designers on how to improve their gestures. To detect when people will perceive gestures as similar, quill is said to be equipped with an experimentally derived model of human-perceived gesture similarity. The paper reports its similarity experiment with outcomes of 99.8% accuracy for gesture pairs perceived as not similar and 22.4% for those perceived as similar. The paper also discusses interface and implementation challenges for quill.

Discussion

The paper discusses an interesting topic: helping interface designers create less similar gestures for pen-based user interfaces. The paper cites cut and paste as gestures with similar abstract operations that often confuse designers when assigning gestures. I'm not sure how true that statement is; maybe the authors meant to compare cut and copy, not cut and paste!
With such a low accuracy rate for identifying two similar gestures (22.4%), it is hard to accept that quill is accurate or powerful enough to predict when two gestures are similar (in my opinion, the similarity experiment's accuracy is not even in the mid range, which is unacceptable). I also feel readers may want a more detailed explanation of the similarity experiment (it is the backbone of quill; it's what provides the intelligence) to get a fair idea of how quill works.
The paper discusses difficulties in deciding when to provide advice, whether immediately or later. The authors state that they found unsolicited advice to be the best approach, even though there are timing difficulties in situations like brainstorming. Maybe the solution is too simple (or too stupid) to consider, but why not create a condition for brainstorming (maybe the user could press a brainstorm button to suppress advice, or a stop-advice button; genius or what)?

Find the paper here.

Sunday, September 05, 2010

Reading #2: Specifying Gestures by Example (Rubine)

Comments on others:

Yue Li


Summary:

This paper describes the possibility of creating automatic gesture recognizers from example gestures, removing the need for hand coding. The GRANDMA (Gesture Recognizers Automated in a Novel Direct Manipulation Architecture) toolkit, developed for rapidly adding gestures to direct manipulation interfaces, and the trainable single-stroke gesture recognizer used by GRANDMA are also explained. The paper also describes the powerful combination of gesturing and direct manipulation in the two-phase interaction technique. The author suggests a requirement of 15 training samples (or examples) per gesture class, a number selected empirically. The author explains a simple preprocessing step applied to the input: removing time-stamped coordinates that lie within 3 pixels of the previous input point. Furthermore, the author describes a feature set that was empirically determined to work well on a number of different gesture sets. Gesture classification is handled by a linear classifier over these features. In addition, eager recognition (recognizing gestures as soon as they are unambiguous) and multi-finger recognition are also discussed.
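As a small illustration of the kind of preprocessing and features involved (my own sketch of the jiggle filter and a few Rubine-style features, not the full set of 13), see below.

import math

def remove_jiggle(points, min_dist=3.0):
    # Input filtering as described above: drop a point if it lies within
    # min_dist pixels of the previously kept point.
    kept = [points[0]]
    for p in points[1:]:
        if math.dist(p, kept[-1]) >= min_dist:
            kept.append(p)
    return kept

def some_features(points):
    # A few representative stroke features, chosen for illustration:
    # initial direction, bounding-box diagonal, end-to-end distance,
    # and total path length. A linear classifier would then score each
    # gesture class as a weighted sum of such features.
    x0, y0 = points[0]
    x2, y2 = points[2] if len(points) > 2 else points[-1]
    d = math.dist((x0, y0), (x2, y2)) or 1.0
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return {
        "cos_initial": (x2 - x0) / d,
        "sin_initial": (y2 - y0) / d,
        "bbox_diagonal": math.dist((min(xs), min(ys)), (max(xs), max(ys))),
        "end_to_end": math.dist(points[0], points[-1]),
        "path_length": sum(math.dist(points[i - 1], points[i])
                           for i in range(1, len(points))),
    }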

 

Discussion

According to the author, the GRANDMA toolkit is essentially a single-stroke gesture system, and the author states that this allows shorter timeouts to be used and therefore avoids the segmentation problem. Still, the author mentions the possibility of using both timeouts and mouse-button release, and this makes readers wonder: if a button-release action is a possibility, why is it so hard to incorporate multi-stroke gesture recognition into the GRANDMA toolkit? In my opinion, it would be valuable to show users that even multi-stroke gesture recognition is possible through the GRANDMA toolkit, even though it is faster (or less problematic and simpler) to use only single-stroke recognition.
The author suggests that 15 training samples per gesture class are enough, and this essentially makes readers wonder about the word “Automated” in GRANDMA. The author uses it to mean automatic gesture recognition versus hand coding, but in my opinion the recognizer still requires some sort of manual input (example gestures) for the system to be functional (just my opinion).
The elimination of jiggle is another point to consider: the author describes a simple preprocessing step that removes input points lying within 3 pixels of the previous point. In my opinion this may hinder the user's intention as well as the actual input gesture. I believe a stronger preprocessing, something like binarization followed by skeletonization, would be much more appealing. The use of this specific feature set also needs to be independently verified; maybe a sensitivity analysis and a ranking based on that analysis would be a better choice. It would also be quite nice to see results from different gesture classifiers, something like k-NN compared to the linear classifier.

Find the paper here.

Saturday, September 04, 2010

Reading #1: Gesture Recognition

Comments on others:
Francisco Vides  
Jianjie (JJ) Zhang

Summary:
This paper starts by laying out the fundamental difference between a sketch and a gesture and the usefulness of gesture recognition for recognizing a sketch. At the same time, the author cautions against using gesture-based techniques in sketching, because they may constrain users during brainstorming and design. Furthermore, the author states that gesture recognition works well as editing or action commands in a sketch recognition system.
The author mainly discusses Dean Rubine's 1991 gesture recognition method for sketches, Long's method, which extended Rubine's features to 22, and Wobbrock's $1 recognizer. The author distinguishes Rubine's work as the first and most recognized gesture recognition method for sketches, with 13 stroke features and a linear classifier for classification. The author describes Christopher Long's work on gesture recognition, a system called quill (1996), which is similar to Rubine's but slightly modified to rely less on time. Furthermore, the author presents Jacob Wobbrock's 2007 work on a simple template matcher that recognizes gestures slightly better than the feature-based methods. It is called the $1 recognizer because it is easy to implement.

Discussion:
The author presents a number of methods for performing gesture recognition under a single roof, as a survey. Much of the author's focus is on Rubine's work, giving a clearer explanation of the meaning and importance of each feature than Rubine's own paper does (I find Rubine's paper a bit harder to understand than the author's explanation; I guess that was the intention in writing this as a survey paper).
In my opinion, I would like to see the features mapped to common ground, with each value scaled according to its characteristics, say something like a fuzziness from 0 to 1: if it is a line, how slanted, how straight, etc. (a minimal sketch of what I mean follows). Some sort of sensitivity analysis to see which features improve the recognition accuracy would also help; maybe a ranking of the identified features would be a good idea.
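A minimal sketch of the 0-to-1 mapping I have in mind (simple min-max scaling of each feature over the training set; purely my own illustration):

def min_max_scale(feature_vectors):
    # Scale every feature to [0, 1] over the training set so that all
    # features share a common, fuzziness-like range before ranking or
    # sensitivity analysis.
    n = len(feature_vectors[0])
    lo = [min(v[i] for v in feature_vectors) for i in range(n)]
    hi = [max(v[i] for v in feature_vectors) for i in range(n)]
    return [[(v[i] - lo[i]) / ((hi[i] - lo[i]) or 1.0) for i in range(n)]
            for v in feature_vectors]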

Find the paper here.

Thursday, September 02, 2010

About me...............

E-mail address : UKSJayarathna(AT)gmail(DOT)com

Graduate standing : First year PhD (MS from Texas State University-San Marcos)

Why are you taking this class? My undergraduate research background is in “Offline Character Recognition,” so I feel comfortable taking a course close to my research interests. I also hope the course content will give me some interesting clues for some of my pet research ideas.

What experience do you bring to this class?
This is my 6th year in the HCI research field, and I'm specifically interested in “biologically inspired mathematical models.” I want to apply some AI techniques to sketch recognition, say fuzzy logic. Any ideas, people?

What do you expect to be doing in 10 years?
(i) Doing research on anything I like (hopefully something related to my PhD), (ii) working with graduate students, (iii) teaching classes (I love teaching), (iv) applying for grants, (v) flying around to work with other researchers and to give talks on my research.

What do you think will be the next biggest technological advancement in computer science?
I expect eye tracking technology (not the biggest, though) to kick in within the next couple of months or years. At least the notebook makers are having trouble finding ways to keep their toys expensive (fingerprinting is already there, and some sort of voice/face recognition systems too), so we may get something realistic like in “Minority Report.”

What was your favorite course when you were an undergraduate?
“Remote Sensing & Geographic Information Systems (GIS)”, I ended up getting A+ for it.

What is your favorite movie and why?
“The Man from Earth”. I like movies that make you go back and check whether what they are saying is true, and that leave you amazed at how they organize and blend real-world facts into the movie; something like The Da Vinci Code and National Treasure. Maybe I'm a bit biased toward “The Man from Earth”; its producer, Eric D. Wilkinson, personally commented and thanked me on my blog post about the movie.

If you could travel back in time, who would you like to meet and why?
This is kind of a common answer, I guess, but I desperately need to thank my Dad for everything he did for me.

Give some interesting fact about yourself
Read my Bio here.