A new year has just started, and you may be wondering what input technology will eventually replace the resistive and capacitive touchscreens used to control mobile device interfaces. While we have been hearing about the ClearPad 3000 multi-touch technology that can track up to ten fingers simultaneously on tablet PCs, a group of researchers at the University of Tokyo has just demonstrated an early prototype that translates human gestures into computer commands without the user ever physically touching the interface.
The technology, named the “vision-based input interface”, is built around a front-facing camera sensor module that captures images, which a sophisticated analysis algorithm then converts into control signals. To capture finger movement precisely, it uses an advanced camera that scans at an extremely high speed of around 154 fps (frames per second), fast enough to follow hand gestures without dropping any motion and to make the control method a much more pleasant one. Just imagine: users would no longer need to touch the tiny LCD screen at all. Instead, they could draw signs in the air, and those gestures would be recognized and mapped to the corresponding actions, so that web browsing, text-message typing, and phone calling could all be done accurately.
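The article does not describe the team’s actual recognition algorithm, but the overall idea, turning a high-speed stream of per-frame fingertip positions into discrete commands, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the swipe classifier, the pixel threshold, and the gesture-to-action map are all invented; only the 154 fps figure comes from the article.

```python
# Toy gesture pipeline sketch. NOT the researchers' algorithm: the
# classifier, thresholds, and command map below are invented examples.

FRAME_RATE = 154  # frames per second, the capture rate quoted in the article

# Hypothetical mapping from a recognized gesture to a device action.
COMMANDS = {
    "swipe_right": "next web page",
    "swipe_left": "previous web page",
    "swipe_up": "open text-message keyboard",
    "swipe_down": "start phone call",
}

def classify(track, min_pixels=40):
    """Classify a sequence of (x, y) fingertip positions as a swipe.

    `track` stands in for the per-frame fingertip detections that a real
    vision system would extract from the camera stream at 154 fps.
    """
    if len(track) < 2:
        return None
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if max(abs(dx), abs(dy)) < min_pixels:
        return None  # movement too small to count as a deliberate gesture
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy < 0 else "swipe_down"  # screen y grows downward

# Simulated fingertip track: 15 frames of rightward motion,
# i.e. about 0.1 s of movement at the quoted frame rate.
track = [(10 + 6 * i, 50) for i in range(15)]
duration_s = len(track) / FRAME_RATE
gesture = classify(track)
print(f"{gesture} in {duration_s:.2f}s -> {COMMANDS.get(gesture)}")
```

At 154 fps even a quick flick of the finger spans a dozen or more frames, which is why such a high capture rate makes this kind of trajectory-based classification feasible.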
There is no exact timeline yet for when this will reach actual end products, and there may be technical challenges in shrinking the module to fit a conventional mobile device without significantly increasing its cost. Nevertheless, a working prototype is a genuine milestone for the team, led by Professor Masatoshi and Dr. Takashi, which is targeting future handheld products.