HCI Using Leap Motion Camera
Deepakkumar S. Shukla, Prathamesh P. Jagtap, Kanchan T. Thokal, Pranita A. Murkute, "HCI Using Leap Motion Camera". International Journal of Computer Trends and Technology (IJCTT) V43(3):156-159, January 2017. ISSN:2231-2803. www.ijcttjournal.org. Published by Seventh Sense Research Group.
Abstract -
In this paper, we describe gesture-based control using the Leap Motion camera. A simple and effective application of Leap Motion technology yields greater efficiency. The Leap Motion camera recognizes hand gestures formed from different combinations of finger and palm motion. Previously, this functionality was achieved using a Google 3D camera, which was not only expensive but also inaccurate. In this project we exploit the full potential of the Leap Motion camera through three applications: controlling the mouse, controlling a game, and driving a buggy robot. The general architecture requires a PC with at least 1 GB of RAM running Windows. To control the game, the mouse, and the buggy we use the Java Robot API; the buggy's remote code is written in embedded C. The buggy consists of a device controller, heat sink, transformer, motor, and a device driver connected over an RS-232 serial link. The recognition algorithm combines Euclidean distance and cosine similarity, which together identify and execute the gestures captured by the Leap Motion device. There are twelve element inputs in total that can be produced using the fingers and the palm, and the set of gesture combinations can be extended as required. Whereas the Google 3D camera depends on colored caps worn over the fingertips, the Leap Motion camera is unaffected by their absence. Gesture-recognition accuracy with the Leap Motion camera rises to roughly 80 percent. The Google 3D camera costs approximately 22,000, while the Leap Motion camera costs about 3,000, making it far more cost-efficient. Earlier systems recognized gestures using colored caps on the fingertips, which made input capture difficult, and even after capture the processing required extra time, delaying the response. The Leap Motion camera needs no colored caps and, being a real-time input/output system, processes gestures quickly.
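The abstract names Euclidean distance and cosine similarity as the gesture-matching algorithm. The sketch below, in Java (the language of the paper's Robot-API side), shows one plausible way such a matcher could work: each gesture is a numeric feature vector (e.g. fingertip and palm coordinates taken from a Leap Motion frame) compared against stored template vectors. The class and method names are illustrative assumptions, not taken from the paper.

```java
// Hypothetical gesture matcher combining the two measures the paper names.
// Feature vectors are assumed to be equal-length arrays of coordinates
// extracted from a Leap Motion frame; templates are pre-recorded gestures.
public class GestureMatcher {

    // Euclidean distance between two feature vectors of equal length.
    public static double euclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    public static double cosine(double[] a, double[] b) {
        double dot = 0.0, normA = 0.0, normB = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Return the index of the template most similar to the input vector,
    // using cosine similarity as the primary score.
    public static int bestMatch(double[] input, double[][] templates) {
        int best = -1;
        double bestScore = -2.0; // cosine similarity is always >= -1
        for (int i = 0; i < templates.length; i++) {
            double score = cosine(input, templates[i]);
            if (score > bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }
}
```

Once `bestMatch` identifies a gesture index, the corresponding action (a `java.awt.Robot` mouse event, a game keystroke, or a serial command to the buggy over RS-232) would be dispatched; the Euclidean distance can additionally serve as a rejection threshold so that vectors far from every template trigger no action.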
Keywords
3D Hand Gesture, HCI, Embedded, Leap Motion.