Scientists Develop AI System For High Precision Hand Gesture Recognition


Artificial Intelligence is transforming every walk of life, letting people rethink how to analyze data, integrate information and use the resulting insights to improve decision making.

Scientists from Nanyang Technological University (NTU), Singapore have developed an AI system that recognizes hand gestures by combining skin-like electronics with computer vision. This AI-powered gesture recognition system was initially visual-only and was later improved by integrating inputs from wearable sensors. The wearable sensors recreate the skin's sensing capability, known as somatosensation.

The main obstacle to precise gesture recognition is the low quality of data arriving from wearable sensors, caused by their bulkiness, poor lighting, visually blocked objects, and reduced contact with the user. Further challenges arise from integrating visual and sensory data, as they form incompatible datasets that must be processed separately and merged at the end, which is inefficient and leads to slower response times. The NTU team developed a bio-inspired data fusion system that uses skin-like stretchable strain sensors made from single-walled carbon nanotubes, together with an artificial intelligence approach that mimics the way skin sensing and vision are processed together in the brain.

This system can recognize human gestures more precisely and efficiently than existing methods. The bio-inspired AI system merges three neural network approaches into one: a convolutional neural network, a machine learning technique for early visual processing; a multilayer neural network for fast somatosensory information processing; and a sparse neural network that fuses the somatosensory and visual information. The team also created a transparent, stretchable strain sensor that adheres to the skin yet is not visible in camera images.
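The paper's exact architecture is not given here, so the following is only a minimal sketch of the three-stage idea: a convolutional stage for the camera frame, a small multilayer network for the strain-sensor signal, and a fusion layer whose weights are mostly zeroed to mimic sparse neural coding. All function names (`visual_branch`, `tactile_branch`, `sparse_fusion`), layer sizes, and random weights are hypothetical.

```python
# Illustrative sketch only; names, shapes, and weights are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Minimal 'valid' 2-D convolution, standing in for early CNN visual processing."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def visual_branch(image):
    # CNN stage: one convolution + ReLU, flattened to a feature vector.
    feat = np.maximum(conv2d(image, rng.standard_normal((3, 3))), 0)
    return feat.ravel()

def tactile_branch(strain_signal, weights):
    # Multilayer (here: one hidden layer) network for fast somatosensory processing.
    return np.tanh(strain_signal @ weights)

def sparse_fusion(v_feat, t_feat, w_fused, sparsity=0.8):
    # Sparse network: most fusion weights are masked out, mimicking sparse coding.
    mask = rng.random(w_fused.shape) > sparsity
    fused_in = np.concatenate([v_feat, t_feat])
    return fused_in @ (w_fused * mask)

# Toy inputs: an 8x8 camera frame and a 16-sample strain-sensor reading.
image = rng.standard_normal((8, 8))
strain = rng.standard_normal(16)

v = visual_branch(image)                                   # 36 visual features (6x6 map)
t = tactile_branch(strain, rng.standard_normal((16, 8)))   # 8 tactile features
gesture_logits = sparse_fusion(v, t, rng.standard_normal((44, 5)))  # 5 gesture classes
print(gesture_logits.shape)
```

The design point this toy mirrors is that the two modalities are processed by separate, modality-appropriate networks and only merged at a sparse fusion stage, rather than being concatenated raw and processed in a single monolithic model.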

The team tested their bio-inspired AI system by using hand gestures to guide a robot through a maze under poor environmental conditions, achieving high precision and accuracy: the system guided the robot through the maze with zero errors. The NTU researchers are now developing an augmented- and virtual-reality system based on this AI system, for use in areas where high-precision recognition and control are needed. These technologies have wide-ranging potential applications in the marketplace.
