Abstract:
Hand gesture recognition has become a popular topic in deep learning: it opens
many application fields for bridging the human-computer barrier and has a
positive impact on our daily life. The primary idea of our project is to
acquire static gestures from a depth camera and to process the input images in
order to train a deep convolutional neural network pre-trained on the ImageNet
dataset. The proposed system consists of a gesture capture device (Intel®
RealSense™ depth camera D435), pre-processing and image segmentation
algorithms, a feature extraction algorithm, and an object classifier. The
pre-processing and image segmentation algorithms use computer vision methods
from the OpenCV and Intel RealSense libraries. The subsystem for feature
extraction and gesture classification is based on a modified VGG-16
implemented with the TensorFlow and Keras deep learning frameworks. The
performance of the static gesture recognition system is evaluated using
machine learning metrics. Experimental results show that the proposed model,
trained on a database of 2000 images, provides high recognition accuracy at
both the training and testing stages.
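The classification subsystem named in the abstract, a modified VGG-16 pre-trained on ImageNet, can be sketched as a standard Keras transfer-learning setup. This is a minimal illustrative sketch, not the authors' exact architecture: the head layers (Flatten, a 256-unit Dense layer, Dropout) and the number of gesture classes are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes, input_shape=(224, 224, 3), weights="imagenet"):
    # VGG-16 convolutional base pre-trained on ImageNet; the original fully
    # connected top is dropped and replaced with a small classification head.
    base = tf.keras.applications.VGG16(
        weights=weights, include_top=False, input_shape=input_shape
    )
    base.trainable = False  # freeze the pre-trained feature extractor

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # assumed head size
        layers.Dropout(0.5),                    # assumed regularization
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In a setup like this, only the new head is trained on the gesture images while the ImageNet features are reused, which is what makes training feasible on a dataset of roughly 2000 images.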