Design and Implementation of Deep Learning Based Contactless Authentication System Using Hand Gestures

Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 182
Author(s):  
Aveen Dayal ◽  
Naveen Paluru ◽  
Linga Reddy Cenkeramaddi ◽  
Soumya J. ◽  
Phaneendra K. Yalavarthy

Hand-gesture-based sign language digits have several contactless applications, including communication for impaired, elderly, and disabled people, health-care applications, automotive user interfaces, and security and surveillance. This work presents the design and implementation of a complete end-to-end deep-learning-based edge computing system that can verify a user contactlessly using an ‘authentication code’. The ‘authentication code’ is an n-digit numeric code whose digits are hand gestures of sign language digits. We propose a memory-efficient deep learning model to classify the hand gestures of the sign language digits. The proposed model is built around a bottleneck module inspired by deep residual networks and achieves a classification accuracy of 99.1% on the publicly available sign language digits dataset. The model is deployed on a Raspberry Pi 4 Model B edge computing system to serve as an edge device for user verification. The edge computing system operates in two steps: it first takes input from an attached camera in real time and stores it in a buffer; in the second step, the model classifies the digit, with an inference time of 280 ms, taking the first image in the buffer as input.
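The memory efficiency of a bottleneck module comes from squeezing the channel count with 1×1 convolutions before the expensive 3×3 convolution. The abstract does not give the paper's layer widths, so the channel sizes below are illustrative assumptions; the sketch only counts parameters to show why the bottleneck form is lighter than a plain residual block.

```python
# Illustrative parameter counts for a ResNet-style bottleneck module.
# Channel widths are assumptions for illustration, not the paper's values.

def conv_params(in_ch, out_ch, k):
    """Parameters of a k x k convolution (bias terms omitted)."""
    return in_ch * out_ch * k * k

def plain_block(ch):
    """Two 3x3 convolutions, as in a basic residual block."""
    return conv_params(ch, ch, 3) + conv_params(ch, ch, 3)

def bottleneck_block(ch, reduced):
    """1x1 reduce -> 3x3 -> 1x1 expand, as in a bottleneck module."""
    return (conv_params(ch, reduced, 1)
            + conv_params(reduced, reduced, 3)
            + conv_params(reduced, ch, 1))

if __name__ == "__main__":
    ch, reduced = 256, 64
    print(plain_block(ch))                # 1179648 parameters
    print(bottleneck_block(ch, reduced))  # 69632 parameters
```

With these assumed widths the bottleneck variant needs roughly 17× fewer parameters, which is the kind of saving that makes deployment on a Raspberry Pi practical.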

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Leow Wei Qin ◽  
Muneer Ahmad ◽  
Ihsan Ali ◽  
Rafia Mumtaz ◽  
Syed Mohammad Hassan Zaidi ◽  
...  

Precise measurement is highly desirable in the current industrial revolution, where a significant rise in living standards has increased municipal solid waste. Current Industry 4.0 standards require accurate and efficient edge computing sensors for solid waste classification. If waste is not managed properly, it has an adverse impact on health, the economy, and the global environment, so all stakeholders need to realize their roles and responsibilities in solid waste generation and recycling. For recycling to succeed, waste must be correctly and efficiently separated. The performance of edge computing devices is directly tied to computational complexity in the context of nonorganic waste classification. Existing research on waste classification used CNN architectures such as AlexNet, which contains about 62,378,344 parameters and requires over 729 million floating-point operations (FLOPs) to classify a single image. As a result, it is too heavy for applications that demand low computational cost. This research proposes an enhanced lightweight deep learning model for solid waste classification built on MobileNetV2, suitable for lightweight applications including edge computing devices and other mobile applications. The proposed model outperforms existing similar models, achieving accuracies of 82.48% and 83.46% with Softmax and support vector machine (SVM) classifiers, respectively. Although MobileNetV2 may yield lower accuracy than larger, heavier CNN architectures, its accuracy remains comparable, and it is far more practical for edge computing devices and mobile applications.
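MobileNetV2's lightness relative to AlexNet-style CNNs comes largely from replacing standard convolutions with depthwise-separable ones. As a rough sketch (the feature-map and channel sizes below are assumptions, not taken from either paper), the multiply-accumulate (MAC) counts can be compared directly:

```python
# MAC counts for one layer on an H x W feature map, comparing a standard
# convolution with the depthwise-separable form used in MobileNetV2.
# All shapes below are illustrative assumptions.

def standard_conv_macs(h, w, in_ch, out_ch, k):
    """Standard convolution: every output channel sees every input channel."""
    return h * w * in_ch * out_ch * k * k

def separable_conv_macs(h, w, in_ch, out_ch, k):
    """Depthwise k x k filter per channel, then a 1x1 pointwise mix."""
    depthwise = h * w * in_ch * k * k
    pointwise = h * w * in_ch * out_ch
    return depthwise + pointwise

if __name__ == "__main__":
    std = standard_conv_macs(56, 56, 64, 128, 3)
    sep = separable_conv_macs(56, 56, 64, 128, 3)
    print(round(std / sep, 1))  # roughly 8x fewer MACs
```

The reduction factor is approximately 1 / (1/out_ch + 1/k²), which is why a 3×3 separable layer costs nearly an order of magnitude less than its standard counterpart.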


2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
M. M. Kamruzzaman

Sign language encompasses the movement of the arms and hands as a means of communication for people with hearing disabilities. An automated sign recognition system requires two main courses of action: the detection of particular features and the categorization of particular input data. In the past, many approaches for classifying and detecting sign languages have been put forward to improve system performance. However, recent progress in computer vision has motivated further exploration of hand sign/gesture recognition with the aid of deep neural networks. Arabic sign language has witnessed unprecedented research activity on recognizing hand signs and gestures using deep learning models. This paper proposes a vision-based system that applies a CNN to recognize Arabic hand-sign letters and translate them into Arabic speech. The proposed system automatically detects hand-sign letters and speaks out the result in Arabic using a deep learning model. The system achieves 90% accuracy in recognizing Arabic hand-sign letters, making it a highly dependable system. The accuracy can be further improved by using more advanced hand-gesture sensing devices such as the Leap Motion or Xbox Kinect. After the Arabic hand-sign letters are recognized, the resulting text is fed into a text-to-speech engine, which produces Arabic audio as output.
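The recognize-then-speak pipeline above has a simple two-stage shape: a classifier maps an image to a letter, and a speech engine voices it. A minimal sketch follows, with the CNN and the text-to-speech engine stubbed out; the letter list is a hypothetical subset of the alphabet, not the paper's actual class ordering.

```python
# Minimal sketch of the recognize-then-speak pipeline. The CNN and speech
# engine are stand-in stubs; ARABIC_LETTERS is an assumed class ordering
# used only for illustration.

ARABIC_LETTERS = ["ا", "ب", "ت", "ث"]  # hypothetical subset of the classes

def argmax(scores):
    """Index of the highest per-class score."""
    return max(range(len(scores)), key=scores.__getitem__)

def classify(image):
    """Stand-in for the CNN: returns per-class scores for the image."""
    return [0.05, 0.85, 0.07, 0.03]  # pretend the CNN saw the second letter

def speak(text):
    """Stand-in for the text-to-speech engine."""
    return f"audio({text})"

def recognize_and_speak(image):
    """Stage 1: classify the hand sign. Stage 2: voice the letter."""
    letter = ARABIC_LETTERS[argmax(classify(image))]
    return speak(letter)
```

In a real system `classify` would run the trained CNN and `speak` would call an Arabic TTS engine; the pipeline structure stays the same.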


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Feng Wen ◽  
Zixuan Zhang ◽  
Tianyiyi He ◽  
Chengkuo Lee

Sign language recognition, especially sentence recognition, is of great significance for lowering the communication barrier between the hearing/speech impaired and non-signers. General glove solutions, which are employed to detect motions of our dexterous hands, only recognize discrete single gestures (i.e., numbers, letters, or words) rather than sentences, far from meeting signers’ daily communication needs. Here, we propose an artificial-intelligence-enabled sign language recognition and communication system comprising sensing gloves, a deep learning block, and a virtual reality interface. Non-segmentation and segmentation-assisted deep learning models achieve recognition of 50 words and 20 sentences. Significantly, the segmentation approach splits entire sentence signals into word units; the deep learning model then recognizes all word elements and reconstructs and recognizes the sentences from them. Furthermore, new/never-seen sentences created by recombining word elements in new orders can be recognized with an average correct rate of 86.67%. Finally, the sign language recognition results are projected into virtual space and translated into text and audio, allowing remote and bidirectional communication between signers and non-signers.
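The key segmentation idea is splitting a sentence-level glove signal into word units. One common way to do this is to treat low-amplitude stretches (pauses between signs) as word boundaries. The sketch below implements that generic energy-gating approach with an assumed threshold and gap length; the paper's actual segmentation method may differ.

```python
import numpy as np

# Sketch: split a sentence-level sensor signal into word units wherever
# the amplitude stays below a threshold for a sustained pause. Threshold
# and minimum gap length are illustrative assumptions.

def segment_words(signal, threshold=0.1, min_gap=5):
    """Return (start, end) index pairs of active (word) regions."""
    active = np.abs(signal) > threshold
    segments, start, gap = [], None, 0
    for i, a in enumerate(active):
        if a:
            if start is None:
                start = i       # a new word begins
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:  # pause long enough: close the word
                segments.append((start, i - gap + 1))
                start = None
    if start is not None:       # signal ended mid-word
        segments.append((start, len(signal)))
    return segments
```

For example, a signal with two active bursts separated by a 10-sample pause yields two word segments, each of which would then be passed to the word-level classifier before sentence reconstruction.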


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1064
Author(s):  
I Nyoman Kusuma Wardana ◽  
Julian W. Gardner ◽  
Suhaib A. Fahmy

Accurate air quality monitoring requires processing of multi-dimensional, multi-location sensor data, which has previously been handled by centralised machine learning models that are often unsuitable for resource-constrained edge devices. In this article, we address this challenge by: (1) designing a novel hybrid deep learning model for hourly PM2.5 pollutant prediction; (2) optimising the obtained model for edge devices; and (3) examining model performance on the edge devices in terms of both accuracy and latency. The hybrid deep learning model comprises a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to predict hourly PM2.5 concentration. The results show that our proposed model outperforms other deep learning models, as evaluated by RMSE and MAE. The model was optimised for two edge devices, the Raspberry Pi 3 Model B+ (RPi3B+) and the Raspberry Pi 4 Model B (RPi4B). Optimisation reduced the file size to a quarter of the original, with further reduction achieved by applying different post-training quantisation modes. In total, 8272 hourly samples were continuously fed to the edge devices, with the RPi4B executing the model twice as fast as the RPi3B+ in all quantisation modes. Full-integer quantisation produced the lowest execution time, with latencies of 2.19 s and 4.73 s for the RPi4B and RPi3B+, respectively.
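The 4× size reduction from full-integer quantisation follows directly from storing weights as int8 instead of float32. The sketch below shows the generic affine int8 scheme (a scale and zero point per tensor, as in TFLite-style post-training quantisation); it is a simplified illustration, not the paper's exact tooling, and assumes the weight tensor is not constant.

```python
import numpy as np

# Sketch of post-training full-integer quantisation of one weight tensor:
# an affine map real -> int8 with a per-tensor scale and zero point.

def quantize_int8(w):
    """Map a float32 tensor onto the int8 range [-128, 127]."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0           # assumes hi > lo
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 tensor."""
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    w = np.random.uniform(-1.0, 1.0, size=(64, 64)).astype(np.float32)
    q, scale, zp = quantize_int8(w)
    # int8 storage is one quarter of float32, matching the ~4x size drop
    print(w.nbytes / q.nbytes)  # 4.0
```

The quantisation error per weight is bounded by about half the scale, which is why accuracy typically survives the conversion while file size and latency both drop.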


2021 ◽  
Vol 21 (9) ◽  
pp. 10445-10453
Author(s):  
Daniel S. Breland ◽  
Simen B. Skriubakken ◽  
Aveen Dayal ◽  
Ajit Jha ◽  
Phaneendra K. Yalavarthy ◽  
...  

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 27267-27276 ◽  
Author(s):  
Endah Kristiani ◽  
Chao-Tung Yang ◽  
Chin-Yin Huang
