A deep-learning real-time visual SLAM system based on multi-task feature extraction network and self-supervised feature points

Measurement ◽  
2021 ◽  
Vol 168 ◽  
pp. 108403
Author(s):  
Guangqiang Li ◽  
Lei Yu ◽  
Shumin Fei
2020 ◽  
Vol 39 (4) ◽  
pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes shortcomings of traditional online and offline teaching, but real-time feature extraction of teachers and students remains deficient. In view of this, this study applies particle swarm image recognition and deep learning to process intelligent-classroom video, extracting classroom task features in real time and delivering them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved multi-swarm strategy is proposed: the algorithm is combined with useful attributes of other algorithms to increase particle diversity, strengthen global search ability, and achieve effective feature extraction. The results indicate that the proposed method is practically effective and can serve as a theoretical reference for subsequent related research.
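The diversity-enhancing idea in the abstract can be sketched in a few lines. This is a minimal illustrative PSO, not the paper's algorithm: the "relocate the worst particle" step stands in for hybridizing PSO with other algorithms, and every hyperparameter here is an assumption.

```python
import numpy as np

def hybrid_pso(fitness, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal PSO sketch with a crude diversity step against premature
    convergence. Hyperparameters are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        # Diversity step: relocate the particle whose personal best is
        # worst to a fresh random point, re-injecting exploration.
        pos[pbest_val.argmax()] = rng.uniform(-5.0, 5.0, dim)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best, val = hybrid_pso(lambda x: float(np.sum(x ** 2)))  # sphere benchmark
```

On the 2-D sphere benchmark the swarm converges close to the origin; the relocation step trades a little per-iteration progress for a lower risk of the whole swarm stalling at a local minimum.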


2021 ◽  
Vol 60 (08) ◽  
Author(s):  
Guangqiang Li ◽  
Junyi Hou ◽  
Zhong Chen ◽  
Lei Yu ◽  
Shumin Fei

2021 ◽  
Author(s):  
Menaa Nawaz ◽  
Jameel Ahmed

Abstract: Physiological signals carry information from sensors implanted in or attached to the human body. These signals are vital data sources that can help predict disease well in advance, so that proper treatment becomes possible. With the addition of the Internet of Things (IoT) to healthcare, real-time data collection and preprocessing for signal analysis have reduced the burden of in-person appointments and of decision making on healthcare providers. Recently, researchers have implemented deep learning-based algorithms for the recognition, characterization and prediction of diseases by extracting and analyzing important features. In this research, real-time 1-D time-series data from on-body noninvasive biomedical sensors were acquired, preprocessed and analyzed for anomaly detection. Feature-engineered parameters from large and diverse datasets were used for training to make the anomaly detection system more reliable. For comprehensive real-time monitoring, the implemented system uses wavelet time scattering features for classification and a deep learning-based autoencoder for anomaly detection of time-series signals, assisting the clinical diagnosis of cardiovascular and muscular activity. This paper presents an IoT-based AI-edge healthcare framework using biomedical sensors, and analyzes cloud data acquired through those sensors for anomaly detection; time-series classification for real-time disease prognosis was performed by implementing 24 AI-based techniques to identify the most accurate one for real-time raw signals. A deep learning LSTM classifier built on wavelet time scattering features achieved a classification test accuracy of 100%, and wavelet time scattering feature extraction achieved a 95% signal reduction, increasing real-time processing speed. In real-time signal anomaly detection, 98% accuracy was achieved using LSTM autoencoders, with an average mean absolute error loss of 0.0072 for normal signals and 0.078 for anomalous signals.
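The autoencoder anomaly-detection recipe the abstract describes (train a reconstructor on normal signals, flag windows whose reconstruction error exceeds a threshold) can be sketched without an LSTM. Below, a linear (PCA) autoencoder stands in for the paper's LSTM autoencoder, and the mean + 3·std threshold is a common convention, not necessarily the paper's choice.

```python
import numpy as np

def fit_pca_autoencoder(X, k=3):
    """Stand-in for an LSTM autoencoder: project signal windows onto the
    top-k principal components and reconstruct. Returns a reconstruction
    function and an anomaly threshold fit on the training (normal) data."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    comps = vt[:k]
    def reconstruct(Z):
        return (Z - mu) @ comps.T @ comps + mu
    err = np.abs(X - reconstruct(X)).mean(axis=1)   # per-window MAE
    return reconstruct, err.mean() + 3.0 * err.std()

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 50)
# "Normal" windows: noisy sinusoids of random phase (a toy ECG-like cycle).
normal = np.stack([np.sin(t + p) for p in rng.uniform(0, 2 * np.pi, 200)])
recon, thresh = fit_pca_autoencoder(normal + 0.05 * rng.standard_normal(normal.shape))
# Anomalous window: the same cycle with an injected spike.
anomaly = np.sin(t) + 1.5 * (np.abs(t - np.pi) < 0.5)
score = np.abs(anomaly - recon(anomaly[None]))[0].mean()
is_anomaly = score > thresh
```

Normal windows reconstruct with small error because they lie near a low-dimensional subspace; the spike cannot be represented there, so its reconstruction error clears the threshold.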


2018 ◽  
Author(s):  
Rajdeep Pal ◽  
Ranjana Seshadri ◽  
Swarnashree Mysore Sathyendra ◽  
Natarajan S

This paper presents an approach for detecting headgear in real-time video footage. The approach is feature extraction followed by classification; to obtain the specific features required for this problem, a pre-trained deep learning model is used.
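The two-stage pattern (frozen pretrained feature extractor, then a light classifier) can be sketched as follows. The fixed random projection below is a hypothetical stand-in for the paper's pre-trained deep model, and nearest-centroid is an illustrative classifier, not necessarily the one the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))  # stand-in for frozen pretrained weights

def extract(frames):
    """Hypothetical frozen feature extractor: fixed projection + ReLU,
    standing in for a pre-trained deep model's penultimate layer."""
    return np.maximum(frames @ W, 0.0)

def fit_centroids(X, y):
    # Classification stage: one centroid per class in feature space.
    return {c: extract(X[y == c]).mean(axis=0) for c in np.unique(y)}

def predict(centroids, frames):
    F = extract(frames)
    labels = list(centroids)
    d = np.stack([np.linalg.norm(F - centroids[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

# Toy data: 64-dim "frame descriptors" for no-headgear (0) / headgear (1).
X0 = rng.standard_normal((40, 64))
X1 = rng.standard_normal((40, 64)) + 2.0
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 40)
acc = float((predict(fit_centroids(X, y), X) == y).mean())
```

Because the extractor is frozen, only the tiny classification stage needs fitting, which is what makes this pattern attractive for real-time video.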


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Yuanhui Yu

Traditional digital image processing has limitations: it requires hand-designed features, which consumes human and material resources, it handles only a single crop type, and its results are poor. Finding an efficient, fast, real-time disease-image recognition method is therefore very meaningful. Deep learning is a machine learning approach that automatically learns representative features and achieves better results in image recognition. The purpose of this paper is to use deep learning to identify crop pests and diseases and to find efficient, fast, real-time disease-image recognition methods. Deep learning is a recently developed discipline whose aim is to learn multiple feature representations directly from data samples: through a data-driven series of nonlinear transformations, raw data are mapped from specific to abstract, from general to specific semantics, and from low-level to high-level features. This paper analyzes classical and recent neural network architectures in light of deep learning theory. Because networks designed for natural-image classification are not well suited to crop pest and disease identification, we improve the network structure to balance recognition speed and recognition accuracy, and we examine the influence of the pest-and-disease feature extraction layer on recognition performance. After comparing the advantages and disadvantages of the inner-product and global average pooling layers, we adopt the inner-product layer as the main structure of the feature extraction layer. We analyze various loss functions for pest identification, such as Softmax Loss, Center Loss, and Angular Softmax Loss. To address the difficulty of training and converging these loss functions, we improve the loss function so that intra-class distances become smaller and inter-class distances larger, and we introduce techniques such as feature normalization and weight normalization. The experimental results show that the method effectively enhances the feature-expression ability for pests and diseases, thereby improving the recognition rate; it also makes training the pest-identification network simpler.
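The feature/weight normalization idea mentioned above can be illustrated with a cosine-softmax loss: L2-normalize both the features and the class-weight columns so the logits become scaled cosine similarities, then apply the usual cross-entropy. This is a generic sketch of the technique, not the paper's exact loss; the scale s is an assumed value.

```python
import numpy as np

def normalized_softmax_loss(feats, W, labels, s=16.0):
    """Cross-entropy over scaled cosine similarities. Normalizing feats
    and weight columns removes magnitude effects, so training must pull
    same-class features together on the unit sphere (smaller intra-class
    angles) and push class weights apart (larger inter-class angles)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    w = W / np.linalg.norm(W, axis=0, keepdims=True)
    logits = s * (f @ w)                          # scaled cosines
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return float(-np.log(p[np.arange(len(labels)), labels]).mean())

# Toy check: features aligned with their class weight give low loss,
# features pointing at the wrong class give high loss.
W = np.eye(3)                           # 3 class-weight columns in 3-D
labels = np.array([0, 1, 2])
aligned = 5.0 * np.eye(3)
shuffled = 5.0 * np.eye(3)[[1, 2, 0]]   # each feature aims at a wrong class
loss_good = normalized_softmax_loss(aligned, W, labels)
loss_bad = normalized_softmax_loss(shuffled, W, labels)
```

Note that scaling a feature by any positive constant leaves the loss unchanged, which is exactly the property feature normalization buys.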


2021 ◽  
Author(s):  
Diu Khue Luu ◽  
Anh Tuan Nguyen ◽  
Ming Jiang ◽  
Jian Xu ◽  
Markus W. Drealan ◽  
...  

Abstract: The ultimate goal of an upper-limb neuroprosthesis is dexterous, intuitive control of individual fingers. Previous literature shows that deep learning (DL) is an effective tool for decoding motor intent from neural signals obtained from different parts of the nervous system. However, it still requires complicated deep neural networks that are inefficient and infeasible to run in real time. Here we investigate approaches to improve the efficiency of the DL-based motor-decoding paradigm. First, a comprehensive collection of feature extraction techniques is applied to reduce the dimensionality of the input data. Next, we investigate two strategies for deploying DL models: a one-step (1S) approach when large input data are available and a two-step (2S) approach when input data are limited. In the 1S approach, a single regression stage predicts the trajectories of all fingers. In the 2S approach, a classification stage identifies the fingers in motion, followed by a regression stage that predicts the trajectories of those active digits. The addition of feature extraction substantially lowers the motor decoder's complexity, making it feasible to translate to a real-time paradigm. The 1S approach using a recurrent neural network (RNN) generally gives better predictions than all the classic machine learning (ML) algorithms, with mean squared errors (MSE) ranging from 10⁻³ to 10⁻⁴ for all fingers and variance-accounted-for (VAF) scores above 0.8 for most degrees of freedom (DOF). This reaffirms that DL is more advantageous than classic ML methods for handling large datasets. However, when training on the smaller input dataset of the 2S approach, ML techniques offer a simpler implementation while ensuring decoding outcomes comparable to DL. In the classification step, both ML and DL models achieve an accuracy and F1 score of 0.99; thanks to this step, both types of models achieve MSE and VAF scores in the regression step comparable to those of the 1S approach. Our study outlines the trade-offs that inform the implementation of real-time, low-latency, high-accuracy DL-based motor decoders for clinical applications.
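The regression stage and the VAF metric can be sketched concretely. A linear ridge decoder stands in below for the paper's RNN or ML regressors; the data, dimensions, and regularizer are all illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, Y, lam=1e-2):
    """Regression-stage sketch: linear ridge decoder mapping extracted
    neural features to finger trajectories (stand-in for the RNN)."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)

def vaf(Y, Yhat):
    """Variance accounted for, per output: 1 - var(residual) / var(Y)."""
    return 1.0 - np.var(Y - Yhat, axis=0) / np.var(Y, axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))          # 20 extracted neural features
W_true = rng.standard_normal((20, 5))       # hidden map to 5 trajectories
Y = X @ W_true + 0.1 * rng.standard_normal((500, 5))  # 5 finger outputs
W = ridge_fit(X, Y)
scores = vaf(Y, X @ W)                      # one VAF score per finger
```

On this synthetic, nearly linear problem the VAF per finger is close to 1; real neural data are far messier, which is where the RNN's nonlinearity earns its complexity in the 1S setting.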


2019 ◽  
Vol 8 (2) ◽  
pp. 6326-6333

Indian Sign Language is the language of communication among the deaf community of India, and hand gestures are its most widely used form. Real-time classification of different signs is challenging because of variation in the shape and position of the hands and in the background, which varies from person to person. Since no dataset resembling Indian signs appears to be available to researchers, we designed our own: 1000 signs for the sign digits 1 to 10, collected from 100 different people under varying background conditions obtained by changing colour and light-illumination situations. The dataset comprises signs from both left-handed and right-handed people. Feature extraction methodologies are studied and applied to sign-language recognition. This paper focuses on a deep learning CNN (convolutional neural network) approach, using the pretrained AlexNet model to compute the feature vector; a multiclass SVM (Support Vector Machine) is then applied to classify Indian Sign Language in real-time surroundings. The paper also presents a comparative analysis of the deep learning feature extraction method against histogram of oriented gradients, bag of features, and Speeded-Up Robust Features. The experimental results show that deep learning feature extraction using the pretrained AlexNet model gives an accuracy of around 85% and above for recognition of signed digits, using a 60% training set and a 40% testing set.
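The classifier stage of this pipeline, a multiclass SVM over pretrained CNN features, can be sketched as one-vs-rest linear SVMs trained by hinge-loss subgradient descent. The synthetic "AlexNet-like" feature vectors, hyperparameters, and the OvR formulation below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def train_ovr_svm(X, y, n_classes, lr=0.01, lam=1e-3, epochs=50, seed=0):
    """One-vs-rest linear SVM: for each class, a +1/-1 hinge-loss problem
    against all other classes, trained jointly by subgradient descent."""
    rng = np.random.default_rng(seed)
    W = np.zeros((X.shape[1], n_classes))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t = -np.ones(n_classes)
            t[y[i]] = 1.0                       # +-1 target per binary SVM
            active = t * (X[i] @ W) < 1.0       # hinge is active
            W *= (1.0 - lr * lam)               # L2 shrinkage
            W[:, active] += lr * t[active] * X[i][:, None]
    return W

# Toy stand-in for CNN feature vectors of 3 sign classes.
rng = np.random.default_rng(1)
centers = 3.0 * rng.standard_normal((3, 10))
feats = np.vstack([centers[c] + rng.standard_normal((30, 10)) for c in range(3)])
labels = np.repeat(np.arange(3), 30)
W = train_ovr_svm(feats, labels, n_classes=3)
acc = float(((feats @ W).argmax(axis=1) == labels).mean())
```

Prediction takes the class whose binary SVM scores highest, which is why a set of "multiple SVMs" behaves as a single multiclass classifier at inference time.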

