High accuracy indoor visible light positioning using long short term memory-fully connected network based algorithm

2021 ◽  
Author(s):  
Hongyao Chen ◽  
Wei Han ◽  
Jianping Wang ◽  
Huimin Lu ◽  
Danyang Chen ◽  
...  


2021 ◽  
Vol 14 (4) ◽  
pp. 2408-2418 ◽  
Author(s):  
Tonny I. Okedi ◽  
Adrian C. Fisher

LSTM networks are shown to predict the seasonal component of biophotovoltaic current density and photoresponse to high accuracy.
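The sequence-to-one forecasting this refers to can be pictured with a short PyTorch sketch; the toy periodic signal, window length, and model size below are illustrative assumptions rather than the authors' setup.

```python
# A generic sketch of LSTM forecasting of a periodic (seasonal) signal:
# predict the next value from a sliding window of past values.
# The toy sine signal, 48-step window, and hidden size are assumptions.
import torch
import torch.nn as nn

class SeasonalLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])               # next-step prediction

t = torch.arange(0, 200, 0.1)
signal = torch.sin(2 * torch.pi * t / 24)        # toy diurnal component
windows = signal[:-1].unfold(0, 48, 1)[:, :, None]  # (1952, 48, 1) past windows
targets = signal[48:, None]                         # (1952, 1) next values
pred = SeasonalLSTM()(windows[:64])              # predictions for 64 windows
print(pred.shape)                                # torch.Size([64, 1])
```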


2021 ◽  
Vol 11 (24) ◽  
pp. 12019
Author(s):  
Chia-Chun Chuang ◽  
Chien-Ching Lee ◽  
Chia-Hong Yeng ◽  
Edmund-Cheung So ◽  
Yeou-Jiunn Chen

Monitoring people's blood pressure can effectively prevent blood pressure-related diseases, so a convenient and comfortable measurement approach can greatly help patients monitor it. In this study, an attention mechanism-based convolutional long short-term memory (LSTM) neural network is proposed to estimate blood pressure easily. To keep acquisition easy and comfortable, electrocardiogram (ECG) and photoplethysmography (PPG) signals are used. To represent the characteristics of the ECG and PPG signals precisely, their time-domain and frequency-domain representations are selected as inputs to the proposed network. Convolutional neural networks (CNNs) form the first part of the network and automatically extract features; an attention mechanism in the second part identifies the meaningful features; an LSTM in the third part models the time-series characteristics; and fully connected layers integrate the preceding information to estimate blood pressure. The experimental results show that the proposed approach outperforms CNN and CNN-LSTM models and complies with the Association for the Advancement of Medical Instrumentation standard.
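As a rough illustration of the pipeline described above (CNN feature extraction, an attention stage, an LSTM, and a fully connected head over ECG/PPG inputs), a minimal PyTorch sketch might look as follows; the layer sizes, the four-channel time/frequency input layout, and the simple additive attention are assumptions, not the authors' exact configuration.

```python
# Minimal sketch: CNN -> attention -> LSTM -> fully connected regression head.
# Assumes 4 input channels (ECG and PPG, each in time and frequency domain).
import torch
import torch.nn as nn

class AttnConvLSTMRegressor(nn.Module):
    def __init__(self, in_channels=4, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.attn = nn.Linear(64, 1)            # scores each time step
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)        # systolic and diastolic BP

    def forward(self, x):                       # x: (batch, channels, samples)
        feats = self.cnn(x).transpose(1, 2)     # (batch, steps, 64)
        weights = torch.softmax(self.attn(feats), dim=1)
        feats = feats * weights                 # emphasise informative steps
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])            # BP estimate from last state

model = AttnConvLSTMRegressor()
dummy = torch.randn(8, 4, 512)                  # 8 windows of 512 samples
print(model(dummy).shape)                       # torch.Size([8, 2])
```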


2021 ◽  
Author(s):  
Santosh Kumar Yadav ◽  
Kamlesh Tiwari ◽  
Hari Mohan Pandey ◽  
Shaik Ali Akbar

Human activity recognition aims to determine the actions performed by a human in an image or video. Examples of human activity include standing, running, sitting, and sleeping. These activities may involve intricate motion patterns and undesired events such as falling. This paper proposes a novel deep convolutional long short-term memory (ConvLSTM) network for skeleton-based activity recognition and fall detection. The proposed ConvLSTM network is a sequential fusion of convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and fully connected layers. The acquisition system applies human detection and pose estimation to pre-calculate skeleton coordinates from the image/video sequence. The ConvLSTM model uses the raw skeleton coordinates along with their characteristic geometrical and kinematic features to construct novel guided features. The geometrical and kinematic features are built from the raw skeleton coordinates using relative joint positions, differences between joints, spherical joint angles between selected joints, and their angular velocities. The novel spatiotemporal guided features are obtained using a trained multi-layer CNN-LSTM combination, and a classification head of fully connected layers is subsequently applied. The proposed model has been evaluated on the KinectHAR dataset, which contains 130,000 samples with 81 attribute values each, collected with a Kinect (v2) sensor. Experimental results are compared against the performance of isolated CNNs and LSTM networks. The proposed ConvLSTM achieves an accuracy of 98.89%, better than the isolated CNNs and LSTMs at 93.89% and 92.75%, respectively. The proposed system has been tested in real time and found to be independent of pose, camera orientation, individual, clothing, etc. The code and dataset will be made publicly available.
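A hedged sketch of the CNN, LSTM, and fully connected sequence over per-frame skeleton feature vectors is given below; the 81-dimensional input matches the attribute count quoted for KinectHAR, while the class count, layer sizes, and 30-frame clips are assumptions.

```python
# Illustrative CNN -> LSTM -> fully connected classifier over per-frame
# skeleton feature vectors. Only the 81-dim feature size comes from the text.
import torch
import torch.nn as nn

class SkeletonConvLSTM(nn.Module):
    def __init__(self, feat_dim=81, n_classes=10, hidden=128):
        super().__init__()
        # 1-D convolutions over the feature axis of each frame
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.lstm = nn.LSTM(32 * 32, hidden, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                                nn.Linear(64, n_classes))

    def forward(self, x):                      # x: (batch, frames, feat_dim)
        b, t, d = x.shape
        f = self.cnn(x.reshape(b * t, 1, d))   # per-frame spatial features
        f = f.reshape(b, t, -1)                # back to a sequence
        out, _ = self.lstm(f)                  # temporal modelling
        return self.fc(out[:, -1])             # class scores per sequence

clips = torch.randn(4, 30, 81)                 # 4 clips of 30 skeleton frames
print(SkeletonConvLSTM()(clips).shape)         # torch.Size([4, 10])
```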


2019 ◽  
Vol 07 (01) ◽  
pp. 19-40
Author(s):  
Shakil Ahmed Sumon ◽  
Raihan Goni ◽  
Niyaz Bin Hashem ◽  
Tanzil Shahria ◽  
Rashedur M. Rahman

In this paper, we explore different strategies to determine how salient the features from different pretrained models are for detecting violence in videos. A dataset has been created that consists of violent and non-violent videos from different settings. Three ImageNet-pretrained models (VGG16, VGG19, and ResNet50) are used to extract features from the frames of the videos. In one experiment, the extracted features are fed into a fully connected network that detects violence at the frame level. In another experiment, the extracted features of 30 frames at a time are fed to a long short-term memory (LSTM) network. We also apply attention to the frame features through a spatial transformer network, which additionally enables transformations such as rotation, translation, and scaling. Along with these models, we design a custom convolutional neural network (CNN) as a feature extractor, as well as a model pretrained on a movie violence dataset. In the end, the features extracted from the pretrained ResNet50 prove to be the most salient for detecting violence: in combination with the LSTM, they provide an accuracy of 97.06%, better than the other models we experimented with.
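The best-performing combination reported above (pretrained ResNet50 features for 30 frames at a time, fed to an LSTM with a binary violent/non-violent head) can be sketched in PyTorch as below; the hidden size and the frozen backbone are assumptions.

```python
# Sketch: frozen ImageNet ResNet50 as a frame feature extractor,
# followed by an LSTM over 30-frame clips and a binary classifier head.
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the classifier
backbone.eval()                                           # frozen extractor

class ViolenceLSTM(nn.Module):
    def __init__(self, feat_dim=2048, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)          # violent vs. non-violent

    def forward(self, feats):                   # feats: (batch, 30, 2048)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])

frames = torch.randn(2, 30, 3, 224, 224)        # 2 clips of 30 RGB frames
with torch.no_grad():
    f = backbone(frames.flatten(0, 1)).flatten(1).reshape(2, 30, -1)
print(ViolenceLSTM()(f).shape)                  # torch.Size([2, 2])
```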


2019 ◽  
Vol 9 (17) ◽  
pp. 3470
Author(s):  
Nguyen Minh-Tuan ◽  
Yong-Hwa Kim

Many resource allocation problems in wireless communications can be modeled as a linear sum assignment problem (LSAP). Deep learning techniques such as the fully connected neural network and the convolutional neural network have been used to solve the LSAP. We herein propose a new deep learning model based on the bidirectional long short-term memory (BDLSTM) structure for the LSAP. In the proposed method, the LSAP is divided into sequential sub-assignment problems, and the BDLSTM extracts features from the sequential data. Simulation results indicate that the proposed BDLSTM is more memory efficient and achieves a higher accuracy than conventional techniques.
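One way to picture this sequential treatment of the assignment problem is the hedged sketch below, where the rows of an n × n cost matrix are read as a sequence by a bidirectional LSTM that scores a column for each row; the problem size, network width, and greedy decoding are illustrative assumptions.

```python
# Sketch: a bidirectional LSTM reads the rows of an LSAP cost matrix as a
# sequence and predicts, row by row, a column assignment.
import torch
import torch.nn as nn

class BDLSTMAssigner(nn.Module):
    def __init__(self, n=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n)     # column scores per row

    def forward(self, cost):                    # cost: (batch, n, n)
        h, _ = self.lstm(cost)                  # rows seen forward and backward
        return self.out(h)                      # (batch, n, n) assignment logits

cost = torch.rand(4, 8, 8)                      # 4 random 8x8 cost matrices
logits = BDLSTMAssigner()(cost)
assignment = logits.argmax(dim=-1)              # greedy column choice per row
print(assignment.shape)                         # torch.Size([4, 8])
```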


2021 ◽  
Author(s):  
Shane Grayson ◽  
Wilson Zhu

New parents are frequently awakened by the cries of their newborn babies, and attempts to stop these cries sometimes result in increasingly louder crying. The cries were first transformed into waveforms and then into sound spectrograms, and the efficiency and accuracy of different machine learning models were tested: a support vector machine, a two-layer neural network, and a long short-term memory model. Finally, an automatic sorter that categorizes each cry was developed. Using this method, it is possible to reduce the error and wasted time involved in trying to calm a baby. The test results demonstrate a high accuracy in determining the cause of a baby's cries. This program will enable parents to calm their crying babies in a shorter amount of time, giving them more peace of mind and perhaps allowing them to get more sleep.
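The spectrogram-to-LSTM step described above can be sketched as follows; the sample rate, the number of mel bands, and the five hypothetical cry categories are assumptions.

```python
# Sketch: waveform -> mel-spectrogram -> LSTM classifier over cry categories.
# The categories (e.g. hunger, discomfort) are hypothetical placeholders.
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

class CryClassifier(nn.Module):
    def __init__(self, n_mels=64, hidden=128, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, spec):                    # spec: (batch, n_mels, frames)
        out, _ = self.lstm(spec.transpose(1, 2))
        return self.fc(out[:, -1])

waveform = torch.randn(1, 16000 * 3)            # 3 seconds of (fake) audio
spec = mel(waveform)                            # (1, 64, frames)
print(CryClassifier()(spec).argmax(dim=-1))     # predicted cry category
```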

