General Seismic Wave and Phase Detection Software Driven by Deep Learning

2021 ◽  
pp. 100029
Author(s):  
Ming Zhao ◽  
Jiahui Ma ◽  
Hao Chang ◽  
Shi Chen
Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1798 ◽  
Author(s):  
Cristina Jácome ◽  
Johan Ravn ◽  
Einar Holsbø ◽  
Juan Carlos Aviles-Solis ◽  
Hasse Melbye ◽  
...  

We applied deep learning to create an algorithm for breathing phase detection in lung sound recordings, and we compared the breathing phases detected by the algorithm with those manually annotated by two experienced lung sound researchers. Our algorithm uses a convolutional neural network with spectrograms as the features, removing the need to specify features explicitly. We trained and evaluated the algorithm on three data subsets larger than those previously reported in the literature. We evaluated performance in two ways. First, a discrete count of agreed breathing phases (using 50% overlap between a pair of boxes) shows a mean agreement with the lung sound experts of 97% for inspiration and 87% for expiration. Second, the fraction of time in agreement (in seconds) gives higher pseudo-kappa values for inspiration (0.73–0.88) than for expiration (0.63–0.84), with an average sensitivity of 97% and an average specificity of 84%. Under both evaluation methods, the agreement between the annotators and the algorithm indicates human-level performance. The developed algorithm is valid for detecting breathing phases in lung sound recordings.
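
As a rough illustration of the spectrogram-plus-CNN approach described in this abstract, a minimal sketch in PyTorch follows. The 128x128 input resolution, layer widths, and two-class output (inspiration vs. expiration) are assumptions made for illustration, not the authors' actual architecture.

```python
# Minimal sketch of a spectrogram-based CNN for breathing-phase detection.
# The 128x128 log-spectrogram input, layer sizes, and the two classes
# (inspiration / expiration) are illustrative assumptions.
import torch
import torch.nn as nn

class BreathPhaseCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, 128, 128) log-spectrogram patches
        return self.classifier(self.features(spec))

# Example forward pass on dummy spectrogram patches.
model = BreathPhaseCNN()
dummy = torch.randn(4, 1, 128, 128)
logits = model(dummy)   # (4, 2) scores for inspiration / expiration
```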


2019 ◽  
Vol 293 ◽  
pp. 106261 ◽  
Author(s):  
Lijun Zhu ◽  
Zhigang Peng ◽  
James McClellan ◽  
Chenyu Li ◽  
Dongdong Yao ◽  
...  

Author(s):  
Tao Zhen ◽  
Lei Yan ◽  
Jian-lei Kong

Human gait phase recognition is an important technology in the field of exoskeleton robot control and medical rehabilitation. Inertial sensors with accelerometers and gyroscopes are easy to wear, inexpensive, and have great potential for analyzing gait dynamics. However, current deep-learning methods extract spatial and temporal features in isolation, ignoring their inherent correlation in high-dimensional space, which limits the accuracy of a single model. This paper proposes an effective hybrid deep-learning framework based on the fusion of multiple spatiotemporal networks (FMS-Net) to detect different gait phases from IMU signals. Specifically, it first uses a gait-information acquisition system to collect data from IMU sensors fixed on the lower leg. After data preprocessing, the framework constructs a spatial feature extractor with a CNN module and a temporal feature extractor with an LSTM module. Finally, a skip-connection structure and a two-layer fully connected fusion module produce the final gait recognition. Experimental results show that this method achieves better identification accuracy than comparable methods, with a macro-F1 of 96.7%.
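
A minimal sketch of the kind of hybrid CNN + LSTM gait-phase classifier described in this abstract is given below. The window length (128 samples x 6 IMU channels), channel counts, the pooled skip connection, and the four gait-phase classes are illustrative assumptions rather than the FMS-Net design itself.

```python
# Illustrative hybrid CNN + LSTM gait-phase classifier for IMU windows.
# Channel counts, window length, the skip connection, and the number of
# gait phases are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class HybridGaitNet(nn.Module):
    def __init__(self, n_channels: int = 6, n_phases: int = 4):
        super().__init__()
        # Spatial feature extractor: 1-D convolutions over the time axis.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Temporal feature extractor: LSTM over the CNN feature sequence.
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        # Two-layer fully connected fusion head; the skip connection
        # concatenates pooled CNN features with the final LSTM state.
        self.fusion = nn.Sequential(
            nn.Linear(64 + 64, 64), nn.ReLU(),
            nn.Linear(64, n_phases),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, time)
        feat = self.cnn(x)                  # (batch, 64, time)
        skip = feat.mean(dim=2)             # pooled CNN features, (batch, 64)
        seq = feat.transpose(1, 2)          # (batch, time, 64) for the LSTM
        _, (h_n, _) = self.lstm(seq)
        fused = torch.cat([h_n[-1], skip], dim=1)
        return self.fusion(fused)           # (batch, n_phases) phase logits

model = HybridGaitNet()
window = torch.randn(8, 6, 128)             # 8 IMU windows, 6 axes, 128 samples
print(model(window).shape)                  # torch.Size([8, 4])
```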


2018 ◽  
Vol 108 (5A) ◽  
pp. 2894-2901 ◽  
Author(s):  
Zachary E. Ross ◽  
Men‐Andrin Meier ◽  
Egill Hauksson ◽  
Thomas H. Heaton

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Tao Zhen ◽  
Jian-lei Kong ◽  
Lei Yan

Human gait phase detection is an important technology for robotic exoskeleton control and exercise rehabilitation therapy. Inertial measurement units (IMUs) with accelerometers and gyroscopes are convenient and inexpensive for collecting gait data and are often used to analyze gait dynamics in personal daily applications. However, current deep-learning methods extract spatial and temporal features in isolation and can easily miss correlations that exist in the high-dimensional space, which limits the recognition performance of a single model. In this study, an effective hybrid deep-learning framework based on Gaussian probability fusion of multiple spatiotemporal networks (GFM-Net) is proposed to detect different gait phases from multisource IMU signals. It first employs a gait-information acquisition system to collect data from IMU sensors fixed on the lower limb. After data preprocessing, the framework constructs a spatial feature extractor with AutoEncoder and CNN modules and a multistream temporal feature extractor with three parallel branches combining RNN, LSTM, and GRU modules. Finally, a novel Gaussian probability fusion module, optimized with the Expectation-Maximization (EM) algorithm, integrates the feature maps output by the three submodels to realize gait recognition. The proposed framework runs gradient backpropagation as an inner loop within an outer EM loop, optimizing the entire network. Experiments show that this method achieves better gait classification performance, with accuracy exceeding 96.7%.
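
To make the EM-based fusion idea concrete, the sketch below estimates scalar mixture weights for several submodels' class-probability outputs on held-out data. This is a deliberately simplified stand-in (a mixture-weight EM over fixed submodel outputs), not the paper's actual Gaussian probability fusion module; the three-submodel setup, four gait phases, and function name are assumptions.

```python
# Simplified, illustrative EM-based fusion of several submodels'
# class-probability outputs (e.g. RNN / LSTM / GRU streams). It estimates
# only scalar mixture weights per submodel; it is NOT the paper's exact
# Gaussian probability fusion module.
import numpy as np

def em_fusion_weights(probs: np.ndarray, labels: np.ndarray,
                      n_iter: int = 50) -> np.ndarray:
    """probs: (n_models, n_samples, n_classes) softmax outputs,
    labels: (n_samples,) integer gait-phase labels.
    Returns mixture weights that sum to 1."""
    n_models, n_samples, _ = probs.shape
    # Likelihood of each sample's true label under each submodel.
    lik = probs[:, np.arange(n_samples), labels]      # (n_models, n_samples)
    w = np.full(n_models, 1.0 / n_models)
    for _ in range(n_iter):
        # E-step: responsibility of each submodel for each sample.
        r = w[:, None] * lik
        r /= r.sum(axis=0, keepdims=True)
        # M-step: re-estimate the mixture weights.
        w = r.mean(axis=1)
    return w

# Usage with dummy outputs from three submodels on 100 samples, 4 phases.
rng = np.random.default_rng(0)
dummy = rng.dirichlet(np.ones(4), size=(3, 100))      # (3, 100, 4)
y = rng.integers(0, 4, size=100)
w = em_fusion_weights(dummy, y)
fused = np.tensordot(w, dummy, axes=1)                # (100, 4) fused probabilities
```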

