MEMS Inertial Sensor Fault Diagnosis Using a CNN-Based Data-Driven Method

Author(s):  
Tong Gao ◽  
Wei Sheng ◽  
Mingliang Zhou ◽  
Bin Fang ◽  
Liping Zheng

In this paper, we propose a novel fault diagnosis (FD) approach for micro-electromechanical systems (MEMS) inertial sensors that recognizes their fault patterns in an end-to-end manner. We use a convolutional neural network (CNN)-based data-driven method to classify temperature-related sensor faults in unmanned aerial vehicles (UAVs). First, we formulate the FD problem for MEMS inertial sensors within a deep learning framework. Second, we design a multi-scale CNN that takes the raw data of the MEMS inertial sensors as input and outputs classification results indicating faults. Then we extract fault features in the temperature domain to address the non-uniform sampling problem. Finally, we propose an improved adaptive learning rate optimization method that uses the Kalman filter (KF) to accelerate loss convergence and train the network efficiently on a small dataset. Experimental results show that our method achieves high fault recognition accuracy and that the proposed adaptive learning rate method improves loss convergence and robustness on small training batches.
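The abstract does not give implementation details, so the sketch below is only a rough illustration: a small multi-scale 1-D CNN classifier in Keras for windowed inertial-sensor data. The window length, channel count, kernel sizes, and number of fault classes are assumptions made for this example rather than values from the paper, and the Kalman-filter-based learning rate scheme is not reproduced here.

```python
# Minimal sketch of a multi-scale 1-D CNN for sensor-fault classification.
# All sizes (window length, channel count, number of fault classes, kernel
# sizes) are illustrative assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers

WINDOW = 256      # samples per input window (assumed)
CHANNELS = 6      # 3-axis gyroscope + 3-axis accelerometer (assumed)
N_CLASSES = 4     # number of fault patterns (assumed)

inputs = layers.Input(shape=(WINDOW, CHANNELS))

# Two parallel branches with different kernel sizes capture features at
# different temporal scales; their outputs are concatenated before the
# classification head.
branches = []
for k in (3, 7):
    x = layers.Conv1D(32, kernel_size=k, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Conv1D(64, kernel_size=k, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    branches.append(x)

merged = layers.Concatenate()(branches)
outputs = layers.Dense(N_CLASSES, activation="softmax")(merged)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```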

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1809
Author(s):  
Hideaki Iiduka ◽  
Yu Kobayashi

The goal of this article is to accelerate useful adaptive learning rate optimization algorithms, such as AdaGrad, RMSProp, Adam, and AMSGrad, for training deep neural networks. To reach this goal, we devise an iterative algorithm combining the existing adaptive learning rate optimization algorithms with conjugate gradient-like methods, which are useful for constrained optimization. Convergence analyses show that the proposed algorithm with a small constant learning rate approximates a stationary point of a nonconvex optimization problem in deep learning. Furthermore, it is shown that the proposed algorithm with diminishing learning rates converges to a stationary point of the nonconvex optimization problem. The convergence and performance of the algorithm are demonstrated through numerical comparisons with the existing adaptive learning rate optimization algorithms on image and text classification. The numerical results show that the proposed algorithm with a constant learning rate is superior for training neural networks.
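As a rough illustration of the general idea, the sketch below applies an Adam-style adaptive update along a conjugate gradient-like direction d_t = -g_t + gamma * d_{t-1}. The update formulas, the fixed mixing coefficient gamma, and the toy problem are assumptions for illustration only, not the authors' exact algorithm or its convergence conditions.

```python
# Illustrative sketch (not the authors' exact algorithm): an Adam-style
# adaptive update applied along a conjugate gradient-like direction
# d_t = -g_t + gamma * d_{t-1}. All formulas below are assumptions made
# for illustration.
import numpy as np

def cg_like_adam_step(w, grad, state, lr=0.05, beta1=0.9, beta2=0.999,
                      gamma=0.5, eps=1e-8):
    """One parameter update; `state` carries m, v, d, and the step count t."""
    state["t"] += 1
    # Conjugate gradient-like direction: mix the new negative gradient
    # with the previous direction (gamma stands in for beta_t).
    state["d"] = -grad + gamma * state["d"]

    # Adam-style first and second moment estimates of the direction.
    state["m"] = beta1 * state["m"] + (1 - beta1) * state["d"]
    state["v"] = beta2 * state["v"] + (1 - beta2) * state["d"] ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])

    # Note the "+" sign: the direction already points downhill.
    return w + lr * m_hat / (np.sqrt(v_hat) + eps)

# Usage on a toy quadratic f(w) = 0.5 * ||w||^2 with gradient w.
w = np.array([2.0, -3.0])
state = {"m": np.zeros_like(w), "v": np.zeros_like(w),
         "d": np.zeros_like(w), "t": 0}
for _ in range(500):
    w = cg_like_adam_step(w, grad=w, state=state)
print(w)  # moves toward the minimizer [0, 0]
```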


Author(s):  
Mahmoud Smaida ◽  
Serhii Yaroshchak ◽  
Ahmed Y. Ben Sasi

The learning rate is one of the most important hyper-parameters for model training and generalization. Many recent studies have shown that optimizing the learning rate schedule is very useful for training deep neural networks to obtain accurate and efficient results. In this paper, different learning rate schedules using several common optimization techniques are compared in order to measure the accuracy of a convolutional neural network (CNN) model that classifies four ophthalmic conditions. A deep learning CNN based on Keras and TensorFlow has been deployed using Python on a database of 1692 images covering four types of ophthalmic cases: glaucoma, myopia, diabetic retinopathy, and normal eyes. The CNN model has been trained on a Google Colab GPU with different learning rate schedules and adaptive learning algorithms: constant learning rate, time-based decay, step-based decay, exponential decay, and adaptive learning rate optimization techniques. The Adam adaptive learning rate method outperformed the other optimization techniques, achieving the best model accuracy of 92.58% on the training set and 80.49% on the validation set.
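The paper does not list its exact schedule parameters; the sketch below shows, with assumed initial rate and decay constants, how the compared schedules (constant, time-based decay, step-based decay, exponential decay) and the Adam optimizer can be set up in Keras/TensorFlow.

```python
# Sketch of the learning rate schedules compared in the paper, with an
# assumed initial rate and decay constants (the paper's exact values
# are not reproduced here).
import math
import tensorflow as tf

INITIAL_LR = 0.01   # assumed initial learning rate
EPOCHS = 50

def constant(epoch, lr):           # constant learning rate
    return INITIAL_LR

def time_based_decay(epoch, lr):   # lr / (1 + decay * epoch)
    decay = INITIAL_LR / EPOCHS
    return lr / (1.0 + decay * epoch)

def step_decay(epoch, lr):         # halve the rate every 10 epochs
    drop, epochs_drop = 0.5, 10
    return INITIAL_LR * (drop ** math.floor(epoch / epochs_drop))

def exponential_decay(epoch, lr):  # lr_0 * exp(-k * epoch)
    k = 0.1
    return INITIAL_LR * math.exp(-k * epoch)

# Any schedule can be attached via a LearningRateScheduler callback, e.g.:
# model.fit(x, y, epochs=EPOCHS,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(step_decay)])
#
# The adaptive alternative reported as best in the paper is Adam:
# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
#               loss="categorical_crossentropy", metrics=["accuracy"])
```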


Author(s):  
Vakada Naveen ◽  
Yaswanth Mareedu ◽  
Neeharika Sai Mandava ◽  
Sravya Kaveti ◽  
G. Krishna Kishore
