Edge Computing Accelerated Defect Classification Based on Deep Convolutional Neural Network With Application in Rolling Image Inspection

Author(s):  
Jiayu Huang ◽  
Nurretin Sergin ◽  
Akshay Dua ◽  
Erfan Bank Tavakoli ◽  
Hao Yan ◽  
...  

Abstract: This paper develops a unified framework for training and deploying deep neural networks on an edge computing platform for image defect detection and classification. In the proposed framework, we combine transfer learning and data augmentation to improve accuracy given the small sample size. We further implement the edge computing framework to satisfy the real-time computational requirement. After implementing the proposed model in a rolling manufacturing system, we conclude that deep learning approaches can perform around 30–40% better than traditional machine learning algorithms such as random forest, decision tree, and SVM in terms of prediction accuracy. Furthermore, by deploying the CNNs in the edge computing framework, we can significantly reduce the computational time and satisfy the real-time computational requirement of the high-speed rolling and inspection system. Finally, saliency maps and embedding-layer visualization techniques are used for a better understanding of the proposed deep learning models.
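The data-augmentation step described above can be sketched as follows; this is a minimal illustration using flips and rotations of grayscale defect patches, assuming images are held as NumPy arrays (the function name and choice of transforms are illustrative, not the paper's exact pipeline):

```python
import numpy as np

def augment_patch(patch: np.ndarray) -> list:
    """Generate flipped/rotated variants of a defect patch to enlarge
    a small training set, a common companion to transfer learning."""
    variants = [patch]
    variants.append(np.fliplr(patch))   # horizontal flip
    variants.append(np.flipud(patch))   # vertical flip
    for k in (1, 2, 3):                 # 90/180/270 degree rotations
        variants.append(np.rot90(patch, k))
    return variants

patch = np.arange(9).reshape(3, 3)
aug = augment_patch(patch)
```

Each variant keeps the original pixel values, so a handful of labeled defect images yields several times as many training samples without new annotation effort.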

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Xiang Yu ◽  
Chun Shan ◽  
Jilong Bian ◽  
Xianfei Yang ◽  
Ying Chen ◽  
...  

With the rapid development of the Internet of Things (IoT), massive sensor data are being generated at an unprecedented rate by sensors deployed everywhere. With the number of IoT devices estimated to grow to 25 billion by 2021, an effective and efficient anomaly detection method is needed to handle the explicit or implicit anomalies in the real-time sensor data these devices collect. Recent advances in edge computing have a significant impact on the solution of anomaly detection in IoT. In this study, an adaptive graph-updating model is first presented, based on which a novel anomaly detection method for the edge computing environment is then proposed. At the cloud center, unknown patterns are classified by a deep learning model; based on the classification results, the feature graphs are updated periodically, and the classification results are continuously transmitted to each edge node, where a cache temporarily holds newly emerging anomalies or normal patterns until the edge node receives a newly updated feature graph. Finally, a series of comparison experiments demonstrates the effectiveness of the proposed anomaly detection method for edge computing. The results show that the proposed method detects anomalies in real-time sensor data efficiently and accurately. Moreover, the proposed method performs well when newly emerging patterns appear, whether anomalous or normal.
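The edge-node caching protocol described above can be sketched as a small state machine; this is a simplified illustration in which the "feature graph" is reduced to a set of known pattern labels (class and method names are hypothetical):

```python
class EdgeNode:
    """Edge node that matches readings against a local feature graph
    and caches newly emerging patterns until the cloud pushes an
    updated graph (simplified: the graph is a set of labels)."""

    def __init__(self, feature_graph):
        self.feature_graph = set(feature_graph)
        self.cache = {}  # pattern -> provisional label from the cloud

    def classify(self, pattern, cloud_label=None):
        if pattern in self.feature_graph:
            return "known"
        # Unknown pattern: hold the cloud's provisional label in the
        # cache until the next periodic feature-graph update arrives.
        if cloud_label is not None:
            self.cache[pattern] = cloud_label
        return self.cache.get(pattern, "unknown")

    def receive_graph_update(self, new_graph):
        # Replace the feature graph and drop the temporary cache.
        self.feature_graph = set(new_graph)
        self.cache.clear()

node = EdgeNode({"normal_a", "normal_b"})
r1 = node.classify("normal_a")
r2 = node.classify("spike", cloud_label="anomaly")
node.receive_graph_update({"normal_a", "normal_b", "spike"})
r3 = node.classify("spike")
```

The cache lets the edge node answer queries about a newly emerging pattern immediately, instead of waiting for the next periodic graph update from the cloud center.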


Energies ◽  
2020 ◽  
Vol 13 (15) ◽  
pp. 3930 ◽  
Author(s):  
Ayaz Hussain ◽  
Umar Draz ◽  
Tariq Ali ◽  
Saman Tariq ◽  
Muhammad Irfan ◽  
...  

Increasing waste generation has become a significant issue across the globe due to the rapid increase in urbanization and industrialization. Many issues that directly affect the growth of waste and its improper disposal have been investigated in the literature. Most existing work has focused on providing a cost-efficient solution for monitoring garbage collection systems using the Internet of Things (IoT). Although an IoT-based solution provides real-time monitoring of a garbage collection system, it does little to control the spread of overspill and the blowout of bad-odor gases. Poor and inadequate disposal of waste releases toxic gases and radiation into the environment, with adverse effects on human health, the greenhouse system, and global warming. Given the importance of air pollutants, it is imperative to monitor and forecast their concentration in addition to managing the waste itself. In this paper, we present an IoT-based smart bin that uses machine learning and deep learning models to manage the disposal of garbage and to forecast the air pollutants present in the surrounding bin environment. The smart bin is connected to an IoT-based server on the Google Cloud Platform (GCP), which performs the computation necessary for predicting the status of the bin and for forecasting air quality based on real-time data. We experimented with traditional algorithms (the k-nearest neighbors algorithm (k-NN) and logistic regression) and a non-traditional, long short-term memory (LSTM) network-based deep learning algorithm for creating alert messages about bin status and forecasting the amount of the air pollutant carbon monoxide (CO) present in the air at a specific instant. The recalls of logistic regression and the k-NN algorithm are 79% and 83%, respectively, in a real-time testing environment for predicting the status of the bin.
The accuracies of the modified LSTM and simple LSTM models are 90% and 88%, respectively, in predicting the future concentration of gases present in the air. The system incurred a delay of 4 s in creating and transmitting the alert message to a sanitary worker. The system provided real-time monitoring of garbage levels along with notifications from the alert mechanism. By utilizing machine learning, the proposed work improves accuracy over existing solutions based on simple approaches.
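The forecasting setup described above requires turning the CO time series into supervised (window, next value) pairs before an LSTM can be trained on it. A minimal sketch of that windowing step, with illustrative ppm values and a hypothetical window length:

```python
import numpy as np

def make_supervised(series, window=6):
    """Convert a CO concentration time series into pairs of
    (input window, next value), the supervised form a
    sequence model such as an LSTM forecaster trains on."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])  # past `window` readings
        y.append(series[i + window])    # the reading to predict
    return np.array(X), np.array(y)

co_ppm = np.array([0.4, 0.5, 0.7, 0.6, 0.9, 1.1, 1.0, 1.3])
X, y = make_supervised(co_ppm, window=3)
```

Each row of `X` is one sliding window of past sensor readings, and the matching entry of `y` is the value immediately after it.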


1984 ◽  
Vol 106 (1) ◽  
pp. 83-88 ◽  
Author(s):  
T. Kitamura ◽  
T. Kijima ◽  
H. Akashi

This paper demonstrates a modeling technique for prosthetic heart valves. In the modeling, a pumping cycle is divided into four phases, each with a different valve and flow state. The pressure-flow relation across the valve is formulated separately in each phase. This technique was developed to build a mathematical model for real-time estimation of the hemodynamic state under artificial heart pumping. The model built with this technique is simple enough to save computational time in real-time estimation. The model is described by a first-order ordinary differential equation with 12 parameters. These parameters can be uniquely determined beforehand from in-vitro experimental data. It is shown that the model can adapt, with sufficient accuracy, to changes in the practical pumping condition and the viscosity of the fluid within their practical ranges, and it is also demonstrated that the backflow volume estimated by the model agrees closely with the actual one.
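The structure of such a phase-switched first-order model can be sketched as follows; this is a generic illustration of a first-order pressure-flow relation L·dq/dt = Δp − R·q with parameters switched per phase, not the paper's fitted 12-parameter model, and all numeric values are hypothetical:

```python
def valve_flow_step(q, dp, phase_params, dt=1e-3):
    """One forward-Euler step of a first-order valve model
    L * dq/dt = dp - R * q, with inertance L and resistance R
    chosen according to the current pumping phase."""
    L, R = phase_params
    return q + dt * (dp - R * q) / L

# Phase-dependent (L, R) pairs: illustrative numbers only, standing
# in for parameters that would be fitted from in-vitro data.
phases = {"opening": (0.5, 1.0), "open": (0.5, 0.2),
          "closing": (0.5, 5.0), "closed": (0.5, 50.0)}

q = 0.0
for _ in range(100):  # integrate during the fully open phase
    q = valve_flow_step(q, dp=10.0, phase_params=phases["open"])
```

Switching `(L, R)` at phase boundaries keeps each phase's dynamics a cheap first-order update, which is what makes this kind of model fast enough for real-time state estimation.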


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2556
Author(s):  
Liyang Wang ◽  
Yao Mu ◽  
Jing Zhao ◽  
Xiaoya Wang ◽  
Huilian Che

The clinical symptoms of prediabetes are mild and easy to overlook, but prediabetes may develop into diabetes if early intervention is not performed. In this study, a deep learning model, referred to as IGRNet, is developed to effectively detect and diagnose prediabetes in a non-invasive, real-time manner using a 12-lead electrocardiogram (ECG) lasting 5 s. After searching for an appropriate activation function, we compared two mainstream deep neural networks (AlexNet and GoogLeNet) and three traditional machine learning algorithms to verify the superiority of our method. The diagnostic accuracy of IGRNet is 0.781, and the area under the receiver operating characteristic curve (AUC) is 0.777 after testing on the independent test set, including the mixed group. Furthermore, the accuracy and AUC are 0.856 and 0.825, respectively, on the normal-weight-range test set. The experimental results indicate that IGRNet diagnoses prediabetes with high accuracy using ECGs, outperforming other existing machine learning methods; this suggests its potential for application in clinical practice as a non-invasive prediabetes diagnosis technology.
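Before a 5 s, 12-lead ECG can feed a CNN classifier, it must be cropped to a fixed length and normalized per lead. A minimal sketch of such an input pipeline (the 500 Hz sampling rate and z-normalization are assumptions for illustration, not details given in the abstract):

```python
import numpy as np

def preprocess_ecg(ecg, fs=500, seconds=5):
    """Crop a 12-lead ECG to a fixed window of fs * seconds samples
    and z-normalize each lead independently, yielding a fixed-shape
    array suitable as CNN input."""
    n = fs * seconds
    ecg = ecg[:, :n]                         # (12 leads, n samples)
    mean = ecg.mean(axis=1, keepdims=True)
    std = ecg.std(axis=1, keepdims=True) + 1e-8
    return (ecg - mean) / std

raw = np.random.randn(12, 3000)              # a slightly long recording
x = preprocess_ecg(raw)
```

Per-lead normalization removes baseline offset and amplitude differences between leads, so the network sees waveform shape rather than recording gain.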


Author(s):  
S. Pu ◽  
L. Xie ◽  
M. Ji ◽  
Y. Zhao ◽  
W. Liu ◽  
...  

<p><strong>Abstract.</strong> This paper presents an innovative power line corridor inspection approach using UAV LiDAR, edge computing, and 4G real-time transmission. First, sample point clouds of power towers are manually classified and decomposed into components according to five mainstream tower types: T type, V type, n type, I type, and owl-head type. A deep learning AI agent, named “Tovos Agent” internally, is trained by supervised deep learning on the sample data sets under a 3D CNN framework. Second, laser points of power line corridors are classified into Ground, Vegetation, Tower, Cable, and Building types using semantic feature constraints during the UAV-borne LiDAR acquisition process, and tower types are then further recognized by the Tovos Agent for strain-span separation. At the same time, spatial and topological relations between Cable points and the other types are analyzed according to industry standards to identify potential risks. Finally, all potential risks are organized into industry-standard reports and transmitted to the central server via a 4G data link, so that maintenance personnel can be notified of the risks as soon as possible. Tests on LiDAR data of a 1000&thinsp;kV power line show the promising results of the proposed method.</p>
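The risk-identification step, which checks spatial relations between Cable points and the other classes, amounts to a clearance test. A brute-force sketch (the 8.5 m threshold is an illustrative value, not the actual 1000 kV industry standard, and a production system would use a spatial index rather than nested loops):

```python
def clearance_risks(cable_pts, other_pts, min_clearance=8.5):
    """Flag cable points that come closer than the required clearance
    to vegetation/building points. Each point is an (x, y, z) tuple;
    returns (cable point, offending point, distance) triples."""
    risks = []
    for cx, cy, cz in cable_pts:
        for ox, oy, oz in other_pts:
            d2 = (cx - ox) ** 2 + (cy - oy) ** 2 + (cz - oz) ** 2
            if d2 < min_clearance ** 2:
                risks.append(((cx, cy, cz), (ox, oy, oz), d2 ** 0.5))
                break  # one violation is enough to flag this point
    return risks

cable = [(0.0, 0.0, 30.0), (50.0, 0.0, 28.0)]
trees = [(0.0, 1.0, 24.0), (200.0, 0.0, 10.0)]
hits = clearance_risks(cable, trees)
```

Running this check on the edge device during acquisition is what lets risks be reported over the 4G link without waiting for post-flight processing.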


Author(s):  
Jong-Hak Lee ◽  
Hyung-Won Kim ◽  
Woojin Choi