Monitoring Algorithm of Stress Point of Concrete Penstock in Large Construction Engineering

2021 ◽  
Vol 2066 (1) ◽  
pp. 012027
Author(s):  
Xiaoxing Kou

Abstract With the rapid development of the national construction industry, cracks and other defects often appear in concrete structures during initial and subsequent construction. If these problems develop further, the structural safety of the entire building may be compromised. It is therefore necessary to analyze the causes of cracks and related defects in concrete buildings, to monitor and evaluate them in time, and to propose reasonable solutions; this is a problem that construction engineers urgently need to solve. This paper studies an algorithm for monitoring the stress points of concrete penstocks in large construction projects. Firstly, a literature review is used to describe the forms of stress nodes in large-scale construction projects and the gaps in existing research on the stress nodes of concrete penstocks. In the experiment, three existing algorithms are used to detect the stress points, and their detection rates and false alarm rates are compared. The experimental results show that, with the same neighbor parameters, the detection performance of the KNN algorithm is clearly inferior to that of the other two algorithms: its detection rate is only 91% and its false alarm rate reaches 30%, while the other two algorithms perform comparably. The KNN algorithm fails to recognize outlier stress points that clearly deviate from the surrounding dense clusters, and many normal stress points located at the edges of sparse regions are instead flagged as abnormal. Among the three algorithms, the NLOF algorithm achieves the best detection rate, reaching 99%, which is significantly higher than the other two.
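
A minimal sketch of why a plain k-nearest-neighbour distance score behaves as described above, compared with a local-density method. The paper's NLOF variant is not specified here; scikit-learn's standard Local Outlier Factor is used as a stand-in, and the synthetic data and thresholds are illustrative assumptions.

```python
# Sketch: global k-distance score vs. Local Outlier Factor on synthetic stress-point data.
import numpy as np
from sklearn.neighbors import NearestNeighbors, LocalOutlierFactor

rng = np.random.default_rng(0)
dense = rng.normal(loc=0.0, scale=1.0, size=(200, 2))    # dense cluster of normal stress points
sparse = rng.normal(loc=6.0, scale=3.0, size=(40, 2))    # sparse but still normal region
outliers = rng.uniform(low=-12, high=12, size=(10, 2))   # stress points deviating from the whole
X = np.vstack([dense, sparse, outliers])

k = 10
# Global k-distance score: distance to the k-th neighbour, thresholded globally.
# Points at the edge of the sparse region tend to be flagged, while some true
# outliers near the dense cluster are missed.
nn = NearestNeighbors(n_neighbors=k).fit(X)
kth_dist = nn.kneighbors(X)[0][:, -1]
knn_flag = kth_dist > np.percentile(kth_dist, 95)

# LOF score: density relative to each point's own neighbourhood, so sparse-region
# points are judged against their local density rather than a global cut-off.
lof_flag = LocalOutlierFactor(n_neighbors=k).fit_predict(X) == -1

print("kNN-distance flags:", knn_flag.sum(), "LOF flags:", lof_flag.sum())
```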

1992 ◽  
Vol 4 (5) ◽  
pp. 772-780 ◽  
Author(s):  
William G. Baxt

When either the detection rate (sensitivity) or the false alarm rate (1 − specificity) is optimized in an artificial neural network trained to identify myocardial infarction, any increase in the accuracy of one comes at the expense of the other. To overcome this trade-off, two networks that were separately trained on populations of patients with different likelihoods of myocardial infarction were used in concert. One network was trained on clinical pattern sets derived from patients with a low likelihood of myocardial infarction, while the other was trained on pattern sets derived from patients with a high likelihood of myocardial infarction. Unknown patterns were analyzed by both networks. If the output generated by the network trained on the low-risk patients was below an empirically set threshold, this output was chosen as the diagnostic output; if it was above that threshold, the output of the network trained on the high-risk patients was used instead. The dual network correctly identified 39 of the 40 patients who had sustained a myocardial infarction and 301 of 306 patients who had not, for a detection rate (sensitivity) of 97.50% and a false alarm rate (1 − specificity) of 1.63%. A parallel control experiment using a single network with identical training information correctly identified 39 of 40 patients who had sustained a myocardial infarction but only 287 of 306 patients who had not (p = 0.003).
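
A minimal sketch of the dual-network decision rule described above. The model class (scikit-learn MLPClassifier) and the threshold value are illustrative assumptions, not the paper's original networks or empirically tuned cut-off.

```python
# Sketch: cascade of two classifiers trained on low- and high-likelihood populations.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_dual(X_low, y_low, X_high, y_high):
    """Train one network on low-likelihood patients and one on high-likelihood patients."""
    net_low = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_low, y_low)
    net_high = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_high, y_high)
    return net_low, net_high

def dual_predict(net_low, net_high, X, threshold=0.5):
    """If the low-risk network's output is below the threshold, accept it as the
    diagnostic output; otherwise defer to the high-risk network's output."""
    p_low = net_low.predict_proba(X)[:, 1]
    p_high = net_high.predict_proba(X)[:, 1]
    return np.where(p_low < threshold, p_low, p_high)
```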


Author(s):  
Mingming Fan ◽  
Shaoqing Tian ◽  
Kai Liu ◽  
Jiaxin Zhao ◽  
Yunsong Li

Abstract Infrared small target detection has been a challenging task due to the weak radiation intensity of targets and the complexity of the background. Traditional methods using hand-designed features are usually effective only for specific backgrounds and suffer from low detection rates and high false alarm rates in complex infrared scenes. In order to fully exploit the features of infrared images, this paper proposes an infrared small target detection method based on region proposal and a convolutional neural network. Firstly, the small target intensity is enhanced according to local intensity characteristics. Then, potential target regions are proposed by corner detection to ensure a high detection rate. Finally, the potential target regions are fed into a classifier based on a convolutional neural network to eliminate non-target regions, which effectively suppresses complex background clutter. Extensive experiments demonstrate that the proposed method can effectively reduce the false alarm rate and outperforms other state-of-the-art methods in terms of subjective visual impression and quantitative evaluation metrics.
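
A minimal sketch of the two-stage pipeline described above: corner-based region proposal followed by a small CNN patch classifier. The enhancement step (histogram equalization), corner parameters and network architecture are illustrative assumptions using OpenCV and PyTorch, not the paper's exact design.

```python
# Sketch: propose candidate regions via corner detection, then classify patches with a tiny CNN.
import cv2
import numpy as np
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Tiny binary classifier for target / non-target patches."""
    def __init__(self, patch=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * (patch // 4) ** 2, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def propose_regions(image, max_corners=50, patch=16):
    """Enhance contrast, then take corner points as candidate small-target centres."""
    enhanced = cv2.equalizeHist(image)                 # stand-in for local intensity enhancement
    corners = cv2.goodFeaturesToTrack(enhanced, max_corners, 0.01, patch // 2)
    patches, centres = [], []
    h, w = image.shape
    half = patch // 2
    for c in (corners if corners is not None else []):
        x, y = int(c[0][0]), int(c[0][1])
        if half <= x < w - half and half <= y < h - half:
            patches.append(image[y - half:y + half, x - half:x + half])
            centres.append((x, y))
    return np.array(patches, dtype=np.float32) / 255.0, centres
```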


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1375
Author(s):  
Celestine Iwendi ◽  
Joseph Henry Anajemba ◽  
Cresantus Biamba ◽  
Desire Ngabo

Web security plays a very crucial role in the Security of Things (SoT) paradigm for smart healthcare and will continue to be impactful in medical infrastructures in the near future. This paper addresses a key component of that security, intrusion detection systems, in view of the number of web security attacks, which have increased dramatically in healthcare in recent years, as well as the associated privacy issues. Various intrusion detection systems have been proposed in different works to detect cyber threats in smart healthcare and to identify network-based attacks and privacy violations. This study was motivated by the limitations of existing intrusion detection systems in responding to attacks and in implementing privacy controls in the smart healthcare industry. The research proposes a machine learning support system that combines a Random Forest (RF) and a genetic algorithm: a feature optimization method that builds new intrusion detection systems with a high detection rate and a low false alarm rate. To optimize the functionality of our approach, a weighted genetic algorithm and RF were combined to generate the best subset of features that achieved a high detection rate and a low false alarm rate. This study used the NSL-KDD dataset to train and compare RF, Naive Bayes (NB) and logistic regression classifiers. The results confirmed the importance of feature optimization, which gave better results in terms of the false alarm rate, precision, detection rate, recall and F1 metrics. The combination of our genetic algorithm and RF models achieved a detection rate of 98.81% and a false alarm rate of 0.8%. This research raises awareness of privacy and authentication in the smart healthcare domain, wireless communications and privacy control, and develops the necessary intelligent and efficient web system. Furthermore, the proposed algorithm was applied to examine the F1-score and precision performance on the NSL-KDD and CSE-CIC-IDS2018 datasets using different scaling factors. The results showed that the proposed GA-optimized model improved the average precision by 5.65% and the average F1-score by 8.2%.
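
A minimal sketch of genetic-algorithm feature selection with a Random Forest fitness function, along the lines described above. The population size, weights, selection scheme and fitness definition are illustrative assumptions, not the paper's weighted-GA configuration.

```python
# Sketch: GA searches for a feature subset; fitness combines cross-validated RF score and subset size.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, w_detect=0.7, w_size=0.3):
    """Reward high cross-validated detection performance, penalise large feature subsets."""
    if mask.sum() == 0:
        return 0.0
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    score = cross_val_score(rf, X[:, mask.astype(bool)], y, cv=3).mean()
    return w_detect * score + w_size * (1.0 - mask.mean())

def ga_select(X, y, pop_size=20, generations=10, mutation_rate=0.05, rng=np.random.default_rng(0)):
    n_features = X.shape[1]
    population = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in population])
        # Keep the top half as parents (truncation selection), then refill the
        # population with single-point crossover children plus bit-flip mutation.
        parents = population[np.argsort(scores)[-pop_size // 2:]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < mutation_rate
            child[flip] ^= 1
            children.append(child)
        population = np.vstack([parents, children])
    best = population[np.argmax([fitness(ind, X, y) for ind in population])]
    return best.astype(bool)   # boolean mask of selected features
```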


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 4033 ◽  
Author(s):  
Yoo ◽  
Wang ◽  
Seol ◽  
Lee ◽  
Chung ◽  
...  

Recognizing and tracking targets located behind walls through impulse radio ultra-wideband (IR-UWB) radar provides a significant advantage, as the characteristics of the IR-UWB radar signal enable it to penetrate obstacles. In this study, we design a through-wall radar system to estimate and track multiple targets behind a wall. The radar signal received through the wall experiences distortion, such as attenuation and delay, so the characteristics of the wall are estimated to compensate for the resulting distance error. In addition, unlike in general cases, it is difficult to maintain a high detection rate and a low false alarm rate in this through-wall application due to the attenuation and distortion caused by the wall. In particular, the commonly used delay-and-sum algorithm is significantly affected by the motion of targets and by the distortion caused by the wall, making it difficult to obtain good performance. Thus, we propose a novel method that calculates, through a detection process, the likelihood that a target exists at a certain location; unlike the delay-and-sum algorithm, this method does not use the radar signal directly. Simulations and experiments are conducted for different cases to show the validity of our through-wall radar system. The results obtained by using the proposed algorithm, as well as delay-and-sum and trilateration, are compared in terms of detection rate, false alarm rate, and positioning error.
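
A minimal sketch of a detection-based likelihood map for target localisation, as opposed to summing raw signals as in delay-and-sum. The geometry, Gaussian range-error model and constant wall-delay correction are simplified assumptions, not the paper's system or wall-parameter estimation.

```python
# Sketch: accumulate per-cell likelihood that detected ranges originate from that cell.
import numpy as np

def likelihood_map(rx_positions, detected_ranges, grid_x, grid_y, sigma=0.1, wall_delay=0.05):
    """For each grid cell, sum the log-likelihood of the detected ranges
    (corrected by an assumed wall-induced range bias) under a Gaussian error model."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    log_lik = np.zeros_like(gx)
    for (rx, ry), r in zip(rx_positions, detected_ranges):
        d = np.hypot(gx - rx, gy - ry) + wall_delay   # expected range through the wall
        log_lik += -0.5 * ((r - d) / sigma) ** 2       # Gaussian range-error model
    return log_lik

# Example: three receivers along the wall and one detected range per receiver.
rx = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
ranges = [2.29, 2.05, 2.29]
lmap = likelihood_map(rx, ranges, np.linspace(-1, 3, 81), np.linspace(0.5, 4, 71))
iy, ix = np.unravel_index(np.argmax(lmap), lmap.shape)
print("most likely target cell (x index, y index):", ix, iy)
```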


Author(s):  
Sunilkumar Soni ◽  
Santanu Das ◽  
Aditi Chattopadhyay

An optimal sensor placement methodology is proposed, based on a detection theory framework, to maximize the detection rate and minimize the false alarm rate. Minimizing the false alarm rate for a given detection rate plays an important role in improving the efficiency of a Structural Health Monitoring (SHM) system, as it reduces the number of false alarms. The placement technique is designed so that the sensor features are as directly correlated with, and as sensitive to, damage as possible. The technique accounts for a number of factors, such as actuation frequency and strength, minimum damage size, damage detection scheme, material damping, signal-to-noise ratio (SNR) and sensing radius. These factors are not independent and affect each other. Optimal sensor placement is done in two steps. First, a sensing radius is calculated that can capture any detectable change caused by a perturbation above a certain threshold. This threshold value is based on the Neyman-Pearson detector, which maximizes the detection rate for a fixed false alarm rate. To avoid sensor redundancy, a criterion is defined to minimize overlaps between the sensing regions of neighboring sensors. Based on the sensing region and the minimum overlap concept, the number of sensors needed on a structural component is calculated. In the second step, a damage distribution pattern, known as the probability of failure distribution, is calculated for the structural component using finite element analysis. This failure distribution helps in selecting the most sensitive sensors, thereby removing those making only remote contributions to the overall detection scheme.
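
A minimal sketch of the Neyman-Pearson threshold used to fix the false alarm rate, assuming a Gaussian noise model with known variance; the damping, actuation and damage-size factors discussed above are collapsed into a single signal-level parameter here, which is an illustrative simplification.

```python
# Sketch: Neyman-Pearson threshold for a fixed false alarm rate, and the resulting detection rate.
import numpy as np
from scipy.stats import norm

def np_threshold(false_alarm_rate, noise_std):
    """Threshold gamma such that P(measurement > gamma | no damage) = false_alarm_rate."""
    return noise_std * norm.ppf(1.0 - false_alarm_rate)

def detection_rate(threshold, signal_mean, noise_std):
    """Probability that a damage-induced mean shift of 'signal_mean' exceeds the threshold."""
    return 1.0 - norm.cdf((threshold - signal_mean) / noise_std)

gamma = np_threshold(false_alarm_rate=0.01, noise_std=1.0)
print("threshold:", round(gamma, 3), "detection rate at signal level 3:", round(detection_rate(gamma, 3.0, 1.0), 3))
```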


Author(s):  
P. Manoj Kumar ◽  
M. Parvathy ◽  
C. Abinaya Devi

Intrusion Detection Systems (IDS) are one of the important aspects of cyber security and can detect anomalies in network traffic. IDS form part of the second line of defense of a system and can be deployed along with other security measures, such as access control, authentication mechanisms and encryption techniques, to secure systems against cyber-attacks. However, IDS suffer from the problem of handling large volumes of data and of detecting zero-day attacks (new types of attacks) in real-time traffic. To overcome this problem, an intelligent deep learning approach for intrusion detection is proposed based on a Convolutional Neural Network (CNN-IDS). Initially, the model is trained and tested on a new real-time traffic dataset, the CSE-CIC-IDS 2018 dataset. Then, the performance of the CNN-IDS model is studied based on three important performance metrics, namely accuracy/training time, detection rate and false alarm rate. Finally, the experimental results are compared with those of various deep discriminative models, including the Recurrent Neural Network (RNN), Deep Neural Network (DNN), etc., proposed for IDS on the same dataset. The comparative results show that the proposed CNN-IDS model is well suited to both binary and multi-class classification, with a higher detection rate, higher accuracy and lower false alarm rate. The CNN-IDS model improves the accuracy of intrusion detection and provides a new research method for intrusion detection.
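
A minimal sketch of a 1-D CNN classifier over flow features, in the spirit of the CNN-IDS model described above. The layer sizes and the number of input features (set to 76 as a stand-in for CSE-CIC-IDS2018-style flow records) are illustrative assumptions, not the paper's architecture.

```python
# Sketch: 1-D convolutional classifier over scaled flow-feature vectors.
import torch
import torch.nn as nn

class CNNIDS(nn.Module):
    def __init__(self, n_features=76, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (n_features // 4), 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, n_classes),     # 2 outputs for binary, >2 for multi-class detection
        )

    def forward(self, x):                  # x: (batch, n_features)
        return self.fc(self.conv(x.unsqueeze(1)))

model = CNNIDS()
logits = model(torch.randn(8, 76))         # a batch of 8 scaled feature vectors
print(logits.shape)                        # torch.Size([8, 2])
```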


2021 ◽  
Author(s):  
Haoran Yan ◽  
Li Wang ◽  
Tiantian Zhang ◽  
Xiangting Jiang ◽  
Sensen Yang ◽  
...  
