Pandrol track fastener defect detection based on local convolutional neural networks

Author(s):  
Anqi Ma ◽  
Zhaomin Lv ◽  
Xingjie Chen ◽  
Liming Li ◽  
Yijin Qiu ◽  
...  

The Pandrol track fastener image is composed of two parts: a track fastener clip sub-image and a track fastener bolt sub-image. Clip defects can be detected from the fastener image, but the whole image cannot effectively reveal whether the bolt is loose. When a convolutional neural network extracts features from the whole image for detection, it picks up many bolt features that are unrelated to the clips, resulting in a high false alarm rate. To solve these problems, a method based on a local convolutional neural network is proposed for detecting Pandrol track fastener defects. First, an algorithm for automatic segmentation of track fastener images divides the Pandrol track fastener image into two sub-images, one containing the track fastener bolt and the other containing the track fastener clip. Second, a convolutional neural network detects defects in the track fastener clip sub-images. Segmenting the image for local feature extraction prevents bolt features unrelated to the clips from influencing clip detection, thereby reducing the false alarm rate. Finally, the validity of the proposed method is verified using real Pandrol track fastener images.
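
The pipeline above can be illustrated with a minimal PyTorch sketch (not the authors' code): split a fastener image into bolt and clip sub-images, then classify only the clip sub-image with a small CNN so that bolt features cannot contaminate clip-defect detection. The 50/50 vertical split and the network layout are illustrative assumptions.

```python
# Minimal sketch of the "local CNN" idea: crop out the clip region and
# classify only that sub-image. Crop coordinates and layers are assumptions.
import torch
import torch.nn as nn

class ClipDefectCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def split_fastener(image: torch.Tensor):
    """Divide a fastener image (C, H, W) into bolt and clip sub-images.
    The 50/50 vertical split is an illustrative assumption."""
    _, _, w = image.shape
    bolt_sub = image[:, :, : w // 2]
    clip_sub = image[:, :, w // 2 :]
    return bolt_sub, clip_sub

# Usage: only the clip sub-image is passed to the CNN.
image = torch.rand(3, 224, 224)            # stand-in for a fastener photo
_, clip_sub = split_fastener(image)
logits = ClipDefectCNN()(clip_sub.unsqueeze(0))
```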

Author(s):  
P. Manoj Kumar ◽  
M. Parvathy ◽  
C. Abinaya Devi

Intrusion Detection Systems (IDS) are an important aspect of cyber security that can detect anomalies in network traffic. An IDS forms part of the second line of defence of a system and can be deployed alongside other security measures such as access control, authentication mechanisms and encryption techniques to secure systems against cyber-attacks. However, IDS suffer from the problems of handling large volumes of data and of detecting zero-day attacks (new types of attacks) in a real-time traffic environment. To overcome these problems, an intelligent deep learning approach for intrusion detection based on a convolutional neural network (CNN-IDS) is proposed. Initially, the model is trained and tested on a new real-time traffic dataset, the CSE-CIC-IDS 2018 dataset. Then, the performance of the CNN-IDS model is studied using three important performance metrics: accuracy/training time, detection rate and false alarm rate. Finally, the experimental results are compared with those of various deep discriminative models, including the Recurrent Neural Network (RNN) and the Deep Neural Network (DNN), proposed for IDS on the same dataset. The comparative results show that the proposed CNN-IDS model is well suited to both binary and multi-class classification, achieving a higher detection rate and accuracy with a lower false alarm rate. The CNN-IDS model improves the accuracy of intrusion detection and provides a new research method for intrusion detection.
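
As an illustration only, the sketch below pairs a small 1-D convolutional classifier over per-flow feature vectors with the three metrics named above. The paper's exact CNN-IDS architecture is not specified here, and the feature count of 78 is a placeholder for the CSE-CIC-IDS 2018 flow features.

```python
# Hedged sketch: a 1-D CNN over tabular flow features plus the reported metrics.
import torch
import torch.nn as nn

class CNNIDS(nn.Module):
    def __init__(self, n_features: int = 78, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                    # x: (batch, n_features)
        return self.net(x.unsqueeze(1))      # add a channel dimension

def detection_metrics(tp, fp, tn, fn):
    """Detection rate (recall on attacks), false alarm rate, and accuracy."""
    detection_rate = tp / (tp + fn)
    false_alarm_rate = fp / (fp + tn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return detection_rate, false_alarm_rate, accuracy

scores = CNNIDS()(torch.rand(4, 78))         # four dummy flows, class scores
```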


2019 ◽  
Vol 11 (23) ◽  
pp. 2862 ◽  
Author(s):  
Weiwei Fan ◽  
Feng Zhou ◽  
Xueru Bai ◽  
Mingliang Tao ◽  
Tian Tian

Ship detection plays an important role in many remote sensing applications. However, the performance of PolSAR ship detection may be degraded by complicated scattering mechanisms, the multi-scale sizes of targets, and random speckle noise. In this paper, we propose a ship detection method for PolSAR images based on a modified faster region-based convolutional neural network (Faster R-CNN). The main improvements are proposal generation that adopts multi-level features produced by the convolution layers, which accommodates ships of different sizes, and the addition of a Deep Convolutional Neural Network (DCNN)-based classifier for training sample generation and coast mitigation. The proposed method has been validated on four measured datasets from the NASA/JPL airborne synthetic aperture radar (AIRSAR) and the uninhabited aerial vehicle synthetic aperture radar (UAVSAR). Performance comparison with a modified constant false alarm rate (CFAR) detector and the Faster R-CNN demonstrates that the proposed method improves the detection probability while reducing the false alarm rate and missed detections.
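
For readers who want a concrete starting point, the hedged sketch below uses torchvision's off-the-shelf Faster R-CNN with an FPN backbone as a stand-in for the paper's multi-level-feature modification. It is not the authors' network, only an analogue of the multi-scale proposal idea for a two-class (ship/background) problem.

```python
# Stand-in for the multi-level-feature Faster R-CNN described above.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(num_classes=2)   # 0 = background, 1 = ship
model.eval()

# A fake PolSAR intensity chip replicated to three channels, values in [0, 1].
image = torch.rand(3, 512, 512)
with torch.no_grad():
    detections = model([image])[0]               # dict of boxes, labels, scores

# Keep only confident ship detections; the 0.5 threshold is an assumption.
keep = detections["scores"] > 0.5
ship_boxes = detections["boxes"][keep]
```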


Author(s):  
Jabran Akhtar

A desired objective in radar target detection is to satisfy two very contradictory requirements: offer a high probability of detection with a low false alarm rate. In this paper, we propose the utilization of artificial neural networks for binary classification of targets detected by a depreciated detection process. It is shown that trained neural networks are capable of identifying false detections with considerable accuracy and can to this extent utilize information present in guard cells and Doppler profiles. This allows for a reduction in the false alarm rate with only moderate loss in the probability of detection. With an appropriately designed neural network, an overall improved system performance can be achieved when compared against traditional constant false alarm rate detectors for the specific trained scenarios.
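
A minimal sketch of the idea, assuming illustrative input sizes: a small fully connected network takes the cell under test together with its guard cells and Doppler profile and classifies a preliminary detection as target or false alarm.

```python
# Schematic binary classifier for preliminary radar detections.
# Input dimensions are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

N_GUARD = 8        # guard-cell samples around the cell under test (assumed)
N_DOPPLER = 16     # Doppler-profile bins (assumed)

classifier = nn.Sequential(
    nn.Linear(1 + N_GUARD + N_DOPPLER, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),        # output: P(true target)
)

# One preliminary detection from a deliberately low-threshold detector.
cell_under_test = torch.rand(1)
guard_cells = torch.rand(N_GUARD)
doppler_profile = torch.rand(N_DOPPLER)
x = torch.cat([cell_under_test, guard_cells, doppler_profile]).unsqueeze(0)
p_target = classifier(x)                   # reject the detection if p is low
```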


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xieyi Chen ◽  
Dongyun Wang ◽  
Jinjun Shao ◽  
Jun Fan

To automatically detect plastic gasket defects, a set of plastic gasket defect visual detection devices based on GoogLeNet Inception-V2 transfer learning was designed and established in this study. The GoogLeNet Inception-V2 deep convolutional neural network (DCNN) was adopted to extract and classify the defect features of plastic gaskets, addressing the problems of their numerous surface defects and the difficulty of extracting and classifying those features. Deep learning applications require a large amount of training data to avoid model overfitting, but there are few datasets of plastic gasket defects; to address this, data augmentation was applied to our dataset. Finally, the performance of three convolutional neural networks was comprehensively compared. The results showed that the GoogLeNet Inception-V2 transfer learning model achieved better performance in less time, that is, higher accuracy, reliability, and efficiency on the dataset used in this paper.
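
The transfer-learning setup can be sketched as follows. Note that torchvision ships GoogLeNet (Inception-V1) rather than Inception-V2, so it is used here purely as a stand-in backbone; the class count and augmentation choices are assumptions, not the paper's configuration.

```python
# Hedged sketch of transfer learning with data augmentation for a small
# defect dataset, using a stand-in backbone with ImageNet weights.
import torch.nn as nn
from torchvision import models, transforms

NUM_DEFECT_CLASSES = 4                       # hypothetical defect categories

# Augmentation pipeline to enlarge a small gasket-defect dataset.
train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Load an ImageNet-pretrained backbone and replace only the final classifier.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                  # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_DEFECT_CLASSES)
```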


1992 ◽  
Vol 4 (5) ◽  
pp. 772-780 ◽  
Author(s):  
William G. Baxt

When either the detection rate (sensitivity) or the false alarm rate (1-specificity) is optimized in an artificial neural network trained to identify myocardial infarction, the gain in accuracy of one always comes at the expense of the accuracy of the other. To overcome this loss, two networks that were separately trained on populations of patients with different likelihoods of myocardial infarction were used in concert. One network was trained on clinical pattern sets derived from patients who had a low likelihood of myocardial infarction, while the other was trained on pattern sets derived from patients with a high likelihood of myocardial infarction. Unknown patterns were analyzed by both networks. If the output generated by the network trained on the low-risk patients was below an empirically set threshold, this output was chosen as the diagnostic output. If the output was above that threshold, the output of the network trained on the high-risk patients was used as the diagnostic output. The dual network correctly identified 39 of the 40 patients who had sustained a myocardial infarction and 301 of 306 patients who had not, for a detection rate (sensitivity) of 97.50% and a false alarm rate (1-specificity) of 1.63%. A parallel control experiment using a single network but identical training information correctly identified 39 of 40 patients who had sustained a myocardial infarction and 287 of 306 patients who had not sustained a myocardial infarction (p = 0.003).
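
The dual-network decision rule reads naturally as a short cascade; the sketch below is a schematic with placeholder networks and an assumed threshold value, not the original implementation.

```python
# Schematic of the dual-network rule: the low-risk network answers first,
# and the high-risk network is consulted only above the threshold.
import torch
import torch.nn as nn

N_FEATURES = 20                     # number of clinical input variables (assumed)
low_risk_net = nn.Sequential(nn.Linear(N_FEATURES, 10), nn.Sigmoid(),
                             nn.Linear(10, 1), nn.Sigmoid())
high_risk_net = nn.Sequential(nn.Linear(N_FEATURES, 10), nn.Sigmoid(),
                              nn.Linear(10, 1), nn.Sigmoid())
THRESHOLD = 0.5                     # empirically tuned in the paper; value assumed

def dual_network_output(pattern: torch.Tensor) -> torch.Tensor:
    """Return the diagnostic output for one clinical pattern."""
    low = low_risk_net(pattern)
    return low if low.item() < THRESHOLD else high_risk_net(pattern)

diagnosis = dual_network_output(torch.rand(N_FEATURES))
```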


2019 ◽  
Vol 8 (4) ◽  
pp. 12940-12944

Human life is embedded in a complex social structure. It is not possible for humans to navigate it without reading other people, which they do by identifying faces. The appropriate response can be chosen based on the mood of the other person, and a person's mood can be inferred by observing their emotion (facial gesture). The aim of this project is to construct a facial emotion recognition model using a deep convolutional neural network (DCNN) that runs in real time. The model is built with a DCNN because DCNNs have been shown to achieve greater accuracy than plain convolutional neural networks (CNNs). Human facial expressions are highly dynamic, changing in a split second between Happy, Sad, Angry, Fear, Surprise, Disgust, Neutral, and so on. This project predicts a person's emotion in real time. Our brains contain neural networks that are responsible for all kinds of thinking (decision making, understanding); this model develops comparable decision-making and classification skills by training the machine. It can classify and predict multiple faces and different emotions at the same time. To obtain higher accuracy, we use models that have been trained over thousands of datasets.
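
A hedged sketch of such a real-time, multi-face pipeline: detect every face in a frame with an OpenCV Haar cascade, then classify each crop with a small DCNN into the seven emotions listed above. The untrained network, the 48x48 input size and the cascade choice are assumptions, not the project's actual code.

```python
# Illustrative multi-face emotion pipeline: Haar-cascade detection + DCNN.
import cv2
import torch
import torch.nn as nn

EMOTIONS = ["Happy", "Sad", "Angry", "Fear", "Surprise", "Disgust", "Neutral"]

emotion_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 12 * 12, len(EMOTIONS)),
)

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def predict_emotions(frame_bgr):
    """Return one predicted emotion label per detected face in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    labels = []
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        tensor = torch.from_numpy(crop).float().div(255).view(1, 1, 48, 48)
        labels.append(EMOTIONS[emotion_net(tensor).argmax().item()])
    return labels
```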


Perception ◽  
1996 ◽  
Vol 25 (7) ◽  
pp. 757-771 ◽  
Author(s):  
Peter Wenderoth

Detection of vertical bilateral symmetry has previously been studied in patterns composed of black or white dots on a grey background under four conditions: (a) same contrast (black or white) for all dots (called BB or WW, for ‘all black or all white’); (b) half of the dots black and half white with positive correspondence between symmetrical dot pairs (called MA for ‘matched’); (c) half of the dots black and half white with negative correspondence between symmetrical dot pairs (called OPP for ‘opposite’); and (d) black (white) dots on one side of the axis and white (black) dots on the other (called BW for ‘one side black the other white’). It was found that performance was ordered BB (or WW) = MA > OPP = BW, where > indicates better performance. That experiment was repeated here in experiment 1 with symmetry axes not only at vertical but also at horizontal and the two diagonals. It was found overall that BB = MA > OPP, BW. However, OPP > BW when random trials were included in the analysis but when they were excluded BW > OPP. This was due to a very high false-alarm rate in condition BW which could be accounted for if grouping by colour occurs prior to symmetry detection. In experiment 2 it was shown that vertical-symmetry salience over other orientations remained about the same as OPP patterns progressively changed into BB patterns by varying the percentage same polarity between 0% and 100% in 12%–13% steps. Thus, dot-pair polarity affects performance without affecting relative axis salience, as was also found recently when dot pattern outlines were masked. All of the data indicate that although opposite dot polarity does reduce performance slightly, the symmetry-detection mechanism is remarkably resilient to such perturbation. The high false-alarm rate in the BW condition of experiment 1 may be accounted for by extremely salient global grouping of dots by luminance which effectively creates an integral stimulus which is perceptually difficult to break down into its component dot pairs, prohibiting the required point-by-point matching necessary to reject symmetry detection. The small detrimental effect of nonmatched polarity might be due to the polarity differences masking the grouping of dots into ‘clumps’ on either side of the axis, a process for which there is a great deal of independent evidence.
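
The point-by-point matching invoked in this account can be made concrete with a toy sketch: reflect each dot about the vertical axis, look for a partner within a tolerance, and record whether the partner has the same or opposite contrast polarity. The tolerance and the example pattern are illustrative assumptions, not the stimuli used in the experiments.

```python
# Toy point-by-point mirror matching for dot patterns with contrast polarity.
import numpy as np

def mirror_matches(xy, polarity, tol=0.01):
    """xy: (N, 2) dot positions centred on the vertical axis; polarity: +1/-1
    per dot. Returns counts of same- and opposite-polarity mirror pairs."""
    same = opposite = 0
    mirrored = xy * np.array([-1.0, 1.0])      # reflect x about the axis
    for i, m in enumerate(mirrored):
        d = np.linalg.norm(xy - m, axis=1)
        j = int(np.argmin(d))
        if j != i and d[j] < tol:
            same += int(polarity[i] == polarity[j])
            opposite += int(polarity[i] != polarity[j])
    return same // 2, opposite // 2            # each pair was counted twice

# Example: a small MA-style pattern (half black, half white, matched pairs).
rng = np.random.default_rng(0)
half = rng.uniform(0.05, 1.0, size=(10, 2))
xy = np.vstack([half, half * np.array([-1.0, 1.0])])
pol = np.concatenate([np.repeat([1, -1], 5)] * 2)
print(mirror_matches(xy, pol))                 # -> (10, 0) for this pattern
```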


Author(s):  
Sherif S. Ishak ◽  
Haitham M. Al-Deek

Pattern recognition techniques such as artificial neural networks continue to offer potential solutions to many of the existing problems associated with freeway incident-detection algorithms. This study focuses on the application of Fuzzy ART neural networks to incident detection on freeways. Unlike back-propagation models, Fuzzy ART is capable of fast, stable learning of recognition categories. It is an incremental approach that has the potential for on-line implementation. Fuzzy ART is trained with traffic patterns that are represented by 30-s loop-detector data of occupancy, speed, or a combination of both. Traffic patterns observed at the incident time and location are mapped to a group of categories. Each incident category maps incidents with similar traffic pattern characteristics, which are affected by the type and severity of the incident and the prevailing traffic conditions. Detection rate and false alarm rate are used to measure the performance of the Fuzzy ART algorithm. To reduce the false alarm rate that results from occasional misclassification of traffic patterns, a persistence time period of 3 min was arbitrarily selected. The algorithm performance improves when the temporal size of traffic patterns increases from one to two 30-s periods for all traffic parameters. An interesting finding is that the speed patterns produced better results than did the occupancy patterns. However, when combined, occupancy–speed patterns produced the best results. When compared with California algorithms 7 and 8, the Fuzzy ART model produced better performance.
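
Two details of this description lend themselves to a short sketch: complement coding of the scaled 30-s occupancy/speed inputs (standard Fuzzy ART preprocessing) and the 3-min persistence test (six 30-s periods) used to suppress transient false alarms. Everything beyond those two points is an assumption rather than the study's implementation.

```python
# Complement coding plus the 3-minute persistence filter described above.
import numpy as np

def complement_code(features: np.ndarray) -> np.ndarray:
    """Fuzzy ART complement coding; features must already be scaled to [0, 1]."""
    return np.concatenate([features, 1.0 - features])

PERSISTENCE_PERIODS = 6   # 3 min of 30-s intervals

def persistent_incident(flags):
    """Confirm an incident only if the last six 30-s classifications
    all flagged an incident-like traffic pattern."""
    recent = flags[-PERSISTENCE_PERIODS:]
    return len(recent) == PERSISTENCE_PERIODS and all(recent)

# Example: occupancy and speed for two consecutive 30-s periods, scaled to [0, 1].
pattern = complement_code(np.array([0.42, 0.15, 0.55, 0.10]))
alarm = persistent_incident([True, True, True, True, True, True])
```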


Author(s):  
Liang Kim Meng ◽  
Azira Khalil ◽  
Muhamad Hanif Ahmad Nizar ◽  
Maryam Kamarun Nisham ◽  
Belinda Pingguan-Murphy ◽  
...  

Background: Bone Age Assessment (BAA) refers to a clinical procedure that aims to identify a discrepancy between the biological and chronological age of an individual by assessing bone age growth. Currently, there are two main methods of performing BAA, known as the Greulich-Pyle and Tanner-Whitehouse techniques. Both involve a manual and qualitative assessment of hand and wrist radiographs, which introduces intra- and inter-operator variability and is time-consuming. Automatic segmentation can be applied to the radiographs, providing the physician with a more accurate delineation of the carpal bones and accurate quantitative analysis. Methods: In this study, we propose an image feature extraction technique based on image segmentation with a fully convolutional neural network with eight-pixel stride (FCN-8). A total of 290 radiographic images, including both female and male subjects aged 0 to 18, were manually segmented and used to train the FCN-8. Results and Conclusion: The results show a high training accuracy of 99.68% and a loss of 0.008619 over 50 epochs of training. The experiments compared 58 images against gold-standard ground truth images. The accuracy of our fully automated segmentation technique is 0.78 ± 0.06, 1.56 ± 0.30 mm and 98.02% in terms of Dice coefficient, Hausdorff distance, and overall qualitative carpal recognition accuracy, respectively.
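
The two quantitative metrics reported here can be computed as in the sketch below for a predicted carpal-bone mask against its ground truth. The masks are random placeholders, and scipy's directed_hausdorff returns a distance in pixels, so conversion to millimetres would require the (unstated) pixel spacing.

```python
# Dice coefficient and symmetric Hausdorff distance between two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    p = np.argwhere(pred)                      # foreground pixel coordinates
    g = np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

rng = np.random.default_rng(1)
pred_mask = rng.random((128, 128)) > 0.5       # placeholder predicted mask
gt_mask = rng.random((128, 128)) > 0.5         # placeholder ground-truth mask
print(dice_coefficient(pred_mask, gt_mask), hausdorff_distance(pred_mask, gt_mask))
```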

