Computer Vision-Based Detection for Delayed Fracture of Bolts in Steel Bridges

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jing Zhou ◽  
Linsheng Huo

The delayed fracture of high-strength bolts occurs frequently in the bolted connections of long-span steel bridges. This phenomenon can threaten the safety of structures and, in certain cases, even lead to serious accidents. However, the manual inspection commonly used in engineering to detect fractured bolts is time-consuming and inconvenient. Therefore, a computer vision-based inspection approach is proposed in this paper to rapidly and automatically detect fractured bolts. The proposed approach is realized by a convolutional neural network- (CNN-) based deep learning algorithm, the third version of You Only Look Once (YOLOv3). A challenge in training a YOLOv3 detector is that only a limited number of images of fractured bolts is available in practice. To address this challenge, five data augmentation methods are introduced to produce more labeled images: brightness transformation, Gaussian blur, flipping, perspective transformation, and scaling. Six YOLOv3 neural networks are trained on six differently augmented training sets, and the performance of each detector is then tested on the same testing set to compare the effectiveness of the augmentation methods. The highest average precision (AP) of the trained detectors is 89.14% when the intersection over union (IOU) threshold is set to 0.5. The practicality and robustness of the proposed method are further demonstrated on images that were never used in the training and testing of the detector. The results demonstrate that the proposed method can quickly and automatically detect the delayed fracture of high-strength bolts.
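The paper does not publish its augmentation code, so the following is only a rough sketch of how three of the five listed methods (brightness transformation, flipping with the matching label update, and Gaussian blur) might be implemented; all function names and the toy inputs are assumptions, and perspective transformation and scaling would typically use library routines such as OpenCV's warpPerspective and resize.

```python
import numpy as np

def adjust_brightness(img, delta):
    """Brightness transformation: shift every pixel by delta, clipped to [0, 255]."""
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def horizontal_flip(img):
    """Flipping: mirror the image left to right."""
    return img[:, ::-1]

def flip_box(box, width):
    """Mirror an (x_min, y_min, x_max, y_max) bolt label to match a flipped image."""
    x_min, y_min, x_max, y_max = box
    return (width - x_max, y_min, width - x_min, y_max)

def gaussian_blur(img, sigma=1.0, radius=2):
    """Gaussian blur via a separable 1D kernel applied along both axes."""
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, out)
    return np.rint(out).astype(np.uint8)
```

Because each transform changes only pixel values (or changes geometry in a way the label update mirrors), one annotated photograph of a fractured bolt can yield several labeled training images.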

2019 ◽  
Vol 8 (2) ◽  
pp. 1746-1750

Segmentation is an important stage in any computer vision system: it discards the objects that are not of interest and extracts only the object of interest. Automated segmentation becomes very difficult when the background is complex and other challenges, such as illumination changes and occlusion, are present. In this project, we design an automated segmentation system that uses a deep learning algorithm to segment images with complex backgrounds.


2021 ◽  
Vol 8 ◽  
Author(s):  
Castela Forte ◽  
Andrei Voinea ◽  
Malina Chichirau ◽  
Galiya Yeshmagambetova ◽  
Lea M. Albrecht ◽  
...  

Background: The inclusion of facial and bodily cues (clinical gestalt) in machine learning (ML) models improves the assessment of patients' health status, as shown in genetic syndromes and acute coronary syndrome. It is unknown whether the inclusion of clinical gestalt improves the ML-based classification of acutely ill patients. As in previous research on the ML analysis of medical images, simulated or augmented data may be used to assess the usability of clinical gestalt. Objective: To assess whether a deep learning algorithm trained on a dataset of simulated and augmented facial photographs reflecting acutely ill patients can distinguish between healthy and LPS-infused, acutely ill individuals. Methods: Photographs from twenty-six volunteers whose facial features were manipulated to resemble a state of acute illness were used to extract features of illness and generate a synthetic dataset of acutely ill photographs, using a neural transfer convolutional neural network (NT-CNN) for data augmentation. Four distinct CNNs were then trained on different parts of the facial photographs and concatenated into one final, stacked CNN that classified individuals as healthy or acutely ill. Finally, the stacked CNN was validated on an external dataset of volunteers injected with lipopolysaccharide (LPS). Results: In the external validation set, the four individual feature models distinguished acutely ill patients with sensitivities ranging from 10.5% (95% CI, 1.3–33.1%, for the skin model) to 89.4% (66.9–98.7%, for the nose model). Specificity ranged from 42.1% (20.3–66.5%) for the nose model to 94.7% (73.9–99.9%) for the skin model. The stacked model combining all four facial features achieved an area under the receiver operating characteristic curve (AUROC) of 0.67 (0.62–0.71) and distinguished acutely ill patients with a sensitivity of 100% (82.35–100.00%) and a specificity of 42.11% (20.25–66.50%). Conclusion: A deep learning algorithm trained on a synthetic, augmented dataset of facial photographs distinguished between healthy and simulated acutely ill individuals, demonstrating that synthetically generated data can be used to develop algorithms for health conditions in which large datasets are difficult to obtain. These results support the potential of facial feature analysis algorithms to support the diagnosis of acute illness.


Diabetologia ◽  
2021 ◽  
Author(s):  
Frank G. Preston ◽  
Yanda Meng ◽  
Jamie Burgess ◽  
Maryam Ferdousi ◽  
Shazli Azmi ◽  
...  

Aims/hypothesis We aimed to develop an artificial intelligence (AI)-based deep learning algorithm (DLA) that applies attribution methods, without image segmentation, to corneal confocal microscopy images and accurately classifies peripheral neuropathy (or its absence). Methods The AI-based DLA utilised convolutional neural networks with data augmentation to increase the algorithm's generalisability. The algorithm was trained using a high-end graphics processor for 300 epochs on 329 corneal nerve images and tested on 40 images (1 image/participant). Participants consisted of healthy volunteer (HV) participants (n = 90) and participants with type 1 diabetes (n = 88), type 2 diabetes (n = 141) and prediabetes (n = 50) (defined as impaired fasting glucose, impaired glucose tolerance or a combination of both), and were classified into HV, those without neuropathy (PN−) (n = 149) and those with neuropathy (PN+) (n = 130). For the AI-based DLA, a modified residual neural network based on ResNet-50 was developed and used to extract features from the images and perform the classification. The algorithm was tested on 40 participants (15 HV, 13 PN−, 12 PN+). The attribution methods gradient-weighted class activation mapping (Grad-CAM), Guided Grad-CAM and occlusion sensitivity displayed the areas within each image that had the greatest impact on the decision of the algorithm. Results The results were as follows: HV: recall of 1.0 (95% CI 1.0, 1.0), precision of 0.83 (95% CI 0.65, 1.0), F1-score of 0.91 (95% CI 0.79, 1.0); PN−: recall of 0.85 (95% CI 0.62, 1.0), precision of 0.92 (95% CI 0.73, 1.0), F1-score of 0.88 (95% CI 0.71, 1.0); PN+: recall of 0.83 (95% CI 0.58, 1.0), precision of 1.0 (95% CI 1.0, 1.0), F1-score of 0.91 (95% CI 0.74, 1.0). The features displayed by the attribution methods showed more corneal nerves in HV images, a reduction in corneal nerves in PN− images and an absence of corneal nerves in PN+ images.
Conclusions/interpretation We demonstrate promising results in the rapid classification of peripheral neuropathy using a single corneal image. A large-scale multicentre validation study is required to assess the utility of the AI-based DLA in screening and diagnostic programmes for diabetic neuropathy.
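Of the attribution methods mentioned in this abstract, occlusion sensitivity is the simplest to sketch: slide a masking patch over the image and record how much the class score drops at each position. The snippet below is only a minimal illustration with a stand-in scoring function; in the actual study the score would come from the trained ResNet-50, and the function and parameter names here are assumptions.

```python
import numpy as np

def occlusion_sensitivity(img, score_fn, patch=8, stride=8, fill=0):
    """Slide a patch of constant value over the image; a large drop in the
    class score where the patch sits marks a region the model relies on."""
    h, w = img.shape[:2]
    base = score_fn(img)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = img.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = base - score_fn(occluded)
    return heat

# Stand-in "classifier" that only looks at the top-left quadrant:
score_fn = lambda x: x[:8, :8].mean()
heatmap = occlusion_sensitivity(np.ones((16, 16)), score_fn)
```

In the toy run above, the heatmap peaks over the top-left quadrant, i.e. exactly the region the stand-in score function depends on, which is how the method exposes where corneal nerves influence the decision.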


2017 ◽  
Vol 108 ◽  
pp. 315-324 ◽  
Author(s):  
Víctor Campos ◽  
Francesc Sastre ◽  
Maurici Yagües ◽  
Míriam Bellver ◽  
Xavier Giró-i-Nieto ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Rahee Walambe ◽  
Aboli Marathe ◽  
Ketan Kotecha ◽  
George Ghinea

The computer vision systems driving autonomous vehicles are judged by their ability to detect objects and obstacles in the vicinity of the vehicle in diverse environments. Enhancing the ability of a self-driving car to distinguish between the elements of its environment under adverse conditions is an important challenge in computer vision. For example, poor weather conditions like fog and rain lead to image corruption, which can cause a drastic drop in object detection (OD) performance. The primary navigation of autonomous vehicles depends on the effectiveness of the image processing techniques applied to the data collected from various visual sensors. Therefore, it is essential to develop the capability to detect objects like vehicles and pedestrians under challenging conditions such as unpleasant weather. To solve this problem, ensembling multiple baseline deep learning models under different voting strategies for object detection, combined with data augmentation to boost the models' performance, is proposed. The data augmentation technique is particularly useful because it works with the limited training data typical of OD applications. Furthermore, using the baseline models significantly speeds up the OD process compared to custom models, owing to transfer learning. The ensembling approach can therefore be highly effective on resource-constrained devices deployed in autonomous vehicles under uncertain weather conditions. The applied techniques demonstrated an increase in accuracy over the baseline models, identifying objects in images captured in adverse foggy and rainy weather and reaching 32.75% mean average precision (mAP) and 52.56% average precision (AP) in detecting cars under the fog and rain conditions present in the dataset.
The effectiveness of multiple voting strategies for bounding box predictions on the dataset is also demonstrated. These strategies help increase the explainability of object detection in autonomous systems and improve the performance of the ensemble techniques over the baseline models.
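The abstract does not name the specific voting strategies used, so the sketch below assumes the affirmative/consensus/unanimous scheme that is common in object-detection ensembling: overlapping boxes from different detectors are grouped by IoU, and a group is kept when enough distinct models contributed to it. All names and thresholds here are illustrative assumptions.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def vote(detections, n_models, strategy="consensus", iou_thr=0.5):
    """detections: list of (model_id, box). 'affirmative' keeps a box group if
    any model proposed it, 'consensus' requires a majority, 'unanimous' all."""
    need = {"affirmative": 1,
            "consensus": n_models // 2 + 1,
            "unanimous": n_models}[strategy]
    groups = []  # each entry: (member boxes, set of contributing model ids)
    for mid, box in detections:
        for boxes, models in groups:
            if iou(boxes[0], box) >= iou_thr:
                boxes.append(box)
                models.add(mid)
                break
        else:
            groups.append(([box], {mid}))
    # accepted groups are fused by averaging their member boxes
    return [tuple(np.mean(boxes, axis=0)) for boxes, models in groups
            if len(models) >= need]
```

Affirmative voting maximizes recall (any single detector can add a box), while unanimous voting maximizes precision; consensus sits between the two, which is one way such strategies trade off detection performance in degraded weather.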


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5598
Author(s):  
Jiaqi Li ◽  
Xuefeng Zhao ◽  
Guangyi Zhou ◽  
Mingyuan Zhang ◽  
Dongfang Li ◽  
...  

With the rapid development of deep learning, computer vision has assisted in solving a variety of problems in engineering construction. However, very few computer vision-based approaches have been proposed for evaluating work productivity. Therefore, taking a super high-rise project as a research case and using the object information obtained by a deep learning detector, a computer vision-based method for evaluating the productivity of assembling reinforcement is proposed. Firstly, a detector based on CenterNet that can accurately distinguish the various entities involved in assembling reinforcement is established, with DLA-34 selected as the backbone. The mAP reaches 0.9682, and the time to detect a single image can be as low as 0.076 s. Secondly, the trained detector is applied to the video frames, yielding images with detection boxes and documents with coordinates. The position relationship between the detected work objects and the detected workers is used to determine how many workers (N) have participated in the task, and the time (T) taken to perform the process is obtained from the change in the work object's coordinates. Finally, the productivity is evaluated according to N and T. The authors use four actual construction videos for validation, and the results show that the productivity evaluation is generally consistent with the actual conditions. The contribution of this research to construction management is twofold: on the one hand, a connection between construction individuals and the work object is established without affecting the normal behavior of workers, and work productivity evaluation is realized; on the other hand, the proposed method has a positive effect on improving the efficiency of construction management.
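The step from detections to a productivity figure can be sketched very simply. The abstract does not give the exact position rule or productivity formula, so the overlap test, the units-per-worker-hour metric, and all names below are assumptions for illustration only.

```python
def boxes_intersect(a, b):
    """True when two (x_min, y_min, x_max, y_max) detection boxes overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def count_workers(worker_boxes, work_object_box):
    """N: number of detected workers whose box overlaps the work object's box,
    taken here as the workers participating in the task."""
    return sum(boxes_intersect(w, work_object_box) for w in worker_boxes)

def productivity(n_workers, duration_s, units_done=1.0):
    """One possible productivity metric: units of work per worker-hour,
    computed from N and the task duration T (in seconds)."""
    return units_done / (n_workers * duration_s / 3600.0)

# Example: two of three detected workers overlap the reinforcement cage,
# and the task (one unit of work) took two hours.
n = count_workers([(0, 0, 2, 2), (1, 1, 3, 3), (10, 10, 12, 12)], (0, 0, 4, 4))
rate = productivity(n, 7200)
```

T itself would come from the video timestamps of the frames in which the work object's coordinates start and stop changing, which the detector's per-frame output makes straightforward to extract.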


2021 ◽  
Vol 11 (19) ◽  
pp. 8791
Author(s):  
Ji-Hun Kim ◽  
Yong-Cheol Mo ◽  
Seung-Myung Choi ◽  
Youk Hyun ◽  
Jung Woo Lee

Ankle fractures are common and, compared to other injuries, tend to be overlooked in the emergency department. We aim to develop a deep learning algorithm that can detect not only definite fractures but also obscure ones. We collected data from 1226 patients with suspected ankle fractures, each of whom underwent both X-rays and CT scans. Using anteroposterior (AP) and lateral ankle X-rays of 1040 patients with fractures and 186 normal patients, we developed a deep learning model. The training, validation, and test datasets were split in a 3/1/1 ratio. Data augmentation and under-sampling techniques were applied as part of the preprocessing. The Inception V3 model was utilized for image classification, and its performance was evaluated using a confusion matrix and the area under the receiver operating characteristic curve (AUC-ROC). The best accuracy and AUC values were 83%/0.91 for the AP trials and 90%/0.95 for the lateral trials; the mean values were 83%/0.89 for AP and 83%/0.9 for lateral. The reliable dataset enabled the CNN model to achieve higher accuracy than in past studies.
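With 1040 fracture cases against 186 normal cases, the under-sampling step matters: without it a classifier can score well by always predicting "fracture". The abstract does not specify the exact technique, so the sketch below shows plain random under-sampling as one standard option; the function name and label strings are assumptions.

```python
import random

def undersample(images, labels, seed=0):
    """Random under-sampling: trim every class down to the size of the rarest
    class so the model cannot win by always predicting the majority label."""
    rng = random.Random(seed)
    by_label = {}
    for img, lab in zip(images, labels):
        by_label.setdefault(lab, []).append(img)
    n = min(len(v) for v in by_label.values())
    balanced = [(img, lab) for lab, imgs in by_label.items()
                for img in rng.sample(imgs, n)]
    rng.shuffle(balanced)
    return balanced

# Toy imbalance mimicking the study's fracture/normal ratio:
data = undersample(list(range(13)), ["fracture"] * 10 + ["normal"] * 3)
```

Under-sampling discards majority-class images, which is why it is typically paired, as here, with data augmentation that replenishes the training set.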

