A transfer learning object detection model for defects detection in X-ray images of spacecraft composite structures

2022 ◽  
pp. 115136
Author(s):  
Yanfeng Gong ◽  
Jun Luo ◽  
Hongliang Shao ◽  
Zhixue Li
2020 ◽  
Vol 13 (1) ◽  
pp. 23
Author(s):  
Wei Zhao ◽  
William Yamada ◽  
Tianxin Li ◽  
Matthew Digman ◽  
Troy Runge

In recent years, precision agriculture has been researched as a promising means of increasing crop production with fewer inputs, to meet the growing demand for agricultural products. Computer vision-based crop detection with unmanned aerial vehicle (UAV)-acquired images is a critical tool for precision agriculture. However, object detection using deep learning algorithms relies on a significant amount of manually prelabeled training data as ground truth. Field object detection, such as bale detection, is especially difficult because of (1) long-period image acquisition under different illumination conditions and seasons; (2) limited existing prelabeled data; and (3) few pretrained models and little prior research to use as references. This work increases bale detection accuracy with limited data collection and labeling by building an innovative algorithm pipeline. First, an object detection model is trained using 243 images captured under good illumination conditions in fall from croplands. Next, domain adaptation (DA), a kind of transfer learning, is applied to synthesize training data under diverse environmental conditions with automatic labels. Finally, the object detection model is optimized with the synthesized datasets. The case study shows the proposed method improves bale detection performance, raising the recall, mean average precision (mAP), and F measure (F1 score) from averages of 0.59, 0.7, and 0.7 (object detection alone) to averages of 0.93, 0.94, and 0.89 (object detection + DA), respectively. This approach could easily be scaled to many other crop field objects and will significantly contribute to precision agriculture.
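The gains above are stated in terms of recall, mAP, and F1. As a reminder of how these detection metrics relate, here is a minimal stdlib-only sketch; the true/false positive counts are hypothetical, chosen only to reproduce a recall of 0.93 like the reported detector-plus-DA result:

```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall, and F1 score from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical counts: 93 bales found, 12 false alarms, 7 bales missed.
p, r, f1 = detection_metrics(tp=93, fp=12, fn=7)
print(round(r, 2), round(f1, 2))  # -> 0.93 0.91
```

F1 is the harmonic mean of precision and recall, which is why the paper can report it alongside the other two as a single balanced summary.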


Author(s):  
Reagan L. Galvez ◽  
Elmer P. Dadios ◽  
Argel A. Bandala ◽  
Ryan Rhay P. Vicerra

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 178699-178709
Author(s):  
Yu Quan ◽  
Zhixin Li ◽  
Canlong Zhang ◽  
Huifang Ma

Author(s):  
Ryan Motley ◽  
Andrew L Fielding ◽  
Prabhakar Ramachandran

Abstract Purpose The aim of this study was to assess the feasibility of developing and training a deep learning object detection model for automating the assessment of fiducial marker migration and tracking of the prostate in radiotherapy patients. Methods and Materials A fiducial marker detection model was trained on the YOLO v2 detection framework using approximately 20,000 pelvis kV projection images with fiducial markers labelled. The ability of the trained model to detect marker positions was validated by tracking the motion of markers in a respiratory phantom and comparing detection data with the expected displacement from a reference position. Marker migration was then assessed in 14 prostate radiotherapy patients using the detector, for comparison with previously conducted studies. This was done by determining variations in intermarker distance between the first and subsequent fractions in each patient. Results On completion of training, a detection model was developed that operated at a 96% detection efficacy and with a root mean square error of 0.3 pixels. By comparing the displacement from a reference position in a respiratory phantom, measured experimentally and by the detector, it was found that the detector was able to compute displacements with a mean accuracy of 97.8% relative to the actual values. Interfraction marker migration was measured in 14 patients, and the average and maximum marker migration (± standard deviation) were found to be 2.0±0.9 mm and 2.3±0.9 mm, respectively. Conclusion This study demonstrates the benefits of pairing deep learning object detection with image-guided radiotherapy, and how a workflow can be developed to automate the assessment of organ motion and seed migration during prostate radiotherapy. The high detection efficacy and low error demonstrate the advantages of using a pre-trained model to automate the assessment of target volume positional variation and the migration of fiducial markers between fractions.
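The migration analysis described above reduces to comparing intermarker distances between the first and a later fraction. A minimal sketch of that geometric step, using hypothetical 2D marker coordinates in millimetres (the study works with kV projection images; the exact geometry is not specified in the abstract):

```python
import math

def intermarker_distances(markers):
    """Pairwise Euclidean distances between fiducial marker positions (x, y in mm)."""
    dists = []
    for i in range(len(markers)):
        for j in range(i + 1, len(markers)):
            dx = markers[i][0] - markers[j][0]
            dy = markers[i][1] - markers[j][1]
            dists.append(math.hypot(dx, dy))
    return dists

def max_migration(fraction1, fraction_n):
    """Largest change in any intermarker distance between two fractions."""
    d1 = intermarker_distances(fraction1)
    dn = intermarker_distances(fraction_n)
    return max(abs(a - b) for a, b in zip(d1, dn))

# Hypothetical positions of three markers in two fractions of one patient.
first = [(0.0, 0.0), (30.0, 0.0), (15.0, 25.0)]
later = [(0.0, 0.0), (31.5, 0.0), (15.0, 26.0)]
print(round(max_migration(first, later), 2))  # -> 1.64
```

Using intermarker distance rather than absolute position makes the migration measure insensitive to whole-organ motion and patient setup differences between fractions, which is why the study frames migration this way.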


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Sanam Narejo ◽  
Bishwajeet Pandey ◽  
Doris Esenarro vargas ◽  
Ciro Rodriguez ◽  
M. Rizwan Anjum

Every year, a large part of the world's population is affected by gun-related violence. In this work, we develop a computer-based, fully automated system to identify basic armaments, particularly handguns and rifles. Recent work in the fields of deep learning and transfer learning has demonstrated significant progress in object detection and recognition. We have implemented the YOLO V3 ("You Only Look Once") object detection model by training it on our customized dataset. The training results confirm that YOLO V3 outperforms YOLO V2 and a traditional convolutional neural network (CNN). Additionally, intensive GPUs or other high-computation resources were not required in our approach, as we used transfer learning to train our model. Applying this model in our surveillance system, we can attempt to save human lives and reduce the rate of manslaughter and mass killings. Our proposed system can also be implemented in high-end surveillance and security robots to detect weapons or unsafe assets and avoid any kind of assault or risk to human life.
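YOLO-family detectors like the YOLO V3 model used here emit many overlapping candidate boxes per object, which are pruned by non-maximum suppression (NMS). The following is a generic sketch of that standard post-processing step, not the authors' code; the boxes and scores are invented for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box in each cluster of overlapping detections."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

# Two overlapping "handgun" candidates and one separate "rifle" candidate.
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 160, 140)]
scores = [0.9, 0.75, 0.8]
print(nms(boxes, scores))  # -> [0, 2]
```

The duplicate handgun box (index 1) overlaps the higher-scoring box 0 above the IoU threshold and is suppressed, leaving one detection per weapon.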


Author(s):  
Pritam Ghosh ◽  
Subhranil Mustafi ◽  
Satyendra Nath Mandal

In this paper an attempt has been made to identify six different goat breeds from purebred goat images. The images were captured at different organized, registered goat farms in India; almost two thousand digital images of individual goats were captured in restricted (to get a similar image background) and unrestricted (natural) environments without imposing stress on the animals. A pre-trained deep learning-based object detection model called Faster R-CNN has been fine-tuned using transfer learning on the acquired images for automatic classification and localization of goat breeds. The fine-tuned model is able to locate the goat in the image (localization) and classify its breed (identification). The Pascal VOC object detection evaluation metrics have been used to evaluate this model. Finally, its prediction accuracy has been compared with that of other technologies used for animal breed identification.
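The Pascal VOC evaluation mentioned above centres on average precision (AP); the classic VOC 2007 protocol averages interpolated precision at 11 evenly spaced recall levels. A minimal sketch of that computation, with hypothetical precision-recall points rather than the paper's data:

```python
def voc_ap_11point(recalls, precisions):
    """Pascal VOC 2007-style average precision: mean of interpolated
    precision at 11 evenly spaced recall levels (0.0, 0.1, ..., 1.0)."""
    ap = 0.0
    for t in [i / 10 for i in range(11)]:
        # Interpolated precision: the best precision achieved at recall >= t.
        p = max((p for r, p in zip(recalls, precisions) if r >= t), default=0.0)
        ap += p / 11
    return ap

# Hypothetical precision-recall points from one breed class of a detector.
recalls = [0.1, 0.4, 0.7, 1.0]
precisions = [1.0, 0.8, 0.6, 0.5]
print(round(voc_ap_11point(recalls, precisions), 3))  # -> 0.7
```

Mean AP across the six breed classes would then give the single mAP figure typically reported under the VOC metrics.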


Author(s):  
Jiun-In Guo ◽  
Chia-Chi Tsai ◽  
Yong-Hsiang Yang ◽  
Hung-Wei Lin ◽  
Bo-Xun Wu ◽  
...  
