Casting defect detection in X-ray images using convolutional neural networks and attention-guided data augmentation

Measurement ◽  
2021 ◽  
Vol 170 ◽  
pp. 108736
Author(s):  
Lili Jiang ◽  
Yongxiong Wang ◽  
Zhenhui Tang ◽  
Yinlong Miao ◽  
Shuyi Chen


2021 ◽  
Vol 11 (1) ◽  
pp. 28
Author(s):  
Ivan Lorencin ◽  
Sandi Baressi Šegota ◽  
Nikola Anđelić ◽  
Anđela Blagojević ◽  
Tijana Šušteršić ◽  
...  

COVID-19 represents one of the greatest challenges in modern history. Its impact is most noticeable in the health care system, mostly due to the accelerated and increased influx of patients with a more severe clinical picture, which increases the pressure on health systems. For this reason, the aim is to automate the process of diagnosis and treatment. The research presented in this article examines the possibility of classifying the clinical picture of a patient using X-ray images and convolutional neural networks (CNNs). The research was conducted on a dataset of 185 images divided into four classes. Because of the small number of images, a data augmentation procedure was performed. Multiple CNNs were designed in order to find the architecture with the highest classification performance. The results show that the best performance is achieved with ResNet152, which reached mean macro-averaged and micro-averaged AUC values (AUC_macro and AUC_micro) of up to 0.94, suggesting that CNNs can be applied to classify the clinical picture of COVID-19 patients from lung X-ray images. When higher layers are frozen during the training procedure, higher AUC_macro and AUC_micro values are achieved. With ResNet152, values of up to 0.96 are reached when all layers except the last 12 are frozen during training.
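
Below is a minimal PyTorch sketch of the transfer-learning setup described above, in which an ImageNet-pretrained ResNet152 is adapted to the four-class problem and all but the last parameter groups are frozen. The mapping of "layers" to parameter tensors, the optimizer, and the learning rate are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet152 and replace the classifier head
# for the four clinical-picture classes described in the abstract.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)

# Freeze everything except the last 12 parameter tensors (the abstract
# reports the best results when all layers except the last 12 are frozen;
# equating "layer" with a parameter tensor here is an assumption).
params = list(model.parameters())
for p in params[:-12]:
    p.requires_grad = False

# Only the unfrozen parameters are passed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```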


Author(s):  
Sarah Badr AlSumairi ◽  
Mohamed Maher Ben Ismail

Pneumonia is an infectious disease of the lungs. About one third to one half of pneumonia cases are caused by bacteria. Early diagnosis is a critical factor for successful treatment. Typically, the disease is diagnosed by a radiologist using chest X-ray images; in fact, chest X-rays are currently the best available method for diagnosing pneumonia. However, recognizing pneumonia symptoms is a challenging task that relies on the availability of expert radiologists. Such "human" diagnosis can be inaccurate and subjective because of unclear images and erroneous decisions, and the error rate can increase further when a physician is asked to analyze tens of X-rays within a short period of time. Therefore, Computer-Aided Diagnosis (CAD) systems were introduced to support and assist physicians and make their efforts more productive. In this paper, we investigate, design, implement and assess customized Convolutional Neural Networks to address the image-based pneumonia classification problem. Specifically, ResNet-50 and DenseNet-161 models were adapted into customized deep network architectures to improve the overall pneumonia classification accuracy. Moreover, data augmentation was applied to standard datasets to assess the proposed models, and standard performance measures were used to validate and evaluate the proposed system.
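
As an illustration of the customization described above, the following PyTorch sketch attaches a small classification head to a pretrained DenseNet-161 or ResNet-50 backbone for binary pneumonia classification. The head layout, layer sizes, and dropout rate are assumptions, not the paper's architecture.

```python
import torch.nn as nn
from torchvision import models

def build_pneumonia_classifier(backbone: str = "densenet161",
                               num_classes: int = 2) -> nn.Module:
    """Attach a custom head to a pretrained backbone (illustrative only)."""
    if backbone == "densenet161":
        model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
        in_features = model.classifier.in_features
        model.classifier = nn.Sequential(
            nn.Linear(in_features, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )
    else:  # "resnet50"
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        in_features = model.fc.in_features
        model.fc = nn.Sequential(
            nn.Linear(in_features, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )
    return model
```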


2021 ◽  
Author(s):  
Sandi Baressi Šegota ◽  
Simon Lysdahlgaard ◽  
Søren Hess ◽  
Ronald Antulov

The fact that Artificial Intelligence (AI) based algorithms achieve high performance on image classification tasks has been shown many times. Still, certain issues exist with the application of machine learning (ML) artificial neural network (ANN) algorithms. The best known is the need for a large amount of statistically varied data, which can be addressed with expanded collection or data augmentation. Other issues are also present. Convolutional neural networks (CNNs) show extremely high performance on image-shaped data. Despite this, CNNs suffer from a significant issue: sensitivity to image orientation. Previous research shows that varying the orientation of images may greatly lower the performance of the trained CNN. This is especially problematic in certain applications, such as X-ray radiography, an example of which is presented here. Previous research shows that the performance of CNNs is higher when they are used on images in a single orientation (left or right), as opposed to a combination of both, which means that the data needs to be separated by orientation before it enters the classification model. In this paper, a CNN-based model for differentiating between left- and right-oriented images is presented. Multiple CNNs are trained and tested, the highest performing being the VGG16 architecture, which achieved an accuracy of 0.99 (+/- 0.01) and an AUC of 0.98 (+/- 0.01). These results show that CNNs can address the orientation-sensitivity issue by splitting the data before it is used in classification models.
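
A hedged PyTorch sketch of a VGG16-based orientation classifier that routes radiographs by orientation before they reach downstream models, in the spirit of the approach above. The label convention (0 = left, 1 = right), the preprocessing, and the inference helper are illustrative assumptions, not the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# VGG16 backbone with a two-way head: class 0 = left-oriented,
# class 1 = right-oriented (label convention assumed for illustration).
orientation_net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
orientation_net.classifier[6] = nn.Linear(
    orientation_net.classifier[6].in_features, 2
)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def route_by_orientation(image) -> str:
    """Return 'left' or 'right' for a PIL image so that downstream
    classifiers only ever see a single orientation."""
    orientation_net.eval()
    logits = orientation_net(preprocess(image).unsqueeze(0))
    return "left" if logits.argmax(dim=1).item() == 0 else "right"
```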


2021 ◽  
Vol 11 (23) ◽  
pp. 11185
Author(s):  
Zhi-Peng Jiang ◽  
Yi-Yang Liu ◽  
Zhen-En Shao ◽  
Ko-Wei Huang

Image recognition has been applied to many fields, but relatively rarely to medical images. Recent significant progress in deep learning for image recognition has raised strong research interest in medical image recognition. First, we examined the failed predictions of the VGG16 model on pneumonia X-ray images. This paper therefore proposes IVGG13 (Improved Visual Geometry Group-13), a modified VGG16 model for classifying pneumonia X-ray images. Open-source thoracic X-ray images acquired from the Kaggle platform were employed for pneumonia recognition, but only a few images were obtained and the datasets were unbalanced after classification, either of which can result in extremely poor recognition by trained neural network models. Therefore, we applied augmentation pre-processing to compensate for the low data volume and the poorly balanced datasets. The original datasets without data augmentation were trained using the proposed model and several well-known convolutional neural networks, such as LeNet, AlexNet, GoogLeNet and VGG16. In the experiments, the recognition rates and other evaluation criteria, such as precision, recall and F-measure, were evaluated for each model. The process was repeated for the augmented and balanced datasets, with greatly improved metrics such as precision, recall and F1-measure. The proposed IVGG13 model produced a superior F1-measure compared with the current best-practice convolutional neural networks for medical image recognition, confirming that data augmentation effectively improves model accuracy.
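
The following torchvision sketch illustrates the kind of augmentation and class-balancing pre-processing described above for a small, unbalanced chest X-ray dataset. The specific transforms, magnitudes, directory layout, and oversampling scheme are assumptions, not the paper's exact settings.

```python
import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader, WeightedRandomSampler

# Illustrative augmentation for chest X-rays; transform choices and
# magnitudes are assumptions. Horizontal flips are deliberately avoided,
# since orientation can carry diagnostic meaning in radiographs.
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.RandomRotation(10),
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

# Hypothetical Kaggle-style folder layout: chest_xray/train/<class>/<image>
train_set = datasets.ImageFolder("chest_xray/train", transform=train_transform)

# Oversample the minority class so each batch is roughly balanced.
class_counts = torch.bincount(torch.tensor(train_set.targets))
sample_weights = 1.0 / class_counts[train_set.targets]
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(train_set),
                                replacement=True)
train_loader = DataLoader(train_set, batch_size=32, sampler=sampler)
```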


2020 ◽  
Author(s):  
Leonardo Rodrigues ◽  
Larissa Rodrigues ◽  
Danilo Da Silva ◽  
João Fernando Mari

The Coronavirus Disease 2019 (COVID-19) pandemic spread rapidly across the globe, impacting the lives of billions of people. Effective screening of infected patients is a critical step in fighting COVID-19 and in treating patients while avoiding rapid spread of the disease. The need for automated and scalable methods has increased due to the unavailability of accurate automated toolkits. Recent research using chest X-ray images suggests that they contain relevant information about the COVID-19 virus. Hence, applying machine learning techniques combined with radiological imaging promises accurate identification of this disease. These images are also straightforward to collect, since they are widely shared and analyzed around the world. This paper presents a method for automatic COVID-19 detection from chest X-ray images using four convolutional neural networks, namely AlexNet, VGG-11, SqueezeNet, and DenseNet-121. The method provides accurate diagnostics for binary (COVID-19 positive or negative) classification. We validate our experiments using a ten-fold cross-validation procedure over the training and test sets. Our findings show that shallow fine-tuning and data augmentation strategies can help deal with the low number of publicly available positive COVID-19 images. The accuracy for all CNNs is higher than 97.00%, and the SqueezeNet model achieved the best result with 99.20%.
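
A sketch of a ten-fold cross-validation loop with shallow fine-tuning (frozen feature extractor, re-initialized classifier) for one of the four networks, SqueezeNet. The fold handling, hyperparameters, and this particular reading of "shallow fine-tuning" are assumptions, not the authors' code.

```python
import torch.nn as nn
from torchvision import models
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import DataLoader, Subset

def make_squeezenet(num_classes: int = 2) -> nn.Module:
    """SqueezeNet with frozen features and a re-initialized classifier
    (one reading of 'shallow fine-tuning'; an assumption, not the paper's code)."""
    model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False
    model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    return model

def cross_validate(dataset, labels, k: int = 10):
    """Yield one fresh model plus train/test loaders per stratified fold."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    for fold, (train_idx, test_idx) in enumerate(skf.split(labels, labels)):
        model = make_squeezenet()
        train_loader = DataLoader(Subset(dataset, train_idx), batch_size=32, shuffle=True)
        test_loader = DataLoader(Subset(dataset, test_idx), batch_size=32)
        # ... train on train_loader, then evaluate accuracy on test_loader ...
        yield fold, model, train_loader, test_loader
```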


Author(s):  
Zhi-Hao Chen ◽  
Jyh-Ching Juang

To ensure safety in aircraft flight, we aim to use deep learning methods for non-destructive examination with multiple defect-detection paradigms applied to X-ray images. A model driven by the Fast Region-based Convolutional Neural Network (Fast R-CNN) is used to augment and improve existing automated Non-Destructive Testing (NDT) diagnosis. Within the context of X-ray screening, the limited number and insufficient variety of X-ray samples of aeronautic engine defects pose a further problem for training a model to detect multiple defect types accurately. To overcome this issue, we employ transfer learning, a deep learning paradigm, to tackle both single and multiple detection. Overall, the AE-RTISNet, retrained on 8 types of defects, achieves more than 90% accuracy. The networks are built with the Caffe framework to track detections over multiple Fast R-CNN stages. We consider that AE-RTISNet provides better results than the more traditional multiple Fast R-CNN approaches, is simpler to translate to C++ code, and can be installed on the Jetson™ TX2 embedded computer. All images are stored in LMDB format with an input size of 640 × 480 pixels. The result achieves a mean average precision (mAP) of 0.9 on the 8-class material defect problem and requires approximately 100 microseconds.
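
The detector above is a Caffe-based Fast R-CNN variant (AE-RTISNet), which is not reproduced here. Purely as a rough analogue of the transfer-learning idea, the following torchvision sketch re-heads a COCO-pretrained Faster R-CNN for an 8-class defect problem; it is not the authors' implementation, and all names below are illustrative.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_DEFECT_CLASSES = 8  # eight defect types; background is added separately

# COCO-pretrained detector whose box-prediction head is replaced so the
# model outputs the defect classes instead of the COCO categories.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.COCO_V1
)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features,
                                                  NUM_DEFECT_CLASSES + 1)
```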


2021 ◽  
Vol 2 (5) ◽  
Author(s):  
Netzahualcoyotl Hernandez-Cruz ◽  
David Cato ◽  
Jesus Favela

Coronavirus disease 2019 (COVID-19) has accounted for millions of casualties. While it affects not only individuals but also our collective healthcare and economic systems, testing is insufficient and costly, hampering efforts to deal with the pandemic. Chest X-rays are routine radiographic imaging tests used for the diagnosis of respiratory conditions such as pneumonia and COVID-19. Convolutional neural networks have shown promise in classifying X-rays to assist in the diagnosis of such conditions; however, achieving the robust performance demanded by most modern medical applications typically requires a large number of samples. While there exist datasets containing thousands of X-ray images of patients with healthy and pneumonia diagnoses, because COVID-19 is such a recent phenomenon, there are relatively few confirmed COVID-19-positive chest X-rays openly available to the research community. In this paper, we demonstrate the effectiveness of a cycle-generative adversarial network, commonly used for neural style transfer, as a way to augment COVID-19-negative X-ray images to look like COVID-19-positive images, thereby increasing the number of COVID-19-positive training samples. The statistical results show an increase of over 21% in the mean macro F1-score, with a one-tailed t-score of 2.68 and a p-value of 0.01, accepting our alternative hypothesis at α = 0.05. We conclude that this approach, when used in conjunction with standard transfer-learning techniques, is effective at improving the performance of COVID-19 classifiers for a variety of common convolutional neural networks.
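
A hedged sketch of how a trained CycleGAN generator mapping COVID-19-negative X-rays to a COVID-19-positive appearance could be applied to synthesize additional positive-class training images, as described above. The generator checkpoint, file layout, and preprocessing are hypothetical placeholders; the paper's CycleGAN implementation details are not given here.

```python
import torch
from pathlib import Path
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

# `G_neg2pos` stands in for the trained CycleGAN generator (negative -> positive
# appearance); the TorchScript checkpoint name is a hypothetical placeholder.
G_neg2pos = torch.jit.load("generator_neg2pos.pt").eval()

to_tensor = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # CycleGAN-style [-1, 1] scaling
])

@torch.no_grad()
def synthesize_positive(src_dir: str, dst_dir: str) -> None:
    """Translate each negative X-ray into a synthetic positive-style image
    that can be added to the positive training set."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        x = to_tensor(Image.open(path)).unsqueeze(0)
        fake_pos = G_neg2pos(x).clamp(-1, 1) * 0.5 + 0.5  # back to [0, 1]
        save_image(fake_pos, Path(dst_dir) / f"synthetic_{path.name}")
```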

