2D–3D reconstruction of distal forearm bone from actual X-ray images of the wrist using convolutional neural networks

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ryoya Shiode ◽  
Mototaka Kabashima ◽  
Yuta Hiasa ◽  
Kunihiro Oka ◽  
Tsuyoshi Murase ◽  
...  

Abstract: The purpose of the study was to develop a deep learning network for estimating and constructing highly accurate 3D bone models directly from actual X-ray images and to verify its accuracy. The data comprised 173 computed tomography (CT) images and 105 actual X-ray images of healthy wrist joints. To compensate for the small size of the dataset, digitally reconstructed radiography (DRR) images generated from CT were used as training data instead of actual X-ray images. At test time, DRR-like images were generated from the actual X-ray images and input to the network, enabling high-accuracy estimation of a 3D bone model from a small dataset. The 3D shapes of the radius and ulna were estimated from actual X-ray images with accuracies of 1.05 ± 0.36 mm and 1.45 ± 0.41 mm, respectively.
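As a rough illustration of the DRR idea described in this abstract, the sketch below sums CT attenuation along one axis to form a radiograph-like image. It is a parallel-ray approximation only; the variable names, the Hounsfield-unit shift, and the intensity mapping are assumptions, and the paper's actual DRR generation (projection geometry, calibration) is not specified in the abstract.

```python
import numpy as np

def drr_like_projection(ct_volume, axis=1):
    """Very simplified DRR: integrate CT attenuation along one axis
    (parallel-ray approximation) and map the result to an 8-bit image."""
    mu = np.clip(ct_volume + 1000.0, 0, None)          # rough HU -> attenuation surrogate
    line_integrals = mu.sum(axis=axis)                  # parallel projection
    drr = 1.0 - np.exp(-line_integrals / line_integrals.max())  # Beer-Lambert-style contrast
    drr = (drr - drr.min()) / (np.ptp(drr) + 1e-8)      # normalize to [0, 1]
    return (255 * drr).astype(np.uint8)

# Hypothetical usage: ct is a (slices, rows, cols) array in Hounsfield units.
# image = drr_like_projection(ct, axis=1)
```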

Author(s):  
Lawrence Hall ◽  
Dmitry Goldgof ◽  
Rahul Paul ◽  
Gregory M. Goldgof

Testing for COVID-19 has been unable to keep up with demand. Furthermore, the false negative rate is projected to be as high as 30%, and test results can take some time to obtain. X-ray machines are widely available and provide images for diagnosis quickly. This paper explores how useful chest X-ray images can be in diagnosing COVID-19. We obtained 135 chest X-rays of COVID-19 and 320 chest X-rays of viral and bacterial pneumonia. A pre-trained deep convolutional neural network, ResNet50, was tuned on 102 COVID-19 cases and 102 other pneumonia cases in 10-fold cross-validation, yielding an overall accuracy of 89.2%, a COVID-19 true positive rate of 0.8039, and an AUC of 0.95. Pre-trained ResNet50 and VGG16, plus our own small CNN, were tuned or trained on a balanced set of COVID-19 and pneumonia chest X-rays. An ensemble of the three types of CNN classifiers was applied to a test set of 33 unseen COVID-19 and 218 pneumonia cases. The overall accuracy was 91.24%, with a true positive rate for COVID-19 of 0.7879 and 6.88% false positives, for a true negative rate of 0.9312 and an AUC of 0.94. This preliminary study has flaws, most critically a lack of information about where in the disease process the COVID-19 cases were and the small dataset size. More COVID-19 case images at good resolution will enable a better answer to the question of how useful chest X-rays can be for diagnosing COVID-19.
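A minimal sketch of the kind of fine-tuning described above, not the authors' exact pipeline: a pre-trained ResNet50 with a new binary head for COVID-19 vs. pneumonia chest X-rays in tf.keras. The directory name, image size, batch size, and epoch count are placeholders.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)

# Hypothetical directory layout: covid_vs_pneumonia/{covid,pneumonia}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "covid_vs_pneumonia", image_size=IMG_SIZE, batch_size=16, label_mode="binary")

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # transfer learning: tune only the new head first

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # COVID-19 vs. pneumonia
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, epochs=5)
```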


2016 ◽  
Vol 14 (03) ◽  
pp. 1642002 ◽  
Author(s):  
Bahar Akbal-Delibas ◽  
Roshanak Farhoodi ◽  
Marc Pomplun ◽  
Nurit Haspel

One of the major challenges for protein docking methods is to accurately discriminate native-like structures from false positives. Docking methods are often inaccurate, and the results have to be refined and re-ranked to obtain native-like complexes and remove outliers. In a previous work, we introduced AccuRefiner, a machine-learning-based tool for refining protein–protein complexes. Given a docked complex, the refinement tool produces a small set of refined versions of the input complex with lower root-mean-square deviation (RMSD) of atomic positions with respect to the native structure. The method employs a unique ranking tool that accurately predicts the RMSD of docked complexes with respect to the native structure. In this work, we use a deep learning network with a similar set of features and five layers. We show that a properly trained deep learning network can accurately predict the RMSD of a docked complex with a 1.40 Å error margin on average, by approximating the complex relationship between a wide set of scoring-function terms and the RMSD of a docked structure. The network was trained on 35,000 unbound docking complexes generated by RosettaDock. We tested our method on 25 different putative docked complexes, also produced by RosettaDock, for five proteins that were not included in the training data. The results demonstrate that the high accuracy of the ranking tool enables AccuRefiner to consistently choose refinement candidates with lower RMSD values compared to the coarsely docked input structures.
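A small sketch of the sort of regressor the abstract describes: a fully connected network mapping scoring-function terms to a predicted RMSD. The feature count, layer widths, loss, and training data below are assumptions; the abstract specifies only that the network has five layers.

```python
import numpy as np
import tensorflow as tf

NUM_FEATURES = 20  # assumed number of scoring-function terms per docked complex

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                # predicted RMSD in angstroms
])
model.compile(optimizer="adam", loss="mae")  # MAE matches the reported average error margin

# Placeholder training data standing in for RosettaDock decoy features and RMSD labels.
X = np.random.rand(1000, NUM_FEATURES).astype("float32")
y = (np.random.rand(1000) * 10.0).astype("float32")
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.1)
```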


2016 ◽  
Vol 16 (2) ◽  
pp. 167-177 ◽  
Author(s):  
Ahmad Esmaili Torshabi ◽  
Leila Ghorbanzadeh

In external beam radiotherapy, the stereoscopic X-ray imaging system provides tumor motion information. This system takes X-ray images of the tumor position intermittently (1) at the pretreatment step, to provide a training data set for model construction, and (2) during treatment, to check the accuracy of the correlation model. In this work, we investigated the effect of the imaging data points provided by this system on treatment quality, because information is still lacking about (1) the number of imaging data points, (2) the shooting time for capturing each data point, and (3) the additional imaging dose delivered by this system. These three issues were comprehensively assessed (1) at the pretreatment step, while the training data set is gathered for prediction model construction, and (2) during treatment, while the model is tested and reconstructed using newly arriving data points. A group of real patients treated with the CyberKnife Synchrony module was chosen for this work, and an adaptive neuro-fuzzy inference system was used as the correlation model. Results show that a proper model can be constructed when the number of imaging data points is high enough to represent a good pattern of breathing cycles. Moreover, a trade-off between the number of imaging data points and the additional imaging dose is considered in this study. Because breathing is highly variable between patients, the timing of some imaging data points is very important, and their absence at a critical time may yield incorrect tumor tracking. In contrast, another category of imaging data points is less sensitive, as long as breathing is normal and within the control range. Therefore, an adaptive supervision scheme for the stereoscopic X-ray imaging system is proposed to intelligently schedule the shooting process based on breathing motion variations.
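A toy sketch of the correlation-model workflow the abstract outlines: fit a mapping from an external breathing surrogate to the imaged tumor position at pretreatment, then refit whenever a new stereoscopic imaging data point arrives during treatment. A plain polynomial ridge regression stands in here for the adaptive neuro-fuzzy inference system used in the paper, and all signals are synthetic.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

def fit_correlation_model(surrogate, tumor_pos):
    """Fit external surrogate amplitude -> internal tumor position (mm)."""
    model = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-2))
    model.fit(surrogate.reshape(-1, 1), tumor_pos)
    return model

# Pretreatment: initial training set from intermittent X-ray shots (synthetic here).
rng = np.random.default_rng(0)
surrogate = rng.uniform(0, 1, 40)
tumor_pos = 12.0 * np.sin(np.pi * surrogate) + rng.normal(0, 0.3, 40)
model = fit_correlation_model(surrogate, tumor_pos)

# During treatment: each new imaging data point is appended and the model is refit.
new_s, new_t = 0.7, 12.0 * np.sin(np.pi * 0.7)
surrogate = np.append(surrogate, new_s)
tumor_pos = np.append(tumor_pos, new_t)
model = fit_correlation_model(surrogate, tumor_pos)
print(model.predict(np.array([[0.5]])))  # predicted tumor position at surrogate amplitude 0.5
```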


Author(s):  
Di Wang ◽  
Hong Bao ◽  
Feifei Zhang

This paper proposes an algorithm based on a deep learning network for identifying circular traffic lights (CTL-DNNet). The sample labeling process uses translation to increase the number of positive samples, and similarity is calculated to reduce the number of negative samples, thereby reducing overfitting. We use a dataset of approximately 370,000 samples, with approximately 20,000 positive samples and approximately 350,000 negative samples. The datasets are generated from images taken at the Beijing Garden Expo. To obtain a very robust method for the detection of traffic lights, we train and compare deep neural networks with different layers, different cost functions, and different activation functions. Our algorithm was evaluated on autonomous vehicles under varying illumination and achieves high accuracy and robustness. The experimental results show that CTL-DNNet is effective at recognizing road traffic lights in the Beijing Garden Expo area.
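A compact sketch of the two sample-preparation steps mentioned above: translating positive crops by a few pixels to multiply them, and discarding negative crops that are near-duplicates of ones already kept. It uses NumPy only; the shift range and similarity threshold are made-up values, not the paper's settings.

```python
import numpy as np

def translate(patch, dx, dy):
    """Shift a patch by (dx, dy) pixels, padding the exposed border with zeros."""
    out = np.zeros_like(patch)
    h, w = patch.shape[:2]
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        patch[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def augment_positives(patches, shifts=(-2, -1, 1, 2)):
    """Grow the positive set with small horizontal and vertical translations."""
    out = list(patches)
    for p in patches:
        for d in shifts:
            out.append(translate(p, d, 0))
            out.append(translate(p, 0, d))
    return out

def filter_negatives(patches, threshold=0.95):
    """Keep a negative only if its normalized correlation with every kept one is below threshold."""
    kept, kept_norm = [], []
    for p in patches:
        v = (p - p.mean()) / (p.std() + 1e-8)
        if all(np.mean(v * k) < threshold for k in kept_norm):
            kept.append(p)
            kept_norm.append(v)
    return kept
```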


Author(s):  
Lakshmisetty Ruthvik Raj ◽  
◽  
Bitra Harsha Vardhan ◽  
Mullapudi Raghu Vamsi ◽  
Keerthikeshwar Reddy Mamilla ◽  
...  

COVID-19 is a severe and potentially fatal respiratory infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). COVID-19 is easily detectable on an abnormal chest X-ray, and numerous extensive studies have demonstrated how precisely it can be detected using chest X-rays. Training a deep learning network such as a convolutional neural network requires a large amount of data, and because of the recent onset of the pandemic, it is difficult to collect many COVID-19 X-ray images in a short period. The purpose of this study is to demonstrate how chest X-ray (CXR) images can be classified using a CNN-based COVID-19 model. Additionally, we demonstrate that the performance of CNNs and various COVID-19 detection algorithms can be improved using synthetic images generated through data augmentation. With the CNN alone, an accuracy of 85 percent was achieved; the accuracy increased to 95 percent after adding the artificially generated images. We anticipate that this approach will expedite the detection of COVID-19 and lead to robust radiological programs. We leverage transfer learning in this paper to reduce time complexity and achieve the highest accuracy.
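A hedged sketch of the combination described above: a frozen pre-trained backbone (transfer learning) fed by augmentation layers that generate varied training images from a small COVID-19 set. VGG16 here is a generic stand-in for the paper's CNN, classical augmentation stands in for whatever synthetic-image generation the paper used, and all transform magnitudes are illustrative.

```python
import tensorflow as tf

# Augmentation layers produce additional, varied training images on the fly.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomTranslation(0.1, 0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomContrast(0.2),
])

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: reuse ImageNet features

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)                                   # active only during training
x = tf.keras.applications.vgg16.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # COVID-19 vs. non-COVID
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```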


2018 ◽  
Vol 57 (04) ◽  
pp. 220-229
Author(s):  
Tung-I Tsai ◽  
Yaofeng Zhang ◽  
Gy-Yi Chao ◽  
Cheng-Chieh Tsai ◽  
Zhigang Zhang

Summary. Background: Radiotherapy has serious side effects and thus requires prudent and cautious evaluation. However, obtaining protein expression profiles is expensive and time-consuming, making it necessary to develop a theoretical and rational procedure for predicting the radiotherapy outcome for bladder cancer when working with limited data. Objective: A procedure for estimating the performance of radiotherapy is proposed in this research. The population domain (range of the population) of proteins and the relationships among proteins are considered to increase prediction accuracy. Methods: This research uses modified extreme value theory (MEVT) to estimate the population domain of proteins, and correlation coefficients and prediction intervals to overcome the lack of knowledge regarding relationships among proteins. Results: When the size of the training data set was 5 samples, the mean absolute percentage error (MAPE) was 31.6200%; MAPE fell to 13.5505% when the number of samples was increased to 30. The standard deviation (SD) of the forecasting error fell from 3.0609% for 5 samples to 1.2415% for 30 samples. These results show that the proposed procedure yields accurate and stable results and is suitable for use with small data sets. Conclusions: The results show that considering the relationships among proteins is necessary when predicting the outcome of radiotherapy.
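For reference, the two error measures reported above can be computed as in the short sketch below. The example values are placeholders, and the exact definition of the paper's SD of forecasting error (here taken over the absolute percentage errors) is an assumption.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

def forecast_error_sd(actual, predicted):
    """Standard deviation of the absolute percentage errors, in percent (assumed definition)."""
    return float(np.std(np.abs((actual - predicted) / actual) * 100.0))

# Hypothetical predicted vs. measured radiotherapy outcome values.
actual = np.array([0.82, 0.61, 0.74, 0.55, 0.68])
predicted = np.array([0.78, 0.70, 0.69, 0.60, 0.66])
print(mape(actual, predicted), forecast_error_sd(actual, predicted))
```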


2021 ◽  
Vol 11 (8) ◽  
pp. 3301
Author(s):  
Pamir Ghimire ◽  
Igor Jovančević ◽  
Jean-José Orteu

We present a method to train a deep-network-based feature descriptor to calculate discriminative local descriptions from renders and corresponding real images with similar geometry. We are interested in using such descriptors for automatic industrial visual inspection, whereby the inspection camera has been coarsely localized with respect to a relatively large mechanical assembly and the presence of certain components needs to be checked against the reference computer-aided design (CAD) model. We aim to perform the task by comparing the real inspection image with the render of the textureless 3D CAD model using the learned descriptors. The descriptor was trained to capture geometric features while staying invariant to the image domain. Patch pairs for training the descriptor were extracted in a semi-supervised manner from a small data set of 100 pairs of real images and corresponding renders that were manually finely registered, starting from a relatively coarse localization of the inspection camera. Due to the small size of the training data set, the descriptor network was initialized with weights from classification training on ImageNet. A two-step training is proposed for addressing the problem of domain adaptation. The first step, "bootstrapping", is a classification training that provides good initial weights for the second step, triplet-loss training, which yields weights for extracting discriminative features comparable using the L2 distance. The descriptor was tested for comparing renders and real images through two approaches: finding local correspondences between the images through nearest-neighbor matching, and transforming the images into Bag of Visual Words (BoVW) histograms. We observed that learning a robust cross-domain descriptor is feasible even with a small data set, and such features might be of interest for CAD-based inspection of mechanical assemblies and related applications such as tracking or finely registered augmented reality. To the best of our knowledge, this is the first work that reports learning local descriptors for comparing renders with real inspection images.
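A condensed sketch of the second training step described above: a shared embedding network trained with a triplet loss so that a render patch (anchor) lies closer in L2 distance to its corresponding real-image patch (positive) than to a non-corresponding patch (negative). The architecture, margin, and patch size are placeholders, not the paper's choices.

```python
import tensorflow as tf

PATCH = (64, 64, 3)
MARGIN = 0.5  # assumed triplet margin

def embedding_net():
    """Small CNN mapping a patch to an L2-normalized 128-D descriptor."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=PATCH),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128),
        tf.keras.layers.Lambda(lambda v: tf.math.l2_normalize(v, axis=1)),
    ])

net = embedding_net()
optimizer = tf.keras.optimizers.Adam(1e-4)

def train_step(anchor, positive, negative):
    """One triplet-loss update: anchor = render patch, positive = matching real
    patch, negative = non-matching patch (float32 tensors, NHWC)."""
    with tf.GradientTape() as tape:
        ea, ep, en = net(anchor), net(positive), net(negative)
        d_pos = tf.reduce_sum(tf.square(ea - ep), axis=1)
        d_neg = tf.reduce_sum(tf.square(ea - en), axis=1)
        loss = tf.reduce_mean(tf.maximum(d_pos - d_neg + MARGIN, 0.0))
    grads = tape.gradient(loss, net.trainable_variables)
    optimizer.apply_gradients(zip(grads, net.trainable_variables))
    return loss
```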


2020 ◽  
Author(s):  
Mundher Taresh ◽  
Ningbo Zhu ◽  
Talal Ahmed Ali Ali

Abstract: Novel coronavirus pneumonia (COVID-19) is a contagious disease that has already caused thousands of deaths and infected millions of people worldwide. Thus, any technological tool that allows fast, highly accurate detection of COVID-19 infection can help healthcare professionals. This study explores the effectiveness of artificial intelligence (AI) in the rapid and reliable detection of COVID-19 based on chest X-ray imaging. Reliable pre-trained deep learning algorithms were applied to achieve the automatic detection of COVID-19-induced pneumonia from digital chest X-ray images. Moreover, the study aims to evaluate the performance of advanced neural architectures proposed for the classification of medical images over recent years. The data set used in the experiments involves 274 COVID-19 cases, 380 viral pneumonia cases, and 380 healthy cases, collected from X-ray images available on public medical repositories. The confusion matrix provided the basis for evaluating the models after classification, and the open-source library PyCM was used to compute the statistical parameters. The study revealed the superiority of VGG16 over the other models applied in this research; it performed best in terms of both overall and class-based scores. According to the research results, deep learning with X-ray imaging is useful in capturing critical biological markers associated with COVID-19 infection, and the technique can help physicians diagnose COVID-19 infection. Meanwhile, the high accuracy of this computer-aided diagnostic tool can significantly improve the speed and accuracy of COVID-19 diagnosis.
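A brief sketch of the evaluation step: predicted and actual class labels go into a confusion matrix, here via the PyCM library named in the abstract (basic usage as documented). The label vectors and class encoding below are placeholders.

```python
from pycm import ConfusionMatrix

# Hypothetical label vectors: 0 = healthy, 1 = viral pneumonia, 2 = COVID-19.
actual    = [2, 2, 1, 0, 2, 1, 0, 0, 1, 2]
predicted = [2, 1, 1, 0, 2, 1, 0, 2, 1, 2]

cm = ConfusionMatrix(actual_vector=actual, predict_vector=predicted)
print(cm.Overall_ACC)   # overall accuracy
print(cm.ACC)           # per-class accuracy
print(cm)               # full matrix plus class-based and overall statistics
```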


2019 ◽  
Vol 59 (1) ◽  
pp. 426
Author(s):  
James Lowell ◽  
Jacob Smith

The interpretation of key horizons on seismic data is an essential but time-consuming part of the subsurface workflow. This is compounded when surfaces need to be re-interpreted on variations of the same data, such as angle stacks, 4D data, or reprocessed data. Deep learning networks, a subset of machine learning, have the potential to automate this reinterpretation process and significantly increase the efficiency of the subsurface workflow. This study investigates whether a deep learning network can learn from a single horizon interpretation in order to identify that event in a different version of the same data. The results were largely successful: the target horizon was correctly identified in an alternative offset stack and was correctly repositioned in areas where there was misalignment between the training data and the test data.


2010 ◽  
Vol 6 (3) ◽  
pp. 28-42 ◽  
Author(s):  
Bijan Raahemi ◽  
Ali Mumtaz

This paper presents a new approach using data mining techniques, in particular a two-stage architecture, for classifying peer-to-peer (P2P) traffic in IP networks. In the first stage, the traffic is filtered using standard port numbers and layer-4 port matching to label well-known P2P and non-P2P traffic. The labeled traffic produced in the first stage is used to train a Fast Decision Tree (FDT) classifier with high accuracy. The unknown traffic is then applied to the FDT model, which classifies it into P2P and non-P2P with high accuracy. The two-stage architecture classifies not only well-known P2P applications but also applications that use random or non-standard port numbers and cannot be classified otherwise. The authors captured internet traffic at a gateway router, performed pre-processing on the data, selected the most significant attributes, and prepared a training data set to which the new algorithm was applied. Finally, the authors built several models using a combination of various attribute sets for different ratios of P2P to non-P2P traffic in the training data.
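A compact sketch of the two-stage idea: label flows by well-known port numbers first, then train a decision tree on those labeled flows and apply it to the remaining unknown traffic. scikit-learn's DecisionTreeClassifier stands in for the Fast Decision Tree used in the paper, and the port lists and flow attributes are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative layer-4 port lists (not exhaustive).
P2P_PORTS = {4662, 4672, 6346, 6881, 6882, 6883}      # eDonkey, Gnutella, BitTorrent
NONP2P_PORTS = {25, 53, 80, 110, 143, 443}            # SMTP, DNS, HTTP(S), mail

def stage1_label(flow):
    """Port-based labeling: 1 = P2P, 0 = non-P2P, None = unknown."""
    ports = {flow["src_port"], flow["dst_port"]}
    if ports & P2P_PORTS:
        return 1
    if ports & NONP2P_PORTS:
        return 0
    return None

def features(flow):
    # Assumed flow attributes; the paper selects its own significant attributes.
    return [flow["duration"], flow["packets"], flow["bytes"], flow["avg_pkt_size"]]

def two_stage_classify(flows):
    """Stage 1 labels by port; stage 2 trains a tree on those labels and scores the unknown flows."""
    labeled = [(features(f), stage1_label(f)) for f in flows if stage1_label(f) is not None]
    unknown = [f for f in flows if stage1_label(f) is None]
    X = np.array([x for x, _ in labeled])
    y = np.array([lab for _, lab in labeled])
    tree = DecisionTreeClassifier(max_depth=8).fit(X, y)
    preds = tree.predict(np.array([features(f) for f in unknown])) if unknown else []
    return tree, list(zip(unknown, preds))
```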

