Is Intensity Inhomogeneity Correction Useful for Classification of Breast Cancer in Sonograms Using Deep Neural Network?

2018 · Vol 2018 · pp. 1-10
Author(s): Chia-Yen Lee, Guan-Lin Chen, Zhong-Xuan Zhang, Yi-Hong Chou, Chih-Chung Hsu

Sonography is currently an effective modality for cancer screening and diagnosis owing to its convenience and harmlessness to humans. Traditionally, lesion boundary segmentation is performed first and classification second to judge whether a tumor is benign or malignant. However, sonograms often contain considerable speckle noise and intensity inhomogeneity. This study proposes a novel benign/malignant tumor classification system, comprising intensity inhomogeneity correction and a stacked denoising autoencoder (SDAE), that is suitable for small datasets. A classifier is established by extracting features in the multilayer training of the SDAE; automatic analysis of imaging features by the deep learning algorithm is applied to image classification, giving the system high efficiency and robust discrimination. In this study, two datasets (one private, one public) are used to train the deep learning models. For each dataset, two groups of test images are compared: the original images and the same images after intensity inhomogeneity correction. The results show that when the deep learning algorithm is applied to sonograms after intensity inhomogeneity correction, tumor classification accuracy increases significantly. This study demonstrates the importance of preprocessing to highlight image features before feeding them to deep learning models; classification accuracy is better than when only the original images are used.
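The abstract does not spell out how the inhomogeneity correction is implemented; purely as an illustration, a retrospective correction can be sketched by estimating a slowly varying bias field with a mean filter and dividing it out. The function name, kernel size, and mean-filter bias estimate are assumptions for this sketch, not the paper's method:

```python
import numpy as np

def correct_inhomogeneity(image: np.ndarray, kernel: int = 15) -> np.ndarray:
    """Flatten slowly varying intensity bias by dividing the image by a
    smoothed copy of itself (a simple box-filter bias-field estimate)."""
    pad = kernel // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    # Estimate the bias field with a box (mean) filter at every pixel.
    bias = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            bias[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    corrected = image / np.maximum(bias, 1e-6)
    # Rescale back to the original overall intensity level.
    return corrected * image.mean()
```

Dividing by the smoothed image removes low-frequency shading while leaving lesion-scale structure, which is the contrast the classifier needs.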

Technologies · 2021 · Vol 9 (1) · pp. 14
Author(s): James Dzisi Gadze, Akua Acheampomaa Bamfo-Asante, Justice Owusu Agyemang, Henry Nunoo-Mensah, Kwasi Adu-Boahen Opare

Software-Defined Networking (SDN) is a new paradigm that revolutionizes the idea of a software-driven network through the separation of the control and data planes, addressing the problems of traditional network architecture. Nevertheless, this architecture is exposed to several security threats, e.g., the distributed denial of service (DDoS) attack, which is hard to contain in such software-based networks. The concept of a centralized controller in SDN makes it a single point of attack as well as a single point of failure. In this paper, two deep learning-based models, long short-term memory (LSTM) and convolutional neural network (CNN), are investigated to illustrate their feasibility and efficiency in detecting and mitigating DDoS attacks. The paper focuses on TCP, UDP, and ICMP flood attacks that target the controller. The performance of the models was evaluated on accuracy, recall, and true negative rate, and compared against classical machine learning models; we further report the time taken to detect and mitigate the attack. Our results show that an RNN LSTM is a viable deep learning algorithm for detecting and mitigating DDoS in the SDN controller. Our proposed model produced an accuracy of 89.63%, outperforming classical models such as SVM (86.85%) and Naive Bayes (82.61%). Although KNN outperformed our proposed model (achieving an accuracy of 99.4%), our proposed model provides a good trade-off between precision and recall, which makes it suitable for DDoS classification. In addition, we found that the train/test split ratio can change the measured performance of a deep learning algorithm: the model achieved its best performance with a 70/30 split, compared with the 80/20 and 60/40 split ratios.
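The three evaluation metrics named above follow directly from confusion-matrix counts; a minimal sketch (the function name is ours, not from the paper):

```python
def detection_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, recall (attack detection rate), and true negative rate
    (benign traffic correctly passed) from confusion-matrix counts."""
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,  # overall fraction classified correctly
        "recall": tp / (tp + fn),       # fraction of attack flows caught
        "tnr": tn / (tn + fp),          # fraction of benign flows not flagged
    }
```

For DDoS detection, recall and true negative rate matter separately: a detector can reach high accuracy on imbalanced traffic while still missing attacks or dropping legitimate flows.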


2020 · Vol 41 (Supplement_2)
Author(s): C Dockerill, W Woodward, A McCourt, A Beqiri, A Parker, ...

Abstract. Background: Stress echocardiography has become established as the most widely applied non-invasive imaging test for diagnosis of coronary artery disease within the UK. However, it has remained largely qualitative rather than quantitative, relying on visual wall motion assessment. For the first time, we have identified and validated quantitative descriptors of cardiac geometry and motion, extracted automatically from ultrasound images acquired using contrast agents. Purpose: To establish whether these novel imaging features can be generated in an automated, quantifiable and reproducible way from images acquired with perfluoropropane contrast, and to investigate how these extracted measures compare to those extracted from sulphur hexafluoride contrast and non-contrast studies. Methods: 100 patients who received perfluoropropane contrast during their stress echocardiogram were recruited. Their stress echocardiography images were processed through a deep learning algorithm. Novel feature values were recorded, and a subset of 10 studies was repeated. The automated measures of global longitudinal strain (GLS) and ejection fraction (EF) extracted from these images were compared to values previously extracted from sulphur hexafluoride contrast and non-contrast images using the same software. Results: A full set of 31 novel imaging features was successfully extracted from 79 studies acquired using the perfluoropropane contrast agent, with a dropout rate of 14% (n=92, 8 incomplete image sets). Repeated analysis in a subset of 10 perfluoropropane cases demonstrated excellent reproducibility of the extracted feature values (R2=1). Automated values of GLS and EF, at both rest (GLS = −16.4±4.8%, EF = 63±13%) and stress (GLS = −17.7±5.8%, EF = 68±11%), were extracted from 83 perfluoropropane studies, with a dropout rate of 16% (n=99; fewer incomplete sets as the short axis view was not required).
The ranges of GLS and EF measures extracted from the perfluoropropane images were comparable to the other contrast studies (n=222) (rest GLS = −16.8±5.8%, rest EF = 63±10%; stress GLS = −19.1±6.7%, stress EF = 71±9%) and non-contrast studies (n=86) (rest GLS = −15.7±5.3%, rest EF = 57±10%; stress GLS = −17.3±6.4%, stress EF = 61±14%). Conclusions: Novel features and clinically relevant measures were extracted from images acquired using perfluoropropane contrast for the first time in a fully automated and reproducible way using a deep learning algorithm. The analysis failure rate and generated measures are comparable to those obtained from images acquired with the commonly used sulphur hexafluoride contrast agent and from non-contrast stress echocardiography studies. These findings demonstrate that deep learning algorithms can be used for automated quantitative analysis of stress echocardiograms acquired with various contrast agents, and in non-contrast studies, to improve stress echocardiography practice. Funding Acknowledgement: Type of funding source: Private company. Main funding source(s): Lantheus Medical Imaging, Inc.
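Reproducibility of repeated feature extraction, reported as R2=1 above, can be quantified as a coefficient of determination of the second run against the first, scored against the identity line (a stricter check than a fitted regression). This is an illustrative sketch, not the authors' software:

```python
def r_squared(first_run, second_run):
    """Coefficient of determination between two analysis runs on the same
    studies, against the identity line: R^2 = 1 means the re-extracted
    values exactly match the originals."""
    n = len(first_run)
    mean_y = sum(second_run) / n
    ss_tot = sum((y - mean_y) ** 2 for y in second_run)   # total variance
    ss_res = sum((y - x) ** 2 for x, y in zip(first_run, second_run))
    return 1 - ss_res / ss_tot
```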


2021 · Vol 2021 · pp. 1-8
Author(s): Wei Zhang, Yang Wang

This study explored CT imaging features analyzed by deep learning, together with glucocorticoid treatment, in children with asthma complicated by small airway obstruction. A total of 145 eligible patients at the hospital were included and randomly assigned to receive aerosolized glucocorticoid (n = 45), aerosolized glucocorticoid combined with a bronchodilator (n = 50), or oral steroids (n = 50) for 4 weeks after discharge. Lung function and fractional exhaled nitric oxide (FENO) indexes of the three groups were measured, and the effective rates were compared to evaluate the clinical efficacy of glucocorticoids with different administration methods and combined medications in short-term maintenance treatment after acute exacerbation of asthma. A deep learning algorithm was used for CT image segmentation: each CT image is sent to the workstation for processing, where a convolution operation is performed over every input pixel. After 4 weeks of maintenance treatment, FEF50%, FEF75%, and MMEF75/25 increased significantly, and FENO decreased significantly (P < 0.01). The improvement in FEF50%, FEF75%, MMEF75/25, and FENO after maintenance treatment was greatest in the oral hormone group, followed by the combined atomization inhalation group, and least in the hormone atomization inhalation group; the differences among them were statistically significant (P < 0.05). The accuracy of the artificial intelligence segmentation algorithm was 81%. Systemic hormone administration was more effective than local medication in treating small airway function and airway inflammation. Among aerosol inhalation treatments, hormone combined with a bronchodilator was more effective than single-drug inhalation in improving small airway obstruction and reducing airway inflammation.
Deep learning-based CT imaging is simple and noninvasive, allows intuitive observation of lung changes in asthma with small airway functional obstruction, and has high value for its clinical diagnosis and evaluation.
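The per-pixel convolution step described above can be illustrated with a minimal "valid" 2-D convolution in NumPy; this is a generic sketch of the operation, not the study's segmentation network:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 2-D convolution: slide the (flipped) kernel over the image
    and sum the elementwise products at every position."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out
```

A CNN layer applies many such kernels in parallel, with learned weights, to produce the feature maps used for segmentation.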


2020
Author(s): S. Duchesne, D. Gourdeau, P. Archambault, C. Chartrand-Lefebvre, L. Dieumegarde, ...

Abstract. Background: Decision scores and ethically mindful algorithms are being established to adjudicate mechanical ventilation in the context of potential resource shortages due to the current onslaught of COVID-19 cases. There is a need for a reproducible and objective method to provide quantitative information for those scores. Purpose: Towards this goal, we present a retrospective study testing the ability of a deep learning algorithm to extract features from chest X-rays (CXR) to track and predict radiological evolution. Materials and Methods: We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from two open-source datasets (last accessed on April 9, 2020) (Italian Society for Medical and Interventional Radiology and MILA). Data comprised 60 pairs of sequential CXRs from 40 COVID patients (mean age ± standard deviation: 56 ± 13 years; 23 men, 10 women, seven not reported), categorized as "Worse", "Stable", or "Improved" on the basis of radiological evolution ascertained from images and reports. Receiver operating characteristic (ROC) analyses and Mann-Whitney tests were performed. Results: On patients from the CheXnet dataset, the areas under the ROC curves ranged from 0.71 to 0.93 for seven imaging features and one diagnosis. Deep learning features between the "Worse" and "Improved" outcome categories were significantly different for three radiological signs and one diagnosis ("Consolidation", "Lung Lesion", "Pleural effusion" and "Pneumonia"; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between "Worse" and "Improved" cases with 82.7% accuracy. Conclusion: CXR deep learning features show promise for classifying the disease trajectory.
Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
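The ROC and Mann-Whitney analyses above are linked: the AUC equals the Mann-Whitney U statistic normalized by the number of positive-negative pairs, i.e. the probability that a random positive case scores above a random negative one. A small illustrative implementation (function name assumed):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC via the Mann-Whitney relation: the fraction of (positive,
    negative) pairs where the positive case scores higher, counting
    ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```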


2020
Author(s): Luna Zhang, Yang Zou, Ningning He, Yu Chen, Zhen Chen, ...

Abstract. As a novel type of post-translational modification, lysine 2-hydroxyisobutyrylation (Khib) plays an important role in gene transcription and signal transduction. To understand its regulatory mechanism, the essential step is the recognition of Khib sites. Thousands of Khib sites have been experimentally verified across five different species. However, only a couple of traditional machine-learning algorithms have been developed to predict Khib sites, and only for limited species; a general prediction algorithm is lacking. We constructed a deep-learning algorithm based on a convolutional neural network with the one-hot encoding approach, dubbed CNNOH. It compares favorably to the traditional machine-learning models and other deep-learning models across different species, in terms of cross-validation and independent tests. The area under the ROC curve (AUC) values for CNNOH ranged from 0.82 to 0.87 for different organisms, superior to the currently available Khib predictors. Moreover, we developed a general model based on integrated data from multiple species, and it showed great universality and effectiveness, with AUC values in the range of 0.79 to 0.87. Accordingly, we constructed an online prediction tool, dubbed DeepKhib, for easily identifying Khib sites, which includes both the species-specific and general models. DeepKhib is available at http://www.bioinfogo.org/DeepKhib.
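The one-hot encoding used by CNNOH can be sketched for a peptide window centred on a candidate lysine; this is a generic illustration, and the exact window length and handling of padding characters in CNNOH may differ:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def one_hot_peptide(window: str) -> np.ndarray:
    """Encode a peptide window as a (window length x 20) one-hot matrix;
    unknown residues (e.g. a '-' padding character) stay all-zero."""
    mat = np.zeros((len(window), len(AMINO_ACIDS)), dtype=np.float32)
    for i, residue in enumerate(window):
        col = AMINO_ACIDS.find(residue)
        if col >= 0:
            mat[i, col] = 1.0
    return mat
```

The resulting matrix is what a convolutional layer scans for local sequence motifs around the modification site.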


PLoS ONE · 2021 · Vol 16 (7) · pp. e0254997
Author(s): Ari Lee, Min Su Kim, Sang-Sun Han, PooGyeon Park, Chena Lee, ...

This study aimed to develop a high-performance deep learning algorithm to differentiate Stafne's bone cavity (SBC) from cysts and tumors of the jaw based on images acquired from various panoramic radiographic systems. Data sets included 176 Stafne's bone cavities and 282 odontogenic cysts and tumors of the mandible (98 dentigerous cysts, 91 odontogenic keratocysts, and 93 ameloblastomas) that required surgical removal. Panoramic radiographs were obtained using three different imaging systems. The trained model showed 99.25% accuracy, 98.08% sensitivity, and 100% specificity for SBC classification, with one misclassified SBC case. When traced back with Grad-CAM and Guided Grad-CAM methods, the algorithm was shown to recognize the typical imaging features of SBC in panoramic radiography regardless of the imaging system. The deep learning model for differentiating SBC from odontogenic cysts and tumors thus showed high performance on images obtained from multiple panoramic systems. The present algorithm is expected to be a useful tool for clinicians, as it diagnoses SBCs in panoramic radiography and thereby prevents unnecessary examinations for patients. Additionally, it would support clinicians in deciding on further examinations or referrals to surgeons in cases where even experts are unsure of the diagnosis using panoramic radiography alone.
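Grad-CAM, used above to trace the model's attention, weights each convolutional channel by its spatially averaged gradient and keeps the positive part of the weighted activation sum. A minimal NumPy sketch operating on precomputed activations and gradients (not the study's implementation):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heat map from a conv layer's activations (C, H, W) and the
    gradients of the class score w.r.t. those activations (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1]
    return cam
```

Upsampled to image size and overlaid on the radiograph, this map shows which regions drove the SBC decision.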


2021 · Vol 251 · pp. 04012
Author(s): Simon Akar, Gowtham Atluri, Thomas Boettcher, Michael Peters, Henry Schreiner, ...

The locations of proton-proton collision points in LHC experiments are called primary vertices (PVs). Preliminary results of a hybrid deep learning algorithm for identifying and locating these PVs, targeting the Run 3 incarnation of LHCb, were described at conferences in 2019 and 2020. In the past year we have made significant progress in a variety of related areas. Using two newer Kernel Density Estimators (KDEs) as input feature sets improves the fidelity of the models, as does using full LHCb simulation rather than the "toy Monte Carlo" originally (and still) used to develop models. We have also built a deep learning model to calculate the KDEs from track information. Connecting a tracks-to-KDE model to a KDE-to-hists model used to find PVs provides a proof-of-concept that a single deep learning model can use track information to find PVs with high efficiency and high fidelity. We have studied a variety of models systematically to understand how variations in their architectures affect performance. While the studies reported here are specific to the LHCb geometry and operating conditions, the results suggest that the same approach could be used by the ATLAS and CMS experiments.
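A kernel density estimate over track origins along the beam line, of the kind used as the models' input features, can be sketched in one dimension; the Gaussian kernel, bandwidth, and grid here are illustrative choices, not LHCb's KDE definition:

```python
import numpy as np

def kde_peak(track_z: np.ndarray, grid: np.ndarray, bandwidth: float = 0.5):
    """Gaussian KDE of track origins along the beam line, evaluated on a
    grid; the highest peak is a primary-vertex candidate position."""
    # One Gaussian kernel per track, summed at every grid point.
    diffs = (grid[:, None] - track_z[None, :]) / bandwidth
    density = np.exp(-0.5 * diffs ** 2).sum(axis=1)
    return grid[np.argmax(density)], density
```

Where several tracks cluster, their kernels pile up into a peak; the KDE-to-hists model's job is to turn such density profiles into vertex locations.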


2021 · Vol 11 (1)
Author(s): Ye Rang Park, Young Jae Kim, Woong Ju, Kyehyun Nam, Soonyung Kim, ...

Abstract. Cervical cancer is the second most common cancer in women worldwide, with a mortality rate of 60%. Cervical cancer begins with no overt signs and has a long latent period, making early detection through regular checkups vitally important. In this study, we compare the performance of two different approaches, machine learning and deep learning, for identifying signs of cervical cancer in cervicography images. Using the deep learning model ResNet-50 and the machine learning models XGB, SVM, and RF, we classified 4119 cervicography images as positive or negative for cervical cancer, using square images from which the vaginal wall regions were removed. The machine learning models extracted 10 major features from a total of 300 features. All tests were validated by fivefold cross-validation, and receiver operating characteristic (ROC) analysis yielded the following AUCs: ResNet-50 0.97 (95% CI 0.949–0.976), XGB 0.82 (95% CI 0.797–0.851), SVM 0.84 (95% CI 0.801–0.854), RF 0.79 (95% CI 0.804–0.856). The ResNet-50 model showed a 0.15-point improvement (p < 0.05) over the average (0.82) of the three machine learning methods. Our data suggest that the ResNet-50 deep learning algorithm could offer greater performance than current machine learning models for identifying cervical cancer in cervicography images.
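The fivefold cross-validation protocol above partitions the data so that every image is tested exactly once; a minimal index-splitting sketch (sequential folds, no shuffling or stratification, unlike a production setup):

```python
def kfold_indices(n_samples: int, k: int = 5):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation;
    each sample appears in exactly one test fold."""
    indices = list(range(n_samples))
    # Distribute any remainder across the first folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size
```

Averaging the AUC over the k held-out folds gives the cross-validated estimates reported above.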

