Texture Analysis Based on PI-RADS 4/5-Scored MRI Images Combined with Machine Learning to Distinguish Benign Lesions from Prostate Cancer

Author(s):  
Lu Ma ◽  
Qi Zhou ◽  
Huming Yin ◽  
Xiaojie Ang ◽  
Yu Li ◽  
...  

Abstract Background: To extract texture features from apparent diffusion coefficient (ADC) images in mp-MRI and to build a machine learning model based on radiomics texture analysis to determine its ability to distinguish benign lesions from prostate cancer (PCa) in lesions with a PI-RADS score of 4/5. Materials and methods: First, ImageJ software was used to obtain texture feature parameters from the ADC images; the parameters were standardized in R, and LASSO regression was used to reduce the dimensionality of the multiple feature parameters. The retained parameters were then used to construct three radiomics-based machine learning classification models (R-Logistic, R-SVM, and R-AdaBoost) for distinguishing benign from malignant prostate nodules. Secondly, the patients' clinical indicators were statistically analyzed, and the three clinical indicators with the largest AUC values were selected to establish a classification model for benign and malignant prostate nodules based on clinical indicators. Finally, the performance of the radiomics texture feature model and the clinical indicator model in identifying benign and malignant prostate nodules in PI-RADS 4/5 lesions was compared. Results: The AUC of the R-Logistic model on the test set was 0.838, higher than that of the R-SVM and R-AdaBoost classification models. The corresponding R-Logistic classification model formula is: Y_radiomics = 9.396 - 7.464 * median ADC - 0.584 * kurtosis + 0.627 * skewness + 0.576 * MRI lesion volume. Analysis of the clinical indicators showed that the three indicators with the highest discrimination efficiency were PSA, Fib, and LDL-C, and the corresponding C-Logistic classification model formula is: Y_clinical = -2.608 + 0.324 * PSA - 3.045 * Fib + 4.147 * LDL-C. The AUC of this model on the training set was 0.860, smaller than the R-Logistic classification model's training-set AUC of 0.936. Conclusion: A machine learning classifier model established from radiomics texture features shows good classification performance in identifying benign and malignant prostate nodules in PI-RADS 4/5 lesions, which has potential clinical value for selecting treatment and assessing prognosis in patients with prostate cancer.
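The published R-Logistic coefficients can be applied directly to new feature values. A minimal sketch (the feature values below are illustrative, not taken from the study) computes the linear predictor Y_radiomics and converts it to a probability with the logistic link:

```python
import math

def radiomics_score(median_adc, kurtosis, skewness, lesion_volume):
    """Linear predictor from the reported R-Logistic model coefficients."""
    return (9.396 - 7.464 * median_adc - 0.584 * kurtosis
            + 0.627 * skewness + 0.576 * lesion_volume)

def malignancy_probability(y):
    """Logistic link: convert the linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-y))

# Hypothetical lesion: median ADC 1.1, kurtosis 0.3, skewness 0.2, volume 1.5.
p = malignancy_probability(radiomics_score(1.1, 0.3, 0.2, 1.5))
print(p)
```

Higher scores correspond to higher estimated probability of malignancy; a threshold (e.g. 0.5) would turn the probability into a benign/malignant call.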

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Hui-Bin Tan ◽  
Fei Xiong ◽  
Yuan-Liang Jiang ◽  
Wen-Cai Huang ◽  
Ye Wang ◽  
...  

Abstract To explore the possibility of predicting the clinical type of coronavirus disease 2019 (COVID-19) pneumonia by analyzing the non-focus area of the lung in the first chest CT image of patients with COVID-19, using automatic machine learning (Auto-ML). A total of 136 moderate and 83 severe patients were selected from patients with COVID-19 pneumonia, and their clinical and laboratory data were collected for statistical analysis. Texture features were extracted from the non-focus area of each patient's first chest CT, and a classification model was constructed from these texture features using the Auto-ML radiomics method. The area under the receiver operating characteristic (ROC) curve (AUC), true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), and negative predictive value (NPV) were used to evaluate the accuracy of the classification model. For the moderate group versus the control group, the severe group versus the control group, and the moderate group versus the severe group, the TPR, TNR, PPV, and NPV of both the training and test cohorts were all greater than 95%, and the AUCs were all greater than 0.95. The non-focus area of the first CT image in COVID-19 pneumonia thus differs markedly between clinical types, and an Auto-ML radiomics classification model based on this difference can be used to predict the clinical type of COVID-19 pneumonia.
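The four evaluation metrics named above follow directly from a confusion matrix. A minimal sketch (the counts are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """TPR, TNR, PPV, NPV from confusion-matrix counts."""
    return {
        "TPR": tp / (tp + fn),   # sensitivity: true positive rate
        "TNR": tn / (tn + fp),   # specificity: true negative rate
        "PPV": tp / (tp + fp),   # positive predictive value
        "NPV": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for one cohort (not from the paper).
m = diagnostic_metrics(tp=80, fn=3, tn=130, fp=6)
print(m)
```

Reporting all four together, as the study does, guards against a model that scores well on sensitivity alone at the cost of specificity (or vice versa).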


2020 ◽  
Vol 10 (4) ◽  
pp. 5986-5991
Author(s):  
A. N. Saeed

Artificial Intelligence (AI) based Machine Learning (ML) is gaining more attention from researchers. In ophthalmology, ML has been applied to fundus photographs, achieving robust classification performance in the detection of diseases such as diabetic retinopathy and retinopathy of prematurity. The detection and extraction of blood vessels in the retina is an essential part of diagnosing various eye conditions, such as diabetic retinopathy. This paper proposes a novel machine learning approach to segmenting the retinal blood vessels in eye fundus images using a combination of color features, texture features, and a Back Propagation Neural Network (BPNN). The proposed method comprises two steps: color-texture feature extraction, and training the BPNN to obtain the segmented retinal vessels. The magenta color and correlation-texture features are given as input to the BPNN. The system was trained and tested on retinal fundus images taken from two distinct databases. The average sensitivity, specificity, and accuracy obtained for the segmentation of retinal blood vessels were 0.470, 0.914, and 0.903, respectively. The results reveal that the proposed methodology performs well in the automated segmentation of retinal blood vessels and obtains accuracy comparable with other methods.
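The heart of a BPNN is the backpropagation weight update. A minimal sketch for a single sigmoid unit on toy two-feature inputs, standing in for one color feature and one texture feature per pixel (the data, learning rate, and labels are illustrative, not the paper's setup):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.5):
    """Gradient descent on squared error through a single sigmoid unit."""
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # gradient of 0.5*(y - target)^2 backpropagated through the sigmoid
            delta = (y - target) * y * (1.0 - y)
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

# Toy "vessel vs. background" pixels: (color feature, texture feature) -> label.
data = [((0.9, 0.8), 1.0), ((0.8, 0.9), 1.0),
        ((0.1, 0.2), 0.0), ((0.2, 0.1), 0.0)]
w, b = train(data)
```

A full BPNN adds one or more hidden layers, with the same delta rule applied layer by layer from the output backwards.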


Author(s):  
Priyesh Tiwari ◽  
Shivendra Nath Sharan ◽  
Kulwant Singh ◽  
Suraj Kamya

Content based image retrieval (CBIR) is a real-world computer vision application in which images similar to a query image are retrieved from a database. The research presented in this paper aims to find the best features and classification model for optimum CBIR results. Five different feature combinations in two color domains (RGB and HSV) are compared and evaluated using a neural network classifier, with the best result obtained being 88.2% classifier accuracy. The color moments used comprise the mean, standard deviation, kurtosis, and skewness; histogram features are calculated over 10 probability bins. The Wang-1k dataset is used to evaluate the image-retrieval performance of the CBIR system. The research concludes that an integrated multi-level 3D color-texture feature yields the most accurate results and also performs better than individually computed color and texture features.
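The color moments and histogram features named above are straightforward to compute per channel. A minimal sketch (the pixel values are illustrative; a real image would supply one list per color channel):

```python
import math

def color_moments(channel):
    """Mean, standard deviation, skewness, kurtosis of one color channel."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in channel) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in channel) / (n * std ** 4)
    return mean, std, skew, kurt

def probability_histogram(channel, bins=10, vmax=255):
    """Normalized histogram over `bins` equal-width probability bins."""
    counts = [0] * bins
    for v in channel:
        counts[min(v * bins // (vmax + 1), bins - 1)] += 1
    return [c / len(channel) for c in counts]

# Toy channel values in place of a real image channel.
pixels = [12, 40, 40, 200, 220, 255]
features = list(color_moments(pixels)) + probability_histogram(pixels)
```

Concatenating these per-channel vectors across RGB or HSV channels gives the combined color feature vector fed to the classifier.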


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yaolin Zhu ◽  
Jiayi Huang ◽  
Tong Wu ◽  
Xueqin Ren

Purpose: The purpose of this paper is to select the optimal feature parameters to further improve the identification accuracy of cashmere and wool.
Design/methodology/approach: To increase accuracy, the authors put forward a method that selects optimal parameters based on the fusion of a morphological feature and texture features. The first step is to acquire the fiber diameter, measured by the central axis algorithm. The second step is to acquire the optimal texture feature parameters, mainly by using the variance of the secondary statistics of the two texture features to obtain four statistics, and then finding the impact factors of the gray level co-occurrence matrix from the relationship between the secondary statistic values and the pixel pitch. Finally, the five-dimensional feature vectors extracted from the sample images are fed into a Fisher classifier.
Findings: Identification accuracy is improved by determining the optimal feature parameters and fusing the two texture features. The average identification accuracy is 96.713% in this paper, which is very helpful for improving detector efficiency in the textile industry.
Originality/value: This paper proposes a novel identification method that extracts the optimal feature parameters.
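A gray level co-occurrence matrix (GLCM) at a given pixel pitch underlies the texture statistics discussed above. A minimal sketch with one secondary statistic, contrast (the tiny image, offset, and gray levels are illustrative; the abstract's method varies the pitch and compares several statistics):

```python
def glcm(image, dx, dy, levels):
    """Normalized co-occurrence matrix for offset (dx, dy) — the pixel pitch."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m]

def contrast(p):
    """Secondary statistic: intensity contrast between co-occurring pairs."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

# Toy 3x3 image quantized to 3 gray levels; horizontal pitch of 1 pixel.
img = [[0, 0, 1], [1, 2, 2], [2, 2, 2]]
g = glcm(img, dx=1, dy=0, levels=3)
```

Recomputing such statistics at several pitches, and tracking their variance, is the kind of analysis the authors use to pick the optimal GLCM parameters.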


2021 ◽  
Vol 10 ◽  
Author(s):  
Jinke Xie ◽  
Basen Li ◽  
Xiangde Min ◽  
Peipei Zhang ◽  
Chanyuan Fan ◽  
...  

Objective: To evaluate a combination of texture features and machine learning-based analysis of apparent diffusion coefficient (ADC) maps for the prediction of Grade Group (GG) upgrading in Gleason score (GS) ≤6 prostate cancer (PCa) (GG1) and GS 3 + 4 PCa (GG2). Materials and methods: Fifty-nine patients who were biopsy-proven to have GG1 or GG2 and underwent MRI examination on the same MRI scanner prior to transrectal ultrasound (TRUS)-guided systematic biopsy were included. All of these patients received radical prostatectomy to confirm the final GG. Patients were divided into a training cohort and a test cohort, and 94 texture features were extracted from the ADC maps for each patient. The independent-samples t-test or Mann-Whitney U test was used to identify texture features with statistically significant differences between the GG upgrading and GG non-upgrading groups. Texture features of GG1 and GG2 were compared based on the final pathology of radical prostatectomy. The least absolute shrinkage and selection operator (LASSO) algorithm was used to filter features, and four supervised machine learning methods were employed. The prediction performance of each model was evaluated by the area under the receiver operating characteristic curve (AUC), and the AUCs were statistically compared. Results: Six texture features were selected for building the machine learning models. These texture features differed significantly between the GG upgrading and GG non-upgrading groups (P < 0.05), but showed no significant difference between GG1 and GG2 based on the final pathology of radical prostatectomy. All machine learning methods had satisfactory predictive efficacy. The diagnostic performance of the nearest neighbor algorithm (NNA) and support vector machine (SVM) was better than that of random forests (RF) in the training cohort. The AUC, sensitivity, and specificity of the NNA were 0.872 (95% CI: 0.750-0.994), 0.967, and 0.778, respectively; those of the SVM were 0.861 (95% CI: 0.732-0.991), 1.000, and 0.722, respectively. There was no significant difference between the AUCs in the test cohort. Conclusion: A combination of texture features and machine learning-based analysis of ADC maps can predict PCa GG upgrading from biopsy to radical prostatectomy non-invasively, with satisfactory predictive efficacy.
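The AUC used to compare these models has a simple rank-based (Mann-Whitney) form: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (the labels and scores are illustrative, not the study's data):

```python
def auc(labels, scores):
    """Rank-based AUC: fraction of positive/negative pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical upgrading labels (1 = upgraded) and model scores.
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(auc(y, s))
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why values like the 0.872 and 0.861 reported above indicate satisfactory discrimination.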

