Reliability of an automatic classifier for brain enlarged perivascular spaces burden and comparison with human performance

2017 · Vol 131 (13) · pp. 1465-1481
Author(s): Víctor González-Castro, María del C. Valdés Hernández, Francesca M. Chappell, Paul A. Armitage, Stephen Makin, ...

In the brain, enlarged perivascular spaces (PVS) relate to cerebral small vessel disease (SVD), poor cognition, inflammation and hypertension. We propose a fully automatic scheme that uses a support vector machine (SVM) to classify the burden of PVS in the basal ganglia (BG) region as low or high. We assess the performance of three different types of descriptors extracted from the BG region in T2-weighted MRI images: (i) statistics obtained from wavelet transform coefficients, (ii) local binary patterns and (iii) bag-of-visual-words (BoW) descriptors characterizing local keypoints obtained from a dense grid with scale-invariant feature transform (SIFT) characteristics. When the latter were used, the SVM classifier achieved the best accuracy (81.16%). The output of the classifier using the BoW descriptors was compared with visual ratings done by an experienced neuroradiologist (Observer 1) and by a trained image analyst (Observer 2). The agreement and cross-correlation between the classifier and Observer 2 (κ = 0.67 (0.58–0.76)) were slightly higher than between the classifier and Observer 1 (κ = 0.62 (0.53–0.72)) and comparable to those between the two observers (κ = 0.68 (0.61–0.75)). Finally, three logistic regression models, each using clinical variables as independent variables and one of the PVS ratings as the dependent variable, were built to assess how clinically meaningful the classifier's predictions were. The goodness-of-fit of the model for the classifier was good (area under the curve (AUC) values: 0.93 (model 1), 0.90 (model 2) and 0.92 (model 3)) and slightly better (i.e. AUC values 0.02 units higher) than that of the model for Observer 2. These results suggest that, although it can be improved, an automatic classifier to assess PVS burden from brain MRI can provide clinically meaningful results close to those from a trained observer.
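
The pipeline described above (dense-grid SIFT keypoints encoded as a bag of visual words and classified by an SVM) can be sketched roughly as follows. This is a minimal illustration, assuming OpenCV with SIFT support and scikit-learn; the grid step, vocabulary size, and the `train_images`/`train_labels` placeholders are not from the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def dense_sift(gray, step=8, size=8):
    """SIFT descriptors computed on a dense keypoint grid."""
    keypoints = [cv2.KeyPoint(float(x), float(y), float(size))
                 for y in range(0, gray.shape[0], step)
                 for x in range(0, gray.shape[1], step)]
    sift = cv2.SIFT_create()
    _, desc = sift.compute(gray, keypoints)
    return desc  # shape: (n_keypoints, 128)

def bow_histogram(desc, vocab):
    """Encode one image as a normalized histogram of visual words."""
    words = vocab.predict(desc)
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Build the visual vocabulary from training descriptors, encode each scan,
# then train a binary SVM on low- vs high-PVS-burden labels (placeholders).
train_desc = np.vstack([dense_sift(img) for img in train_images])
vocab = KMeans(n_clusters=100, random_state=0).fit(train_desc)
X = np.array([bow_histogram(dense_sift(img), vocab) for img in train_images])
clf = SVC(kernel="rbf").fit(X, train_labels)  # 0 = low burden, 1 = high burden
```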

2019 · Vol 45 (10) · pp. 3193-3201
Author(s): Yajuan Li, Xialing Huang, Yuwei Xia, Liling Long

Abstract Purpose: To explore the value of CT-enhanced quantitative features combined with machine learning for the differential diagnosis of renal chromophobe cell carcinoma (chRCC) and renal oncocytoma (RO). Methods: Sixty-one cases of renal tumors (chRCC = 44; RO = 17) that were pathologically confirmed at our hospital between 2008 and 2018 were retrospectively analyzed. All patients had undergone preoperative enhanced CT scans including the corticomedullary (CMP), nephrographic (NP), and excretory (EP) phases of contrast enhancement. Volumes of interest (VOIs) covering the lesions on the images were manually delineated using the RadCloud platform. A LASSO regression algorithm was used to screen the image features extracted from all VOIs. Five machine learning classifiers were trained to distinguish chRCC from RO using a fivefold cross-validation strategy. The performance of the classifiers was evaluated mainly by the area under the receiver operating characteristic (ROC) curve and accuracy. Results: In total, 1029 features were extracted from CMP, NP, and EP. The LASSO regression algorithm was used to screen out the four, four, and six best features, respectively, and eight features were selected when CMP and NP were combined. All five classifiers had good diagnostic performance, with area under the curve (AUC) values greater than 0.850, and the support vector machine (SVM) classifier showed the best performance, with a diagnostic accuracy of 0.945 (AUC 0.964 ± 0.054; sensitivity 0.999; specificity 0.800). Conclusions: Accurate preoperative differential diagnosis of chRCC and RO can be facilitated by a combination of CT-enhanced quantitative features and machine learning.
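
As a rough sketch of the feature-screening and validation steps reported above (LASSO selection followed by an SVM evaluated with fivefold cross-validation), the following assumes scikit-learn and placeholder arrays `X` (cases × radiomic features) and `y` (chRCC vs. RO labels); the exact settings used in the study may differ.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# LASSO screens the radiomic features inside each training fold,
# and the SVM is scored by ROC AUC over fivefold cross-validation.
pipeline = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, random_state=0)),
    SVC(kernel="rbf", probability=True),
)
auc_scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print("mean AUC:", np.round(auc_scores.mean(), 3))
```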


2018 · Vol 2018 · pp. 1-13
Author(s): Hasan Mahmud, Md. Kamrul Hasan, Abdullah-Al-Tariq, Md. Hasanul Kabir, M. A. Mottalib

Symbolic gestures are hand postures with conventionalized meanings. They are static gestures that can be performed without voice, even in very complex environments containing variations in rotation and scale. The gestures may also be produced under different illumination conditions or against occluding backgrounds. Any hand gesture recognition system should find enough discriminative features, such as hand-finger contextual information. However, existing approaches make only limited use of the depth information of hand fingers, which represents finger shapes, to extract discriminative finger features. If finger-bending information (i.e., a finger that overlaps the palm), extracted from the depth map, is instead used as a local feature, static gestures that vary only slightly become distinguishable. Our work corroborates this idea: we generate depth silhouettes with variation in contrast to obtain more discriminative keypoints, which in turn improves the recognition accuracy up to 96.84%. We apply the Scale-Invariant Feature Transform (SIFT) algorithm, which takes the generated depth silhouettes as input and produces robust feature descriptors as output. These features (after conversion into unified-dimensional feature vectors) are fed into a multiclass Support Vector Machine (SVM) classifier to measure the accuracy. We tested our results on a standard dataset containing 10 symbolic gestures representing the numeric symbols 0-9. We then verified and compared our results among depth images, binary images, and images consisting of the hand-finger edge information generated from the same dataset. Our results show higher accuracy when applying SIFT features to depth images. Accurately recognizing numeric symbols performed through hand gestures has a large impact on different Human-Computer Interaction (HCI) applications, including augmented reality, virtual reality, and other fields.
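
A simplified sketch of the recognition flow (SIFT on a contrast-enhanced depth silhouette, conversion to a fixed-length vector, multiclass SVM) is shown below, assuming OpenCV and scikit-learn. The mean-pooling of descriptors and the `depth_maps`/`labels` placeholders are illustrative simplifications; the paper's exact conversion to unified feature vectors may differ.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def silhouette_descriptor(depth_map):
    # Normalize depth to 8 bits and stretch contrast so that finger-bending
    # regions yield more distinctive SIFT keypoints.
    sil = cv2.normalize(depth_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    sil = cv2.equalizeHist(sil)
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(sil, None)
    if desc is None:                # no keypoints detected
        return np.zeros(128)
    return desc.mean(axis=0)        # fixed 128-D vector per gesture (illustrative pooling)

X = np.array([silhouette_descriptor(d) for d in depth_maps])   # placeholder depth maps
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, labels)  # digits 0-9
```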


2020 · Vol 8 (6) · pp. 2613-2618

Among the most dangerous cancers found in human beings, skin cancer is the most prevalent. It occurs in various forms, of which melanoma is the most sporadic. Early-phase identification of melanoma is helpful in curing it. Intensive skin exposure to UV radiation is the principal cause of melanoma. In this article, we use an SVM classifier, together with several feature extraction techniques (LDP [Local Directional Patterns], LBP [Local Binary Patterns], and Convolutional Neural Networks [CNN]), for the study of melanoma skin photos. The suggested algorithms grade well when compared with other recognition schemes. LBP and LDP provide the means to extract features, which are subsequently classified by the SVM (Support Vector Machine) classifier. For most classifications of melanoma skin images using these algorithms, we obtain accuracies of about 80% or higher, with the LBP features combined with the SVM classifier and its polynomial kernel proving the most powerful of the three. Using this feature-classifier combination, doctors can detect and diagnose melanoma skin lesions at an early stage, thereby helping to save lives.
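
The best-performing combination reported above (LBP features with a polynomial-kernel SVM) can be sketched as follows, using scikit-image and scikit-learn; the LBP radius, number of points, and the `lesion_images`/`labels` placeholders are assumptions, not values from the article.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, points=8, radius=1):
    """Uniform-LBP histogram of a grayscale lesion image."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(points + 3), density=True)
    return hist  # (points + 2) bins

X = np.array([lbp_histogram(img) for img in lesion_images])  # placeholder images
clf = SVC(kernel="poly", degree=3).fit(X, labels)            # melanoma vs. benign
```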


PeerJ · 2019 · Vol 7 · pp. e6201
Author(s): Dina A. Ragab, Maha Sharkas, Stephen Marshall, Jinchang Ren

It is important to detect breast cancer as early as possible. In this manuscript, a new methodology for classifying breast cancer using deep learning and several segmentation techniques is introduced. A new computer-aided detection (CAD) system is proposed for classifying benign and malignant mass tumors in breast mammography images. In this CAD system, two segmentation approaches are used. The first approach involves determining the region of interest (ROI) manually, while the second uses threshold- and region-based techniques. A deep convolutional neural network (DCNN) is used for feature extraction. A well-known DCNN architecture, AlexNet, is fine-tuned to classify two classes instead of 1,000. The last fully connected (fc) layer is connected to a support vector machine (SVM) classifier to obtain better accuracy. Results are obtained on the following publicly available datasets: (1) the Digital Database for Screening Mammography (DDSM) and (2) the Curated Breast Imaging Subset of DDSM (CBIS-DDSM). Training on a large amount of data gives a high accuracy rate; however, biomedical datasets contain a relatively small number of samples due to limited patient volume. Accordingly, data augmentation is used to increase the size of the input data by generating new data from the original input data. There are many forms of data augmentation; the one used here is rotation. The accuracy of the newly trained DCNN architecture is 71.01% when the ROI is cropped manually from the mammogram. The highest area under the curve (AUC) achieved was 0.88 (88%) for the samples obtained from both segmentation techniques. Moreover, when using the samples obtained from CBIS-DDSM, the accuracy of the DCNN increases to 73.6%; consequently, the SVM accuracy becomes 87.2% with an AUC of 0.94 (94%). This is the highest AUC value compared with previous work under the same conditions.
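
The transfer-learning idea (fine-tune AlexNet for two classes, then feed the last fully connected activations to an SVM) could look roughly like the following sketch with torchvision and scikit-learn; the data loading, fine-tuning loop, and `train_batch`/`train_labels` placeholders are omitted or assumed.

```python
import torch
import torchvision
from sklearn.svm import SVC

alexnet = torchvision.models.alexnet(weights="IMAGENET1K_V1")
alexnet.classifier[6] = torch.nn.Linear(4096, 2)   # two classes instead of 1,000
# ... fine-tune on mammogram ROIs (benign vs. malignant), omitted here ...
alexnet.eval()

def fc_features(batch):
    """4096-D activations from the penultimate fully connected layer."""
    with torch.no_grad():
        x = alexnet.avgpool(alexnet.features(batch)).flatten(1)
        return alexnet.classifier[:6](x)            # stop before the final fc layer

svm = SVC(kernel="linear").fit(fc_features(train_batch).numpy(), train_labels)
```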


2020 · Vol 9 (7) · pp. 2156
Author(s): Mi-ri Kwon, Jung Hee Shin, Hyunjin Park, Hwanho Cho, Eunjin Kim, ...

We aimed to evaluate whether radiomics analysis based on gray-scale ultrasound (US) can predict distant metastasis of follicular thyroid cancer (FTC). We retrospectively included 35 consecutive FTCs with distant metastases and 134 FTCs without distant metastasis. We extracted a total of 60 radiomics features derived from first-order, shape, gray-level co-occurrence matrix, and gray-level size zone matrix features using US imaging. A radiomics signature was generated using the least absolute shrinkage and selection operator and was used to train a support vector machine (SVM) classifier with five-fold cross-validation. The SVM classifier showed an average area under the curve (AUC) of 0.90 on the test folds. Age, size, widely invasive histology, extrathyroidal extension, lymph node metastases on pathology, nodule-in-nodule appearance, marked hypoechogenicity, and rim calcification on US were significantly more frequent among FTCs with distant metastasis than among those without (p < 0.05). The radiomics signature and widely invasive histology were significantly associated with distant metastasis on multivariate analysis (p < 0.01 and p = 0.003, respectively). The classifier using the results of the multivariate analysis showed an AUC of 0.93. The radiomics signature from thyroid ultrasound is an independent biomarker for noninvasively predicting distant metastasis of FTC.
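
The multivariate step (combining the radiomics signature with widely invasive histology and checking discrimination by AUC) might be sketched as below with scikit-learn; the data frame and column names are hypothetical placeholders.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

X = df[["radiomics_signature", "widely_invasive"]].values  # hypothetical columns
y = df["distant_metastasis"].values
# Out-of-fold probabilities from a logistic model, scored by ROC AUC.
probs = cross_val_predict(LogisticRegression(), X, y, cv=5,
                          method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, probs), 3))
```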


Electronics · 2020 · Vol 9 (9) · pp. 1443
Author(s): Mai Ramadan Ibraheem, Shaker El-Sappagh, Tamer Abuhmed, Mohammed Elmogy

The formation of a malignant neoplasm can be seen as the deterioration of a pre-malignant skin neoplasm in its functionality and structure. Distinguishing melanocytic skin neoplasms is a challenging task due to their high visual similarity with other types of lesions and the intra-structural variants of melanocytic neoplasms. In addition, there is a high level of visual likeness between different lesion types with inhomogeneous features and fuzzy boundaries. The abnormal growth of melanocytic neoplasms takes various forms, from a uniform typical pigment network to an irregular atypical shape, which can be described by the border irregularity of the melanocytic lesion image. This work proposes analytical reasoning for this human-observable phenomenon as a high-level feature to determine the neoplasm growth phase using a novel pixel-based feature space. The pixel-based feature space, which comprises high-level features together with other color and texture features, is fed into the classifier to classify different melanocytic neoplasm phases. The proposed system was evaluated on the PH2 dermoscopic image benchmark dataset. It achieved an average accuracy of 95.1% using a support vector machine (SVM) classifier with the radial basis function (RBF) kernel. Furthermore, it reached an average Dice similarity coefficient (DSC) of 95.1%, an area under the curve (AUC) of 96.9%, and a sensitivity of 99%. The proposed system outperforms other state-of-the-art multiclass techniques.
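
A brief sketch of the classification stage (RBF-kernel SVM over the pixel-based feature space, with accuracy and per-class sensitivity reported) is given below using scikit-learn; `X` and `y` stand in for the PH2-derived features and phase labels.

```python
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)   # RBF-kernel SVM
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("per-class sensitivity:", recall_score(y_te, pred, average=None))
```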


2019 · Vol 33 (19) · pp. 1950213
Author(s): Vibhav Prakash Singh, Rajeev Srivastava, Yadunath Pathak, Shailendra Tiwari, Kuldeep Kaur

A content-based image retrieval (CBIR) system generally retrieves images by matching the query image against all the images in the database. This exhaustive matching and searching slows down the image retrieval process. In this paper, a fast and effective CBIR system is proposed that uses supervised learning-based image management and retrieval techniques. It utilizes machine learning approaches as a prior step to speed up image retrieval in large databases. To implement this, we first extract computationally lightweight color and texture features based on statistical moments and the orthogonal combination of local binary patterns (OC-LBP). Then, using ground-truth annotations of some images, we train a multi-class support vector machine (SVM) classifier. This classifier acts as a manager and categorizes the remaining images into different libraries. At query time, the same features are extracted and fed to the SVM classifier, which detects the class of the query so that searching is narrowed down to the corresponding library. This supervised model, combined with a weighted Euclidean distance (ED), filters out most irrelevant images and reduces the search time. The work is evaluated and compared with a conventional CBIR system on two benchmark databases, and the proposed approach shows encouraging improvements in retrieval accuracy and response time for the same set of features.
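
The retrieval flow described above (predict the query's class with the SVM, restrict the search to that class's library, rank by weighted Euclidean distance) can be sketched as follows; the `libraries` structure and `weights` vector are illustrative assumptions.

```python
import numpy as np

def retrieve(query_feat, svm, libraries, weights, top_k=10):
    """Return the top_k most similar image ids from the predicted class library."""
    cls = svm.predict(query_feat.reshape(1, -1))[0]   # narrow the search space
    names, feats = libraries[cls]                     # (image ids, feature matrix)
    dists = np.sqrt((((feats - query_feat) ** 2) * weights).sum(axis=1))
    order = np.argsort(dists)[:top_k]
    return [names[i] for i in order]
```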


2020 · Author(s): Yang-Hong Dai, Po-Chien Shen, Wei-Chou Chang, Chen-Hsiang Lo, Jen-Fu Yang, ...

Abstract Background: Stereotactic body radiotherapy (SBRT) is an effective but less focused alternative for the treatment of hepatocellular carcinoma (HCC). To date, a personalized model for predicting therapeutic response has been lacking. This study aimed to review current knowledge and to propose a radiomics-based machine-learning (ML) strategy for local response (LR) prediction. Methods: We searched the literature for studies conducted between January 1993 and August 2019 that included > 100 patients. Additionally, 172 HCC patients treated at our hospital between January 2007 and December 2016 were retrospectively analyzed. In the radiomic analysis, 41 treated tumors were contoured and 46 radiomic features were extracted. Results: The 1-year local control was 85.4% in our patient cohort, comparable with published results (87-99%). The Support Vector Machine (SVM) classifier, based on computed tomography (CT) scans in the A phase processed by equal-probability (Ep) quantization with 8 gray levels, showed the highest mean F1 score (0.7995) for favorable LR within 1 year (W1R), at the end of follow-up (EndR), and under the in-field failure-free condition (IFFF). The area under the curve (AUC) for this model was 92.1%, 96.3%, and 99.2% for W1R, EndR, and IFFF, respectively. Conclusions: SBRT achieves high 1-year local control, and our study sets the basis for constructing predictive models for HCC patients receiving SBRT.
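
One plausible reading of the equal-probability (Ep) quantization step with 8 gray levels is quantile-based binning of the intensities inside the tumor contour, so that each level holds roughly the same number of voxels. The sketch below is that generic interpretation, not the study's actual preprocessing code; `tumor_voxels` is a placeholder.

```python
import numpy as np

def equal_probability_quantize(voxels, levels=8):
    """Map intensities to 1..levels using quantile boundaries."""
    edges = np.quantile(voxels, np.linspace(0, 1, levels + 1)[1:-1])
    return np.digitize(voxels, edges) + 1   # values in 1..levels

quantized = equal_probability_quantize(tumor_voxels)  # placeholder ROI intensities
```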


Author(s): Jean-Nicola Blanchet, Sébastien Déry, Jacques-André Landry, Kate Osborne

Current coral reef health monitoring programs rely on biodiversity data obtained through the acquisition and annotation of underwater photographs. Manual annotation of these photographs is a necessary step, but has become problematic due to the high volume of images and the lack of available human resources. While automated and reliable multi-spectral annotation methods exist, coral reef images are often limited to visible light, which makes automation difficult. Much of the previous work has focused on popular texture recognition methods, but the results remain unsatisfactory when compared to human performance on the same task. In this work, we present an improved automatic method for coral image annotation that yields consistent accuracy improvements over existing methods. Our method builds on previous work by combining multiple feature representations, and we demonstrate that the aggregation of multiple methods outperforms any single one. Furthermore, the proposed system requires virtually no parameter tuning and supports rejection for improved results. Firstly, the complex texture diversity of corals is handled by combining multiple feature representations: local binary patterns, hue and opponent angle histograms, textons, and deep convolutional activation features. Secondly, these representations are aggregated using a score-level fusion of multiple support vector machines. Thirdly, rejection can optionally be applied to enhance classification results and allows efficient semi-supervised image annotation in collaboration with human experts.
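
The score-level fusion with optional rejection might look like the sketch below: each feature representation has its own SVM, per-class decision scores are averaged, and predictions whose top-two score margin is too small are rejected for human annotation. The classifier list, one-vs-rest score shape, and the margin threshold are assumptions for illustration.

```python
import numpy as np

def fused_predict(svms, feature_sets, reject_margin=0.2):
    """Average one-vs-rest scores across SVMs; return labels, -1 where rejected."""
    scores = np.mean([svm.decision_function(f)          # (n_samples, n_classes) each
                      for svm, f in zip(svms, feature_sets)], axis=0)
    order = np.argsort(scores, axis=1)
    top, second = order[:, -1], order[:, -2]
    rows = np.arange(len(scores))
    margin = scores[rows, top] - scores[rows, second]
    labels = top.copy()
    labels[margin < reject_margin] = -1                 # hand off to a human expert
    return labels
```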


In this proposed method, an MR brain image segmentation technique based on K-means clustering, combined with Discrete Wavelet Transform (DWT)-based feature extraction and Gray Level Co-Occurrence Matrix (GLCM)-based feature selection, is presented. A Radial Basis Function (RBF) Support Vector Machine (SVM) classifier has been selected for this process. The performance of the classifier was estimated through accuracy, based on the fractions of selectivity and sensitivity, and the accuracy of the proposed classifier was found to be 93%. Moreover, in this proposed method, instead of selecting the cluster centres at random, a histogram technique was used.
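
A minimal sketch of the histogram-guided initialization mentioned above, assuming scikit-learn: the K most populated intensity bins of the image histogram seed the K-means centres instead of random picks. The peak-picking heuristic and the number of clusters are illustrative choices, not the method's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def histogram_seeded_kmeans(gray, k=4):
    """Segment a grayscale MR slice with histogram-seeded K-means."""
    hist, edges = np.histogram(gray.ravel(), bins=256, range=(0, 255))
    peaks = np.argsort(hist)[-k:]                    # k most populated bins
    init = np.sort(edges[peaks]).reshape(-1, 1)      # seed centres (intensity values)
    km = KMeans(n_clusters=k, init=init, n_init=1).fit(gray.reshape(-1, 1).astype(float))
    return km.labels_.reshape(gray.shape)
```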

