Automatic Recognition of Colon and Esophagogastric Cancer with Machine Learning and Hyperspectral Imaging

Diagnostics ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1810
Author(s):  
Toby Collins ◽  
Marianne Maktabi ◽  
Manuel Barberio ◽  
Valentin Bencteux ◽  
Boris Jansen-Winkeln ◽  
...  

There are approximately 1.8 million diagnoses of colorectal cancer, 1 million diagnoses of stomach cancer, and 0.6 million diagnoses of esophageal cancer each year globally. An automatic computer-assisted diagnostic (CAD) tool that rapidly detects colorectal and esophagogastric cancer tissue in optical images would be hugely valuable to a surgeon during an intervention. Based on a colon dataset with 12 patients and an esophagogastric dataset of 10 patients, several state-of-the-art machine learning methods were trained to detect cancer tissue using hyperspectral imaging (HSI), including Support Vector Machines (SVM) with radial basis function kernels, Multi-Layer Perceptrons (MLP) and 3D Convolutional Neural Networks (3DCNN). A leave-one-patient-out cross-validation (LOPOCV) was performed with and without combining these sets. The ROC-AUC score of the 3DCNN was slightly higher than that of the MLP and SVM, by 0.04 AUC. The best performance was achieved by the 3DCNN for both colon and esophagogastric cancer detection, with a high ROC-AUC of 0.93. The 3DCNN also achieved the best Dice scores of 0.49 and 0.41 on the colon and esophagogastric datasets, respectively. These scores improved significantly, to 0.58 and 0.51, respectively, when a patient-specific decision threshold was used. This indicates that, in practical use, an HSI-based CAD system with an interactive decision threshold is likely to be valuable. Experiments were also performed to measure the benefit of combining the colorectal and esophagogastric datasets (22 patients), which yielded significantly better results with the MLP and SVM models.
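The leave-one-patient-out cross-validation (LOPOCV) protocol mentioned above can be sketched in a few lines. This is a minimal illustration, not the authors' code; the sample/patient layout is a hypothetical assumption:

```python
def lopo_splits(patient_ids):
    """Yield (train_indices, test_indices), holding out one patient per fold.

    All samples from the held-out patient form the test set, so no patient
    contributes to both training and testing in the same fold.
    """
    for held_out in sorted(set(patient_ids)):
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        yield train, test
```

With 12 colon patients this yields 12 folds; each classifier is retrained per fold and evaluated only on the unseen patient.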

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
A. Sharafeldeen ◽  
M. Elsharkawy ◽  
F. Khalifa ◽  
A. Soliman ◽  
M. Ghazal ◽  
...  

Abstract This study proposes a novel computer-assisted diagnostic (CAD) system for early diagnosis of diabetic retinopathy (DR) using optical coherence tomography (OCT) B-scans. The CAD system is based on fusing novel OCT markers that describe both the morphology/anatomy and the reflectivity of retinal layers to improve DR diagnosis. This system separates retinal layers automatically using a segmentation approach based on an adaptive appearance model and prior shape information. High-order morphological and novel reflectivity markers are extracted from individual segmented layers. Namely, the morphological markers are layer thickness and tortuosity, while the reflectivity markers are the 1st-order reflectivity of the layer in addition to local and global high-order reflectivity based on Markov-Gibbs random field (MGRF) and gray-level co-occurrence matrix (GLCM), respectively. The extracted image-derived markers are represented using cumulative distribution function (CDF) descriptors. The constructed CDFs are then described using their statistical measures, i.e., the 10th through 90th percentiles with a 10% increment. For individual layer classification, each extracted descriptor of a given layer is fed to a support vector machine (SVM) classifier with a linear kernel. The results of the four classifiers are then fused using a backpropagation neural network (BNN) to diagnose each retinal layer. For global subject diagnosis, classification outputs (probabilities) of the twelve layers are fused using another BNN to make the final diagnosis of the B-scan. This system is validated and tested on 130 patients, with two scans per patient (i.e., 260 OCT images) and a balanced number of normal and DR subjects, using different validation schemes: 2-fold, 4-fold, 10-fold, and leave-one-subject-out (LOSO) cross-validation. The performance of the proposed system was evaluated using sensitivity, specificity, F1-score, and accuracy metrics.
The system’s performance after the fusion of these different markers was better than that of individual markers and other machine learning fusion methods. Namely, it achieved 96.15%, 99.23%, 97.66%, and 97.69%, respectively, using the LOSO cross-validation technique. The reported results, based on the integration of morphology and reflectivity markers and the use of state-of-the-art machine learning classifiers, demonstrate the ability of the proposed system to diagnose DR early.
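The CDF descriptor described above reduces each marker's distribution to its 10th through 90th percentiles in 10% steps. A minimal sketch of that summarisation, assuming nearest-rank percentile interpolation (the paper's exact interpolation rule is not stated in this abstract):

```python
def cdf_percentile_descriptor(values):
    """Summarise an empirical distribution by its 10th..90th percentiles
    in 10% increments (nearest-rank on the sorted sample)."""
    s = sorted(values)
    n = len(s)
    descriptor = []
    for q in range(10, 100, 10):
        k = min(n - 1, int(q / 100 * (n - 1) + 0.5))  # round half up
        descriptor.append(s[k])
    return descriptor
```

The resulting 9-element vector per marker per layer is what feeds each linear-kernel SVM.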


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3827
Author(s):  
Gemma Urbanos ◽  
Alberto Martín ◽  
Guillermo Vázquez ◽  
Marta Villanueva ◽  
Manuel Villa ◽  
...  

Hyperspectral imaging (HSI) techniques do not require contact with patients and are non-ionizing as well as non-invasive. As a consequence, they have been extensively applied in the medical field. HSI is being combined with machine learning (ML) processes to obtain models to assist in diagnosis. In particular, the combination of these techniques has proven to be a reliable aid in the differentiation of healthy and tumor tissue during brain tumor surgery. ML algorithms such as support vector machine (SVM), random forest (RF) and convolutional neural networks (CNN) are used to make predictions and provide in-vivo visualizations that may assist neurosurgeons in being more precise, hence reducing damage to healthy tissue. In this work, thirteen in-vivo hyperspectral images from twelve different patients with high-grade gliomas (grade III and IV) have been selected to train SVM, RF and CNN classifiers. Five different classes have been defined during the experiments: healthy tissue, tumor, venous blood vessel, arterial blood vessel and dura mater. Overall accuracy (OACC) results vary from 60% to 95% depending on the training conditions. Finally, as far as the contribution of each band to the OACC is concerned, the results obtained in this work are 3.81 times greater than those reported in the literature.
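Overall accuracy (OACC) over the five tissue classes is simply the fraction of correctly labelled pixels, usually read off the diagonal of a confusion matrix. A minimal sketch (illustrative class names; not the authors' evaluation code):

```python
def confusion_and_oacc(y_true, y_pred, classes):
    """Build a per-class confusion matrix and compute overall accuracy
    (OACC) as the diagonal sum over the total sample count."""
    cm = {t: {p: 0 for p in classes} for t in classes}
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    correct = sum(cm[c][c] for c in classes)
    return cm, correct / len(y_true)
```

The off-diagonal cells show which classes are confused, e.g. tumor pixels labelled as healthy tissue.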


2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Henry Joutsijoki ◽  
Markus Haponen ◽  
Jyrki Rasku ◽  
Katriina Aalto-Setälä ◽  
Martti Juhola

The focus of this research is on automated identification of the quality of human induced pluripotent stem cell (iPSC) colony images. iPS cell technology is a contemporary method by which a patient’s cells are reprogrammed back to stem cells and differentiated into any desired cell type. iPS cell technology will be used in the future for patient-specific drug screening, disease modeling, and tissue repair, for instance. However, there are technical challenges before iPS cell technology can be used in practice, and one of them is quality control of growing iPSC colonies, which is currently done manually but is an unfeasible solution in large-scale cultures. The monitoring problem reduces to an image analysis and classification problem. In this paper, we tackle this problem using machine learning methods such as multiclass Support Vector Machines and several baseline methods together with Scale-Invariant Feature Transform (SIFT) based features. We perform over 80 test arrangements and do a thorough parameter value search. The best accuracy (62.4%) for classification was obtained by using a k-NN classifier, showing improved accuracy compared to earlier studies.
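Since the best performer here was a k-NN classifier, its core is worth making concrete: classify a feature vector by majority vote among its k nearest training samples. A minimal Euclidean-distance sketch (not the study's implementation; feature vectors and labels are hypothetical):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training samples nearest to x (Euclidean)."""
    neighbours = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y)
    )[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

In the study the inputs would be SIFT-derived descriptors of colony images rather than raw pixels, and k is one of the parameters swept in the search.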


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0257901
Author(s):  
Yanjing Bi ◽  
Chao Li ◽  
Yannick Benezeth ◽  
Fan Yang

Phoneme pronunciations are usually considered basic skills for learning a foreign language. Practicing pronunciation in a computer-assisted way is helpful in a self-directed or long-distance learning environment. Recent research indicates that machine learning is a promising method for building high-performance computer-assisted pronunciation training modalities. Many data-driven classification models, such as support vector machines, back-propagation networks, deep neural networks and convolutional neural networks, are increasingly widely used for it. Yet, the acoustic waveforms of phonemes are essentially modulated from the base vibrations of the vocal cords, and this fact makes the predictors collinear, distorting the classification models. A commonly used solution to this issue is to suppress the collinearity of predictors via the partial least squares (PLS) regression algorithm, which yields high-quality predictor weightings through predictor relationship analysis. However, as a linear regressor, classifiers of this type have very simple topologies, which constrains their generality. To address this, this paper presents a heterogeneous phoneme recognition framework that can further benefit phoneme pronunciation diagnostic tasks by combining partial least squares with support vector machines. A French phoneme dataset containing 4830 samples was established for the evaluation experiments. The experiments in this paper demonstrate that the new method improves the accuracy of the phoneme classifiers by 0.21–8.47% compared to the state of the art at different training data densities.
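The PLS idea referenced above replaces collinear predictors with a few latent directions that maximise covariance with the response; an SVM can then be trained on the latent scores. A minimal sketch of the first PLS1 component only (the full algorithm deflates and iterates; X and y are assumed mean-centred, and this is not the paper's code):

```python
def pls1_first_component(X, y):
    """First PLS1 latent direction: weights w proportional to X^T y
    (the covariance-maximising direction), scores t = X w.
    Assumes X (list of rows) and y are mean-centred."""
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    return w, t
```

When two predictors are perfectly collinear they receive equal weight, so the score vector t carries their shared information once instead of twice, which is exactly the distortion the paper aims to remove before classification.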


2017 ◽  
Vol 58 (1) ◽  
pp. 123-134 ◽  
Author(s):  
Koujiro Ikushima ◽  
Hidetaka Arimura ◽  
Ze Jin ◽  
Hidetake Yabu-uchi ◽  
Jumpei Kuwazuru ◽  
...  

Abstract We have proposed a computer-assisted framework for machine-learning–based delineation of gross tumor volumes (GTVs) following an optimum contour selection (OCS) method. The key idea of the proposed framework was to feed image features around GTV contours (determined based on the knowledge of radiation oncologists) into a machine-learning classifier during the training step, after which the classifier produces the ‘degree of GTV’ for each voxel in the testing step. Initial GTV regions were extracted using a support vector machine (SVM) that learned the image features inside and outside each tumor region (determined by radiation oncologists). The leave-one-out-by-patient test was employed for the training and testing steps of the proposed framework. The final GTV regions were determined using the OCS method, which can select a globally optimal object contour based on multiple active delineations with a level set method (LSM) around the GTV. The efficacy of the proposed framework was evaluated in 14 lung cancer cases [solid: 6, ground-glass opacity (GGO): 4, mixed GGO: 4] using the 3D Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those determined using the proposed framework. The proposed framework achieved an average DSC of 0.777 for the 14 cases, whereas the OCS-based framework produced an average DSC of 0.507. The average DSCs for GGO and mixed GGO obtained by the proposed framework were 0.763 and 0.701, respectively. The proposed framework can be employed as a tool to assist radiation oncologists in delineating various GTV regions.
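The 3D Dice similarity coefficient used for evaluation is twice the overlap of the two voxel sets divided by the sum of their sizes. A minimal sketch (voxels represented as coordinate tuples; illustrative only):

```python
def dice_coefficient(voxels_a, voxels_b):
    """3D Dice similarity coefficient between two voxel index sets:
    2|A intersect B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(voxels_a), set(voxels_b)
    if not a and not b:
        return 1.0  # convention: two empty delineations agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))
```

Here A would be the oncologist-contoured GTV and B the framework's delineation; a DSC of 0.777 means the regions overlap substantially but not perfectly.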


2021 ◽  
Vol 11 ◽  
Author(s):  
Yanjie Li ◽  
Mahmoud Al-Sarayreh ◽  
Kenji Irie ◽  
Deborah Hackell ◽  
Graeme Bourdot ◽  
...  

Weeds can be major environmental and economic burdens in New Zealand. Traditional methods of weed control, including manual and chemical approaches, can be time consuming and costly, and some chemical herbicides may have negative environmental and human health impacts. One important step toward alternatives to these traditional approaches is the automated identification and mapping of weeds. We used hyperspectral imaging data and machine learning to explore the possibility of fast, accurate and automated discrimination of weeds in pastures where ryegrass and clovers are the sown species. Hyperspectral images from two grasses (Setaria pumila [yellow bristle grass] and Stipa arundinacea [wind grass]) and two broadleaf weed species (Ranunculus acris [giant buttercup] and Cirsium arvense [Californian thistle]) were acquired and pre-processed using the standard normal variate method. We trained three classification models, namely partial least squares discriminant analysis (PLS-DA), support vector machine (SVM), and multilayer perceptron (MLP), using whole-plant averaged (Av) spectra and superpixel-averaged (Sp) spectra from each weed sample. All three classification models showed repeatable identification of the four weeds using both Av and Sp spectra, with overall accuracies ranging from 70% to 100%. However, the MLP based on the Sp method produced the most reliable and robust prediction result (89.1% accuracy). Four significant spectral regions were found to be highly informative for characterizing the four weed species and could form the basis for a rapid and efficient methodology for identifying weeds in ryegrass/clover pastures.
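The standard normal variate (SNV) pre-processing mentioned above centres and scales each spectrum individually, removing multiplicative scatter and baseline offsets before classification. A minimal sketch, assuming the sample standard deviation (n−1 denominator):

```python
def snv(spectrum):
    """Standard normal variate transform: subtract each spectrum's own mean
    and divide by its own standard deviation, so every spectrum has zero
    mean and unit variance regardless of illumination or scatter level."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = (sum((v - mean) ** 2 for v in spectrum) / (n - 1)) ** 0.5
    return [(v - mean) / sd for v in spectrum]
```

Applied per pixel (or per superpixel average), SNV makes spectra from differently lit plant surfaces directly comparable.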


Smart Cities ◽  
2020 ◽  
Vol 3 (3) ◽  
pp. 767-792 ◽  
Author(s):  
Wen-Hao Su

Crop productivity is readily reduced by competition from weeds, so it is particularly important to control weeds early to prevent yield losses. Limited herbicide choices and the increasing costs of weed management are threatening the profitability of crops. Smart agriculture can use intelligent technology to accurately measure the distribution of weeds in the field and perform weed control tasks in selected areas, which can not only improve the effectiveness of pesticides but also increase the economic benefits of agricultural products. The key requirement for an automatic system that removes weeds within crop rows is reliable sensing technology that achieves accurate differentiation of weeds and crops at specific locations in the field. In recent years, there have been many significant achievements in the differentiation of crops and weeds, related to the development of rapid and non-destructive sensors as well as methods for analyzing the data obtained. This paper presents a review of three sensing methods, namely spectroscopy, color imaging, and hyperspectral imaging, in the discrimination of crops and weeds. Several machine learning algorithms have been employed for data analysis, such as convolutional neural network (CNN), artificial neural network (ANN), and support vector machine (SVM). Successful applications include weed detection in grain crops (such as maize, wheat, and soybean), vegetable crops (such as tomato, lettuce, and radish), and fiber crops (such as cotton) with unsupervised or supervised learning. This review gives a brief introduction to the proposed sensing and machine learning methods, then provides an overview of instructive examples of these techniques for weed/crop discrimination. The discussion describes the recent progress made in the development of automated technology for accurate plant identification, as well as the challenges and future prospects. It is believed that this review is of great significance to those who study automatic plant care in crops using intelligent technology.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Daiki Sato ◽  
Toshihiro Takamatsu ◽  
Masakazu Umezawa ◽  
Yuichi Kitagawa ◽  
Kosuke Maeda ◽  
...  

Abstract The diagnosis of gastrointestinal stromal tumor (GIST) using conventional endoscopy is difficult because submucosal tumor (SMT) lesions like GIST are covered by a mucosal layer. Near-infrared hyperspectral imaging (NIR-HSI) can obtain optical information from deep inside tissues. However, far less progress has been made in the development of techniques for distinguishing deep lesions like GIST. This study aimed to investigate whether NIR-HSI is suitable for distinguishing deep SMT lesions. In this study, 12 gastric GIST lesions were surgically resected and imaged ex vivo with an NIR hyperspectral camera from the mucosal-surface side. The site of the GIST was defined by a pathologist using the NIR image to prepare training data for normal and GIST regions. A machine learning algorithm, a support vector machine (SVM), was then used to predict normal and GIST regions, with results displayed as color-coded areas. Although 7 specimens had a mucosal layer (thickness 0.4–2.5 mm) covering the GIST lesion, NIR-HSI analysis by machine learning distinguished normal and GIST regions as color-coded areas. The specificity, sensitivity, and accuracy of the results were 73.0%, 91.3%, and 86.1%, respectively. The study suggests that NIR-HSI analysis may potentially help distinguish deep lesions.
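The specificity, sensitivity and accuracy figures above follow directly from the binary confusion counts of the pixel-wise prediction. A minimal sketch of those definitions (illustrative counts, not the study's data):

```python
def sens_spec_acc(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from binary confusion counts.

    Here a 'positive' would be a GIST-region pixel:
      sensitivity = TP / (TP + FN)   fraction of GIST pixels found
      specificity = TN / (TN + FP)   fraction of normal pixels kept normal
      accuracy    = (TP + TN) / all
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

The pattern in the study (sensitivity 91.3% above specificity 73.0%) means the SVM misses few GIST pixels but flags some normal mucosa as tumor.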


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Paul G. M. Knoops ◽  
Athanasios Papaioannou ◽  
Alessandro Borghi ◽  
Richard W. F. Breakey ◽  
Alexander T. Wilson ◽  
...  

Abstract Current computational tools for planning and simulation in plastic and reconstructive surgery lack sufficient precision and are time-consuming, thus resulting in limited adoption. Although computer-assisted surgical planning systems help to improve clinical outcomes, shorten operation time and reduce cost, they are often too complex and require extensive manual input, which ultimately limits their use in doctor-patient communication and clinical decision making. Here, we present the first large-scale clinical 3D morphable model, a machine-learning-based framework involving supervised learning for diagnostics, risk stratification, and treatment simulation. The model, trained and validated with 4,261 faces of healthy volunteers and orthognathic (jaw) surgery patients, diagnoses patients with 95.5% sensitivity and 95.2% specificity, and simulates surgical outcomes with a mean accuracy of 1.1 ± 0.3 mm. We demonstrate how this model could fully automatically aid diagnosis and provide patient-specific treatment plans from a 3D scan alone, to support efficient clinical decision making and improve clinical understanding of face shape as a marker for primary and secondary surgery.


Foods ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 620 ◽  
Author(s):  
Pan Gao ◽  
Wei Xu ◽  
Tianying Yan ◽  
Chu Zhang ◽  
Xin Lv ◽  
...  

Narrow-leaved oleaster (Elaeagnus angustifolia) fruit is a natural product used as food and traditional medicine. Narrow-leaved oleaster fruits from different geographical origins vary in chemical and physical properties and differ in their nutritional and commercial values. In this study, near-infrared hyperspectral imaging covering the spectral range of 874–1734 nm was used to identify the geographical origins of dry narrow-leaved oleaster fruits with machine learning methods. Average spectra of each single narrow-leaved oleaster fruit were extracted, and second-derivative spectra were used to identify effective wavelengths. Partial least squares discriminant analysis (PLS-DA) and support vector machine (SVM) models were built for geographical origin identification using full spectra and effective wavelengths. In addition, deep convolutional neural network (CNN) models were built using full spectra and effective wavelengths. Good classification performances were obtained by all three models using both full spectra and effective wavelengths, with classification accuracies for the calibration, validation, and prediction sets all over 90%. Models using effective wavelengths obtained results close to those of models using full spectra, and the performances of the PLS-DA, SVM, and CNN models were similar. The overall results illustrate that near-infrared hyperspectral imaging coupled with machine learning could be used to trace the geographical origins of dry narrow-leaved oleaster fruits.
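The second-derivative step used to pick effective wavelengths can be illustrated with a plain central finite difference; bands where the second derivative has pronounced extrema are candidate effective wavelengths. (In practice a smoothing derivative such as Savitzky–Golay is typical; this unsmoothed sketch is an assumption, not the paper's exact procedure.)

```python
def second_derivative(spectrum):
    """Central-difference second derivative of a 1D spectrum sampled on a
    uniform wavelength grid; the two boundary bands are dropped. Large
    magnitudes flag curvature, i.e. candidate effective wavelengths."""
    return [
        spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
        for i in range(1, len(spectrum) - 1)
    ]
```

Selecting only the flagged bands shrinks the input from hundreds of wavelengths to a handful, which is why the effective-wavelength models perform close to the full-spectrum ones at a fraction of the cost.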

