SS-2 Current status and future perspective of radiomics in glioma imaging

2020, Vol 2 (Supplement_3), pp. ii1-ii1
Author(s): Manabu Kinoshita, Yoshitaka Narita, Yonehiro Kanemura, Haruhiko Kishima

Abstract Quantitative imaging, primarily focused on brain tumors’ genetic alterations, has gained traction since the introduction of molecular-based diagnosis of gliomas. This trend started with fine-tuning MRS for detecting intracellular 2HG in IDH-mutant astrocytomas and has since expanded into a novel research field named “radiomics”. Along with the explosive development of machine learning algorithms, radiomics has become one of the most competitive research fields in neuro-oncology. However, one should be cautious in interpreting research achievements produced by radiomics, as there is still no “standard” in this novel field. For example, the method used for image feature extraction differs from study to study; some use machine learning for image feature extraction while others do not. Furthermore, the types of images used as input vary among studies: some restrict input to conventional anatomical MRI, while others include diffusion-weighted or even perfusion-weighted images. Taken together, however, previous reports seem to support the conclusion that IDH mutation status can be predicted with 80 to 90% accuracy for lower-grade gliomas. In contrast, prediction of MGMT promoter methylation status for glioblastoma remains exceptionally challenging. Although radiomics has shown sound improvements, it is still unclear when daily clinical practice will be able to incorporate this novel technology. Difficulty in generalizing an acquired prediction model to an external cohort is the major challenge in radiomics. This problem may derive from the fact that radiomics requires normalization of qualitative MR images into semi-quantitative images. Introducing “true” quantitative MR images to radiomics may be a key solution to this inherent problem.
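
As a hedged illustration of the kind of workflow such studies describe (not the authors' method), the sketch below cross-validates a classifier for IDH mutation status from a table of precomputed radiomic features with scikit-learn; the feature matrix and labels here are synthetic placeholders.

```python
# Hedged sketch (not the authors' pipeline): predicting IDH mutation status from
# precomputed radiomic features with scikit-learn. The synthetic feature matrix
# below stands in for features extracted from MR images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_features = 120, 50
X = rng.normal(size=(n_patients, n_features))   # placeholder radiomic features
y = rng.integers(0, 2, size=n_patients)         # placeholder IDH labels (0/1)

# Feature standardization stands in for the normalization step the abstract refers
# to; a quantitative-MRI pipeline would normalize at the image level instead.
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=300, random_state=0))

# Cross-validated accuracy on the internal cohort only; external validation on a
# separate cohort is where, per the abstract, generalization typically breaks down.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"Internal CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```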

2021, Vol 2083 (3), pp. 032054
Author(s): Lihua Luo

Abstract Nowadays, in the information age, pictures carry a great deal of information and play an indispensable role. With large numbers of images, it is very important to find useful image information within a limited time, so the performance of the image classification algorithm directly influences the classification result. Image classification takes an image as input and uses a classification algorithm to determine the image's category. The main stages of image classification are image preprocessing, image feature extraction, and classifier design. Compared with the manual feature extraction of traditional machine learning, a convolutional neural network under the deep learning model can automatically extract local features and share weights, and its image classification results are better than those of traditional machine learning algorithms. This paper focuses on image classification algorithms based on convolutional neural networks, compares and analyzes them against deep belief network algorithms, and summarizes the application characteristics of the different algorithms.
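
To make the preprocessing, feature extraction, and classification stages concrete, the following minimal sketch (an illustration, not the algorithm studied in the paper) defines a small convolutional classifier in PyTorch; the input size and number of classes are placeholders.

```python
# Minimal sketch of a convolutional image classifier in PyTorch, illustrating
# automatic local feature extraction with shared weights followed by a classifier.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local features with shared weights.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # The classifier head maps pooled features to class scores.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SmallCNN(num_classes=10)
dummy = torch.randn(8, 3, 64, 64)   # a batch of 8 RGB images, 64x64 pixels
logits = model(dummy)
print(logits.shape)                  # torch.Size([8, 10])
```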


Cancers, 2021, Vol 13 (6), pp. 1192
Author(s): Mizuho Nishio, Mari Nishio, Naoe Jimbo, Kazuaki Nakane

The purpose of this study was to develop a computer-aided diagnosis (CAD) system for automatic classification of histopathological images of lung tissues. Two datasets (one private and one public) were obtained and used for developing and validating the CAD system. The private dataset consists of 94 histopathological images covering five categories: normal, emphysema, atypical adenomatous hyperplasia, lepidic pattern of adenocarcinoma, and invasive adenocarcinoma. The public dataset consists of 15,000 histopathological images covering three categories: lung adenocarcinoma, lung squamous cell carcinoma, and benign lung tissue. These images were automatically classified using machine learning and two types of image feature extraction: conventional texture analysis (TA) and homology-based image processing (HI). Multiscale analysis was used in the image feature extraction, after which automatic classification was performed using the image features and eight machine learning algorithms. The multicategory accuracy of the CAD system was evaluated on the two datasets. In both the public and private datasets, the CAD system with HI outperformed the system with TA. It was possible to build an accurate CAD system for lung tissues, and HI was more useful for the CAD systems than TA.
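
The texture-analysis branch can be illustrated with gray-level co-occurrence matrix (GLCM) features; the sketch below is a hedged stand-in using scikit-image and scikit-learn with random arrays as placeholder images, and it does not reproduce the study's homology-based (HI) features, multiscale settings, or the full set of eight classifiers. Function names follow scikit-image >= 0.19.

```python
# Hedged sketch of a texture-analysis (TA) pipeline: GLCM features per image,
# then a comparison of a few scikit-learn classifiers via cross-validation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def glcm_features(gray_u8: np.ndarray) -> np.ndarray:
    """Basic GLCM descriptor for one 8-bit grayscale image."""
    glcm = graycomatrix(gray_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Placeholder "histopathological" images and tissue labels (random data).
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(128, 128), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 3, size=40)

X = np.stack([glcm_features(im) for im in images])
for name, clf in [("kNN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```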


Sensors, 2021, Vol 21 (4), pp. 1274
Author(s): Daniel Bonet-Solà, Rosa Ma Alsina-Pagès

Acoustic event detection and analysis has been widely developed in recent years for its valuable applications in monitoring elderly or dependent people, in surveillance, in multimedia retrieval, and even in biodiversity metrics for natural environments. For these purposes, sound source identification is a key issue in providing a smart technological answer to all of the aforementioned applications. Diverse types of sounds and varied environments, together with a number of application-specific challenges, widen the choice of candidate artificial intelligence algorithms. This paper presents a comparative study combining several feature extraction algorithms (Mel Frequency Cepstrum Coefficients (MFCC), Gammatone Cepstrum Coefficients (GTCC), and Narrow Band (NB)) with a group of machine learning algorithms (k-Nearest Neighbor (kNN), Neural Networks (NN), and Gaussian Mixture Model (GMM)), tested over five different acoustic environments. The goal is to detail a best-practice method and to evaluate the reliability of these general-purpose algorithms across all classes. Preliminary results show that most combinations of feature extraction and machine learning yield acceptable results in most of the described corpora. Nevertheless, one combination outperforms the others, the use of GTCC together with kNN, and its results are further analyzed for all the corpora.
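
A hedged sketch of one of the compared pairings, MFCC features with a kNN classifier, is shown below using librosa and scikit-learn on synthetic clips; GTCC or NB features and the NN/GMM classifiers would slot into the same structure, and none of the corpora or parameters here come from the study.

```python
# Hedged sketch: MFCC features summarized per clip, classified with k-nearest
# neighbors. Synthetic tones stand in for a real acoustic-event corpus.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

SR = 16000

def mfcc_vector(y: np.ndarray, sr: int = SR, n_mfcc: int = 13) -> np.ndarray:
    """Summarize a clip as the per-coefficient mean and std of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Two synthetic "event" classes: low-frequency vs. high-frequency tones in noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, SR, endpoint=False)
clips, labels = [], []
for i in range(20):
    f = 300.0 if i % 2 == 0 else 3000.0
    clips.append(np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(SR))
    labels.append(0 if f < 1000 else 1)

X = np.stack([mfcc_vector(c.astype(np.float32)) for c in clips])
knn = KNeighborsClassifier(n_neighbors=5)
print("CV accuracy:", cross_val_score(knn, X, np.array(labels), cv=5).mean())
```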

