Prediction of an oxygen extraction fraction map by convolutional neural network: validation of input data among MR and PET images

Author(s):  
Keisuke Matsubara ◽  
Masanobu Ibaraki ◽  
Yuki Shinohara ◽  
Noriyuki Takahashi ◽  
Hideto Toyoshima ◽  
...  

Abstract
Purpose: Oxygen extraction fraction (OEF) is a biomarker for the viability of brain tissue in ischemic stroke. However, acquiring an OEF map with positron emission tomography (PET) and oxygen-15 gas is uncomfortable for patients because of the long fixation time, invasive arterial sampling, and radiation exposure. We aimed to predict the OEF map from magnetic resonance (MR) and PET images using a deep convolutional neural network (CNN) and to determine which PET and MR images are optimal as inputs for this prediction.
Methods: Maps of cerebral blood flow at rest (CBF) and during stress (sCBF) and of cerebral blood volume (CBV) acquired from oxygen-15 PET, together with routine MR images (T1-, T2-, and T2*-weighted images), for 113 patients with steno-occlusive disease were used to train a U-Net. MR and PET images from another 25 patients served as test data. We compared the predicted OEF maps and their intraclass correlation (ICC) with the real OEF values across combinations of MRI, CBF, CBV, and sCBF inputs.
Results: Among the input combinations, OEF maps predicted by the model trained with MRI, CBF, CBV, and sCBF maps were the most similar to the real OEF maps (ICC: 0.597 ± 0.082). However, the contrast of the predicted OEF maps was lower than that of the real maps.
Conclusion: These results suggest that the deep CNN learned useful features from the CBF, sCBF, CBV, and MR images and predicted qualitatively realistic OEF maps. The model could therefore shorten the fixation time for 15O PET by making 15O2 scans unnecessary. Further training with a larger data set is required to predict quantitatively accurate OEF maps.
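The abstract does not state which ICC variant was used to compare predicted and real OEF maps, so as an illustration only, here is a minimal numpy sketch of one common formulation, ICC(2,1) (two-way random effects, absolute agreement), treating the predicted and real OEF values at each voxel as two "raters" of the same subject:

```python
import numpy as np

def icc_2_1(x, y):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    x, y: paired measurements (e.g., real vs. predicted OEF per voxel)."""
    data = np.column_stack([np.asarray(x, float), np.asarray(y, float)])
    n, k = data.shape                                   # n subjects, k = 2 "raters"
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)   # between raters
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))                  # residual
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)
```

Because ICC(2,1) measures absolute agreement, a prediction that is perfectly correlated but systematically offset scores below 1, which is one reason it is a stricter metric than plain correlation.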

2017 ◽  
Vol 79 (2) ◽  
pp. 890-899 ◽  
Author(s):  
Sebastian Domsch ◽  
Bettina Mürle ◽  
Sebastian Weingärtner ◽  
Jascha Zapp ◽  
Frederik Wenz ◽  
...  

2020 ◽  
Vol 5 (2) ◽  
pp. 192-195
Author(s):  
Umesh B. Chavan ◽  
Dinesh Kulkarni

Facial expression recognition (FER) systems have attracted considerable research interest in machine learning. We designed a large, deep convolutional neural network (CNN) to classify each of the 40,000 images in the data set into one of seven categories (disgust, fear, happy, angry, sad, neutral, surprise), implementing the model in Theano and Caffe for training. The proposed architecture achieves 61% accuracy. This work also presents results of an accelerated implementation of the CNN on graphics processing units (GPUs), optimizing the deep CNN to reduce training time.
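The final stage of a seven-class FER network like the one described is a softmax output layer followed by an argmax over class scores. A minimal numpy sketch of that decision stage (the class ordering here is illustrative, not the paper's):

```python
import numpy as np

CLASSES = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def softmax(logits):
    """Numerically stable softmax over the last axis, as in a CNN's output layer."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(logits):
    """Map per-class scores to an expression label (argmax of the softmax)."""
    return [CLASSES[i] for i in np.argmax(logits, axis=-1)]

def accuracy(pred, truth):
    """Fraction of predictions matching the ground-truth labels."""
    return float(np.mean([p == t for p, t in zip(pred, truth)]))
```

The reported 61% figure would be `accuracy` computed over the held-out test split.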


The development of abnormal cells that can invade or spread to other parts of the body is the cause of skin cancer. Signs of skin cancer may include a mole that has changed in size, shape, or color, has non-uniform edges, shows multiple colours, or itches or even bleeds in some cases. Exposure to UV rays from the sun is considered responsible for more than 90% of recorded skin cancer cases. This paper discusses the development of a classification system for skin cancer using a convolutional neural network, built with TensorFlow and Keras, that classifies lesions as malignant or benign. Images collected from the data set are fed into the system and processed to classify the skin cancer. After implementation, the accuracy of the two-dimensional convolutional system was found to be 78%.
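The core operation of the 2-D convolutional layers mentioned above is a sliding dot product between a learned kernel and image patches. A minimal numpy sketch of that operation and of the sigmoid that would produce the final malignant/benign score (this is a hand-rolled illustration, not the paper's Keras code):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the core op of a Conv2D layer.
    Slides the kernel over the image and takes a dot product at each position."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def sigmoid(z):
    """Squashes the final score into (0, 1); > 0.5 would be read as malignant."""
    return 1.0 / (1.0 + np.exp(-z))
```

A real model stacks many such layers with learned kernels, nonlinearities, and pooling; frameworks like Keras also vectorize the loops shown here.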


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Wendong Wang

In recent years, with the acceleration of population aging and increasing life pressures, the proportion of chronic diseases has gradually increased. A large amount of medical data is generated during the hospitalization of diabetic patients, and discovering potential medical patterns and valuable information in these data has practical significance and social value. In view of this, an improved deep convolutional neural network algorithm ("CNN+" for short) is proposed to predict the progression of diabetes. A bagging ensemble classifier replaces the output-layer function of the deep CNN, adapting the network to the diabetic-patient data set and improving classification accuracy. The "CNN+" algorithm thus combines the advantages of both components: the deep CNN's powerful feature extraction uncovers latent features of the data set, while the bagging ensemble classifies those features, improving accuracy and yielding better disease prediction to assist doctors in diagnosis and treatment. Experimental results show that, compared with a traditional convolutional neural network and other classification algorithms, the "CNN+" model produces more reliable predictions.
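The bagging stage described above can be sketched independently of the CNN: each weak learner is trained on a bootstrap resample of the extracted feature vectors, and the ensemble predicts by majority vote. The sketch below uses a 1-nearest-neighbour rule as a stand-in weak learner (the paper does not specify its base learner), with the `features` array playing the role of the deep CNN's extracted features:

```python
import numpy as np

def bagging_predict(features, labels, query, n_estimators=11, seed=0):
    """Bagging by majority vote: each estimator is a 1-NN classifier fit on a
    bootstrap resample of the (feature, label) pairs. Illustrative only; in the
    "CNN+" pipeline, `features` would come from the deep CNN's last hidden layer."""
    rng = np.random.default_rng(seed)
    n = len(features)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, n)                     # bootstrap resample
        Xb, yb = features[idx], labels[idx]
        nearest = np.argmin(np.linalg.norm(Xb - query, axis=1))
        votes.append(yb[nearest])
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]                      # majority vote
```

Averaging many bootstrap-trained learners reduces variance, which is the property the paper leans on to stabilize the CNN's output layer.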


2021 ◽  
Vol 7 (2) ◽  
pp. 356-362
Author(s):  
Harry Coppock ◽  
Alex Gaskell ◽  
Panagiotis Tzirakis ◽  
Alice Baird ◽  
Lyn Jones ◽  
...  

Background: Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution.
Methods: This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep neural network based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings.
Results: Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set consisting of 355 crowdsourced participants, achieving an area under the receiver operating characteristic curve of 0.846 on the task of COVID-19 classification.
Conclusion: This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up research study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.
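The 0.846 figure is the area under the ROC curve, which equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal numpy sketch of that rank-based computation (not the authors' evaluation code):

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the rank interpretation: the fraction of (positive, negative)
    pairs where the positive scores higher; ties count as half."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()       # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, so 0.846 indicates the model ranks COVID-positive recordings above negative ones in roughly 85% of pairs.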


2020 ◽  
Vol 65 (6) ◽  
pp. 759-773
Author(s):  
Segu Praveena ◽  
Sohan Pal Singh

Abstract
Early detection and diagnosis of acute lymphoblastic leukaemia (ALL) is a trending topic in medical applications for reducing the death toll of patients with ALL. Detecting ALL requires analysis of white blood cells (WBCs), for which blood smear images are employed. This paper proposes a new technique for the segmentation and classification of acute lymphoblastic leukaemia. The proposed method of automatic leukaemia detection is based on a deep convolutional neural network (Deep CNN) trained with an optimization algorithm named the Grey Wolf-based Jaya Optimization Algorithm (GreyJOA), developed from the Grey Wolf Optimizer (GWO) and the Jaya Optimization Algorithm (JOA) to improve global convergence. Initially, the input image is pre-processed and segmented using the Sparse Fuzzy C-Means (Sparse FCM) clustering algorithm. Then, features such as Local Directional Patterns (LDP) and colour histogram-based features are extracted from the segments of the pre-processed input image. Finally, the extracted features are passed to the Deep CNN for classification. Experimental evaluation of the method on images from the ALL IDB2 database reveals that the proposed method achieved a maximal accuracy, sensitivity, and specificity of 0.9350, 0.9528, and 0.9389, respectively.
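The Jaya half of GreyJOA is a simple population-based update: each candidate moves toward the current best solution and away from the worst, and a move is kept only if it improves fitness. A minimal numpy sketch of plain Jaya on an arbitrary objective (the GWO hybridisation that makes it GreyJOA is not shown, and the update form follows the commonly published Jaya rule, which may differ in detail from the paper's variant):

```python
import numpy as np

def jaya(f, X, iters=50, seed=1, lo=-5.0, hi=5.0):
    """Plain Jaya optimizer: pull candidates toward the best and push away from
    the worst, with greedy acceptance. f: objective to minimize; X: initial
    population (pop x dim). Returns (best_solution, best_fitness)."""
    rng = np.random.default_rng(seed)
    X = np.array(X, float)
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        best, worst = X[np.argmin(fit)], X[np.argmax(fit)]
        for k in range(len(X)):
            r1, r2 = rng.random(X.shape[1]), rng.random(X.shape[1])
            cand = np.clip(X[k] + r1 * (best - np.abs(X[k]))
                                - r2 * (worst - np.abs(X[k])), lo, hi)
            fc = f(cand)
            if fc < fit[k]:                 # keep a move only if it improves
                X[k], fit[k] = cand, fc
    i = np.argmin(fit)
    return X[i], fit[i]
```

In the paper's setting, `f` would be the Deep CNN's training loss and each candidate a weight vector, with GWO steps interleaved to improve global convergence.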


1988 ◽  
Vol 8 (2) ◽  
pp. 227-235 ◽  
Author(s):  
Iwao Kanno ◽  
Kazuo Uemura ◽  
Schuichi Higano ◽  
Matsutaro Murakami ◽  
Hidehiro Iida ◽  
...  

The oxygen extraction fraction (OEF) at maximally vasodilated tissue in patients with chronic cerebrovascular disease was evaluated using positron emission tomography. The vascular responsiveness to changes in PaCO2 was measured by the H215O autoradiographic method. It was correlated with the resting-state OEF, as estimated using the 15O steady-state method. The subjects comprised 15 patients with unilateral or bilateral occlusion and stenosis of the internal carotid artery or middle cerebral artery or moyamoya disease. In hypercapnia, the scattergram between the OEF and the vascular responsiveness to changes in PaCO2 revealed a significant negative correlation in 11 of 19 studies on these patients, and the OEF at the zero cross point of the regression line with a vascular responsiveness of 0 was 0.53 ± 0.08 (n = 11). This OEF in the resting state corresponds to exhaustion of the capacity for vasodilation. The vasodilatory capacity is discussed in relation to the lower limit of autoregulation.
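The "zero cross point" above is the OEF at which the fitted regression line predicts zero vascular responsiveness, i.e. the x-intercept of a least-squares line. A minimal numpy sketch of that calculation (illustrative values only, not the study's data):

```python
import numpy as np

def zero_cross_oef(oef, responsiveness):
    """Fit responsiveness = a * OEF + b by least squares and return the OEF
    where the line crosses zero responsiveness (the exhaustion point)."""
    a, b = np.polyfit(oef, responsiveness, 1)
    return -b / a
```

For a patient group with a negative correlation, this intercept estimates the resting-state OEF at which vasodilatory capacity is exhausted; the study reports 0.53 ± 0.08 across 11 such regressions.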

