MRI BRAIN IMAGE SEGMENTATION BY USING A DEEP SPECTRUM IMAGE TRANSLATION NETWORK

2021 ◽  
Vol 10 (4) ◽  
pp. 3097-3100
Author(s):  
Srinivasarao Gajula

Nowadays, medical image processing is a challenging task. Because of their varied structure, location, and irregular borders, manual identification and segmentation of brain tumours is difficult. The proposed work uses the superpixel technique to identify and segment brain tumours based on transfer learning. Because a label is predicted for every pixel in the image, this process is called dense prediction. Identifying these tumours early is important for providing better treatment to patients, and early detection improves a patient's chances of survival. The primary goal of this study is to use deep learning to segment brain tumours in MRI images. The suggested technique is tested on the Kaggle Brain Tumour Segmentation data sets. In the first step, the required data sets are pre-processed; the data are then fed into a VGG-19 transfer learning network to identify brain tumour disorders, and a U-Net model is used for the tumour detection process. These steps yield an improvement in terms of quality metrics.
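A minimal sketch of the kind of pipeline described above, assuming a Keras/TensorFlow environment: a VGG-19 encoder pre-trained on ImageNet (transfer learning) combined with a U-Net-style decoder for dense, per-pixel prediction. Layer choices and decoder depth are illustrative, not the authors' exact model.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19

def build_vgg19_unet(input_shape=(224, 224, 3)):
    # Encoder: VGG-19 pre-trained on ImageNet, frozen for transfer learning.
    vgg = VGG19(include_top=False, weights="imagenet", input_shape=input_shape)
    vgg.trainable = False

    # Skip connections taken from selected VGG-19 blocks.
    skips = [vgg.get_layer(name).output
             for name in ("block1_conv2", "block2_conv2", "block3_conv4", "block4_conv4")]
    x = vgg.get_layer("block5_conv4").output

    # Decoder: upsample and concatenate with skips (U-Net style dense prediction).
    for skip in reversed(skips):
        x = layers.Conv2DTranspose(skip.shape[-1], 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(skip.shape[-1], 3, padding="same", activation="relu")(x)

    # One sigmoid output per pixel: tumour vs. background mask.
    mask = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(vgg.input, mask)

model = build_vgg19_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```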

2021 ◽  
Vol 229 ◽  
pp. 01034
Author(s):  
Vikas Kumar

Brain tumour segmentation aims to separate the various types of tumour tissue, such as active cells, necrotic core, and edema, from normal brain tissue: white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). Magnetic resonance imaging (MRI)-based brain tumour segmentation studies have attracted increasing attention in recent years thanks to the non-invasive imaging and good soft-tissue contrast of MRI. Over nearly two decades of development, approaches applying computer-aided techniques for segmenting brain tumours have become increasingly mature and are coming closer to routine clinical application. The purpose of this paper is to provide a comprehensive overview of MRI-based brain tumour segmentation methods. First, a brief introduction to brain tumours and their imaging modalities is given; the proposed research then describes a convolution-based optimization. These steps refine the segmentation and improve the classification parameters with the assistance of particle swarm optimization.
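A minimal sketch of particle swarm optimization (PSO) used to tune a single segmentation parameter, such as an intensity threshold, against a user-supplied fitness function. This illustrates the general refinement idea only; the fitness function `dice_score`, the bounds, and the swarm settings are hypothetical, not the surveyed methods' pipeline.

```python
import numpy as np

def pso(fitness, lo, hi, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, n_particles)          # candidate thresholds
    vel = np.zeros(n_particles)
    pbest = pos.copy()                              # each particle's best position
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)]             # best position seen by the swarm

    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest

# Hypothetical usage: maximize Dice overlap between a thresholded MRI slice and a reference mask.
# fitness = lambda t: dice_score(image > t, reference_mask)   # dice_score is a placeholder helper
# best_threshold = pso(fitness, lo=image.min(), hi=image.max())
```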


JAMIA Open ◽  
2021 ◽  
Vol 4 (3) ◽  
Author(s):  
Anthony Finch ◽  
Alexander Crowell ◽  
Yung-Chieh Chang ◽  
Pooja Parameshwarappa ◽  
Jose Martinez ◽  
...  

Abstract
Objective: Attention networks learn an intelligent weighted averaging mechanism over a series of entities, providing increases to both performance and interpretability. In this article, we propose a novel time-aware transformer-based network and compare it to another leading model with similar characteristics. We also decompose model performance along several critical axes and examine which features contribute most to our model's performance.
Materials and Methods: Using data sets representing patient records obtained between 2017 and 2019 by the Kaiser Permanente Mid-Atlantic States medical system, we construct four attentional models with varying levels of complexity on two targets (patient mortality and hospitalization). We examine how incorporating transfer learning and demographic features contributes to model success. We also test the performance of a model proposed in recent medical modeling literature. We compare these models on out-of-sample data using the area under the receiver-operating characteristic curve (AUROC) and average precision as measures of performance. We also analyze the attentional weights assigned by these models to patient diagnoses.
Results: We found that our model significantly outperformed the alternative on a mortality prediction task (91.96% AUROC against 73.82% AUROC). Our model also outperformed on the hospitalization task, although the models were significantly more competitive in that space (82.41% AUROC against 80.33% AUROC). Furthermore, we found that demographic features and transfer learning features, which are frequently omitted from new models proposed in the EMR modeling space, contributed significantly to the success of our model.
Discussion: We proposed an original construction of deep learning electronic medical record models which achieved very strong performance. We found that our unique model construction outperformed a leading literature alternative on several tasks, even when the input data was held constant between them. We obtained further improvements by incorporating several methods that are frequently overlooked in new model proposals, suggesting that it will be useful to explore these options further in the future.
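A minimal sketch of attention-weighted pooling over embedded diagnosis codes for a binary outcome such as mortality, evaluated with AUROC as in the abstract. It illustrates the general "intelligent weighted averaging" idea only; the vocabulary size, sequence length, and single-layer attention scorer are assumptions, not the authors' time-aware transformer.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.metrics import roc_auc_score

n_codes, seq_len, emb_dim = 5000, 100, 64   # assumed vocabulary and sequence sizes

codes = layers.Input(shape=(seq_len,), dtype="int32")      # integer-coded diagnoses per patient
x = layers.Embedding(n_codes, emb_dim)(codes)              # padding handling omitted for brevity
scores = layers.Dense(1)(x)                                # one attention logit per code
weights = layers.Softmax(axis=1)(scores)                   # normalize over the code sequence
pooled = tf.reduce_sum(weights * x, axis=1)                # weighted average of code embeddings
out = layers.Dense(1, activation="sigmoid")(pooled)        # predicted risk of the outcome

model = Model(codes, out)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Held-out evaluation with AUROC, as reported in the abstract:
# y_prob = model.predict(X_test).ravel()
# print(roc_auc_score(y_test, y_prob))
```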


Author(s):  
Moh'd Rasoul Al-Hadidi ◽  
Bayan AlSaaidah ◽  
Mohammed Al-Gawagzeh

Brain tumour segmentation can improve diagnostic efficiency, raise the prediction rate, and aid treatment planning, helping doctors and experts in their work. While many types of brain tumour may be classified easily, glioma is challenging to segment because of the diffusion between the tumour and the surrounding edema. Another important challenge with this type of brain tumour is that it may grow anywhere in the brain, with varying shape and size. Brain cancer is one of the most prevalent diseases worldwide, which encourages researchers to find a high-throughput system for tumour detection and classification. Several approaches have been proposed to design automatic detection and classification systems. This paper presents an integrated framework that segments glioma brain tumours automatically using pixel clustering of the MRI image foreground and background and classifies the tumour type with a deep learning mechanism, namely a convolutional neural network. In this work, a novel segmentation and classification system is proposed to detect tumour cells and classify a brain image as healthy or not. After collecting data for healthy and non-healthy brain images, satisfactory results were obtained and recorded using computer vision approaches. This approach can be used as part of a larger diagnosis system for brain tumour detection and management.
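A minimal sketch of the pixel-clustering step described above, assuming a grayscale MRI slice loaded as a NumPy array. The cluster count, the choice of k-means, and the "brightest cluster as foreground" heuristic are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_by_pixel_clustering(image, n_clusters=3):
    """Cluster pixel intensities and return a mask of the brightest cluster,
    a rough proxy for separating suspect tissue from background."""
    pixels = image.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    labels = km.labels_.reshape(image.shape)
    brightest = np.argmax(km.cluster_centers_.ravel())
    return labels == brightest

# The resulting mask would then be passed, together with the original image,
# to a convolutional neural network that classifies the scan as healthy or not.
```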


2003 ◽  
Vol 42 (05) ◽  
pp. 215-219
Author(s):  
G. Platsch ◽  
A. Schwarz ◽  
K. Schmiedehausen ◽  
B. Tomandl ◽  
W. Huk ◽  
...  

Summary
Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. We therefore evaluated the performance of, and time needed for, fusing MRI and SPECT images using semiautomated dedicated software.
Patients, material and methods: In 32 patients, regional cerebral blood flow was measured using 99mTc-ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired with a 3D T1-weighted MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation by an experienced user of the software, using an entropy-minimizing algorithm. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, the time for manual realignment after an automated but insufficient fusion.
Results: The mean time for the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached with the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (range 50-886 s).
Conclusion: The fusion of 3D MRI data sets took significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in about 20% of cases; in the remaining 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for routine clinical use.
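A minimal sketch of rigid MRI-SPECT co-registration using a mutual-information metric in SimpleITK, a common open-source stand-in for the entropy-based fusion described above; it is not the vendor's Syngo implementation, and the file names are placeholders.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("mri_t1.nii.gz", sitk.sitkFloat32)       # anatomical reference (MRI)
moving = sitk.ReadImage("spect_ecd.nii.gz", sitk.sitkFloat32)   # perfusion SPECT volume

# Rough alignment of the volume centres before optimization.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
# Resample the SPECT volume into the MRI grid so the two can be displayed fused.
fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(fused, "spect_on_mri.nii.gz")
```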


2012 ◽  
Author(s):  
Kate C. Miller ◽  
Lindsay L. Worthington ◽  
Steven Harder ◽  
Scott Phillips ◽  
Hans Hartse ◽  
...  

2021 ◽  
Vol 13 (13) ◽  
pp. 2433
Author(s):  
Shu Yang ◽  
Fengchao Peng ◽  
Sibylle von Löwis ◽  
Guðrún Nína Petersen ◽  
David Christian Finger

Doppler lidars are used worldwide for wind monitoring and, more recently, also for the detection of aerosols. Automatic algorithms that classify the signals retrieved from lidar measurements are very useful for users. In this study, we explore the value of machine learning for classifying backscattered signals from Doppler lidars using data from Iceland. We combined supervised and unsupervised machine learning algorithms with conventional lidar data processing methods and trained two models to filter noise and classify Doppler lidar observations into different classes, including clouds, aerosols, and rain. The results reveal high accuracy for noise identification and for aerosol and cloud classification; however, precipitation detection is underestimated. The method was tested on data sets from two instruments under different weather conditions, including three dust storms during the summer of 2019. Our results show that this method can provide an efficient, accurate, and real-time classification of lidar measurements. Accordingly, we conclude that machine learning can open new opportunities for lidar data end-users, such as aviation safety operators, to monitor dust in the vicinity of airports.
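A minimal sketch combining an unsupervised noise-filtering step with a supervised classifier for Doppler lidar profiles, assuming per-range-gate features such as attenuated backscatter, signal-to-noise ratio (SNR), and Doppler velocity have already been extracted. The feature layout, file names, and the choice of k-means plus a random forest are illustrative assumptions, not the authors' exact models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# X: (n_gates, n_features) matrix, assumed columns [backscatter, snr, velocity, spectral_width]
X = np.load("lidar_features.npy")            # placeholder input
y = np.load("lidar_labels.npy")              # placeholder labels: 0=aerosol, 1=cloud, 2=rain

# 1) Unsupervised step: split range gates into "signal" and "noise" clusters
#    using backscatter and SNR only.
noise_model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[:, :2])
signal_cluster = np.argmax(noise_model.cluster_centers_[:, 1])   # cluster with higher mean SNR
keep = noise_model.labels_ == signal_cluster

# 2) Supervised step: classify the remaining gates into aerosol / cloud / rain.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[keep], y[keep])
predicted = clf.predict(X[keep])
```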


Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract: Although convolutional neural networks have achieved success in the field of image classification, there are still challenges in agricultural product quality sorting, such as machine vision-based jujube defect detection. The performance of jujube defect detection mainly depends on the feature extraction and the classifier used. Due to the diversity of jujube materials and the variability of the testing environment, traditional methods of manually extracting features often fail to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets based on a convolutional neural network and transfer learning is proposed to meet the actual demands of jujube defect detection. First, the original images collected from an actual jujube sorting production line were pre-processed, and the data were augmented to establish a data set of five categories of jujube defects. The original CNN model is then improved by embedding an SE module and by using the triplet loss function and the center loss function to replace the softmax loss function. Finally, a model pre-trained on the ImageNet image data set was further trained on the jujube defects data set, so that the parameters of the pre-trained model could fit the parameter distribution of the jujube defect images; this distribution was transferred to the jujube defects data set to complete the transfer of the model and realize the detection and classification of jujube defects. The classification results are visualized by heatmaps through the analysis of classification accuracy and confusion matrices, compared against the comparison models. The experimental results show that the SE-ResNet50-CL model optimizes the fine-grained classification problem of jujube defect recognition, with test accuracy reaching 94.15%. The model has good stability and high recognition accuracy in complex environments.
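A minimal sketch of the squeeze-and-excitation (SE) block mentioned above, written for Keras. The reduction ratio and the placement inside ResNet-50 are illustrative, and the triplet and center losses of the full SE-ResNet50-CL model are omitted for brevity.

```python
from tensorflow.keras import layers

def se_block(x, ratio=16):
    """Re-weight feature-map channels by a learned, globally pooled gating signal."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)               # squeeze: one value per channel
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)  # excitation: channel weights in (0, 1)
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                     # rescale the original feature maps

# Hypothetical usage: insert after a convolutional block of a ResNet-style backbone,
# e.g. x = se_block(x) before the residual addition.
```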

