Diagnosis of Medical Images Using Cloud-Deep Learning System

2021 ◽  
Vol 10 (2) ◽  
pp. 155
Author(s):  
Michael Jacobs ◽  
Ali Arfan ◽  
Alaa Sheta

Diagnosis of brain tumors is one of the most severe medical problems, affecting thousands of people each year in the United States. Manual classification of cancerous tumors through examination of MRI images is a difficult task even for trained professionals; it is an error-prone procedure that depends on the experience of the radiologist, and brain tumors in particular have a high level of complexity. Therefore, computer-aided diagnosis systems designed to assist with this task are of particular interest to physicians. Accurate detection and classification of brain tumors via magnetic resonance imaging (MRI) examination is a widely used approach. This paper proposes a method to classify different brain tumors using a Convolutional Neural Network (CNN). We explore the performance of several CNN architectures and examine whether decreasing the input image resolution affects the model's accuracy. The dataset used to train the model initially comprised 3064 MRI scans; we augmented it to 8544 scans to balance the available classes of images. The results show that a suitably designed CNN architecture can significantly improve the diagnosis of medical images. The developed model achieved classification accuracy of up to 97%.
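The class-balancing augmentation mentioned above (3064 → 8544 scans) can be sketched with simple geometric transforms. This is only an illustrative sketch: the `augment_to_balance` helper and its flip/rotate choices are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def augment_to_balance(images, labels, rng=None):
    """Oversample minority classes with flips/rotations until every
    class has as many images as the largest class.
    images: (N, H, W) array; labels: (N,) int array."""
    rng = np.random.default_rng(0) if rng is None else rng
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    out_imgs, out_lbls = [images], [labels]
    transforms = [np.fliplr, np.flipud, lambda im: np.rot90(im, 1)]
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(labels == c)
        for _ in range(target - n):
            im = images[rng.choice(idx)]              # pick a class-c image
            t = transforms[rng.integers(len(transforms))]
            out_imgs.append(t(im)[None])              # add transformed copy
            out_lbls.append(np.array([c]))
    return np.concatenate(out_imgs), np.concatenate(out_lbls)
```

After balancing, every class contributes equally to the training loss, which is the usual motivation for this kind of augmentation.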

2020 ◽  
Vol 10 (6) ◽  
pp. 1999 ◽  
Author(s):  
Milica M. Badža ◽  
Marko Č. Barjaktarović

The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. Improvements in technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for the classification of three brain tumor types. The developed network is simpler than existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested by using an augmented image database. The best result among the 10-fold cross-validation methods was obtained with record-wise cross-validation on the augmented data set; in that case, the accuracy was 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
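The difference between the two 10-fold schemes above can be illustrated with scikit-learn: record-wise cross-validation splits individual scans, so images from one patient can land in both train and test folds, while subject-wise cross-validation keeps each patient on one side of the split. The toy `subjects` array below is a stand-in for patient IDs.

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

X = np.arange(20).reshape(10, 2)                      # 10 toy "scans"
y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 1])
subjects = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])   # patient id per scan

# record-wise: scans from one patient may appear in both train and test
record_cv = KFold(n_splits=5, shuffle=True, random_state=0)

# subject-wise: all scans of a patient stay on one side of the split
subject_cv = GroupKFold(n_splits=5)
for train, test in subject_cv.split(X, y, groups=subjects):
    assert set(subjects[train]).isdisjoint(subjects[test])
```

Subject-wise splits give the honest estimate of generalization to unseen patients, which is why the abstract singles them out.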


2020 ◽  
Vol 10 (6) ◽  
pp. 1401-1407
Author(s):  
Hyungtai Kim ◽  
Minhee Lee ◽  
Min Kyun Sohn ◽  
Jongmin Lee ◽  
Deog Yung Kim ◽  
...  

This paper presents simultaneous clustering and classification, which discovers internal groupings in an unlabeled data set and, at the same time, classifies the data using the discovered clusters as class labels. During simultaneous clustering and classification, silhouette and F1 scores were calculated for the clustering and the classification, respectively, as a function of the number of clusters, in order to find an optimal number of clusters that guarantees the desired level of classification performance. In this study, we applied this approach to a data set of ischemic stroke patients in order to discover function-recovery patterns where no clear diagnoses exist. In addition, we developed a classifier that predicts the type of function recovery for new patients from early clinical test scores at clinically meaningful levels of accuracy. This classifier can be a helpful tool for clinicians in the rehabilitation field.
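The sweep over cluster counts can be sketched as follows: for each candidate k, cluster the data, score the clustering with the silhouette coefficient, then train a classifier on the cluster labels and score it with macro-F1. The blob data and logistic-regression classifier below are illustrative stand-ins for the stroke-recovery data and the paper's actual models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import silhouette_score, f1_score
from sklearn.model_selection import cross_val_predict

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

scores = {}
for k in range(2, 7):
    # cluster quality for this k
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    sil = silhouette_score(X, labels)
    # classification quality when the clusters are used as class labels
    pred = cross_val_predict(LogisticRegression(max_iter=1000), X, labels, cv=5)
    f1 = f1_score(labels, pred, average="macro")
    scores[k] = (sil, f1)

# naive combination of the two criteria; the paper's trade-off rule may differ
best_k = max(scores, key=lambda k: scores[k][0] + scores[k][1])
```

The chosen k is the one that balances cohesive clusters (silhouette) against a learnable labeling (F1), mirroring the selection criterion described in the abstract.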


The problem of medical data classification is analyzed and methods of classification are reviewed from various aspects. However, the efficiency of classification algorithms remains in question. With the motivation of improving classification performance, a Class Level disease Convergence and Divergence (CLDC) measure based algorithm is presented in this paper. For any dimension of the medical data, its convergence or divergence indicates support for a disease class. Initially, the data set is preprocessed to remove noisy data points. The method then estimates disease convergence/divergence measures on the different dimensions. The convergence measure is computed from the frequency of dimensional matches with the same class, whereas the divergence is estimated from dimensional matches with other classes. Based on these measures, a disease support factor is estimated; its value is used to classify the data point, improving classification performance.
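One possible reading of the convergence/divergence computation is a per-dimension match-frequency contrast. The abstract does not give exact formulas, so the function names and the simple formulation below are assumptions, shown on purely illustrative discrete data.

```python
import numpy as np

def disease_support(x, X, y, cls):
    """Hypothetical support factor: per dimension, convergence is how
    often x's value matches records of `cls`; divergence is how often
    it matches records of the other classes."""
    same = X[y == cls]
    other = X[y != cls]
    conv = np.mean(same == x, axis=0)               # per-dimension match freq
    div = np.mean(other == x, axis=0) if len(other) else np.zeros_like(conv)
    return float(np.sum(conv - div))                # aggregate support

def classify(x, X, y):
    """Assign the class with the highest disease support factor."""
    classes = np.unique(y)
    return classes[np.argmax([disease_support(x, X, y, c) for c in classes])]
```

Under this reading, a data point is assigned to the class whose records its dimensions converge toward most strongly.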


2006 ◽  
Vol 24 (18_suppl) ◽  
pp. 9058-9058
Author(s):  
P. G. Fisher ◽  
E. K. Curran ◽  
K. L. Cobb ◽  
G. M. Le ◽  
J. M. Propp

9058 Background: Past studies of medulloblastoma (MB) present conflicting claims about declines and rises in MB incidence, possibly due to misclassification. By using a strict classification of the disease and a rigorous analysis of a data registry, we aimed to determine the incidence trends of MB over the last three decades. Methods: 441 MB patients diagnosed between 1985 and 2002 were identified from the Central Brain Tumor Registry of the United States (CBTRUS), a data set representing approximately 5% of the American population (6 registries). MB was strictly defined, and non-cerebellar embryonal tumors (primitive neuro-ectodermal tumors [PNETs]) were excluded using histology and site codes. Multiplicative Poisson regression and joinpoint regression were performed (Joinpoint Regression Program, version 3.0, Statistical Research and Applications Branch, National Cancer Institute) to determine the estimated average annual percentage change (EAPC) and sharp (i.e., acute) changes in incidence, respectively. Results: A slight but nonsignificant (p=.18) increase in medulloblastoma was demonstrated (EAPC = 1.1), and no sharp changes in incidence were found (joinpoints = 0). The analysis was repeated with a less strict definition of MB (including non-cerebellar PNETs), and 559 patients were identified. Using this broader classification scheme, there was a statistically significant increase in incidence (p=.02, EAPC = 1.6), but no sharp changes in incidence (joinpoints = 0). Conclusions: MB incidence does not appear to have changed since the 1980s. "Medulloblastoma" incidence increased only when the diagnosis was not strictly defined and was misclassified by including non-cerebellar PNETs in the analysis. The observed increase in the combined MB/PNET classification may relate to the PNET hypothesis (a proposal, popularized in the 1980s and early 1990s, that all brain tumors of apparently undifferentiated neuroepithelial cells be considered a unique diagnostic group).
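The EAPC reported above comes from a multiplicative Poisson model in which log incidence is linear in calendar year, so EAPC = 100·(e^slope − 1). A rough sketch of that idea on synthetic rates follows; it uses a log-linear least-squares fit as an approximation to the Poisson fit (the actual analysis used the NCI Joinpoint software), and the rate numbers are invented for illustration.

```python
import numpy as np

years = np.arange(1985, 2003)
rng = np.random.default_rng(1)
# synthetic incidence rates per 100,000, drifting up ~1.1% per year
rates = 0.5 * 1.011 ** (years - years[0]) * rng.normal(1.0, 0.02, years.size)

# log-linear fit: log(rate) = a + b * year  (least-squares approximation
# to the Poisson regression used in the study)
slope = np.polyfit(years - years[0], np.log(rates), 1)[0]
eapc = 100 * np.expm1(slope)   # estimated average annual % change
```

A joinpoint analysis would additionally search for years where the slope changes abruptly; finding zero joinpoints, as the study did, means one log-linear trend fits the whole period.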
No significant financial relationships to disclose.


Author(s):  
Ivan Kruzhilov ◽  
Mikhail Romanov ◽  
Anton Konushin

Layout estimation is the task of segmenting a cluttered room image into floor, walls, and ceiling. We applied the Double Refinement Network, previously shown to be efficient for depth estimation, to generate heat maps for room key points and edges. Our method is the first to perform room layout estimation without an encoder-decoder architecture. ResNet50 was used as the network backbone instead of the VGG16 commonly used for this task, making the network more compact and faster. We designed a special layout score function and a layout ranking algorithm for the key-point and edge outputs. Our method achieved the lowest pixel and corner errors on the LSUN data set. The input image resolution is 224×224.


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. E41-E46 ◽  
Author(s):  
Laurens Beran ◽  
Barry Zelt ◽  
Leonard Pasion ◽  
Stephen Billings ◽  
Kevin Kingdon ◽  
...  

We have developed practical strategies for discriminating between buried unexploded ordnance (UXO) and metallic clutter. These methods are applicable to time-domain electromagnetic data acquired with multistatic, multicomponent sensors designed for UXO classification. Each detected target is characterized by dipole polarizabilities estimated via inversion of the observed sensor data. The polarizabilities are intrinsic target features and so are used to distinguish between UXO and clutter. We tested this processing with four data sets from recent field demonstrations, with each data set characterized by metrics of data and model quality. We then developed techniques for building a representative training data set and determined how the variable quality of estimated features affects overall classification performance. Finally, we devised a technique to optimize classification performance by adapting features during target prioritization.


2016 ◽  
Vol 35 (4) ◽  
pp. 427-443 ◽  
Author(s):  
Annie Waldherr ◽  
Daniel Maier ◽  
Peter Miltner ◽  
Enrico Günther

In this article, we focus on noise in the sense of irrelevant information in a data set as a specific methodological challenge of web research in the era of big data. We empirically evaluate several methods for filtering hyperlink networks in order to reconstruct networks that contain only webpages that deal with a particular issue. The test corpus of webpages was collected from hyperlink networks on the issue of food safety in the United States and Germany. We applied three filtering strategies and evaluated their performance to exclude irrelevant content from the networks: keyword filtering, automated document classification with a machine-learning algorithm, and extraction of core networks with network-analytical measures. Keyword filtering and automated classification of webpages were the most effective methods for reducing noise, whereas extracting a core network did not yield satisfying results for this case.
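Keyword filtering, the most effective strategy in this comparison, amounts to keeping only pages whose text matches issue keywords and then restricting the hyperlink network to edges between those pages. A minimal sketch with made-up pages and edges:

```python
# pages: url -> page text; edges: set of (source, target) hyperlinks
pages = {
    "a.com": "New rules on food safety inspections announced",
    "b.com": "Celebrity gossip and holiday recipes",
    "c.com": "Food safety recall of contaminated produce",
}
edges = {("a.com", "b.com"), ("a.com", "c.com"), ("b.com", "c.com")}

keywords = ("food safety", "recall")

# keep only pages that mention at least one issue keyword
relevant = {url for url, text in pages.items()
            if any(k in text.lower() for k in keywords)}

# restrict the hyperlink network to the relevant pages
filtered_edges = {(u, v) for u, v in edges
                  if u in relevant and v in relevant}
```

The machine-learning variant in the article replaces the keyword test with a trained document classifier, but the network-reduction step is the same.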


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Han Hu ◽  
NhatHai Phan ◽  
Soon A. Chun ◽  
James Geller ◽  
Huy Vo ◽  
...  

Abstract Drug abuse continues to accelerate towards becoming the most severe public health problem in the United States. The ability to detect drug-abuse risk behavior at a population scale, such as among the population of Twitter users, can help us to monitor the trend of drug-abuse incidents. Unfortunately, traditional methods do not effectively detect drug-abuse risk behavior from tweets. This is because: (1) tweets are usually noisy and sparse and (2) the availability of labeled data is limited. To address these challenging problems, we propose a deep self-taught learning system to detect and monitor drug-abuse risk behaviors in the Twitter sphere, by leveraging a large amount of unlabeled data. Our models automatically augment annotated data: (i) to improve the classification performance and (ii) to capture the evolving picture of drug abuse on online social media. Our extensive experiments were conducted on three million drug-abuse-related tweets with geo-location information. Results show that our approach is highly effective in detecting drug-abuse risk behaviors.
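The self-taught idea of automatically augmenting annotated data with confidently pseudo-labeled examples can be sketched as a simple self-training loop. The logistic-regression model, the 0.95 confidence threshold, and the synthetic data below are illustrative stand-ins for the paper's deep models and tweet corpus.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
labeled = np.arange(50)            # small annotated set
unlabeled = np.arange(50, 500)     # large unannotated pool

clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
for _ in range(3):                 # a few self-training rounds
    proba = clf.predict_proba(X[unlabeled])
    confident = proba.max(axis=1) > 0.95       # high-confidence predictions
    if not confident.any():
        break
    # augment the annotated set with the confident pseudo-labels and refit
    X_aug = np.vstack([X[labeled], X[unlabeled][confident]])
    y_aug = np.concatenate([y[labeled], proba.argmax(axis=1)[confident]])
    clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```

Each round grows the effective training set with examples the current model is sure about, which is how a small labeled corpus can be stretched across millions of tweets.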


2021 ◽  
Vol 12 (4) ◽  
pp. 85-95
Author(s):  
Yaroslav Voznyi ◽  
Mariia Nazarkevych ◽  
Volodymyr Hrytsyk ◽  
Nataliia Lotoshynska ◽  
Bohdana Havrysh

The method of biometric identification, designed to ensure the protection of confidential information, is considered, and a machine-learning method for the classification of biometric prints is offered. One variant of the biometric image identification problem is solved on the basis of the k-means algorithm. Labeled data samples were created for the training and testing processes. Biometric fingerprint data, based on the freely available NIST Special Database 302, were used to establish identity. A new fingerprint scan belonging to a particular person is compared with the data stored for that person; if the measurements match, the person is considered identified. Machine learning was performed using samples from the known biometric database, and verification/testing was performed with samples from the same database that were not included in the training set; the learning outcomes are reported. Experimental results indicate that the k-means method is a promising approach to the classification of fingerprints. The development of biometrics leads to security systems with a better degree of recognition and fewer errors than security systems based on traditional media. The machine-learning system is built on a modular basis, by combining individual modules of the scikit-learn library in a Python environment.
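A sketch of identification via k-means along the lines described, using scikit-learn as the abstract does. The synthetic blob features stand in for real NIST SD 302 fingerprint features, and the majority-vote mapping from clusters to persons is an assumption, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# synthetic stand-in for fingerprint feature vectors: 4 "persons",
# well-separated feature blobs (illustrative only)
X, person = make_blobs(n_samples=120, centers=4, cluster_std=0.5,
                       random_state=0)
train, test = np.arange(0, 100), np.arange(100, 120)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X[train])

# map each cluster to the majority identity among its training samples
cluster_to_person = {c: np.bincount(person[train][km.labels_ == c]).argmax()
                     for c in range(4)}

# identify held-out prints: nearest cluster -> mapped person
pred = [cluster_to_person[c] for c in km.predict(X[test])]
accuracy = np.mean(pred == person[test])
```

A new scan is identified by assigning it to its nearest cluster and checking the stored identity for that cluster, matching the verification step described in the abstract.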


2021 ◽  
Vol 35 (4) ◽  
pp. 341-347
Author(s):  
Aparna Gullapelly ◽  
Barnali Gupta Banik

Classifying moving objects in video surveillance can be difficult, and it is challenging to classify rigid and non-rigid objects with high accuracy. Here, rigid and non-rigid objects are limited to vehicles and people, respectively. A CNN is used for the binary classification of rigid and non-rigid objects. A deep-learning system using convolutional neural networks was trained in Python, and objects were categorized according to their appearance. The classification is supported by a data set containing two classes of images, rigid and non-rigid, that differ in illumination.

