Classification of Image using Convolutional Neural Networks

Author(s):  
Dr. Abhay E Wagh

Abstract: Today, with the rapid growth of digital content, automatic classification of images is one of the most challenging tasks in computing. Automatic understanding and analysis of images by a system is difficult compared to human vision. Several research efforts have tried to overcome the limitations of existing classification systems, but their output was restricted to low-level image primitives, and such approaches lack accurate classification of images. This system applies deep learning to achieve the desired results in this area. Our framework uses a Convolutional Neural Network (CNN), a machine learning algorithm, for automatic classification of images. The MNIST digit dataset is used as a benchmark for the classification of gray-scale images. The gray-scale images used for training require more computational power to classify. Using the CNN, the result is approximately 98% accuracy, and our model achieves high precision in grouping images.

Keywords: Convolutional Neural Network (CNN), deep learning, MNIST, machine learning.
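
As an illustration of the pipeline described above, the following is a minimal Keras sketch of a small CNN trained on the MNIST digit benchmark; the layer sizes and training settings are assumptions for demonstration, not the paper's exact network.

```python
# Minimal sketch of a CNN for MNIST digit classification (assumed architecture,
# not the paper's exact network); uses tf.keras and the built-in MNIST loader.
import tensorflow as tf
from tensorflow.keras import layers, models

# Load the 28x28 gray-scale MNIST digits and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# Small convolutional stack followed by a dense classifier over 10 digit classes.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```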

2020 ◽  
Vol 10 (6) ◽  
pp. 1999 ◽  
Author(s):  
Milica M. Badža ◽  
Marko Č. Barjaktarović

The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. The improvement of technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for the classification of three brain tumor types. The developed network is simpler than existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested by using an augmented image database. The best 10-fold result was obtained for record-wise cross-validation on the augmented data set, with an accuracy of 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
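
To make the two 10-fold protocols concrete, here is a small scikit-learn sketch contrasting record-wise and subject-wise cross-validation; the feature array, labels, and subject identifiers are synthetic placeholders, not the study's MRI data.

```python
# Illustrative sketch of the two 10-fold protocols: record-wise (plain KFold over
# images) versus subject-wise (GroupKFold, so all images of a patient stay in the
# same fold). All arrays below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

rng = np.random.default_rng(0)
n_images, n_subjects = 300, 30
X = rng.normal(size=(n_images, 64))        # stand-in image features
y = rng.integers(0, 3, size=n_images)      # three tumor classes
subject_id = rng.integers(0, n_subjects, size=n_images)

# Record-wise: images of one subject may appear in both train and test folds.
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    pass  # train/evaluate the CNN here

# Subject-wise: tests generalization to patients never seen during training.
for train_idx, test_idx in GroupKFold(n_splits=10).split(X, y, groups=subject_id):
    assert set(subject_id[train_idx]).isdisjoint(subject_id[test_idx])
```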


Author(s):  
Vijayaprabakaran K. ◽  
Sathiyamurthy K. ◽  
Ponniamma M.

A typical healthcare application for elderly people involves monitoring daily activities and providing assistance. Automatic analysis and classification of images by a system is difficult compared to human vision. Activity recognition from surveillance video poses several challenging problems, owing to the complexity of scene analysis under irregular lighting and low-quality frames. In this article, the authors use machine learning algorithms to improve the accuracy of activity recognition. Their system presents a convolutional neural network (CNN), a machine learning algorithm used for image classification. The system aims to recognize and assist the activities of elderly people using input surveillance videos. The RGB images in the dataset used for training require more computational power for classification. Using the CNN for image classification, the authors obtain 79.94% accuracy in the experiments, which shows that their model achieves good accuracy for image classification compared with other pre-trained models.
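
A minimal sketch of frame-level activity classification with a small CNN is given below; the frame size, number of activity classes, and data pipeline are assumptions rather than the authors' configuration.

```python
# Minimal sketch of frame-level activity classification with a small CNN; the
# input shape, class count, and data pipeline are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_ACTIVITIES = 6  # hypothetical activity set, e.g. walking, sitting, falling, ...

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),  # RGB surveillance frames
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_ACTIVITIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Frames would typically be extracted from the surveillance videos and fed through
# a tf.data pipeline, e.g. tf.keras.utils.image_dataset_from_directory on the frames.
```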


Diagnostics ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 181
Author(s):  
Anna Landsmann ◽  
Jann Wieler ◽  
Patryk Hejduk ◽  
Alexander Ciritsis ◽  
Karol Borkowski ◽  
...  

The aim of this study was to investigate the potential of a machine learning algorithm to accurately classify parenchymal density in spiral breast-CT (BCT), using a deep convolutional neural network (dCNN). In this retrospectively designed study, 634 examinations of 317 patients were included. After image selection and preparation, 5589 images from 634 different BCT examinations were sorted by a four-level density scale, ranging from A to D, using ACR BI-RADS-like criteria. Subsequently, four different dCNN models (differing in optimizer and spatial resolution) were trained (70% of the data), validated (20%), and tested on a “real-world” dataset (10%). Moreover, dCNN accuracy was compared to a human readout. The overall performance of the model with the lowest resolution of input data was the highest, reaching an accuracy of 85.8% on the “real-world” dataset. The intra-class correlation of the dCNN and the two readers was almost perfect (0.92), and kappa values between both readers and the dCNN were substantial (0.71–0.76). Moreover, the diagnostic performance between the readers and the dCNN showed very good correspondence, with an AUC of 0.89. Artificial intelligence in the form of a dCNN can be used for standardized, observer-independent and reliable classification of parenchymal density in a BCT examination.
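
The 70/20/10 partition described above could be reproduced along the following lines; the file names and density labels in this sketch are placeholders, not the study data.

```python
# Sketch of the 70/20/10 split over the four-level density labels (A-D); file
# names and labels are placeholders, not the study's BCT images.
from sklearn.model_selection import train_test_split

images = [f"bct_{i:04d}.png" for i in range(5589)]    # hypothetical file names
labels = ["A", "B", "C", "D"] * 1397 + ["A"]          # placeholder ACR-like labels

# First carve off the 10% "real-world" test set, then split the rest 70/20.
train_val_x, test_x, train_val_y, test_y = train_test_split(
    images, labels, test_size=0.10, stratify=labels, random_state=0)
train_x, val_x, train_y, val_y = train_test_split(
    train_val_x, train_val_y, test_size=0.20 / 0.90, stratify=train_val_y, random_state=0)

print(len(train_x), len(val_x), len(test_x))  # roughly 70% / 20% / 10%
```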


2016 ◽  
Author(s):  
Saman Sarraf ◽  
Ghassem Tofighi

Over the past decade, machine learning techniques, in particular predictive modeling and pattern recognition in the biomedical sciences, from drug delivery systems to medical imaging, have become one of the most important methods of assisting researchers in gaining a deeper understanding of issues in their entirety and solving complex medical problems. Deep learning is a powerful machine learning approach to classification that extracts low- to high-level features. In this paper, we employ a convolutional neural network to distinguish an Alzheimer's brain from a normal, healthy brain. The importance of classifying this type of medical data lies in its potential to develop a predictive model or system that can recognize the symptoms of Alzheimer's disease when compared with normal subjects and estimate the stages of the disease. Classification of clinical data for medical conditions such as Alzheimer's disease has always been challenging, and the most problematic aspect has always been selecting the strongest discriminative features. Using a Convolutional Neural Network (CNN) with the well-known LeNet-5 architecture, we successfully classified functional MRI data of Alzheimer's subjects from normal controls, with a test accuracy of 96.85%. This experiment suggests that the shift- and scale-invariant features extracted by the CNN, followed by deep learning classification, provide a powerful method for distinguishing clinical data from healthy data in fMRI. This approach also allows for expansion of the methodology to predict more complicated systems.
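
For reference, a LeNet-5-style network can be written in a few lines of Keras; the 32x32 single-channel input and the binary sigmoid head are assumptions about how fMRI slices might be fed to the network, not the authors' exact preprocessing.

```python
# A LeNet-5-style network in Keras, adapted for binary (Alzheimer's vs. control)
# slice classification; input size and preprocessing are assumptions.
from tensorflow.keras import layers, models

lenet5 = models.Sequential([
    layers.Conv2D(6, 5, activation="tanh", input_shape=(32, 32, 1)),
    layers.AveragePooling2D(),
    layers.Conv2D(16, 5, activation="tanh"),
    layers.AveragePooling2D(),
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),
    layers.Dense(84, activation="tanh"),
    layers.Dense(1, activation="sigmoid"),  # Alzheimer's vs. normal control
])
lenet5.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
lenet5.summary()
```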


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 652 ◽  
Author(s):  
Carlo Augusto Mallio ◽  
Andrea Napolitano ◽  
Gennaro Castiello ◽  
Francesco Maria Giordano ◽  
Pasquale D'Alessio ◽  
...  

Background: Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine on chest computed tomography (CT) images whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. Methods: We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann–Whitney U test (significance threshold at p < 0.05) and the receiver operating characteristic curve (ROC curve). Results: The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity = 85.7%, specificity 100%, AUC = 0.97). Conclusions: The deep learning algorithm is not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can be applied as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
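
The reported operating metrics (sensitivity, specificity, AUC) can be computed from per-scan probabilities as sketched below; the score and label arrays are toy placeholders, not the study's outputs.

```python
# Sketch of how sensitivity, specificity and AUC could be derived from a model's
# per-scan probabilities; the arrays below are toy placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])      # 1 = COVID-19, 0 = ICI pneumonitis
y_score = np.array([0.9, 0.8, 0.6, 0.7, 0.4, 0.2, 0.55, 0.65])  # AI output
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```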


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Abstract: Machine learning has greatly facilitated the analysis of medical data, while its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization (‘neural recording’). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
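
The grader-versus-algorithm variability quoted above is essentially a normalized Hamming distance between label masks, which a short sketch makes explicit; the masks here are random placeholders rather than OCT ground truth.

```python
# Sketch of the pairwise disagreement measure: the normalized Hamming distance
# between two label masks, i.e. the fraction of pixels on which they differ.
# The masks below are random placeholders, not OCT segmentations.
import numpy as np

def hamming_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of pixels on which two label masks disagree."""
    assert mask_a.shape == mask_b.shape
    return float(np.mean(mask_a != mask_b))

rng = np.random.default_rng(0)
grader_1 = rng.integers(0, 2, size=(256, 256))
grader_2 = grader_1.copy()
grader_2[:5, :] ^= 1  # flip a small band to simulate disagreement
print(f"variability: {100 * hamming_distance(grader_1, grader_2):.2f}%")
```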


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset, the 2b EEG dataset from the “BCI Competition IV”. Overall, better classification performance was achieved with the deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
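
As a rough illustration of model (1), the following Keras sketch defines an LSTM that consumes raw EEG windows directly; the channel count, window length, and class count are assumptions, not the paper's configuration.

```python
# Minimal sketch of an LSTM decoding motor imagery from raw EEG windows; channel
# count, window length and class count are assumptions for illustration.
from tensorflow.keras import layers, models

N_CHANNELS, N_SAMPLES, N_CLASSES = 3, 500, 2  # e.g. C3/Cz/C4, 2 s at 250 Hz, left vs. right hand

lstm_model = models.Sequential([
    layers.Input(shape=(N_SAMPLES, N_CHANNELS)),  # time-major raw EEG, no hand-crafted features
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dropout(0.3),
    layers.Dense(N_CLASSES, activation="softmax"),
])
lstm_model.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
lstm_model.summary()
```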


2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

A brain tumor is a severe cancer caused by uncontrollable and abnormal partitioning of cells. Timely detection and treatment planning lead to increased life expectancy of patients. Automated detection and classification of brain tumors is a challenging process that otherwise relies on the clinician's knowledge and experience. For this reason, one of the most practical and important techniques is deep learning. Recent progress in the field of deep learning has helped clinicians in medical imaging with the diagnosis of brain tumors. In this paper, we present a comparison of deep convolutional neural network models for automatic binary classification of query MRI images, with the goal of providing precise tools to health professionals, based on fine-tuned recent versions of DenseNet, Xception, NASNet-A, and VGGNet. The experiments were conducted using an open MRI dataset of 3,762 images. Other performance measures used in the study are the area under the curve (AUC), precision, recall, and specificity.
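
A typical transfer-learning setup for such a comparison is sketched below using DenseNet121 as an example backbone; the input size, freezing strategy, and classification head are illustrative choices, not the paper's exact configuration.

```python
# Sketch of transfer learning with one of the compared backbones (DenseNet121 as
# an example); input size, freezing strategy and head are illustrative choices.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # start by training only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # tumor vs. no tumor
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc"),
                       tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```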


2021 ◽  
Author(s):  
Aria Abubakar ◽  
Mandar Kulkarni ◽  
Anisha Kaul

Abstract: In the process of deriving the reservoir petrophysical properties of a basin, identifying the pay capability of wells by interpreting various geological formations is key. Currently, this process is facilitated and preceded by well log correlation, which involves petrophysicists and geologists examining multiple raw log measurements for the well in question, indicating geological markers of formation changes and correlating them with those of neighboring wells. This activity of picking the markers of a well is performed manually, and the process of ‘examining’ may be highly subjective and thus prone to inconsistencies. In our work, we propose to automate the well correlation workflow by using a Soft-Attention Convolutional Neural Network to predict well markers. The machine learning algorithm is supervised by examples of manual marker picks and their corresponding occurrence in logs such as gamma-ray, resistivity, and density. Our experiments have shown that the attention mechanism, specifically, allows the Convolutional Neural Network to look at relevant features or patterns in the log measurements that suggest a change in formation, making the machine learning model highly precise.
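
A soft-attention mechanism over depth-indexed log curves could look roughly like the following Keras sketch; the log channels, window length, and layer sizes are assumptions, not the paper's architecture.

```python
# Sketch of a soft-attention 1D CNN over depth-indexed log curves (gamma-ray,
# resistivity, density) that outputs a per-depth marker probability; all shapes
# and layer sizes are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

N_DEPTH_STEPS, N_LOGS = 1024, 3            # samples along depth x {GR, RES, RHOB}

inputs = layers.Input(shape=(N_DEPTH_STEPS, N_LOGS))
features = layers.Conv1D(32, 9, padding="same", activation="relu")(inputs)
features = layers.Conv1D(32, 9, padding="same", activation="relu")(features)

# Soft attention: a depth-wise score, normalized with softmax, gates the features
# so the network focuses on intervals that look like formation changes.
scores = layers.Conv1D(1, 1)(features)                  # (batch, depth, 1)
weights = layers.Softmax(axis=1)(scores)
attended = layers.Multiply()([features, weights])       # broadcast over channels

marker_prob = layers.Conv1D(1, 1, activation="sigmoid")(attended)  # P(marker at depth)
model = models.Model(inputs, marker_prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
```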

