BCM-VEMT: Classification of Brain Cancer from MRI Images using Deep Learning and Ensemble of Machine Learning Techniques

Author(s):  
Prottoy Saha ◽  
Rudra Das ◽  
Shanta Kumar Das

Abstract: Brain cancer has been one of the leading causes of death in recent years. A correct diagnosis of the cancer type enables specialists to choose the right treatment and save the patient's life, so the importance of a computer-aided diagnosis system that can classify tumor types correctly from images goes without saying. In this paper, an enhanced approach is proposed that can classify brain tumor types from Magnetic Resonance Imaging (MRI) scans using deep learning and an ensemble of machine learning algorithms. The system, named BCM-VEMT, distinguishes four classes: three categories of brain cancer (Glioma, Meningioma, and Pituitary) and a Non-Cancerous (Normal) class. A Convolutional Neural Network is developed to extract deep features from the MRI images. These extracted deep features are then fed into multi-class machine learning classifiers. Finally, a weighted average ensemble combines the results of the individual classifiers to achieve better performance. The dataset consists of 3787 MRI images across the four classes. BCM-VEMT achieves 97.90% accuracy for the Glioma class, 98.94% for Meningioma, 98.00% for Normal, 98.92% for Pituitary, and an overall accuracy of 98.42%. BCM-VEMT can be of great significance in classifying brain cancer types.
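The final stage described above, a weighted average over the per-classifier class probabilities, can be sketched as follows. The three classifiers, their probability outputs, and the weights are invented for illustration; the abstract does not specify which classifiers or weights BCM-VEMT uses.

```python
import numpy as np

# Hypothetical class-probability outputs from three base classifiers for two
# MRI samples over the four classes (Glioma, Meningioma, Normal, Pituitary).
proba_svm = np.array([[0.70, 0.10, 0.10, 0.10],
                      [0.20, 0.50, 0.20, 0.10]])
proba_knn = np.array([[0.60, 0.20, 0.10, 0.10],
                      [0.10, 0.60, 0.20, 0.10]])
proba_rf  = np.array([[0.80, 0.05, 0.10, 0.05],
                      [0.30, 0.40, 0.20, 0.10]])

def weighted_ensemble(probas, weights):
    """Combine class probabilities with a weighted average, then pick argmax."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()       # normalize weights to sum to 1
    stacked = np.stack(probas)              # (n_models, n_samples, n_classes)
    avg = np.tensordot(weights, stacked, axes=1)
    return avg, avg.argmax(axis=1)

avg, labels = weighted_ensemble([proba_svm, proba_knn, proba_rf],
                                weights=[0.4, 0.3, 0.3])
print(labels)  # predicted class index per sample -> [0 1]
```

Because the weights are normalized, each row of the averaged matrix remains a valid probability distribution.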

2018 ◽  
Vol 37 (6) ◽  
pp. 451-461 ◽  
Author(s):  
Zhen Wang ◽  
Haibin Di ◽  
Muhammad Amir Shafiq ◽  
Yazeed Alaudah ◽  
Ghassan AlRegib

As a process that identifies geologic structures of interest such as faults, salt domes, or elements of petroleum systems in general, seismic structural interpretation depends heavily on the domain knowledge and experience of interpreters as well as visual cues of geologic structures, such as texture and geometry. With the dramatic increase in size of seismic data acquired for hydrocarbon exploration, structural interpretation has become more time consuming and labor intensive. By treating seismic data as images rather than signal traces, researchers have been able to utilize advanced image-processing and machine-learning techniques to assist interpretation directly. In this paper, we mainly focus on the interpretation of two important geologic structures, faults and salt domes, and summarize interpretation workflows based on typical or advanced image-processing and machine-learning algorithms. In recent years, increasing computational power and the massive amount of available data have led to the rise of deep learning. Deep-learning models that simulate the human brain's biological neural networks can achieve state-of-the-art accuracy and even exceed human-level performance on numerous applications. The convolutional neural network — a form of deep-learning model that is effective in analyzing visual imagery — has been applied in fault and salt dome interpretation. At the end of this review, we provide insight and discussion on the future of structural interpretation.


Author(s):  
Anna Ferrari ◽  
Daniela Micucci ◽  
Marco Mobilio ◽  
Paolo Napoletano

AbstractHuman activity recognition (HAR) is a line of research whose goal is to design and develop automatic techniques for recognizing activities of daily living (ADLs) using signals from sensors. HAR is an active research filed in response to the ever-increasing need to collect information remotely related to ADLs for diagnostic and therapeutic purposes. Traditionally, HAR used environmental or wearable sensors to acquire signals and relied on traditional machine-learning techniques to classify ADLs. In recent years, HAR is moving towards the use of both wearable devices (such as smartphones or fitness trackers, since they are daily used by people and they include reliable inertial sensors), and deep learning techniques (given the encouraging results obtained in the area of computer vision). One of the major challenges related to HAR is population diversity, which makes difficult traditional machine-learning algorithms to generalize. Recently, researchers successfully attempted to address the problem by proposing techniques based on personalization combined with traditional machine learning. To date, no effort has been directed at investigating the benefits that personalization can bring in deep learning techniques in the HAR domain. The goal of our research is to verify if personalization applied to both traditional and deep learning techniques can lead to better performance than classical approaches (i.e., without personalization). The experiments were conducted on three datasets that are extensively used in the literature and that contain metadata related to the subjects. AdaBoost is the technique chosen for traditional machine learning, while convolutional neural network is the one chosen for deep learning. These techniques have shown to offer good performance. Personalization considers both the physical characteristics of the subjects and the inertial signals generated by the subjects. 
Results suggest that personalization is most effective when applied to traditional machine-learning techniques rather than to deep learning ones. Moreover, results show that deep learning without personalization performs better than any other methods experimented in the paper in those cases where the number of training samples is high and samples are heterogeneous (i.e., they represent a wider spectrum of the population). This suggests that traditional deep learning can be more effective, provided you have a large and heterogeneous dataset, intrinsically modeling the population diversity in the training process.
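One simple way to realize the personalization idea described above is to weight each training subject by how similar their physical characteristics are to the target subject's, then pass those weights to the traditional learner (e.g. via `sample_weight` in scikit-learn's `AdaBoostClassifier.fit`). The features and the Gaussian kernel below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

# Hypothetical physical characteristics: age, height (cm), weight (kg).
train_subjects = np.array([[25, 175.0, 70.0],
                           [60, 160.0, 80.0],
                           [27, 178.0, 72.0]])
target_subject = np.array([26, 176.0, 71.0])

def personalization_weights(train, target, bandwidth=10.0):
    """Gaussian similarity between each training subject and the target."""
    d = np.linalg.norm(train - target, axis=1)   # Euclidean distance
    w = np.exp(-(d / bandwidth) ** 2)            # closer subjects weigh more
    return w / w.sum()                           # normalized sample weights

w = personalization_weights(train_subjects, target_subject)
# These could then be used as per-sample weights when fitting, e.g.
# AdaBoostClassifier().fit(X, y, sample_weight=expanded_subject_weights).
print(w)
```

The subject most similar to the target dominates the weights, while dissimilar subjects contribute almost nothing to training.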


Author(s):  
R. Kanthavel et al.

Osteoarthritis is a common form of arthritis that occurs when cartilage, the elastic tissue that cushions the ends of the bones, breaks down. A person with osteoarthritis may experience joint pain, stiffness, or inflammation. There is no single test for osteoarthritis; physicians combine the medical and clinical record with X-ray imaging analysis to make a diagnosis. Osteoarthritis is generally detected only after pain and bone damage have occurred; earlier diagnosis would allow timely intervention to prevent cartilage deterioration and bone injury. With machine-learning algorithms, a system can be trained to automatically distinguish people who will develop osteoarthritis from those who will not, by detecting subtle biochemical differences in the middle of the knee's cartilage. The outcome of such machine learning techniques is the identification of pre-symptomatic individuals at the time of baseline imaging, together with the reduction in fluid concentration. In this study, we present an analysis of various deep learning techniques for the timely detection of osteoarthritis. Several deep learning techniques, a subset of machine learning, have been used for early detection of this disease, so a careful analysis is needed to choose the best of them as far as accuracy and reliability are concerned.


2021 ◽  
Vol 11 (4) ◽  
pp. 286-290
Author(s):  
Md. Golam Kibria ◽  
Mehmet Sevkli

The increasing number of credit card defaulters has forced companies to think carefully before approving credit applications. Credit card companies usually use their judgment to determine whether a credit card should be issued to a customer satisfying certain criteria, and some machine learning algorithms have also been used to support the decision. The main objective of this paper is to build a deep learning model, based on UCI (University of California, Irvine) datasets, that can support the credit card approval decision. Secondly, the performance of this model is compared with two traditional machine learning algorithms: logistic regression (LR) and support vector machine (SVM). Our results show that the overall performance of our deep learning model is slightly better than that of the other two models.
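The two traditional baselines named above can be compared with a few lines of scikit-learn. This is an illustrative sketch only: the paper's UCI credit data and deep model are not reproduced here, so a synthetic two-class dataset stands in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit approval dataset (approve = 1, reject = 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # a simple made-up rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

accuracies = {}
for name, model in [("LR", LogisticRegression()), ("SVM", SVC())]:
    accuracies[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {accuracies[name]:.2f}")
```

On real credit data one would also handle categorical fields and missing values before fitting, which the sketch omits.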


Cancer has been portrayed as a heterogeneous disease comprising a wide range of subtypes. Early diagnosis of the cancer type is very important for determining the course of medical treatment a patient requires. The significance of classifying cancerous cells as benign or malignant has driven many research studies in the biomedical and bioinformatics fields. In past years, researchers have used different machine learning (ML) techniques for cancer detection, as well as for prediction of survivability and recurrence. Moreover, ML tools can identify key features in complex datasets and reveal their significance. An assortment of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Random Forest methods, and Decision Trees (DTs), has commonly been used in cancer research to develop predictive models, resulting in successful and accurate decision making. Although the use of machine learning techniques can clearly enhance our understanding of cancer detection, progression, recurrence, and survivability, a proper level of accuracy is required before these strategies can be considered in ordinary clinical practice. The predictive models discussed here rely on various supervised ML strategies and on different input features and data samples. We used a Naïve Bayes classifier, a neural network method, a decision tree, and a logistic regression algorithm to detect the type of breast cancer (benign or malignant) and to select the features most relevant for prediction. We made a comparative study to find the best of these four algorithms for predicting the cancer type. With a high level of accuracy, any of these methods can be used to predict the type of breast cancer of a particular patient.
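The four-way comparison described above can be sketched with scikit-learn's bundled Wisconsin diagnostic breast cancer data (benign vs. malignant). The paper's exact dataset, preprocessing, and network architecture are not specified, so the models below are generic stand-ins.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

models = {
    "Naive Bayes":         GaussianNB(),
    "Neural Network":      make_pipeline(StandardScaler(),
                                         MLPClassifier(max_iter=1000,
                                                       random_state=42)),
    "Decision Tree":       DecisionTreeClassifier(random_state=42),
    "Logistic Regression": make_pipeline(StandardScaler(),
                                         LogisticRegression()),
}
# Fit each model and report held-out accuracy.
scores = {name: model.fit(X_tr, y_tr).score(X_te, y_te)
          for name, model in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

Scaling is applied only where it matters (the neural network and logistic regression); tree-based and Naive Bayes models are scale-invariant here.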


2021 ◽  
pp. 46-56
Author(s):  
Parvesh K ◽  
Tharun C ◽  
...  

The rapid development of e-commerce marketplaces necessitates recommendation engines and fast, precise, and efficient algorithms for a company's business model to generate substantial profit. Computer vision software enables a computer to derive meaningful information from digital images or videos. Machine learning methods are used in computer vision, and several machine learning techniques have been developed specifically for this purpose. Information retrieval is the process of extracting useful information from a dataset, and computer vision is nowadays the most commonly used tool for this purpose. This project consists of a series of modules that run sequentially to retrieve information from a marked area on a receipt. A receipt image is used as input to the model; the model first applies various image-processing algorithms to clean the data, then the pre-processed data is fed to machine learning algorithms to produce better results, and the output is a string of numerical digits including the decimal point. The program's accuracy is primarily determined by the image quality and pixel density, so it is necessary to ensure that the input receipt is not damaged and its content is not blurred.
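Two of the pre-processing steps implied by such a pipeline, binarization and cropping the marked region, can be sketched with plain NumPy. A real system would use a library such as OpenCV plus an OCR engine; the tiny "receipt" array and the region coordinates below are synthetic.

```python
import numpy as np

# Toy grayscale receipt: light background with a dark "printed" block.
receipt = np.full((8, 8), 220, dtype=np.uint8)
receipt[2:4, 3:7] = 30

def binarize(img, threshold=128):
    """Map dark ink to 1 and light background to 0 (global thresholding)."""
    return (img < threshold).astype(np.uint8)

def crop_region(img, top, left, height, width):
    """Cut out the marked region (e.g. where the total amount is printed)."""
    return img[top:top + height, left:left + width]

binary = binarize(receipt)
region = crop_region(binary, 2, 3, 2, 4)
print(region)  # 2x4 block of ones, ready for digit recognition
```

Cleaning first and cropping second keeps the threshold consistent across the whole image; real receipts usually need adaptive thresholding because lighting varies.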


Author(s):  
Anisha M. Lal ◽  
B. Koushik Reddy ◽  
Aju D.

Machine learning can be defined as the ability of a computer to learn to solve a problem without being explicitly coded. The efficiency of the program increases with experience on the specified task. In traditional programming, the program and the input are specified to get the output; in machine learning, the targets and predictors are provided to the algorithm so that it can be trained. This chapter focuses on various machine learning techniques and their performance on commonly used datasets. A supervised learning problem consists of a target variable that is to be predicted from a given set of predictors. Using these established targets, a function is learned that maps the predictors to the targets. The training process continues until the model achieves the desired level of accuracy on the training data, after which it can be applied to unseen data. Supervised methods can usually be categorized as classification or regression. This chapter discusses some of the popular supervised machine learning algorithms and their performance on everyday datasets. It also discusses some non-linear regression techniques and offers some insights on deep learning with respect to object recognition.
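The supervised setting described above, predictors in, target out, fits in a few lines. The "true" relationship y = 3x + 2 plus noise is invented for the example; least-squares line fitting stands in for the training process that minimizes error on the training data.

```python
import numpy as np

# Predictors x and target y with a known (made-up) linear relationship.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=100)

# "Training": fit a function mapping predictors to targets by least squares.
slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 2), round(intercept, 2))  # close to 3 and 2
```

Classification follows the same pattern with a discrete target; only the loss and the model family change.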


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4583 ◽  
Author(s):  
Vibekananda Dutta ◽  
Michał Choraś ◽  
Marek Pawlicki ◽  
Rafał Kozik

Currently, expert systems and applied machine learning algorithms are widely used to automate network intrusion detection. In critical infrastructure applications of communication technologies, the interaction among various industrial control systems and the Internet environment intrinsic to IoT technology makes them susceptible to cyber-attacks. Given the enormous network traffic in critical Cyber-Physical Systems (CPSs), traditional machine-learning methods for network anomaly detection are inefficient. Therefore, recently developed machine learning techniques, with an emphasis on deep learning, are finding successful implementations in the detection and classification of anomalies at both the network and host levels. This paper presents an ensemble method that leverages deep models such as the Deep Neural Network (DNN) and Long Short-Term Memory (LSTM) together with a meta-classifier (i.e., logistic regression), following the principle of stacked generalization. To enhance its capabilities, the method uses a two-step process for detecting network anomalies. In the first stage, data pre-processing, a Deep Sparse AutoEncoder (DSAE) is employed for feature engineering. In the second stage, a stacking ensemble learning approach is used for classification. The efficiency of the method is tested on heterogeneous datasets, including data gathered in the IoT environment, namely IoT-23, LITNET-2020, and NetML-2020. The results of the evaluation are discussed, statistical significance is tested, and the approach is compared to state-of-the-art methods in network anomaly detection.
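The stacked-generalization stage can be sketched with scikit-learn's `StackingClassifier`. Two small MLPs stand in for the paper's DNN and LSTM base learners, a synthetic dataset stands in for IoT-23/LITNET-2020/NetML-2020, and the DSAE pre-processing step is omitted; only the logistic-regression meta-classifier matches the paper's design exactly.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for network traffic features (anomalous vs. benign).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("dnn_like",  MLPClassifier(hidden_layer_sizes=(32, 16),
                                    max_iter=800, random_state=0)),
        ("lstm_like", MLPClassifier(hidden_layer_sizes=(64,),
                                    max_iter=800, random_state=1)),
    ],
    final_estimator=LogisticRegression(),  # the meta-classifier
)
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
print(f"stacked accuracy: {acc:.2f}")
```

Internally, `StackingClassifier` trains the base models with cross-validation and feeds their out-of-fold predictions to the meta-classifier, which is exactly the stacked-generalization principle the abstract names.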


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Mehedi Masud ◽  
Hesham Alhumyani ◽  
Sultan S. Alshamrani ◽  
Omar Cheikhrouhou ◽  
Saleh Ibrahim ◽  
...  

Malaria is a contagious disease that affects millions of lives every year. Traditional laboratory diagnosis of malaria requires an experienced person and careful inspection to discriminate between healthy and infected red blood cells (RBCs). It is also very time-consuming and may produce inaccurate reports due to human error. Cognitive computing and deep learning algorithms simulate human intelligence to support better human decisions in applications such as sentiment analysis, speech recognition, face detection, disease detection, and prediction. Thanks to advances in cognitive computing and machine learning, these techniques are now widely used to detect and predict early disease symptoms in the healthcare field. With early prediction results, healthcare professionals can make better decisions for patient diagnosis and treatment, and machine learning algorithms help humans process huge, complex medical datasets and turn them into clinical insights. This paper leverages deep learning algorithms to detect malaria, a deadly disease, in a mobile healthcare solution for patients, building an effective mobile system. The objective is to show how a deep learning architecture such as a convolutional neural network (CNN) can perform real-time malaria detection effectively and accurately from input images, reducing manual labor through a mobile application. To this end, we evaluate the performance of a custom CNN model using a cyclical stochastic gradient descent (SGD) optimizer with an automatic learning rate finder, and obtain an accuracy of 97.30% in classifying healthy and infected cell images with a high degree of precision and sensitivity. This outcome brings microscopy diagnosis of malaria to a mobile application, addressing both treatment reliability and the lack of medical expertise.
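The cyclical part of the optimizer above can be illustrated with the common triangular policy, in which the learning rate ramps linearly between a lower and an upper bound. The bounds and step size below are illustrative values, not the paper's settings.

```python
import math

def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate: ramps from base_lr up to max_lr
    over step_size iterations, then back down, repeating each cycle."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

print(triangular_clr(0))      # base_lr at the start of a cycle
print(triangular_clr(2000))   # max_lr at the peak
print(triangular_clr(4000))   # back to base_lr
```

An automatic learning rate finder typically picks `base_lr` and `max_lr` by sweeping the rate upward over a few hundred mini-batches and watching where the loss starts to fall and where it diverges.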


2021 ◽  
Author(s):  
V. N. Aditya Datta Chivukula ◽  
Sri Keshava Reddy Adupala

Machine learning techniques have become a vital part of ongoing research in technical areas, and in recent times the world has witnessed many impressive practical applications of machine learning. This paper asks whether we should always rely on deep learning techniques, or whether simple statistical machine learning algorithms can outperform them when we understand the application and process the data in ways that raise the algorithm's performance by a notable amount. The paper stresses that data pre-processing is more important than the selection of the algorithm. It examines functions involving trigonometric, logarithmic, and exponential terms, as well as functions that are purely trigonometric. Finally, we discuss regression analysis on music signals.
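The claim that understanding the function family can beat a heavier model is easy to illustrate: if a signal is known to be purely trigonometric, plain least squares on sine/cosine features recovers it almost exactly. The target y = 2 sin x - 0.5 cos x is invented for the example and is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, size=200)
y = 2.0 * np.sin(x) - 0.5 * np.cos(x) + rng.normal(scale=0.01, size=200)

# Pre-processing as feature engineering: trigonometric features, not raw x.
A = np.column_stack([np.sin(x), np.cos(x), np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))  # close to [2, -0.5, 0]
```

A generic model fed raw x would need far more capacity and data to match this fit; the pre-processing did the hard work.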

