Deep Learning Feature Extraction for Brain Tumor Characterization and Detection

Author(s):  
Otman Basir ◽  
Kalifa Shantta

Deep Learning is a growing field of artificial intelligence that has become an active research topic in a wide range of disciplines. Today we are witnessing the tangible successes of Deep Learning in our daily lives across various applications, including education, manufacturing, transportation, healthcare, the military, and the automotive industry. Deep Learning is a subfield of Machine Learning that stems from Artificial Neural Networks, where a cascade of layers is employed to progressively extract higher-level features from the raw input and make predictions about new data. This paper discusses the feature-extraction capability inherent in training approaches such as Convolutional Neural Networks (CNNs). Furthermore, the paper offers a survey of the Deep Learning techniques and feature extraction methods that have appeared in the last few years. As demand increases, research on the feature extraction task has become even more instrumental. Brain tumor characterization and detection is used as a case study to demonstrate the ability of a Deep Learning CNN to achieve effective representation learning and tumor characterization.
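As an illustrative sketch of the layer-by-layer feature extraction this abstract describes, the following minimal NumPy example applies one convolution, ReLU, and max-pooling stage, the building blocks a CNN stacks to extract progressively higher-level features. The image, kernel, and sizes are invented for illustration; in a trained CNN the kernels are learned from data rather than hand-set.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A dark-to-light edge kernel applied to a toy 6x6 "scan" with a bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])

feature_map = max_pool(relu(conv2d(image, edge_kernel)))
print(feature_map)  # activations localize the vertical edge
```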

Image classification has developed rapidly over the previous decade, and the use of Convolutional Neural Networks (CNNs) and other deep learning techniques is growing quickly. However, CNNs are compute-intensive. Another algorithm that has been, and continues to be, widely used is the Viola-Jones algorithm. Viola-Jones adopts an ensemble strategy: it uses a wide range of classifiers, each examining a different part of the image. Each individual classifier is weaker than the final classifier, since it takes in less data; when the outcomes from all the classifiers are combined, however, they produce a strong classifier. In this paper, we develop a model that detects Bengali license plates using the Viola-Jones algorithm with improved precision. It can be used for various purposes such as roadside assistance, road safety, and parking management.
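The ensemble-of-weak-classifiers idea behind Viola-Jones can be sketched as follows. The regions, thresholds, and stage structure here are hypothetical stand-ins for the learned Haar-feature stages of the real detector; only the mechanism (weak votes per stage, early rejection by the attentional cascade) is what the sketch shows:

```python
import numpy as np

# Each weak classifier thresholds a Haar-like contrast in one region of the
# image; individually weak, they are combined into a strong cascade stage.
def make_weak(region, threshold):
    r0, r1, c0, c1 = region
    def weak(img):
        patch = img[r0:r1, c0:c1]
        half = (c1 - c0) // 2
        # Haar-like feature: left-half brightness minus right-half brightness.
        contrast = patch[:, :half].mean() - patch[:, half:].mean()
        return 1 if contrast > threshold else 0
    return weak

def cascade(img, stages):
    """Attentional cascade: a window must pass every stage to be accepted."""
    for weaks, votes_needed in stages:
        votes = sum(w(img) for w in weaks)
        if votes < votes_needed:
            return False  # rejected early, cheaply
    return True

# Toy 8x8 window: bright left half, dark right half, matching every region.
plate = np.zeros((8, 8))
plate[:, :4] = 1.0
stages = [
    ([make_weak((0, 4, 0, 8), 0.1), make_weak((4, 8, 0, 8), 0.1)], 2),
    ([make_weak((2, 6, 0, 8), 0.1)], 1),
]
print(cascade(plate, stages))             # accepted by both stages
print(cascade(np.zeros((8, 8)), stages))  # rejected at the first stage
```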


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
C. A. Martín ◽  
J. M. Torres ◽  
R. M. Aguilar ◽  
S. Diaz

Technology and the Internet have changed how travel is booked, the relationship between travelers and the tourism industry, and how tourists share their travel experiences. As a result of this multiplicity of options, mass tourism markets have been dispersing. But the global demand has not fallen; quite the contrary, it has increased. Another important factor, the digital transformation, is taking hold to reach new client profiles, especially the so-called third generation of tourism consumers, digital natives who only understand the world through their online presence and who make the most of every one of its advantages. In this context, the digital platforms where users publish their impressions of tourism experiences are starting to carry more weight than the corporate content created by companies and brands. In this paper, we propose using different deep-learning techniques and architectures to solve the problem of classifying the comments that tourists publish online and that new tourists use to decide how best to plan their trip. Specifically, in this paper, we propose a classifier to determine the sentiments reflected on the http://booking.com and http://tripadvisor.com platforms for the service received in hotels. We develop and compare various classifiers based on convolutional neural networks (CNN) and long short-term memory networks (LSTM). These classifiers were trained and validated with data from hotels located on the island of Tenerife. An analysis of our findings shows that the most accurate and robust estimators are those based on LSTM recurrent neural networks.
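A single step of the LSTM cell underlying the recurrent classifiers compared above can be sketched in NumPy. The weights here are random, standing in for trained parameters; a real sentiment classifier would run this cell over embedded word sequences and feed the final hidden state to a dense output layer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b stack the input, forget, output, and
    candidate gates along the first axis (4 * hidden units)."""
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:hidden])                # input gate
    f = sigmoid(z[hidden:2 * hidden])      # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])  # output gate
    g = np.tanh(z[3 * hidden:])            # candidate cell state
    c = f * c_prev + i * g                 # new cell state
    h = o * np.tanh(c)                     # new hidden state
    return h, c

rng = np.random.default_rng(0)
hidden, inputs = 4, 3
W = rng.normal(size=(4 * hidden, inputs))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

h = c = np.zeros(hidden)
for x in rng.normal(size=(5, inputs)):  # run over a 5-step toy sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # final hidden state, the sequence summary
```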


2020 ◽  
pp. 107754632092914
Author(s):  
Mohammed Alabsi ◽  
Yabin Liao ◽  
Ala-Addin Nabulsi

Deep learning has seen tremendous growth over the past decade. It has set new performance limits for a wide range of applications, including computer vision, speech recognition, and machinery health monitoring. With the abundance of instrumentation data and the availability of high computational power, deep learning continues to prove itself as an efficient tool for the extraction of micropatterns from machinery big data repositories. This work presents a comparative study of feature extraction capabilities using stacked autoencoders, considering the use of expert domain knowledge. The Case Western Reserve University bearing dataset was used for the study, and a classifier was trained and tested to extract and visualize features from 12 different failure classes. Based on the raw data preprocessing, four different deep neural network structures were studied. Results indicated that integrating domain knowledge with deep learning techniques improved feature extraction capabilities and reduced the deep neural networks' size and computational requirements without the need for exhaustive architecture tuning and modification.
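The core of an autoencoder, an encoder/decoder pair trained to reconstruct its input through a bottleneck so that the bottleneck codes become learned features, can be sketched as follows. The data is synthetic (standing in for spectral features of bearing signals) and the activations are linear for brevity; a stacked autoencoder repeats this training layer by layer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "vibration features": 200 samples lying near a 2-D subspace of R^8.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 8))

# One encoder/decoder pair, trained by gradient descent on the
# mean-squared reconstruction error through a 2-unit bottleneck.
W1 = 0.1 * rng.normal(size=(8, 2))  # encoder
W2 = 0.1 * rng.normal(size=(2, 8))  # decoder
lr = 0.05
losses = []
for _ in range(300):
    H = X @ W1                       # bottleneck codes (learned features)
    E = H @ W2 - X                   # reconstruction error
    losses.append((E ** 2).mean())
    gW2 = H.T @ E * (2 / E.size)     # gradient of mean(E^2) w.r.t. W2
    gW1 = X.T @ (E @ W2.T) * (2 / E.size)
    W1 -= lr * gW1
    W2 -= lr * gW2

print(losses[0], losses[-1])  # loss falls as the features are learned
```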


2020 ◽  
Vol 28 ◽  
pp. 233-257
Author(s):  
Serge Rosmorduc

We apply Deep Learning techniques to the task of automated transliteration of Late Egyptian. After a brief presentation of the technology used, we examine the results to highlight the capabilities of the system, which is able to deal with a wide range of problems, including grammatical and phraseological ones. We then proceed to extract sign values from what the system has automatically learnt.


Author(s):  
Prisilla Jayanthi ◽  
Muralikrishna Iyyanki

In deep learning, the main neural network techniques, namely the artificial neural network (ANN), convolutional neural network (CNN), recurrent neural network (RNN), and deep neural network (DNN), are found to be very effective for medical data analysis. In this chapter, the application of ANN, CNN, and DNN techniques to the detection of tumors in numerical and image brain tumor data is presented. First, the case of ANN application is discussed for the prediction of brain tumors, for which the disease symptom data in numerical form is the input; ANN modelling was also implemented for the classification of human ethnicity. Next, the detection of tumors from images is discussed, for which CNN and DNN techniques are implemented. Other techniques discussed in this study are the HSV color space, watershed segmentation and morphological operations, and the fuzzy entropy level set, which are used for segmenting tumors in brain tumor images. The FCN-8 and FCN-16 models are used to produce semantic segmentations of the various images. In general terms, the deep learning techniques detected the tumors by training on the image dataset.
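Thresholding followed by a morphological operation, two of the classical segmentation steps the chapter mentions, can be sketched in NumPy on a toy image. The image, threshold, and structuring-element size are invented for illustration and do not reproduce the chapter's actual pipeline:

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element, a basic
    morphological operation for cleaning up a raw segmentation mask."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.ones_like(mask)
    for di in range(k):
        for dj in range(k):
            out &= padded[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

# Toy "scan": a bright 5x5 blob (the tumor) plus one speck of noise.
img = np.zeros((12, 12))
img[3:8, 3:8] = 0.9
img[10, 10] = 0.95

mask = (img > 0.5).astype(int)  # intensity thresholding
clean = erode(mask)             # erosion removes the isolated speck
print(clean.sum())              # only the blob's interior survives
```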


2019 ◽  
Author(s):  
Ammar Tareen ◽  
Justin B. Kinney

Abstract The adoption of deep learning techniques in genomics has been hindered by the difficulty of mechanistically interpreting the models that these techniques produce. In recent years, a variety of post-hoc attribution methods have been proposed for addressing this neural network interpretability problem in the context of gene regulation. Here we describe a complementary way of approaching this problem. Our strategy is based on the observation that two large classes of biophysical models of cis-regulatory mechanisms can be expressed as deep neural networks in which nodes and weights have explicit physicochemical interpretations. We also demonstrate how such biophysical networks can be rapidly inferred, using modern deep learning frameworks, from the data produced by certain types of massively parallel reporter assays (MPRAs). These results suggest a scalable strategy for using MPRAs to systematically characterize the biophysical basis of gene regulation in a wide range of biological contexts. They also highlight gene regulation as a promising venue for the development of scientifically interpretable approaches to deep learning.


2020 ◽  
Vol 79 (41-42) ◽  
pp. 30387-30395
Author(s):  
Stavros Ntalampiras

Abstract Predicting the emotional responses of humans to soundscapes is a relatively recent field of research coming with a wide range of promising applications. This work presents the design of two convolutional neural networks, namely ArNet and ValNet, each one responsible for quantifying arousal and valence evoked by soundscapes. We build on the knowledge acquired from the application of traditional machine learning techniques on the specific domain, and design a suitable deep learning framework. Moreover, we propose the usage of artificially created mixed soundscapes, the distributions of which are located between the ones of the available samples, a process that increases the variance of the dataset leading to significantly better performance. The reported results outperform the state of the art on a soundscape dataset following Schafer’s standardized categorization considering both sound’s identity and the respective listening context.
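The artificial mixing of soundscapes described above resembles a convex (mixup-style) combination of samples and their labels, which places new training points between the distributions of the available ones. A hedged sketch, with invented signals and hypothetical (arousal, valence) labels:

```python
import numpy as np

rng = np.random.default_rng(7)

def mix_soundscapes(x1, y1, x2, y2, rng):
    """Create an artificial sample between two soundscapes by a convex
    combination of their waveforms and of their (arousal, valence) labels."""
    lam = rng.uniform(0.2, 0.8)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Two toy 1-second "soundscapes" at 8 kHz with hypothetical labels.
t = np.linspace(0, 1, 8000)
calm = np.sin(2 * np.pi * 220 * t)
busy = np.sin(2 * np.pi * 880 * t)
y_calm = np.array([0.2, 0.8])  # low arousal, high valence
y_busy = np.array([0.9, 0.3])  # high arousal, lower valence

x_mix, y_mix = mix_soundscapes(calm, y_calm, busy, y_busy, rng)
print(y_mix)  # label lies strictly between the two source labels
```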


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are being increasingly used in the scientific community as a consequence of the high computational capacity of current systems and the increase in the amount of data available as a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of the field of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the ability to integrate deep learning models into a wide range of microcontrollers can be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein show that the proposed system is competitive when compared with other commercial systems.
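Fitting a network onto a microcontroller typically involves 8-bit quantization of its weights. A minimal NumPy sketch of the affine scale/zero-point scheme that TensorFlow Lite uses for integer deployment; the weight values are hypothetical:

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Affine int8 quantization: map real values onto a 256-level grid."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Recover approximate real values from the int8 representation."""
    return (q.astype(np.int32) - zero_point) * scale

# Hypothetical layer weights in float32, mapped onto the int8 grid.
w = np.array([-1.5, -0.2, 0.0, 0.7, 1.5], dtype=np.float32)
scale = (w.max() - w.min()) / 255.0
zero_point = int(np.round(-128 - w.min() / scale))

q = quantize(w, scale, zero_point)
w_hat = dequantize(q, scale, zero_point)
print(np.abs(w - w_hat).max())  # round-trip error on the order of scale/2
```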


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Abstract Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of artery lumen in computed tomography angiography (CTA) data. However, to perform sufficiently well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only quite scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of the synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in terms of the Dice coefficient and 20% for the Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation of artery lumen in CTA images.
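The two reported metrics can be computed directly from binary masks; a minimal NumPy sketch with toy masks (the "lumen" shapes are invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentation masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Ground-truth "lumen" mask vs. a prediction shifted down by one pixel.
truth = np.zeros((10, 10), dtype=bool)
truth[2:6, 2:6] = True
pred = np.zeros((10, 10), dtype=bool)
pred[3:7, 2:6] = True

print(dice(truth, pred), hausdorff(truth, pred))
```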

