Deep Learning

Author(s):  
Semra Erpolat Taşabat ◽  
Olgun Aydin

Deep learning (DL) is a rising star of the machine learning (ML) and artificial intelligence (AI) domains. Until 2006, many researchers had attempted to build deep neural networks (DNNs), but most of these attempts failed. The breakthrough came in 2006, and DNNs have since proven to be one of the most important inventions of the 21st century. Nowadays, DNNs serve as a key technology in many domains: self-driving vehicles, smart cities, security and automated machines. This chapter gives brief information about DL theory, discusses the advantages and disadvantages of deep learning, describes the most widely used types of DNNs, and surveys popular DL architectures and frameworks with the aim of building smart systems for the finance and real estate domains. Finally, a case study on image recognition using transfer learning is developed.
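The core idea of the transfer-learning case study mentioned above, reusing a pretrained feature extractor and training only a small new head, can be sketched in miniature. This is an illustrative toy, not the chapter's actual model: the "backbone" is a fixed hand-written function standing in for pretrained layers, and the data are made up.

```python
# Toy stand-in for a pretrained backbone: a fixed (frozen) feature
# extractor that maps a raw input to a 2-D feature vector.
def frozen_backbone(x):
    return [x[0] + x[1], x[0] - x[1]]

# New task head: a single linear unit trained with the perceptron rule.
w, b = [0.0, 0.0], 0.0

# Tiny labelled dataset for the new task (label 1 if x0 + x1 > 1).
data = [([0.9, 0.8], 1), ([0.1, 0.2], 0), ([1.0, 0.5], 1), ([0.0, 0.4], 0)]

def predict(x):
    f = frozen_backbone(x)          # the backbone is never updated
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

for _ in range(20):                 # train only the new head
    for x, y in data:
        f = frozen_backbone(x)
        err = y - predict(x)
        w[0] += 0.1 * err * f[0]
        w[1] += 0.1 * err * f[1]
        b += 0.1 * err

print([predict(x) for x, _ in data])
```

In practice the frozen backbone would be a pretrained CNN and the head a small trainable layer, but the division of labour is the same.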

2021 ◽  
Author(s):  
Anwaar Ulhaq

Machine learning has grown in popularity and effectiveness over the last decade. It has become possible to solve complex problems, especially in artificial intelligence, due to the effectiveness of deep neural networks. While numerous books and countless papers have been written on deep learning, new researchers want to understand the field's history and current trends and to envision future possibilities. This review paper summarises the work that led to this success and discusses emerging patterns and future prospects.


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the following three problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved by enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
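The similarity-graph construction described above can be sketched concretely: given a batch of intermediate representations, the Latent Geometry Graph is a weighted adjacency matrix of pairwise similarities. This minimal sketch uses cosine similarity on made-up vectors; the paper's actual graph construction may differ in its similarity measure and sparsification.

```python
import math

# Batch of intermediate representations (one row per input) taken from
# some latent space of a network; the values here are illustrative only.
batch = [
    [1.0, 0.0, 1.0],
    [0.9, 0.1, 1.1],
    [0.0, 1.0, 0.0],
]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Latent Geometry Graph: adjacency matrix of pairwise similarities
# between the representations in the batch.
lgg = [[cosine(u, v) for v in batch] for u in batch]

for row in lgg:
    print([round(s, 2) for s in row])
```

Constraints such as teacher mimicking can then be expressed as losses comparing such matrices across networks or across consecutive latent spaces.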


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lara Lloret Iglesias ◽  
Pablo Sanz Bellón ◽  
Amaia Pérez del Barrio ◽  
Pablo Menéndez Fernández-Miranda ◽  
David Rodríguez González ◽  
...  

Abstract: Deep learning is nowadays at the forefront of artificial intelligence. More precisely, the use of convolutional neural networks has drastically improved the learning capabilities of computer vision applications, which can directly consume raw data without any prior feature extraction. Advanced methods in the machine learning field, such as adaptive momentum algorithms or dropout regularization, have dramatically improved the predictive ability of convolutional neural networks, which now outperform conventional fully connected neural networks. This work summarizes, in an intentionally didactic way, the main aspects of these cutting-edge techniques from a medical imaging perspective.
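Dropout regularization, mentioned above, is simple enough to sketch directly. This is a minimal illustration of the standard "inverted dropout" formulation (the variant used by most modern frameworks), with a fixed seed purely so the demo is reproducible.

```python
import random

def dropout(x, p, training=True, rng=None):
    """Inverted dropout: during training, zero each unit with
    probability p and scale the survivors by 1/(1-p), so the expected
    activation is unchanged; at inference time it is a no-op."""
    if not training:
        return list(x)
    rng = rng or random.Random(42)   # fixed seed keeps the demo reproducible
    return [0.0 if rng.random() < p else v / (1 - p) for v in x]

x = [1.0, 2.0, 3.0, 4.0]
out = dropout(x, p=0.5)
print(out)                                 # survivors doubled, rest zeroed
print(dropout(x, p=0.5, training=False))   # inference: unchanged
```

Randomly silencing units in this way discourages co-adaptation between neurons, which is why it acts as a regularizer.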


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. Primarily, machine learning has been used for 3D seismic data processing, seismic facies analysis and well-log data correlation. Rapid technological development, together with open-source artificial intelligence libraries and affordable computer graphics processing units (GPUs), makes the application of machine learning in the geosciences increasingly tractable. However, the role of artificial intelligence in structural interpretation workflows for subsurface datasets is still unclear. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms for discriminating between geological image datasets. Four image datasets were used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers and basement), fold types with three classes (buckle, chevron and conjugate), fault types with three classes (normal, reverse and thrust) and fold-thrust geometries with three classes (fault-bend fold, fault-propagation fold and detachment fold). These image datasets are used to investigate three machine learning models: a feedforward linear neural network and two convolutional neural network models (a sequential model built from 2D convolutional layers, and a residual-block model, i.e. ResNet with 9, 34 and 50 layers). The validation and testing datasets form a critical part of assessing model accuracy. Of the models tested, the ResNet records the highest accuracy.
Our CNN image classification analysis provides a framework for applying machine learning to increase structural interpretation efficiency, and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.
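The operation that lets the CNN models above discriminate between image classes is the 2D convolution (implemented as cross-correlation in deep learning libraries). This minimal sketch applies a hand-written vertical-edge kernel to a tiny made-up "image"; real models learn the kernel weights instead of fixing them.

```python
# A tiny 4x4 "image" with a vertical boundary down the middle.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [  # hand-written vertical-edge detector
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def conv2d(img, ker):
    """'Valid' cross-correlation, the core operation of a CNN layer."""
    kh, kw = len(ker), len(ker[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * ker[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)] for i in range(oh)]

print(conv2d(image, kernel))
```

Stacking many such learned filters, interleaved with non-linearities and pooling, is what allows CNNs to tell folds from faults from salt bodies in seismic images.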


2021 ◽  
Vol 6 (5) ◽  
pp. 10-15
Author(s):  
Ela Bhattacharya ◽  
D. Bhattacharya

COVID-19 has emerged as the latest worrisome pandemic, reported to have had its outbreak in Wuhan, China. Because the infection spreads through human contact, it has caused massive numbers of infections across 200 countries around the world. Artificial intelligence has likewise contributed to managing the COVID-19 pandemic in various ways within a short span of time. The deep neural networks explored in this paper have contributed to the detection of COVID-19 from imaging sources. This paper investigates the datasets, pre-processing, segmentation, feature extraction, classification and test results that can be useful for identifying future directions in automatic diagnosis of the disease using artificial intelligence-based frameworks.


First Monday ◽  
2019 ◽  
Author(s):  
Niel Chah

Interest in deep learning, machine learning, and artificial intelligence from industry and the general public has recently reached a fever pitch. However, these terms are frequently misused, confused, and conflated. This paper serves as a non-technical guide for those interested in a high-level understanding of these increasingly influential notions by briefly exploring the historical context of deep learning, its public presence, and growing concerns over the limitations of these techniques. As a first step, artificial intelligence and machine learning are defined. Next, an overview of the historical background of deep learning reveals its wide scope and deep roots. A case study of a major deep learning implementation is presented in order to analyze public perceptions shaped by companies focused on technology. Finally, a review of deep learning limitations illustrates systemic vulnerabilities and a growing sense of concern over these systems.


2017 ◽  
Author(s):  
Najib J. Majaj ◽  
Denis G. Pelli

Abstract: Today many vision-science presentations employ machine learning, especially the version called "deep learning". Many neuroscientists use machine learning to decode neural responses. Many perception scientists try to understand how living organisms recognize objects. To them, deep neural networks offer benchmark accuracies for recognition of learned stimuli. Originally machine learning was inspired by the brain. Today, machine learning is used as a statistical tool to decode brain activity. Tomorrow, deep neural networks might become our best model of brain function. This brief overview of the use of machine learning in biological vision touches on its strengths, weaknesses, milestones, controversies, and current directions. Here, we hope to help vision scientists assess what role machine learning should play in their research.


2017 ◽  
Vol 1 (3) ◽  
pp. 83 ◽  
Author(s):  
Chandrasegar Thirumalai ◽  
Ravisankar Koppuravuri

In this paper, we use deep neural networks to predict bike-sharing usage from previous years' usage data. We choose deep neural networks because they can reach higher accuracy: unlike many other machine learning techniques, they allow us to add any number of hidden layers to improve prediction accuracy, and the model can be trained toward the results we need. Many AI experts regard deep learning as the most effective AI technique currently available, capable of remarkable results. Here we apply it to predict the bike-sharing usage of a rental company, so that the company can make sound business decisions based on previous years' data.
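The kind of model described above, a feedforward network with hidden layers fitted to historical usage records, can be sketched at toy scale. This is an illustrative stand-in, not the paper's model: one hidden ReLU layer trained by plain SGD on a handful of made-up (temperature, is_weekend) → rentals records.

```python
import random

random.seed(1)

# Toy stand-in for historical bike-sharing records:
# features = [temperature, is_weekend], target = rentals (scaled).
data = [([0.3, 1.0], 1.3), ([0.8, 0.0], 0.8), ([0.5, 1.0], 1.5),
        ([0.2, 0.0], 0.2), ([0.9, 1.0], 1.9), ([0.6, 0.0], 0.6)]

H = 4  # one hidden layer of 4 ReLU units, one linear output
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [max(0.0, w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

lr = 0.1
before = mse()
for _ in range(500):                   # plain per-sample SGD
    for x, y in data:
        h, pred = forward(x)
        g = 2 * (pred - y)             # dLoss/dPred
        for j in range(H):             # backprop through both layers
            if h[j] > 0:               # ReLU gate
                w1[j][0] -= lr * g * w2[j] * x[0]
                w1[j][1] -= lr * g * w2[j] * x[1]
                b1[j] -= lr * g * w2[j]
            w2[j] -= lr * g * h[j]
        b2 -= lr * g

print(round(before, 3), round(mse(), 4))
```

A real bike-sharing model would use many more features (hour, season, weather), more layers, and a proper training framework, but the structure is the same.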


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7228
Author(s):  
Francesco Delli Priscoli ◽  
Alessandro Giuseppi ◽  
Federico Lisi

In the last few years, with the exponential diffusion of smartphones, services for turn-by-turn navigation have seen a surge in popularity. Current solutions on the market let the user select the desired transportation mode via an interface, for which an optimal route is then computed. Automatically recognizing the transportation mode the user is travelling by makes it possible to dynamically control, and consequently update, the route proposed to the user. Such a dynamic approach is an enabling technology for multi-modal transportation planners, in which the optimal path and its associated transportation solutions are updated in real time based on data coming from (i) distributed sensors (e.g., smart traffic lights, road congestion sensors, etc.); (ii) service providers (e.g., car-sharing availability, bus waiting time, etc.); and (iii) the user's own device, in compliance with the development of smart cities envisaged by the 5G architecture. In this paper, we present a series of Machine Learning approaches for real-time Transportation Mode Recognition and report their performance differences in our field tests. Several Machine Learning-based classifiers, including Deep Neural Networks, built on both statistical feature extraction and raw data analysis, are presented and compared; the result analysis also highlights which features prove to be the most informative for classification.
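The "statistical feature extraction" step mentioned above typically turns a window of raw sensor samples into a small fixed-length feature vector before classification. This minimal sketch computes a few features commonly used in mode-recognition pipelines on a made-up accelerometer window; the paper's actual feature set is not specified here.

```python
import math

# A short, illustrative accelerometer-magnitude window (values made up).
window = [9.8, 10.4, 9.1, 11.2, 9.6, 10.9, 9.3, 10.7]

def features(w):
    """Statistical features often fed to transportation-mode
    classifiers: mean, standard deviation, and peak-to-peak range."""
    n = len(w)
    mean = sum(w) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in w) / n)
    return {"mean": mean, "std": std, "range": max(w) - min(w)}

print(features(window))
```

A classifier (or, in the raw-data approaches the paper also compares, a deep network consuming the window directly) then maps such vectors to modes like walking, cycling, or bus.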


2018 ◽  
Vol 164 ◽  
pp. 01015
Author(s):  
Indar Sugiarto ◽  
Felix Pasila

Deep learning (DL) has been considered a breakthrough technique in the fields of artificial intelligence and machine learning. Conceptually, it relies on a multi-layer network that exhibits hierarchical, non-linear processing capability. DL architectures such as deep neural networks, deep belief networks and recurrent neural networks have been developed and applied to many fields with impressive results, sometimes comparable to human intelligence. However, many researchers remain sceptical about its true capability: can the intelligence demonstrated by deep learning be applied to general tasks? This question motivates the emergence of another research discipline: neuromorphic computing (NC). In NC, researchers try to identify the most fundamental ingredients that construct the intelligent behaviour produced by the brain itself. To achieve this, neuromorphic systems are developed to mimic brain functionality down to the cellular level. In this paper, a neuromorphic platform called SpiNNaker is described and evaluated in order to understand its potential as a platform for deep learning. The paper is a literature review containing a comparative study of algorithms that have been implemented on SpiNNaker.
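The cellular-level modelling mentioned above usually starts from spiking neuron models such as the leaky integrate-and-fire (LIF) neuron, a basic unit simulated on neuromorphic platforms like SpiNNaker. This discrete-time sketch uses illustrative parameter values, not any actual SpiNNaker configuration.

```python
# Discrete-time leaky integrate-and-fire neuron: the membrane potential
# leaks toward rest each step, integrates input, and fires on threshold.
V_REST, V_THRESH = 0.0, 1.0
LEAK = 0.9          # fraction of membrane potential kept each step
INPUT = 0.15        # constant input current per time step

v = V_REST
spikes = []
for t in range(50):
    v = LEAK * v + INPUT          # leak, then integrate the input
    if v >= V_THRESH:             # fire and reset
        spikes.append(t)
        v = V_REST
print(spikes)
```

With a constant input, the neuron fires at a regular rate; networks of such units communicating by spike events are what platforms like SpiNNaker simulate at scale.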

