An overview of deep learning techniques

2018 ◽  
Vol 66 (9) ◽  
pp. 690-703 ◽  
Author(s):  
Michael Vogt

Abstract Deep learning is the paradigm that has profoundly changed the artificial intelligence landscape within only a few years. Although accompanied by a variety of algorithmic achievements, this technology is disruptive mainly from the application perspective: it considerably pushes the border of tasks that can be automated, changes the way products are developed, and is available to virtually everyone. The subject of deep learning is artificial neural networks with a large number of layers. Compared to earlier approaches that ideally used a single layer, this allows massive computational resources to be applied to training black-box models directly on raw data with a minimum of engineering work. The most successful applications are found in visual image understanding, but also in audio and text modeling.

Recommender systems are everywhere; even streaming platforms rely on them to guide users through a maze of available information about products and services. Unfortunately, these black-box systems lack transparency, as they provide little explanation of their predictions. In contrast, white-box systems can by their nature produce a brief explanation, but their predictions are less accurate than those of complex black-box models. Recent research has shown that explanations are an important component in bringing powerful big-data predictions and machine learning techniques to a mass audience without compromising trust. This paper proposes a new approach that uses semantic web technology to generate explanations for the output of a black-box recommender system. The developed model is trained to make predictions accompanied by explanations that are automatically extracted from the semantic network.


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the most interesting issues to have emerged recently. Many researchers are approaching the subject from different angles, and interesting results have come out. However, we are still at the beginning of the road to understanding these types of models. The forthcoming years are expected to be ones in which the explainability of deep learning models is widely discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These methods can yield highly effective results depending on the dataset size, the dataset quality, the methods used for feature extraction, the hyperparameter set used in the deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black-box models that generalize from the data transmitted to them and learn from that data. As a result, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, serious efforts are needed on the explainability and interpretability of black-box models.


2019 ◽  
Vol 11 (12) ◽  
pp. 1499 ◽  
Author(s):  
David Griffiths ◽  
Jan Boehm

Over the past decade, deep learning has driven progress in 2D image understanding. Despite these advancements, techniques for automatically understanding 3D sensed data, such as point clouds, are comparatively immature. However, with a range of important applications from indoor robotic navigation to national-scale remote sensing, there is high demand for algorithms that can learn to automatically understand and classify 3D sensed data. In this paper we review the current state-of-the-art deep learning architectures for processing unstructured Euclidean data. We begin by addressing the background concepts and traditional methodologies. We then review the current main approaches, including RGB-D, multi-view, volumetric and fully end-to-end architecture designs. Datasets for each category are documented and explained. Finally, we give a detailed discussion of the future of deep learning for 3D sensed data, using the literature to justify the areas where future research would be most valuable.
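
As a concrete illustration of the volumetric designs surveyed above, the following sketch converts an unstructured point cloud into a dense occupancy grid that a 3D CNN could consume. The toy point cloud and the grid resolution are illustrative assumptions, not details from the review.

import numpy as np

# Toy point cloud: 2048 points in the unit cube (placeholder data).
points = np.random.default_rng(0).random((2048, 3))

res = 32  # arbitrary grid resolution
voxels = np.zeros((res, res, res), dtype=np.float32)

# Map each point to its voxel and mark the cell as occupied.
idx = np.clip((points * res).astype(int), 0, res - 1)
voxels[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0

print(int(voxels.sum()), "occupied voxels out of", res ** 3)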


Sports ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 5
Author(s):  
Alessio Rossi ◽  
Luca Pappalardo ◽  
Paolo Cintia

In the last decade, the number of studies applying machine learning algorithms to sports, e.g., injury forecasting and athlete performance prediction, has rapidly increased. Given the number of works and experiments already present in the state of the art regarding machine-learning techniques in sport science, the aim of this narrative review is to provide a guideline describing a correct approach to training, validating, and testing machine learning models to predict events in sports science. The main contribution of this narrative review is to highlight possible strengths and limitations during all the stages of model development, i.e., training, validation, testing, and interpretation, in order to limit possible errors that could induce misleading results. In particular, this paper presents an injury-forecasting example that describes the features that could be used to predict injuries, the possible pre-processing approaches for time series analysis, how to correctly split the dataset to train and test the predictive models, and the importance of explaining the decision-making approach of white- and black-box models.
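
To make the splitting advice concrete, here is a minimal sketch of a leakage-aware evaluation for an injury forecaster, assuming a chronologically ordered table of per-session features. The feature semantics, the synthetic data, and the random-forest choice are illustrative assumptions, not details from the review.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)

# Chronologically ordered sessions with four load-style features
# (synthetic stand-ins, e.g. distance, sprints, workload, ACWR).
X = rng.normal(size=(500, 4))
y = ((X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500)) > 1.5).astype(int)

# TimeSeriesSplit keeps every training fold strictly earlier than its
# test fold, so future sessions never leak into model fitting.
scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = RandomForestClassifier(class_weight="balanced", random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], model.predict(X[test_idx]),
                           zero_division=0))

print(f"mean F1 across folds: {np.mean(scores):.2f}")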


Author(s):  
A. V. N. Kameswari

Abstract: When humans see an image, their brain can easily tell what the image is about, but a computer cannot do this easily. Computer vision researchers worked on this for a long time, and it was considered impossible until recently. With advances in deep learning techniques, the availability of huge datasets, and greater computing power, we can now build models that generate captions for an image. The image caption generator is a popular research area of deep learning that deals with image understanding and a language description for that image. Generating well-formed sentences requires both syntactic and semantic understanding of the language. Being able to describe the content of an image in accurately formed sentences is a very challenging task, but it could also have a great impact, for instance by helping visually impaired people better understand the content of images. The biggest challenge is creating a description that captures not only the objects contained in an image but also how these objects relate to each other. This paper uses the Flickr_8K dataset and the Flickr8k_text folder, which contains Flickr8k.token, the main file of the dataset, listing each image name and its respective caption separated by a newline ("\n"). A CNN is used to extract features from the image; we use the pre-trained Xception model. An LSTM then uses the information from the CNN to help generate a description of the image. The Flickr8k_text folder also contains the Flickr_8k.trainImages.txt file, which lists the 6000 image names used for training. After the CNN-LSTM model is defined, an image file is passed as a parameter through the command prompt to test the image caption generator; it generates a caption for the image, and its quality is assessed by calculating the BLEU score between the generated and reference captions. Keywords: Image Caption Generator, Convolutional Neural Network, Long Short-Term Memory, BLEU score, Flickr_8K
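
The following sketch reflects the CNN-LSTM architecture the abstract describes: a pre-trained Xception backbone supplies a 2048-dimensional image feature, and an LSTM decoder over partial captions predicts the next word. The vocabulary size and maximum caption length are hypothetical stand-ins for values that would be derived from Flickr8k.token, not figures from the paper.

from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import (LSTM, Dense, Dropout, Embedding,
                                     Input, add)
from tensorflow.keras.models import Model

vocab_size, max_len = 8000, 34  # hypothetical Flickr8k statistics

# The pre-trained backbone that produces the 2048-d image features.
extractor = Xception(include_top=False, pooling="avg", weights="imagenet")

# Image branch: 2048-d Xception feature -> 256-d projection.
img_in = Input(shape=(2048,))
img_vec = Dense(256, activation="relu")(Dropout(0.5)(img_in))

# Text branch: partial caption -> embedding -> LSTM state.
txt_in = Input(shape=(max_len,))
txt_emb = Embedding(vocab_size, 256, mask_zero=True)(txt_in)
txt_vec = LSTM(256)(Dropout(0.5)(txt_emb))

# Merge both branches and predict the next word of the caption.
merged = Dense(256, activation="relu")(add([img_vec, txt_vec]))
out = Dense(vocab_size, activation="softmax")(merged)

model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()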


2019 ◽  
Vol 8 (3) ◽  
pp. 4334-4340

Identifying a human being from the iris using image processing techniques is one of the older and better-established approaches to human identification. The next step was to make the identification process intelligent by applying machine learning techniques to train on input images, extract features, and classify those features using classification techniques. Recent technology enhances iris recognition with deep learning networks, in which images are trained deeply through a number of layers so that the necessary features are extracted and classified, and various parameters are measured.


Recently, deep learning approaches have been receiving more attention in many research fields. The medical imaging field in particular has been widely attracted to deep learning techniques; example categories in this field are image segmentation, image registration, image classification, and image database retrieval. This paper presents a number of experiments that classify retinal images using Convolutional Neural Networks (CNNs). These retinal images may contain laser marks left by the action of a laser on the surface of the retina. A CNN is a trainable architecture composed of multiple stages. The inputs and outputs of each stage are sets of arrays called feature maps; in the output, every feature map represents a unique feature extracted from all the regions of the input. Each stage basically consists of three layers: a filter bank, a non-linearity, and a feature-pooling layer. A classic CNN normally consists of three or fewer such stages. The resulting accuracy was above 90%. The paper closes with a number of considerations for possible improvements and future developments.
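
A minimal sketch of the stage structure described above (filter bank, non-linearity, feature pooling), stacked three times into a small CNN for binary laser-mark classification; the input size and filter counts are illustrative assumptions, not the paper's configuration.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),      # retinal image (assumed size)
    # Stage 1: filter bank -> non-linearity -> feature pooling
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    # Stage 2
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    # Stage 3
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # laser marks present: yes / no
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()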


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Quentin Ferré ◽  
Jeanne Chèneby ◽  
Denis Puthier ◽  
Cécile Capponi ◽  
Benoît Ballester

Abstract Background: Accurate identification of transcriptional regulator binding locations is essential for the analysis of genomic regions, including cis-regulatory elements (CREs). The customary NGS approaches, predominantly ChIP-seq, can be obscured by data anomalies and biases that are difficult to detect without supervision. Results: Here, we develop a method that leverages the usual combinations between many experimental series to mark such atypical peaks. We use deep learning to perform a lossy compression of the genomic regions' representations with multiview convolutions. Using artificial data, we show that our method correctly identifies groups of correlating series and evaluates CREs according to group completeness. It is then applied to the large volume of curated ChIP-seq data in the ReMap database. We show that peaks lacking known biological correlators are singled out and less corroborated in real data. We also propose normalization approaches useful for interpreting black-box models. Conclusion: Our approach detects peaks that are less corroborated than average. It can be extended to other similar problems and can be interpreted to identify correlation groups. It is implemented in an open-source tool called atyPeak.
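
The core idea, flagging peaks whose representation survives lossy compression poorly, can be sketched with a toy dense autoencoder over binary (dataset x regulator) presence vectors. The shapes, layer sizes, and scoring rule below are illustrative assumptions; the actual tool uses multiview convolutions (see atyPeak).

import numpy as np
from tensorflow.keras import layers, models

n_datasets, n_regulators = 20, 15  # illustrative dimensions

# Dense autoencoder with a narrow bottleneck: the lossy compression.
inp = layers.Input(shape=(n_datasets * n_regulators,))
code = layers.Dense(16, activation="relu")(inp)
rec = layers.Dense(n_datasets * n_regulators, activation="sigmoid")(code)
autoencoder = models.Model(inp, rec)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Toy presence/absence matrices for peaks in each genomic region.
X = (np.random.default_rng(1).random((1000, n_datasets * n_regulators))
     < 0.2).astype("float32")
autoencoder.fit(X, X, epochs=3, batch_size=32, verbose=0)

# A present peak that the bottleneck fails to rebuild lacks the usual
# correlators, i.e. it is atypical.
rec_out = autoencoder.predict(X[:5], verbose=0)
atypicality = X[:5] * (1.0 - rec_out)
print(atypicality.max(axis=1).round(2))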


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Syed Atif Ali Shah ◽  
Irfan Uddin ◽  
Furqan Aziz ◽  
Shafiq Ahmad ◽  
Mahmoud Ahmad Al-Khasawneh ◽  
...  

Organizations can grow, succeed, and sustain themselves if their employees are committed. The main assets of an organization are the employees who give it the required number of hours per month, in other words, those who are punctual in their attendance. Absenteeism from work is a multibillion-dollar problem: it costs money and decreases revenue. At the time of hiring an employee, organizations have no objective mechanism to predict whether that employee will be punctual or habitually absent. For some organizations it can be very difficult to deal with employees who are not punctual, as firing may be impossible or may carry a huge cost. In this paper, we propose neural network and deep learning algorithms that can predict the behavior of employees towards punctuality at the workplace. The efficacy of the proposed method is compared with traditional machine learning techniques, and the results indicate 90.6% performance for the deep neural network, compared to 73.3% for a single-layer neural network and 82% for the decision tree, SVM, and random forest. The proposed model provides a useful mechanism for organizations that want to gauge the behavior of employees at the time of hiring, and it can reduce the cost of paying inefficient or habitually absent employees. This paper is a first study of its kind to analyze the patterns of absenteeism in employees using deep learning algorithms; it helps organizations further improve employees' quality of life and hence reduce absenteeism.
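
A minimal sketch of the kind of comparison reported above, pitting a deeper multilayer perceptron against a single-hidden-layer network and classical models on absenteeism-style tabular data; the synthetic features and any scores it prints are placeholders, not the paper's 90.6%/73.3%/82% results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Synthetic tabular stand-in for HR features (e.g. age, commute
# distance, workload); not the dataset used in the paper.
X = rng.normal(size=(1000, 12))
y = ((X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=1000)) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "deep NN (3 hidden layers)": MLPClassifier((64, 32, 16), max_iter=1000),
    "single-layer NN": MLPClassifier((16,), max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, clf in models.items():
    print(name, round(clf.fit(X_tr, y_tr).score(X_te, y_te), 3))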

