Winsorization for Robust Bayesian Neural Networks

Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1546
Author(s):  
Somya Sharma ◽  
Snigdhansu Chatterjee

With the advent of big data and the popularity of black-box deep learning methods, it is imperative to address the robustness of neural networks to noise and outliers. We propose the use of Winsorization to recover model performance when the data may have outliers and other aberrant observations. We provide a comparative analysis of several probabilistic artificial intelligence and machine learning techniques for supervised learning case studies. Broadly, Winsorization is a versatile technique for accounting for outliers in data. However, different probabilistic machine learning techniques have different levels of efficiency when used on outlier-prone data, with or without Winsorization. We find that Gaussian processes are extremely vulnerable to outliers, while deep learning techniques in general are more robust.
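As a rough illustration of the preprocessing step discussed above (not the authors' code), Winsorization can be applied with SciPy before fitting any of the models; the 5% limits and the synthetic data below are assumptions.

import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
x[:10] = 50.0                              # inject a few gross outliers
x_w = winsorize(x, limits=(0.05, 0.05))    # clip the lowest/highest 5% to the 5th/95th percentile values
print(x.max(), np.asarray(x_w).max())      # the winsorized maximum is much smaller

Any of the probabilistic models under comparison would then be trained on x_w instead of x.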

2017 ◽  
Vol 1 (3) ◽  
pp. 83 ◽  
Author(s):  
Chandrasegar Thirumalai ◽  
Ravisankar Koppuravuri

In this paper, we use deep neural networks to predict bike sharing usage from previous years' usage data. We choose deep neural networks because they can achieve higher accuracy. Deep neural networks differ from other machine learning techniques in that many hidden layers can be added to improve prediction accuracy, and the model can be trained toward the results we want. Many AI practitioners currently regard deep learning as the most capable AI technique available, and it has produced remarkable results. Here we apply it to predict the bike sharing usage of a rental company so that the company can make sound business decisions based on previous years' data.
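A minimal sketch of the kind of feed-forward network described above, written in Keras; the eight input features, layer sizes, and training settings are illustrative assumptions rather than the paper's configuration.

import numpy as np
from tensorflow import keras

# Hypothetical feature matrix (season, hour, temperature, ...) and rental counts.
X = np.random.rand(1000, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),   # further hidden layers can be stacked here
    keras.layers.Dense(1),                       # regression output: predicted usage
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)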


2017 ◽  
Vol 10 (13) ◽  
pp. 489 ◽  
Author(s):  
Saheb Ghosh ◽  
Sathis Kumar B ◽  
Kathir Deivanai

Deep learning methods are machine learning techniques, mostly built on artificial neural networks, that are widely used for pattern recognition. This project identifies whales from an underwater bioacoustics network using an efficient algorithm and data model, so that the locations of whales can be sent to ships travelling in the same region in order to avoid collisions and to disturb their natural habitat as little as possible. This paper shows the application of unsupervised machine learning techniques, with the help of a deep belief network and a manual feature extraction model, for better results.
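As a hedged illustration of unsupervised feature learning in this spirit (not the project's pipeline), scikit-learn's BernoulliRBM layers can be stacked ahead of a classifier, loosely mirroring a deep belief network; the spectrogram-frame inputs and labels below are synthetic placeholders.

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical spectrogram frames scaled to [0, 1]; label 1 = whale call present.
X = np.random.rand(500, 128)
y = np.random.randint(0, 2, size=500)

dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10, random_state=0)),  # unsupervised layer 1
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)),  # unsupervised layer 2
    ("clf", LogisticRegression(max_iter=1000)),                                              # supervised read-out
])
dbn_like.fit(X, y)
print(dbn_like.score(X, y))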


2021 ◽  
pp. 43-53
Author(s):  
Adnan Mohsin Abdulazeez

Owing to many new medical uses, ECG classification is in high demand. Several Machine Learning (ML) algorithms are currently available for ECG data processing and classification. The key limitation of these ML studies, however, is their reliance on heuristic hand-crafted or engineered features with shallow learning architectures. The difficulty lies in the risk of not selecting the features that give this ECG problem good classification accuracy. One suggested alternative is to use deep learning algorithms, in which the first layer of a CNN acts as a feature extractor. This paper summarizes some of the key machine learning approaches to ECG classification, assessing them in terms of the features they use, their classification accuracy, the physiologically important ECG biomarkers derived from machine learning techniques, and the statistical modeling and simulation that support them.
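A minimal sketch of the idea that a CNN's first layer can act as a learned feature extractor for raw ECG segments; the segment length, sampling rate, and class count below are assumptions, not values taken from the surveyed studies.

from tensorflow import keras

segment_len, n_classes = 360, 5   # assumed: 1-second ECG segments at 360 Hz, 5 beat classes
model = keras.Sequential([
    keras.Input(shape=(segment_len, 1)),
    keras.layers.Conv1D(16, kernel_size=7, activation="relu"),   # first layer learns features directly from the raw signal
    keras.layers.MaxPooling1D(pool_size=2),
    keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()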


Philosophies ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 27
Author(s):  
Jean-Louis Dessalles

Deep learning and other similar machine learning techniques have a huge advantage over other AI methods: they do function when applied to real-world data, ideally from scratch, without human intervention. However, they have several shortcomings that mere quantitative progress is unlikely to overcome. The paper analyses these shortcomings as resulting from the type of compression achieved by these techniques, which is limited to statistical compression. Two directions for qualitative improvement, inspired by comparison with cognitive processes, are proposed here, in the form of two mechanisms: complexity drop and contrast. These mechanisms are supposed to operate dynamically and not through pre-processing as in neural networks. Their introduction may bring the functioning of AI away from mere reflex and closer to reflection.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Rami R. Hallac ◽  
Jeon Lee ◽  
Mark Pressler ◽  
James R. Seaward ◽  
Alex A. Kane

Quantifying ear deformity using linear measurements and mathematical modeling is difficult due to the ear’s complex shape. Machine learning techniques, such as convolutional neural networks (CNNs), are well-suited for this role. CNNs are deep learning methods capable of finding complex patterns in medical images, automatically building solution models capable of machine diagnosis. In this study, we applied CNNs to automatically identify ear deformity from 2D photographs. Institutional review board (IRB) approval was obtained for this retrospective study to train and test the CNNs. Photographs of patients with and without ear deformity were obtained as standard of care in our photography studio. Profile photographs were obtained for one or both ears. A total of 671 profile pictures were used in this study, including 457 photographs of patients with ear deformity and 214 photographs of patients with normal ears. Photographs were cropped to the ear boundary and randomly divided into training (60%), validation (20%), and testing (20%) datasets. We modified the softmax classifier in the last layer of GoogLeNet, a deep CNN, to generate an ear deformity detection model in Matlab. All images were deemed of high quality and usable for training and testing. It took about 2 hours to train the system, and the training accuracy reached almost 100%. The test accuracy was about 94.1%. We demonstrate that deep learning has great potential in identifying ear deformity. These machine learning techniques hold promise for future use in evaluating treatment outcomes.
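The study itself modified GoogLeNet's final classifier in Matlab; the following is a hedged Python analogue of the same transfer-learning idea using torchvision, with the last fully connected layer replaced for a two-class (deformity vs. normal) output. The random tensor stands in for a batch of cropped ear photographs.

import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)  # downloads ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)                       # replace final classifier: two output classes
model.eval()

x = torch.randn(4, 3, 224, 224)                                     # placeholder batch of ear images
with torch.no_grad():
    logits = model(x)
print(logits.shape)                                                  # torch.Size([4, 2])

In practice the replaced layer (and optionally the backbone) would be fine-tuned on the training split of the ear photographs.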


2020 ◽  
Vol 79 (41-42) ◽  
pp. 30387-30395
Author(s):  
Stavros Ntalampiras

Predicting the emotional responses of humans to soundscapes is a relatively recent field of research with a wide range of promising applications. This work presents the design of two convolutional neural networks, namely ArNet and ValNet, each one responsible for quantifying the arousal and valence evoked by soundscapes. We build on the knowledge acquired from the application of traditional machine learning techniques in this domain and design a suitable deep learning framework. Moreover, we propose the use of artificially created mixed soundscapes, whose distributions lie between those of the available samples, a process that increases the variance of the dataset and leads to significantly better performance. The reported results outperform the state of the art on a soundscape dataset following Schafer’s standardized categorization, considering both the sound’s identity and the respective listening context.
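A rough sketch of the mixed-soundscape idea described above (not the authors' implementation): new training samples are formed by convexly mixing pairs of inputs and their arousal/valence labels. The mixing-weight range and the placeholder spectrograms are assumptions.

import numpy as np

def mix_soundscapes(x1, y1, x2, y2, rng=np.random.default_rng()):
    """Create a sample whose signal and (arousal, valence) label lie between two originals."""
    lam = rng.uniform(0.3, 0.7)                     # mixing weight; the range is an assumption
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Hypothetical log-mel spectrograms and 2-D emotion labels (arousal, valence).
x_a, x_b = np.random.rand(64, 128), np.random.rand(64, 128)
y_a, y_b = np.array([0.8, 0.2]), np.array([0.1, 0.9])
x_mix, y_mix = mix_soundscapes(x_a, y_a, x_b, y_b)  # an augmented sample between the two originals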


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects. One of the popular techniques is machine learning. This unprecedented study applies deep learning, a branch of machine learning, to detect and evaluate the severity of rail combined defects. The combined defects in the study are settlement and dipped joint. Features used to detect and evaluate the severity of combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. Deep learning techniques used in the study are deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For simplified data, features are extracted from raw data, which are the weight of rolling stock, the speed of rolling stock, and three peak and bottom accelerations from two wheels of rolling stock. In total, there are 14 features used as simplified data for developing the DNN model. For raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing and data extraction. Hyperparameter tuning is performed to ensure that the performance of each model is optimized. Grid search is used for performing hyperparameter tuning. To detect the combined defects, the study proposes two approaches. The first approach uses one model to detect settlement and dipped joint, and the second approach uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is good enough to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification is used to evaluate the severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of defects. From the study, the CNN model is suitable for evaluating dipped joint severity with an accuracy of 84% and mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity with an accuracy of 99% and MAE of 1.58 mm.
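As an illustrative stand-in for the grid-search step (not the paper's code), the sketch below tunes a small feed-forward network over the 14 simplified features with scikit-learn; the hyperparameter grid, label coding, and synthetic data are assumptions.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in for the 14 simplified features (rolling stock weight and speed,
# peak/bottom accelerations) and defect labels (0 = none, 1 = settlement, 2 = dipped joint, 3 = both).
X = np.random.rand(400, 14)
y = np.random.randint(0, 4, size=400)

param_grid = {
    "hidden_layer_sizes": [(32,), (64, 32)],   # illustrative architectures to search over
    "alpha": [1e-4, 1e-3],                     # illustrative regularization strengths
}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)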


2021 ◽  
Author(s):  
Rogini Runghen ◽  
Daniel B Stouffer ◽  
Giulio Valentino Dalla Riva

Collecting network interaction data is difficult. Non-exhaustive sampling and complex hidden processes often result in an incomplete data set. Thus, identifying potentially present but unobserved interactions is crucial both in understanding the structure of large scale data, and in predicting how previously unseen elements will interact. Recent studies in network analysis have shown that accounting for metadata (such as node attributes) can improve both our understanding of how nodes interact with one another and the accuracy of link prediction. However, the dimension of the object we need to learn to predict interactions in a network grows quickly with the number of nodes, so the problem becomes computationally and conceptually challenging for large networks. Here, we present a new predictive procedure combining a graph embedding method with machine learning techniques to predict interactions on the basis of nodes' metadata. Graph embedding methods project the nodes of a network onto a low-dimensional latent feature space. The position of the nodes in the latent feature space can then be used to predict interactions between nodes. Learning a mapping of the nodes' metadata to their position in a latent feature space corresponds to a classic, low-dimensional machine learning problem. In our current study we used the Random Dot Product Graph model to estimate the embedding of an observed network, and we tested different neural network architectures to predict the position of nodes in the latent feature space. Flexible machine learning techniques to map the nodes onto their latent positions allow us to account for multivariate and possibly complex nodes' metadata. To illustrate the utility of the proposed procedure, we apply it to a large dataset of tourist visits to destinations across New Zealand. We found that our procedure accurately predicts interactions for both existing nodes and nodes newly added to the network, while being computationally feasible even for very large networks. Overall, our study highlights that by exploiting the properties of a well understood statistical model for complex networks and combining it with standard machine learning techniques, we can simplify the link prediction problem when incorporating multivariate node metadata. Our procedure can be immediately applied to different types of networks, and to a wide variety of data from different systems. As such, both from a network science and data science perspective, our work offers a flexible and generalisable procedure for link prediction.
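A hedged sketch of the core procedure under simplifying assumptions: the Random Dot Product Graph latent positions are estimated by adjacency spectral embedding (a truncated SVD of the adjacency matrix), a simple regressor stands in for the neural networks that map node metadata to latent positions, and interaction probabilities come from dot products of positions. All data below are synthetic.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, d = 200, 3
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                        # a hypothetical undirected network

# Adjacency spectral embedding: standard estimator of RDPG latent positions.
U, s, _ = np.linalg.svd(A)
X_hat = U[:, :d] * np.sqrt(s[:d])                     # estimated latent position of each node
P_hat = np.clip(X_hat @ X_hat.T, 0.0, 1.0)            # predicted interaction probabilities

# Map node metadata to latent positions so that new nodes can be placed in the space.
meta = rng.random((n, 5))                             # hypothetical node metadata
mapper = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000).fit(meta, X_hat)
x_new = mapper.predict(rng.random((1, 5)))            # latent position of a newly added node
p_new = np.clip(x_new @ X_hat.T, 0.0, 1.0)            # its predicted interactions with existing nodes

In the study itself, more flexible neural network architectures play the role of the metadata-to-position mapper.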


Author(s):  
V Umarani ◽  
A Julian ◽  
J Deepa

Sentiment analysis has gained a lot of attention from researchers in recent years because it has been widely applied to a variety of application domains such as business, government, education, sports, tourism, biomedicine, and telecommunication services. Sentiment analysis is an automated computational method for studying or evaluating sentiments, feelings, and emotions expressed as comments, feedback, or critiques. The sentiment analysis process can be automated using machine learning techniques, which analyse text patterns faster. The supervised machine learning technique is the most used mechanism for sentiment analysis. The proposed work discusses the flow of the sentiment analysis process and investigates common supervised machine learning techniques such as Multinomial Naive Bayes, Bernoulli Naive Bayes, logistic regression, support vector machine, random forest, K-nearest neighbor, and decision tree, as well as deep learning techniques such as Long Short-Term Memory and Convolutional Neural Network. The work examines these learning methods on a standard data set, and the experimental results demonstrate the performance of the various classifiers in terms of precision, recall, F1-score, ROC curve, accuracy, running time, and k-fold cross-validation. This helps in appreciating the novelty of the several deep learning techniques and gives the user an overview of how to choose the right technique for their application.
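A minimal example of one of the compared supervised pipelines (TF-IDF features with Multinomial Naive Bayes) evaluated by k-fold cross-validation; the toy texts and labels are placeholders, not the standard data set used in the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder corpus; 1 = positive sentiment, 0 = negative sentiment.
texts = ["great service", "terrible product", "loved it", "worst experience", "not bad", "awful support"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
scores = cross_val_score(clf, texts, labels, cv=5, scoring="f1")   # 5-fold cross-validation on F1-score
print(scores.mean())

The other classifiers in the comparison (logistic regression, SVM, random forest, and so on) can be swapped into the same pipeline for a like-for-like evaluation.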

