Convolution Neural Network: A Shallow Dive in to Deep Neural Net Technology

It is always beneficial to reassess previous work in order to generate interest and develop understanding of the subject at hand. In computer vision, tasks such as feature extraction, classification, and segmentation require efficient measurement and assessment of image structures (medical images, natural images, etc.). Numerous image processing techniques are available, but noise and other variable artifacts make these tasks difficult, and various deep machine learning algorithms are used to perform complex recognition and computer vision tasks. Recently, Convolutional Neural Networks (CNNs), the backbone of numerous deep learning algorithms, have shown state-of-the-art performance in high-level computer vision tasks such as object detection, object recognition, classification, machine translation, semantic segmentation, speech recognition, scene labelling, medical imaging, robotics and control, natural language processing (NLP), bioinformatics, cybersecurity, and many others. A convolutional neural network is, in essence, an attempt to combine mathematics with computer science, with biology as the icing on top. CNNs work in two parts: the first part is the mathematics that supports feature extraction, and the second part handles classification and prediction at the pixel level. This review is intended for those who want to gain a complete understanding of CNNs and their development from early work to the modern state-of-the-art deep learning systems. The paper is organized in three steps: the first step introduces the concept along with the necessary background information; the second step explains related work and other highlights proposed by various authors; the third step presents the complete layer-wise architecture of convolutional networks. The last section is followed by a detailed discussion of improvements to, and challenges of, these deep learning techniques. Most papers considered for this review were published after 2012, when the modern era of convolutional neural networks and deep learning began.
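
To make the two-part structure concrete, below is a minimal sketch of a small CNN, written in PyTorch as an assumed framework; the layer sizes and class count are illustrative and not taken from the paper.

```python
# Minimal sketch of the two-part CNN structure described above,
# assuming PyTorch; layer sizes are illustrative only.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Part 1: convolutional feature extraction
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 16x16 -> 8x8
        )
        # Part 2: classification on the extracted features
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN()
logits = model(torch.randn(1, 3, 32, 32))   # one dummy RGB image
print(logits.shape)                         # torch.Size([1, 10])
```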

Landslides can be devastating to human life and property, and the increasing rate of human settlement in mountainous areas has raised safety concerns. Landslides have caused economic losses of between 1% and 2% of GDP in many developing countries. In this study, we discuss a deep learning approach to detecting landslides. Convolutional Neural Networks are used for feature extraction in our proposed model. As no exact and precise dataset was available for feature extraction, a new dataset was built for testing the model. We tested and compared our proposed model against other machine-learning algorithms such as Logistic Regression, Random Forest, AdaBoost, K-Nearest Neighbors, and Support Vector Machine. Our proposed deep learning model produces a classification accuracy of 96.90%, outperforming the classical machine-learning algorithms.
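
A hedged sketch of the comparison pipeline described above: CNN-derived feature vectors passed to the classical classifiers listed. Since the paper's dataset is not public, the feature matrix and labels below are random placeholders and the resulting scores are meaningless; only the structure of the comparison is illustrated.

```python
# Sketch: CNN feature vectors fed to several classical classifiers,
# each evaluated with five-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))          # stand-in for CNN feature vectors
y = rng.integers(0, 2, size=200)         # 1 = landslide, 0 = no landslide

classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```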


Author(s):  
Bhavana D. ◽  
K. Chaitanya Krishna ◽  
Tejaswini K. ◽  
N. Venkata Vikas ◽  
A. N. V. Sahithya

The task of an image caption generator is mainly to extract the features and activities of an image and generate human-readable captions that describe the objects in the image. Describing the contents of an image requires knowledge of both natural language processing and computer vision. The features can be extracted using convolutional neural networks; the authors use transfer learning with the Xception model. Xception stands for "extreme inception" and has a feature extraction base of 36 convolution layers, and it shows more accurate results when compared with other CNNs. Recurrent neural networks are used to describe the image and generate accurate sentences: the feature vector extracted by the CNN is fed to an LSTM. The Flickr 8k dataset, in which the data is properly labeled, is used to train the network. Given an input image, the model is able to generate captions that closely describe the activities carried out in the image. Further, the authors use BLEU scores to validate the model.
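
A rough sketch of the encoder-decoder arrangement described above, assuming TensorFlow/Keras; the vocabulary size, caption length, and layer widths are placeholder assumptions, not the authors' settings.

```python
# Sketch: Xception encoder (transfer learning) + LSTM decoder for captioning.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, max_len = 8000, 34          # assumed values for a Flickr 8k setup

# Encoder: Xception with global average pooling gives a 2048-dim feature
# vector per image; features would be precomputed with cnn.predict(images).
cnn = tf.keras.applications.Xception(include_top=False, pooling="avg",
                                     weights="imagenet")
cnn.trainable = False

# Decoder: project the image features, embed the partial caption, run an LSTM,
# and merge the two streams to predict the next word.
img_in = layers.Input(shape=(2048,))
img_vec = layers.Dense(256, activation="relu")(img_in)

cap_in = layers.Input(shape=(max_len,))
cap_emb = layers.Embedding(vocab_size, 256, mask_zero=True)(cap_in)
cap_vec = layers.LSTM(256)(cap_emb)

merged = layers.add([img_vec, cap_vec])
hidden = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(vocab_size, activation="softmax")(hidden)

caption_model = Model([img_in, cap_in], out)
caption_model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```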


2019 ◽  
Vol 63 (4) ◽  
pp. 243-252 ◽  
Author(s):  
Jaret Hodges ◽  
Soumya Mohan

Machine learning algorithms are used in language processing, automated driving, and prediction. Though the theory of machine learning has existed since the 1950s, it was not until the advent of advanced computing that its potential began to be realized. Gifted education is a field where machine learning has yet to be utilized, even though one of its underlying problems is classification, an area where learning algorithms have become exceptionally accurate. We provide a brief overview of machine learning with a focus on neural networks and supervised learning, followed by a demonstration using simulated data and neural networks for classification, with a practical explanation of the mechanics of the neural network and the associated R code. Implications for gifted education are then discussed. Finally, the limitations of supervised learning are discussed. Code used in this article can be found at https://osf.io/4pa3b/
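
The article's own demonstration code is in R (at the OSF link above); the sketch below is an analogous, assumption-laden Python version of the same idea: simulated data, a small neural network, and a supervised classification task.

```python
# Sketch: supervised classification of simulated data with a small neural net.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=42)           # simulated data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=42)
net.fit(X_tr, y_tr)
print("held-out accuracy:", net.score(X_te, y_te))
```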


2019 ◽  
Vol 3 (2) ◽  
pp. 31-40 ◽  
Author(s):  
Ahmed Shamsaldin ◽  
Polla Fattah ◽  
Tarik Rashid ◽  
Nawzad Al-Salihi

At present, deep learning is widely used in a broad range of arenas. The convolutional neural network (CNN) is becoming the star of deep learning, as it gives the best and most precise results when solving real-world problems. In this work, a brief description of the applications of CNNs in two areas is presented: first, in computer vision generally, that is, scene labeling, face recognition, action recognition, and image classification; second, in natural language processing, that is, the fields of speech recognition and text classification.


2020 ◽  
Vol 70 (2) ◽  
pp. 234-238
Author(s):  
K.S. Imanbaev ◽  

Currently, deep learning of neural networks is one of the most popular methods for speech recognition, natural language processing, and computer vision. The article reviews the history of deep learning of neural networks and the current state of the field in general. We consider algorithms used for deep training of neural networks, followed by fine-tuning using the method of backpropagation of errors. Neural networks with large numbers of hidden layers are very difficult to train because of exploding and vanishing gradients. In this paper, we consider methods that successfully train neural networks with large numbers of layers (more than one hundred) despite vanishing gradients. A review of well-known libraries used for successful deep learning of neural networks is also conducted.
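
The abstract does not name a specific technique for training networks with more than one hundred layers; residual (skip) connections are one widely used remedy for vanishing gradients, sketched below in PyTorch as an illustrative assumption.

```python
# Sketch: a residual block whose identity shortcut lets gradients flow
# directly to earlier layers, making very deep stacks trainable.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # The skip connection adds the input back to the block's output.
        return torch.relu(x + self.body(x))

# Stacking many such blocks remains trainable even at great depth.
deep_net = nn.Sequential(*[ResidualBlock(16) for _ in range(50)])
out = deep_net(torch.randn(1, 16, 8, 8))
print(out.shape)
```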


2021 ◽  
Vol 2115 (1) ◽  
pp. 012042
Author(s):  
S Premanand ◽  
Sathiya Narayanan

Abstract The primary objective of this paper is to classify health-related data without the separate feature extraction step that typically hinders performance and reliability in machine learning. The question underlying our work is whether tree-based machine learning algorithms can achieve better results on health-related data without the kind of learned feature extraction found in deep learning. This study performs classification of health-related medical data with a tree-based machine learning approach: after pre-processing, and without feature extraction, i.e., working from the raw data signal, the machine learning algorithms achieve strong results. The presented approach yields better results even when compared with some advanced deep learning architecture models. The results demonstrate that the overall classification accuracies of the tree-based machine learning algorithms Random Forest, XGBoost, LightGBM, and CatBoost for the normal and abnormal conditions of the datasets were 97.88%, 98.23%, 98.03%, and 95.57%, respectively.
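
A hedged sketch of the four-way comparison described above. The data below is a random placeholder for the paper's pre-processed raw-signal medical data, so the printed accuracies are meaningless; the external packages xgboost, lightgbm, and catboost are assumed to be installed.

```python
# Sketch: four tree-based ensembles trained directly on pre-processed
# raw-signal samples, compared with five-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 100))          # stand-in for raw signal windows
y = rng.integers(0, 2, size=300)         # 0 = normal, 1 = abnormal

models = {
    "RandomForest": RandomForestClassifier(),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "LightGBM": LGBMClassifier(),
    "CatBoost": CatBoostClassifier(verbose=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```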


Author(s):  
Amit Kumar Tyagi ◽  
Poonam Chahal

With recent developments in technology and the integration of millions of Internet of Things devices, a great deal of data is being generated every day (known as Big Data). This data is needed to support the growth of many organizations and applications such as e-healthcare. We are also entering an era of a smart world, in which robotics will take a place in most applications (to help solve the world's problems). Implementing robotics in applications such as medicine and automobiles is a goal of computer vision. Computer vision (CV) is realized through several components: artificial intelligence (AI), machine learning (ML), and deep learning (DL). Here, machine learning and deep learning techniques and algorithms are used to analyze Big Data. Today, organizations such as Google and Facebook use ML techniques to search for particular data or recommend posts. Hence, the requirements of computer vision are fulfilled through these three terms: AI, ML, and DL.


2020 ◽  
Author(s):  
Thomas R. Lane ◽  
Daniel H. Foil ◽  
Eni Minerali ◽  
Fabio Urbina ◽  
Kimberley M. Zorn ◽  
...  

Machine learning methods are attracting considerable attention from the pharmaceutical industry for use in drug discovery and applications beyond. In recent studies we have applied multiple machine learning algorithms and modeling metrics, and in some cases compared molecular descriptors, to build models for individual targets or properties on a relatively small scale. Several research groups have used large numbers of datasets from public databases such as ChEMBL in order to evaluate machine learning methods of interest to them; the largest of these studies used on the order of 1400 datasets. We have now extracted well over 5000 datasets from ChEMBL for use with the ECFP6 fingerprint and compared our proprietary software Assay Central™ with random forest, k-Nearest Neighbors, support vector classification, naïve Bayesian, AdaBoosted decision trees, and deep neural networks (3 levels). Model performance was assessed using an array of five-fold cross-validation metrics including area under the curve, F1 score, Cohen's kappa, and Matthews correlation coefficient. Based on ranked normalized scores for the metrics or datasets, all methods appeared comparable, while the distance from the top indicated that Assay Central™ and support vector classification were comparable. Unlike prior studies, which have placed considerable emphasis on deep neural networks (deep learning), no advantage was seen in this case, where minimal tuning was performed for any of the methods. If anything, Assay Central™ may have been at a slight advantage, as the activity cutoff for each of the over 5000 datasets, representing over 570,000 unique compounds, was based on Assay Central™ performance, but support vector classification appears to be a strong competitor. We also apply Assay Central™ to prospective predictions for PXR and hERG to further validate these models. This work currently appears to be the largest comparison of machine learning algorithms to date. Future studies will likely evaluate additional databases, descriptors, and algorithms, as well as further refine methods for evaluating and comparing models.
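
A rough sketch of the evaluation protocol described above: ECFP6 fingerprints (Morgan fingerprints with radius 3 via RDKit) and five-fold cross-validation with the named metrics over several scikit-learn classifiers. Assay Central™ is proprietary and not represented here, the deep neural network is omitted, and the SMILES strings and labels below are toy placeholders rather than ChEMBL data.

```python
# Sketch: ECFP6 fingerprints + five-fold cross-validated comparison of
# several classifiers using AUC, F1, Cohen's kappa, and MCC.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.model_selection import cross_validate
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import make_scorer, cohen_kappa_score, matthews_corrcoef

def ecfp6(smiles, n_bits=2048):
    # Morgan fingerprint with radius 3 corresponds to ECFP6.
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=n_bits))

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN", "CCCC", "c1ccncc1"] * 20
labels = np.tile([0, 1, 0, 1, 0, 1], 20)      # placeholder activity labels
X = np.array([ecfp6(s) for s in smiles])

scoring = {"auc": "roc_auc", "f1": "f1",
           "kappa": make_scorer(cohen_kappa_score),
           "mcc": make_scorer(matthews_corrcoef)}
models = {"RF": RandomForestClassifier(), "kNN": KNeighborsClassifier(),
          "SVC": SVC(), "NB": BernoulliNB(), "AdaBoost": AdaBoostClassifier()}
for name, model in models.items():
    scores = cross_validate(model, X, labels, cv=5, scoring=scoring)
    print(name, {k: round(v.mean(), 3)
                 for k, v in scores.items() if k.startswith("test_")})
```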

