Micro Clustering Methodology for Document Objects using Deep Learning Techniques

Clustering and classification of large data is a very challenging task in data mining. Various machine learning and deep learning systems have been proposed by many researchers on different datasets. Data volume, data size, and the structure of the data may affect the time complexity of the system. This paper describes a new document object classification approach using deep learning (DL) and proposes a recurrent neural network (RNN) for classification with a micro-clustering approach. TF-IDF and a density-based approach are used to store the best features. The planned work uses a supervised learning method and extracts a feature set, called BK, of the desired classes. Once the training part is completed, the system proceeds to classify the test instances with the help of the planned classification algorithm. The recurrent neural network categorizes each test object according to its weights. The system can work on heterogeneous datasets and generates micro-clusters according to the classified results. An experimental analysis was also carried out against classical machine learning algorithms. The proposed algorithm shows higher accuracy than the existing density-based approach on different datasets.
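As a rough illustration of the pipeline described above, the sketch below combines TF-IDF feature extraction with a small recurrent classifier. The corpus, class labels, and network sizes are placeholders, not the authors' actual configuration.

```python
# Minimal sketch of TF-IDF features feeding a small recurrent classifier;
# documents, labels, and hyperparameters are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from tensorflow.keras import layers, models

docs = ["deep learning for document clustering",
        "density based micro clustering of text objects",
        "invoice scanned as a document image"]          # placeholder corpus
labels = np.array([0, 0, 1])                            # placeholder classes

# Keep the highest-weighted TF-IDF terms as the stored feature set.
vectorizer = TfidfVectorizer(max_features=50)
X = vectorizer.fit_transform(docs).toarray()

# Treat each TF-IDF vector as a sequence of single values so an RNN can read it.
X_seq = X[..., np.newaxis]

model = models.Sequential([
    layers.SimpleRNN(16, input_shape=(X_seq.shape[1], 1)),
    layers.Dense(2, activation="softmax"),               # one unit per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_seq, labels, epochs=5, verbose=0)
```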

Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1576 ◽  
Author(s):  
Li Zhu ◽  
Lianghao Huang ◽  
Linyu Fan ◽  
Jinsong Huang ◽  
Faming Huang ◽  
...  

Landslide susceptibility prediction (LSP) modeling is an important and challenging problem. Landslide features are generally uncorrelated or nonlinearly correlated, resulting in limited LSP performance when leveraging conventional machine learning models. In this study, a deep-learning-based model using the long short-term memory (LSTM) recurrent neural network and conditional random field (CRF) in cascade-parallel form was proposed for making LSPs based on remote sensing (RS) images and a geographic information system (GIS). The RS images are the main data sources of landslide-related environmental factors, and a GIS is used to analyze, store, and display spatial big data. The cascade-parallel LSTM-CRF consists of frequency ratio values of environmental factors in the input layers, cascade-parallel LSTM for feature extraction in the hidden layers, and cascade-parallel full connection for classification and CRF for landslide/non-landslide state modeling in the output layers. The cascade-parallel form of LSTM can extract features from different layers and merge them into concrete features. The CRF is used to calculate the energy relationship between two grid points, and the extracted features are further smoothed and optimized. As a case study, the cascade-parallel LSTM-CRF was applied to Shicheng County of Jiangxi Province in China. A total of 2709 landslide grid cells were recorded and 2709 non-landslide grid cells were randomly selected from the study area. The results show that, compared with mainstream traditional machine learning algorithms, such as the multilayer perceptron, logistic regression, and decision tree, the proposed cascade-parallel LSTM-CRF had a higher landslide prediction rate (positive predictive rate: 72.44%, negative predictive rate: 80%, total predictive rate: 75.67%). In conclusion, the proposed cascade-parallel LSTM-CRF is a novel data-driven deep learning model that overcomes the limitations of traditional machine learning algorithms and achieves promising results for making LSPs.
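A simplified, single-branch stand-in for the LSTM part of the model is sketched below: frequency-ratio values of the environmental factors are treated as a short sequence, and a dense softmax layer predicts landslide/non-landslide. The CRF smoothing stage and the cascade-parallel wiring are omitted, and the number of factors is an assumption.

```python
# Single-branch LSTM stand-in for the cascade-parallel LSTM-CRF; the CRF stage
# is omitted and the factor count and inputs are placeholders.
import numpy as np
from tensorflow.keras import layers, models

n_cells, n_factors = 5418, 12                 # 2709 + 2709 grid cells; factor count assumed
X = np.random.rand(n_cells, n_factors, 1)     # placeholder frequency-ratio inputs
y = np.random.randint(0, 2, size=n_cells)     # placeholder landslide labels

model = models.Sequential([
    layers.LSTM(32, input_shape=(n_factors, 1)),   # feature extraction
    layers.Dense(2, activation="softmax"),         # landslide / non-landslide
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
```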


2019 ◽  
Vol 8 (2) ◽  
pp. 5073-5081

Prediction of student performance is a significant part of processing educational data, and machine learning algorithms play a leading role in this process. Deep learning is one of the important branches of machine learning. In this paper, we applied a deep learning technique for predicting the academic excellence of students using R programming. The Keras and TensorFlow libraries were utilized to build the neural network model on the Kaggle dataset. The data was separated into a training set and a testing set. The neural network model was plotted using the neuralnet method, and the deep learning model was created with two hidden layers using the ReLU activation function and one output layer using the softmax activation function. After a fine-tuning process carried out until the changes stabilized, the model produced an accuracy of 85%.
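The architecture described (two ReLU hidden layers and a softmax output) looks roughly like the sketch below. The original work built the model through the Keras/TensorFlow interface in R; this version uses the Python interface, and the input width, layer sizes, and class count are assumptions.

```python
# Sketch of the described architecture: two ReLU hidden layers and a softmax
# output. Layer sizes, input width, and class count are assumed for illustration.
from tensorflow.keras import layers, models

n_features, n_classes = 10, 3     # assumed input width and number of grade classes

model = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```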


Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and the recognition of emotions in speech (RER) is the most in-demand part of them. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer studies were carried out on the RAVDESS database, which contains emotional human speech. RAVDESS is a dataset containing 7356 files. The recordings contain the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions divided into male and female) for a total of 1440 samples (speech only). To train machine learning algorithms and deep neural networks to recognize emotions, the existing audio recordings must be pre-processed so as to extract the main characteristic features of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and the characteristics of the frequency spectrum of the audio recordings. In this paper, computer studies of various neural network models for emotion recognition are carried out on the data described above. In addition, machine learning algorithms were used for comparative analysis. Thus, the following models were trained during the experiments: logistic regression (LR), a classifier based on the support vector machine (SVM), decision tree (DT), random forest (RF), gradient boosting over trees (XGBoost), the convolutional neural network CNN, the recurrent neural network RNN (ResNet18), as well as an ensemble of convolutional and recurrent networks, Stacked CNN-RNN. The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms used. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
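The feature extraction step described above (MFCCs, chroma coefficients, and spectral characteristics, averaged over time) can be sketched as follows. The file path, feature sizes, and the choice of a mel spectrogram as the "frequency spectrum" characteristic are assumptions, not the paper's exact settings.

```python
# Illustrative audio feature extraction for speech emotion recognition:
# MFCCs, chroma, and mel-spectrum features averaged over time per file.
import numpy as np
import librosa

def extract_features(path, n_mfcc=40):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # cepstral coefficients
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)         # chroma coefficients
    mel = librosa.feature.melspectrogram(y=y, sr=sr)         # spectral characteristics (assumed)
    # Average each feature over time to get one fixed-length vector per recording.
    return np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1), mel.mean(axis=1)])

features = extract_features("ravdess_sample.wav")            # placeholder file name
print(features.shape)
```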


Kursor ◽  
2020 ◽  
Vol 10 (4) ◽  
Author(s):  
Felisia Handayani ◽  
Metty Mustikasari

Sentiment analysis is computational research on the opinions of many people that are expressed textually about a particular topic. Twitter is the most popular communication tool among Internet users today for expressing their opinions. Deep learning is a solution that allows computers to learn from experience and understand the world in terms of a hierarchy of concepts. Deep learning aims to replace manual feature design with learning. The development of deep learning has produced a set of algorithms that focus on learning data representations. The recurrent neural network is one of the machine learning methods included in deep learning because the data is processed through multiple layers. An RNN can also recall inputs through its internal memory, which makes it suitable for machine learning problems involving sequential data. The study aims to test models built from tweets with positive, negative, and neutral sentiment to determine the accuracy of those models. The models were created using a recurrent neural network applied to tweet classification to label the sentiment classes of Indonesian-language tweet data. From the experiments conducted, the best test results on the tweet data with the RNN method, evaluated using a confusion matrix, were a precision of 0.618, a recall of 0.507, and an accuracy of 0.722 on 3000 tweets with an 80:20 split between training and testing data.
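The sketch below shows how precision, recall, and accuracy of the kind reported above follow from a confusion matrix. The labels and predictions are made up for illustration, not the study's tweet data.

```python
# Confusion-matrix-based evaluation with placeholder three-class labels
# (0 = negative, 1 = neutral, 2 = positive).
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, accuracy_score)

y_true = [1, 0, 2, 1, 0, 2, 1, 1, 0, 2]
y_pred = [1, 0, 2, 0, 0, 1, 1, 1, 0, 2]

print(confusion_matrix(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall:   ", recall_score(y_true, y_pred, average="macro"))
print("accuracy: ", accuracy_score(y_true, y_pred))
```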


2022 ◽  
pp. 1559-1575
Author(s):  
Mário Pereira Véstias

Machine learning is the study of algorithms and models that enable computing systems to do tasks based on pattern identification and inference. When it is difficult or infeasible to develop an algorithm for a particular task, machine learning algorithms can provide an output based on previous training data. A well-known machine learning model is deep learning. The most recent deep learning models are based on artificial neural networks (ANN). There exist several types of artificial neural networks, including the feedforward neural network, the Kohonen self-organizing neural network, the recurrent neural network, the convolutional neural network, and the modular neural network, among others. This article focuses on convolutional neural networks, with a description of the model, the training and inference processes, and their applicability. It also gives an overview of the most used CNN models and what to expect from the next generation of CNN models.
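A minimal convolutional neural network of the kind discussed in the article is shown below: convolution and pooling layers for feature extraction followed by dense layers for classification. The input size, layer widths, and class count are illustrative only.

```python
# Minimal CNN: convolution + pooling for feature extraction, dense layers for
# classification. Shapes and sizes are placeholders.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # e.g. 10 output classes
])
model.summary()
```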


2022 ◽  
pp. 27-50
Author(s):  
Rajalaxmi Prabhu B. ◽  
Seema S.

A lot of user-generated data is available these days from large platforms, blogs, websites, and other review sites. These data are usually unstructured. Analyzing sentiments from these data automatically is considered an important challenge. Several machine learning algorithms have been implemented to examine opinions from large data sets. A lot of research has been undertaken in understanding machine learning approaches for analyzing sentiments. Machine learning mainly depends on the data required for model building, and hence suitable feature extraction techniques also need to be applied. In this chapter, several deep learning approaches, their challenges, and future issues are addressed. Deep learning techniques are considered important in predicting the sentiments of users. This chapter aims to analyze deep learning techniques for predicting sentiments and to explain the importance of several approaches for mining opinions and determining sentiment polarity.


Author(s):  
Nirmal Yadav

Applying machine learning in the life sciences, especially diagnostics, has become a key area of focus for researchers. Combining machine learning with traditional algorithms provides a unique opportunity to deliver better solutions for patients. In this paper, we present the results of applying the Ridgelet transform to retina images to enhance the blood vessels, and then using machine learning algorithms to identify cases of diabetic retinopathy (DR). The Ridgelet transform handles line singularities of the image function better and thus helps to reduce artefacts along the edges of the image. Compared with earlier known image enhancement methods, such as the Wavelet transform and the Contourlet transform, the Ridgelet transform provided satisfactory results. The transformed image, produced using the Ridgelet transform with pre-processing, quantifies the amount of information in the dataset and efficiently enhances the generation of feature vectors in the convolutional neural network (CNN). In this study, a sample of fundus photographs obtained from a publicly available dataset was processed. In pre-processing, CLAHE was applied first, followed by filtering and application of the Ridgelet transform to the patches to improve image quality. The processed image was then used for statistical feature detection and classified by a deep learning method to detect DR images in the dataset. The successful classification ratio was 98.61%. These results indicate that the Ridgelet-transformed fundus image enables better detection by combining a transform-based algorithm with deep learning.
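The early pre-processing steps described above can be sketched as follows: CLAHE followed by a simple filter on a fundus image. The file name is a placeholder, the choice of a median filter is an assumption, and the Ridgelet transform step (not available in standard libraries) is omitted.

```python
# Sketch of the pre-processing chain: CLAHE then filtering on a fundus image.
# The Ridgelet transform step is omitted; file name and filter are placeholders.
import cv2

img = cv2.imread("fundus_sample.png", cv2.IMREAD_GRAYSCALE)   # placeholder file

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)                  # contrast-limited adaptive histogram equalization

filtered = cv2.medianBlur(enhanced, 3)       # simple noise filtering before the Ridgelet step
cv2.imwrite("fundus_preprocessed.png", filtered)
```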


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA41-WA52 ◽  
Author(s):  
Dario Grana ◽  
Leonardo Azevedo ◽  
Mingliang Liu

Among the large variety of mathematical and computational methods for estimating reservoir properties such as facies and petrophysical variables from geophysical data, deep machine-learning algorithms have gained significant popularity for their ability to obtain accurate solutions for geophysical inverse problems in which the physical models are partially unknown. Solutions of classification and inversion problems are generally not unique, and uncertainty quantification studies are required to quantify the uncertainty in the model predictions and determine the precision of the results. Probabilistic methods, such as Monte Carlo approaches, provide a reliable approach for capturing the variability of the set of possible models that match the measured data. Here, we focused on the classification of facies from seismic data and benchmarked the performance of three different algorithms: recurrent neural network, Monte Carlo acceptance/rejection sampling, and Markov chain Monte Carlo. We tested and validated these approaches at the well locations by comparing classification predictions to the reference facies profile. The accuracy of the classification results is defined as the mismatch between the predictions and the log facies profile. Our study found that when the training data set of the neural network is large enough and the prior information about the transition probabilities of the facies in the Monte Carlo approach is not informative, machine-learning methods lead to more accurate solutions; however, the uncertainty of the solution might be underestimated. When some prior knowledge of the facies model is available, for example, from nearby wells, Monte Carlo methods provide solutions with similar accuracy to the neural network and allow a more robust quantification of the uncertainty of the solution.
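A toy acceptance/rejection sampler in the spirit of the Monte Carlo approach mentioned above is sketched below: candidate facies profiles are drawn from a Markov-chain prior with assumed transition probabilities and accepted in proportion to an artificial likelihood measuring agreement with a reference well-log profile. Everything here (two facies, the transition matrix, the likelihood) is an invented illustration, not the paper's workflow.

```python
# Toy acceptance/rejection sampling of facies profiles from a Markov-chain
# prior; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
depth = 20                                   # number of samples along the well
T = np.array([[0.9, 0.1],                    # assumed facies transition probabilities
              [0.2, 0.8]])                   # (e.g. shale -> shale/sand, sand -> shale/sand)

def sample_profile():
    p = [rng.integers(2)]
    for _ in range(depth - 1):
        p.append(rng.choice(2, p=T[p[-1]]))
    return np.array(p)

reference = sample_profile()                 # stand-in for the log facies profile

def log_like(profile):
    # Artificial likelihood: reward pointwise agreement with the reference.
    return np.log(1.5) * np.sum(profile == reference)

accepted, max_ll, tries = [], log_like(reference), 0
while len(accepted) < 50 and tries < 100_000:
    tries += 1
    cand = sample_profile()
    if np.log(rng.random()) < log_like(cand) - max_ll:   # accept/reject step
        accepted.append(cand)

posterior = np.mean(accepted, axis=0)        # pointwise probability of facies 1
accuracy = np.mean((posterior > 0.5) == reference)
print(f"accepted {len(accepted)} profiles, accuracy vs. reference: {accuracy:.2f}")
```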


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Daniel Griffith ◽  
Alex S Holehouse

The rise of high-throughput experiments has transformed how scientists approach biological questions. The ubiquity of large-scale assays that can test thousands of samples in a day has necessitated the development of new computational approaches to interpret this data. Among these tools, machine learning approaches are increasingly being utilized due to their ability to infer complex nonlinear patterns from high-dimensional data. Despite their effectiveness, machine learning (and in particular deep learning) approaches are not always accessible or easy to implement for those with limited computational expertise. Here we present PARROT, a general framework for training and applying deep learning-based predictors on large protein datasets. Using an internal recurrent neural network architecture, PARROT is capable of tackling both classification and regression tasks while only requiring raw protein sequences as input. We showcase the potential uses of PARROT on three diverse machine learning tasks: predicting phosphorylation sites, predicting transcriptional activation function of peptides generated by high-throughput reporter assays, and predicting the fibrillization propensity of amyloid beta with data generated by deep mutational scanning. Through these examples, we demonstrate that PARROT is easy to use, performs comparably to state-of-the-art computational tools, and is applicable for a wide array of biological problems.
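As a generic sketch of the kind of architecture PARROT wraps, the code below trains a bidirectional LSTM classifier on one-hot-encoded protein sequences. This is not PARROT's actual interface; the sequences, labels, padding length, and layer sizes are invented for illustration.

```python
# Generic bidirectional LSTM over one-hot-encoded protein sequences; not
# PARROT's API. Sequences, labels, and sizes are placeholders.
import numpy as np
from tensorflow.keras import layers, models

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
aa_index = {a: i for i, a in enumerate(AMINO_ACIDS)}

def one_hot(seq, length=30):
    x = np.zeros((length, len(AMINO_ACIDS)))
    for i, a in enumerate(seq[:length]):
        x[i, aa_index[a]] = 1.0
    return x

seqs = ["MKTAYIAKQR", "GSSGSSGSSG", "LLLVVAAIIF"]   # placeholder sequences
labels = np.array([1, 0, 0])                        # placeholder binary labels

X = np.stack([one_hot(s) for s in seqs])

model = models.Sequential([
    layers.Bidirectional(layers.LSTM(16), input_shape=X.shape[1:]),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, verbose=0)
```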


2021 ◽  

Water, being a precious commodity for every person around the world, needs to be continuously monitored for quality to ensure safety during usage. The water data collected from sensors in water plants are used for water quality assessment. Anomalies present in the water data seriously affect the performance of water quality assessment and hence need to be addressed. In this regard, water data collected from sensors have been subjected to various anomaly detection approaches guided by machine learning (ML) and deep learning frameworks. Standard machine learning algorithms have been used extensively in water quality analysis, and these algorithms in general converge quickly. Considering that manual feature selection has to be done for ML algorithms, a deep learning (DL) algorithm that involves implicit feature learning is proposed. A hybrid model that takes advantage of both is formulated, and it is shown to be data invariant as well. This novel hybrid convolutional neural network (CNN) and extreme learning machine (ELM) approach is used to detect the presence of anomalies in sensor-collected water data. The experiment with the proposed CNN-ELM model is carried out using the publicly available GECCO 2019 dataset. The findings show that the model improves the water quality assessment of the collected sensor data by detecting anomalies efficiently, achieving an F1 score of 0.92. This model can be implemented in water quality assessment.
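The ELM half of the hybrid described above can be illustrated with the minimal sketch below: hidden-layer weights are set randomly and only the output weights are solved with a pseudo-inverse. The features and labels are random placeholders standing in for CNN-extracted features of the sensor water data.

```python
# Minimal extreme learning machine (ELM) in NumPy: random hidden weights,
# output weights solved in closed form. Inputs are placeholder features.
import numpy as np

rng = np.random.default_rng(42)
n, d, hidden = 500, 16, 64                      # samples, feature size, hidden units
X = rng.standard_normal((n, d))                 # placeholder CNN-extracted features
y = (rng.random(n) < 0.1).astype(float)         # placeholder anomaly labels (~10%)

# Random input weights and biases (never trained).
W = rng.standard_normal((d, hidden))
b = rng.standard_normal(hidden)
H = np.tanh(X @ W + b)                          # hidden-layer activations

# Output weights via Moore-Penrose pseudo-inverse (the only "training" step).
beta = np.linalg.pinv(H) @ y

pred = (H @ beta > 0.5).astype(float)
print("training accuracy:", np.mean(pred == y))
```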

