Semantic Data Pre-Processing for Machine Learning Based Bankruptcy Prediction Computational Model

Author(s):  
Natalia Yerashenia ◽  
Alexander Bolotov ◽  
David Chan ◽  
Gabriele Pierantoni

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Deepa S.N.

Purpose
The models developed in previous studies were limited by occurrences of global minima. To address this, the present study develops a new intelligent ubiquitous computational model that learns with a gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. The ubiquitous machine learning computational model trains more effectively than regular supervised or unsupervised learning models with deep learning techniques, resulting in better learning and optimization for the considered problem domain of cloud-based Internet of Things (IoT). This study aims to improve network quality and the data accuracy rate during network transmission using the developed ubiquitous deep learning computational model.

Design/methodology/approach
A novel intelligent ubiquitous machine learning computational model is designed and modelled to maintain the optimal energy level of cloud IoT in sensor network domains. The model learns with a gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. A new unified deterministic sine-cosine algorithm is developed for parameter optimization of the weight factors in the ubiquitous machine learning model.

Findings
The newly developed ubiquitous model is used to estimate network energy and optimize it in the considered sensor network model. During progressive simulation, residual energy, network overhead, end-to-end delay, network lifetime and the number of live nodes are evaluated. The results show that the ubiquitous deep learning model yields better metrics owing to its appropriate cluster selection and minimized route selection mechanism.

Research limitations/implications
A novel ubiquitous computing model, combining a new optimization algorithm (the unified deterministic sine-cosine algorithm) with a deep learning technique, was developed and applied to maintain the optimal energy level of cloud IoT in sensor networks. The deterministic Levy flight concept is used in developing the new optimization technique, which determines the parametric weight values of the deep learning model. The ubiquitous deep learning model is designed with auto-encoders and decoders, and the weights of their corresponding layers are tuned to optimal values by the optimization algorithm. The modelled ubiquitous deep learning approach is applied to determine the network energy consumption rate and thereby optimize the energy level, increasing the lifetime of the considered sensor network model. For all the considered network metrics, the ubiquitous computing model proved more effective and versatile than approaches from earlier studies.

Practical implications
The developed ubiquitous computing model with deep learning techniques can be applied to any type of cloud-assisted IoT, including wireless sensor networks, ad hoc networks, radio access technology networks and heterogeneous networks. In practice, the model computes the optimal energy level of the cloud IoT for any considered network model, which helps maintain a better network lifetime and reduces the end-to-end delay of the networks.

Social implications
The proposed research helps reduce energy consumption and increases the network lifetime of cloud IoT-based sensor network models. This gives users at large a better transmission rate with minimized energy consumption and also reduces transmission delay.

Originality/value
The network optimization of cloud-assisted IoT sensor network models is modelled and analysed using machine learning models as a kind of ubiquitous computing system. Ubiquitous computing models with machine learning techniques build intelligent systems and enable users to make better and faster decisions. In the communication domain, predictive and optimization models created with machine learning accelerate new ways of solving problems. Given the importance of learning techniques, the ubiquitous computing model is designed around a deep learning strategy, and the learning mechanism adapts itself to attain a better network optimization model.
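The abstract names a gradient-descent-trained auto-encoder/decoder whose weight factors are tuned by a unified deterministic sine-cosine algorithm. The authors' unified deterministic variant is not reproduced here; the sketch below only illustrates the standard sine-cosine position update that such a variant builds on, applied to a placeholder weight-cost function. The objective, bounds and parameter names are illustrative assumptions.

```python
# Minimal sketch of the standard sine-cosine algorithm (SCA) position update.
# This is NOT the paper's unified deterministic variant with Levy flight steps;
# it only shows the basic sine/cosine search move around the best solution.
import numpy as np

def sca_minimize(objective, dim, bounds, pop_size=30, iterations=200, a=2.0, seed=0):
    """Minimise `objective` over a box using the basic sine-cosine update."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fitness = np.apply_along_axis(objective, 1, pop)
    best = pop[fitness.argmin()].copy()
    best_fit = fitness.min()

    for t in range(iterations):
        r1 = a - t * a / iterations                     # search radius shrinks over time
        r2 = rng.uniform(0, 2 * np.pi, size=(pop_size, dim))
        r3 = rng.uniform(0, 2, size=(pop_size, dim))
        r4 = rng.uniform(size=(pop_size, 1))            # chooses sine vs cosine branch

        step = np.abs(r3 * best - pop)
        pop = np.where(r4 < 0.5,
                       pop + r1 * np.sin(r2) * step,    # sine branch
                       pop + r1 * np.cos(r2) * step)    # cosine branch
        pop = np.clip(pop, low, high)

        fitness = np.apply_along_axis(objective, 1, pop)
        if fitness.min() < best_fit:
            best_fit = fitness.min()
            best = pop[fitness.argmin()].copy()
    return best, best_fit

# Illustrative use: tuning a small weight vector against a placeholder cost function.
weights, cost = sca_minimize(lambda w: np.sum(w ** 2), dim=5, bounds=(-1.0, 1.0))
print(weights, cost)
```

In the paper's setting the objective would instead score the network energy model parameterized by the deep learning model's layer weights.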


2012 ◽  
pp. 535-578
Author(s):  
Jie Tang ◽  
Duo Zhang ◽  
Limin Yao ◽  
Yi Li

This chapter aims to give a thorough investigation of techniques for automatic semantic annotation. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise and community boundaries. However, the lack of annotated semantic data is a bottleneck to making the Semantic Web vision a reality, so it is necessary to automate the process of semantic annotation. In the past few years there has been a rapid expansion of activity in the semantic annotation area, and many methods have been proposed for automating the annotation process. However, due to the heterogeneity and lack of structure of Web data, automated discovery of targeted or unexpected knowledge still presents many challenging research problems. In this chapter, we study the problems of semantic annotation and introduce the state-of-the-art methods for dealing with them. We also give a brief survey of systems developed based on these methods. Several real-world applications of semantic annotation are introduced as well. Finally, some emerging challenges in semantic annotation are discussed.
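As a concrete illustration of what semantic annotation produces, the sketch below records that a text mention in a Web document denotes a person entity, expressed as RDF triples with rdflib. The namespace, URIs and property names are illustrative, not taken from the chapter or any particular annotation system.

```python
# Minimal sketch: the output of semantic annotation expressed as RDF triples
# that other applications can share and reuse (the Semantic Web use case).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/annotation/")   # illustrative namespace

g = Graph()
g.bind("ex", EX)

# Annotate the string "Jie Tang" found in a Web page as denoting a person entity.
mention = EX["doc1#offset_120_128"]    # where the mention occurs in the document
entity = EX["person/JieTang"]          # the entity the mention denotes

g.add((mention, RDF.type, EX.Mention))
g.add((mention, EX.denotes, entity))
g.add((entity, RDF.type, EX.Person))
g.add((entity, RDFS.label, Literal("Jie Tang")))

print(g.serialize(format="turtle"))
```

Automatic semantic annotation is the task of producing such triples from unstructured Web text without manual effort.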


2016 ◽  
Vol 49 (2) ◽  
pp. 325-341 ◽  
Author(s):  
Dong Zhao ◽  
Chunyu Huang ◽  
Yan Wei ◽  
Fanhua Yu ◽  
Mingjing Wang ◽  
...  

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Min Sue Park ◽  
Hwijae Son ◽  
Chongseok Hyun ◽  
Hyung Ju Hwang

2021 ◽  
Author(s):  
Sian Xiao ◽  
Hao Tian ◽  
Peng Tao

Allostery is a fundamental process in regulating protein activity. The discovery, design and development of allosteric drugs demand better identification of allosteric sites. Several computational methods have previously been developed to predict allosteric sites using static pocket features and protein dynamics. Here, we present a computational model using automated machine learning for allosteric site prediction. Our model, PASSer2.0, advances the previous results and performs well across multiple indicators, with 89.2% of allosteric pockets appearing among the top 3 positions. The trained machine learning model has been integrated into the Protein Allosteric Sites Server (https://passer.smu.edu) to facilitate allosteric drug discovery.
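The reported figure (89.2% of allosteric pockets among the top 3 positions) is a top-3 hit rate over pockets ranked by predicted allosteric score. The sketch below shows, under assumed data structures, how such a metric could be computed; it is not PASSer2.0's code, and the pocket scores here are invented for illustration.

```python
# Hedged sketch of a top-3 hit-rate evaluation for ranked pocket predictions.
from typing import Dict, List, Tuple

def top3_hit_rate(proteins: Dict[str, List[Tuple[str, float, bool]]]) -> float:
    """Each protein maps to a list of (pocket_id, predicted_score, is_allosteric)."""
    hits = 0
    for pockets in proteins.values():
        ranked = sorted(pockets, key=lambda p: p[1], reverse=True)  # highest score first
        if any(is_allo for _, _, is_allo in ranked[:3]):            # known site in top 3?
            hits += 1
    return hits / len(proteins)

# Toy example with two proteins and invented scores.
example = {
    "P1": [("p1", 0.9, True), ("p2", 0.4, False), ("p3", 0.2, False)],
    "P2": [("p1", 0.8, False), ("p2", 0.7, False), ("p3", 0.1, True), ("p4", 0.05, False)],
}
print(top3_hit_rate(example))  # 0.5: only P1's allosteric pocket ranks in the top 3
```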


2019 ◽  
Vol 7 (3) ◽  
pp. 104-111
Author(s):  
C. Punitha Devi ◽  
T. Vigneswari ◽  
C. Nancy ◽  
E. Priyanka ◽  
R. Yamuna

Author(s):  
Chih-Fong Tsai ◽  
Yu-Hsin Lu ◽  
Yu-Feng Hsu

It is very important for financial institutions to be able to accurately predict business failure. In the literature, a number of bankruptcy prediction models have been developed based on statistical and machine learning techniques. In particular, many machine learning techniques, such as neural networks and decision trees, have shown better prediction performance than statistical ones. However, advanced machine learning techniques, such as classifier ensembles and stacked generalization, have not been fully examined and compared in terms of their bankruptcy prediction performance. The aim of this chapter is to compare two different machine learning techniques, one statistical approach, two types of classifier ensembles, and three stacked generalization classifiers over three related datasets. The experimental results show that classifier ensembles based on weighted voting perform best in terms of prediction accuracy. On the other hand, for Type II errors, stacked generalization and single classifiers on average perform better than classifier ensembles.
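For readers unfamiliar with the two ensemble styles being compared, the sketch below places a weighted (soft) voting ensemble and a stacked-generalization classifier side by side in scikit-learn. The base learners, voting weights and synthetic imbalanced data are illustrative assumptions, not the chapter's experimental setup or datasets.

```python
# Minimal sketch: weighted-vote ensemble vs. stacked generalization in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic, class-imbalanced data standing in for a bankruptcy dataset.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base = [
    ("mlp", MLPClassifier(max_iter=1000, random_state=0)),   # neural network
    ("tree", DecisionTreeClassifier(random_state=0)),        # decision tree
    ("lr", LogisticRegression(max_iter=1000)),               # statistical baseline
]

# Classifier ensemble combined by weighted (soft) voting over predicted probabilities.
voting = VotingClassifier(estimators=base, voting="soft", weights=[2, 1, 1])

# Stacked generalization: a meta-learner trained on the base learners' outputs.
stacking = StackingClassifier(estimators=base, final_estimator=LogisticRegression(max_iter=1000))

for name, model in [("weighted voting", voting), ("stacking", stacking)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```

A full comparison in the chapter's spirit would also report Type I and Type II error rates per model, not only accuracy.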

