optimal prediction
Recently Published Documents


TOTAL DOCUMENTS

267
(FIVE YEARS 72)

H-INDEX

27
(FIVE YEARS 3)

2022 ◽  
Vol 14 (1) ◽  
pp. 0-0

Identifying chronic obstructive pulmonary disease (COPD) severity stages is of great importance for controlling the related mortality rates and reducing the associated costs. This study aims to build prediction models for COPD stages and to compare the relative performance of five machine learning algorithms in order to determine the optimal prediction algorithm. The research is based on data collected from a private hospital in Egypt for the two calendar years 2018 and 2019. The F1 score, specificity, sensitivity, accuracy, positive predictive value, and negative predictive value were the performance measures used to compare the five algorithms. The analysis included 211 patients' records. Our results show that the best-performing algorithm in most of the disease stages is the PNN, which achieves the optimal prediction accuracy and can therefore be considered a powerful prediction tool for decision makers in predicting the severity stages of COPD.
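The six measures named above all derive from the per-stage confusion-matrix counts. A minimal sketch of how they are computed (the counts below are made-up illustrative numbers, not the study's data):

```python
def stage_metrics(tp, fp, tn, fn):
    """Compute the six comparison measures from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # a.k.a. recall
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy, "f1": f1}

# Hypothetical counts for one severity stage treated one-vs-rest:
m = stage_metrics(tp=40, fp=10, tn=45, fn=5)
```

In a multi-stage setting, each stage is scored one-vs-rest this way, and the per-stage values are then compared across algorithms.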


2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

The weather has a serious impact on the environment and on day-to-day life. In recent years, many algorithms have been proposed to predict the weather. Although various machine learning algorithms predict the weather, the optimal prediction of weather has not been addressed. Optimal weather prediction is required because weather has a serious impact on human life, so this domain invites an optimal system that can forecast the weather and thereby save lives. To optimally predict changes in the weather, a metaheuristic algorithm, the Whale Optimization Algorithm (WOA), is integrated with the K-Nearest Neighbor (K-NN) machine learning algorithm. Whale optimization is inspired by the social behavior of whales. The proposed WOAK-NN is compared with plain K-NN. The integration of WOA with K-NN aims to maximize accuracy and F-measure and to minimize mean absolute error. The time complexity of WOAK-NN is also compared with that of K-NN, and it is observed that when the dataset is large, WOAK-NN requires less time for an optimal prediction.
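The abstract does not spell out how WOA and K-NN are coupled, so the following is only a heavily simplified sketch of the general idea: a search procedure (here, random sampling standing in for the whale-inspired metaheuristic) tunes a K-NN hyperparameter, the neighbor count k, against validation accuracy. The tiny 1-D dataset and labels are invented for illustration.

```python
import random
from collections import Counter

def knn_predict(train, k, x):
    """Classify x by majority vote among the k nearest 1-D training points."""
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

def accuracy(train, val, k):
    return sum(knn_predict(train, k, x) == y for x, y in val) / len(val)

# Invented toy data: a 1-D feature (say, normalized humidity) -> weather label.
train = [(0.1, "rain"), (0.2, "rain"), (0.3, "rain"),
         (0.8, "sun"), (0.9, "sun"), (1.0, "sun")]
val = [(0.15, "rain"), (0.95, "sun")]

random.seed(0)
# Metaheuristic stand-in: sample candidate k values, keep the best performer.
best_k = max((random.choice([1, 3, 5]) for _ in range(10)),
             key=lambda k: accuracy(train, val, k))
```

A real WOA would instead update a population of candidate solutions via the encircling/spiral operators; the fitness function (validation accuracy here) plays the same role in either case.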


2021 ◽  
Vol 12 (1) ◽  
pp. 366
Author(s):  
Jessie C. Martín Sujo ◽  
Elisabet Golobardes i Ribé ◽  
Xavier Vilasís Cardona

A new predictive support tool for the publishing industry is presented in this note. It consists of a combined model of Artificial Intelligence techniques (CAIT) that seeks the optimal prediction of the number of book copies, finding the best segmentation of the book market using data from social networks and the web. Predicted sales appear to be more accurate when machine learning techniques such as clustering (in this specific case, KMeans) are applied rather than the current publishing-industry experts' segmentation. This identification has important implications for the publishing sector, since the forecast will adjust to the behavior of the stakeholders rather than to the skills or knowledge acquired by the experts, which may be insufficient and/or vary over time.
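The segmentation step the note attributes to KMeans can be illustrated with a plain Lloyd's-algorithm implementation. Everything below is generic and invented for illustration (the feature vectors, e.g. social-media mentions and web hits per title, are made up); it is not the authors' CAIT pipeline.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means over a list of equal-length numeric tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize from the data
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster's mean.
        new_centers = []
        for j, cl in enumerate(clusters):
            if cl:
                new_centers.append(tuple(sum(dim) / len(cl) for dim in zip(*cl)))
            else:
                new_centers.append(centers[j])   # keep an empty cluster's center
        centers = new_centers
    return centers

# Invented per-title features, e.g. (social mentions, web hits), scaled:
titles = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
segments = kmeans(titles, k=2)
```

Each resulting center is a market segment's prototype; a sales model can then be fitted per segment instead of per expert-defined category.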


2021 ◽  
Vol 72 ◽  
pp. 613-665
Author(s):  
Vu-Linh Nguyen ◽  
Eyke Hüllermeier

In contrast to conventional (single-label) classification, the setting of multilabel classification (MLC) allows an instance to belong to several classes simultaneously. Thus, instead of selecting a single class label, predictions take the form of a subset of all labels. In this paper, we study an extension of the setting of MLC in which the learner is allowed to partially abstain from a prediction, that is, to deliver predictions on some but not necessarily all class labels. This option is useful in cases of uncertainty, where the learner is not confident enough about the entire label set. Adopting a decision-theoretic perspective, we propose a formal framework of MLC with partial abstention, which builds on two main building blocks: first, the extension of the underlying MLC loss functions so as to accommodate abstention in a proper way, and second, the problem of optimal prediction, that is, finding the Bayes-optimal prediction minimizing this generalized loss in expectation. It is well known that different (generalized) loss functions may have different risk-minimizing predictions, and finding the Bayes predictor typically comes down to solving a computationally complex optimization problem. In the most general case, given a prediction of the (conditional) joint distribution of possible labelings, the minimizer of the expected loss needs to be found over a number of candidates that is exponential in the number of class labels. We elaborate on properties of risk minimizers for several commonly used (generalized) MLC loss functions, show them to have a specific structure, and leverage this structure to devise efficient methods for computing Bayes predictors. Experimentally, we show MLC with partial abstention to be effective in the sense of reducing loss when the learner is allowed to abstain.
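One concrete instance of the "specific structure" mentioned above: for losses that decompose over labels, such as Hamming loss, the Bayes predictor can be computed label by label from the marginal probabilities, avoiding the exponential search. The sketch below assumes a fixed abstention cost c per label (the paper's actual abstention penalty may be parameterized differently): predicting label i as relevant costs 1 - p_i in expectation, predicting it irrelevant costs p_i, and abstaining costs c.

```python
def bayes_hamming_with_abstention(probs, c):
    """Label-wise Bayes predictor under Hamming loss with per-label abstention.

    probs: marginal probabilities of relevance, one per label.
    c: assumed fixed cost of abstaining on a single label.
    Returns a list with entries 1 (relevant), 0 (irrelevant), or None (abstain).
    """
    prediction = []
    for p in probs:
        costs = {1: 1.0 - p, 0: p, None: c}   # expected loss of each action
        prediction.append(min(costs, key=costs.get))
    return prediction

# Abstain exactly where the marginal is too close to 0.5 to beat cost c:
pred = bayes_hamming_with_abstention([0.9, 0.5, 0.1], c=0.2)
```

The learner thus abstains precisely on the labels whose marginals fall in the uncertainty band [c, 1 - c] around 0.5, which is the decision-theoretic intuition behind partial abstention.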


2021 ◽  
pp. 1183-1193
Author(s):  
Yingxun Wang ◽  
Tang Xuyang ◽  
Cai Zhihao ◽  
Jiang Zhao

2021 ◽  
Author(s):  
Nagaraj Honnikoll ◽  
Ishwar Baidari

Boosting is a well-known technique for converting a group of weak learners into a powerful ensemble. To reach this objective, the modules are trained with distinct data samples and the hypotheses are combined in order to achieve an optimal prediction. Applying the boosting technique in the online setting is a newer approach, motivated by its success in the offline setting. This work presents a new online boosting method. We use the mean error rate of the individual base learners to achieve an effective weight distribution over the instances that closely matches the behavior of OzaBoost. Experimental results show that, in most situations, the proposed method achieves better accuracies, outperforming other state-of-the-art methods.
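For readers unfamiliar with the OzaBoost baseline the abstract compares against, here is a compact sketch in the spirit of Oza-style online boosting: each instance flows through the base learners with a weight λ that shrinks when a learner gets it right and grows when it errs, and training intensity is drawn from Poisson(λ). The base learner, dataset, and the train-at-least-once tweak are all illustrative simplifications, not the authors' method.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for sampling Poisson(lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

class MeanStump:
    """Toy online base learner: running mean per class, predict nearest mean."""
    def __init__(self):
        self.sums = {0: 0.0, 1: 0.0}
        self.counts = {0: 0, 1: 0}

    def learn(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1

    def predict(self, x):
        def mean(c):
            return self.sums[c] / self.counts[c] if self.counts[c] else 0.0
        return min((0, 1), key=lambda c: abs(x - mean(c)))

class OnlineBooster:
    def __init__(self, n_learners=3, seed=0):
        self.learners = [MeanStump() for _ in range(n_learners)]
        self.lam_correct = [1e-9] * n_learners   # weight mass handled correctly
        self.lam_wrong = [1e-9] * n_learners     # weight mass handled wrongly
        self.rng = random.Random(seed)

    def learn(self, x, y):
        lam = 1.0
        for i, h in enumerate(self.learners):
            # Simplification vs. OzaBoost: train at least once per instance.
            for _ in range(max(1, poisson(lam, self.rng))):
                h.learn(x, y)
            total = self.lam_correct[i] + self.lam_wrong[i]
            if h.predict(x) == y:
                self.lam_correct[i] += lam
                lam *= (total + lam) / (2 * self.lam_correct[i])  # shrink weight
            else:
                self.lam_wrong[i] += lam
                lam *= (total + lam) / (2 * self.lam_wrong[i])    # grow weight

    def predict(self, x):
        votes = {0: 0.0, 1: 0.0}
        for i, h in enumerate(self.learners):
            err = self.lam_wrong[i] / (self.lam_correct[i] + self.lam_wrong[i])
            err = min(max(err, 1e-9), 1.0 - 1e-9)
            votes[h.predict(x)] += math.log((1.0 - err) / err)
        return max(votes, key=votes.get)

booster = OnlineBooster()
for _ in range(5):  # replay a tiny invented labeled stream
    for x, y in [(0.1, 0), (0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1), (1.0, 1)]:
        booster.learn(x, y)
```

The proposed method's difference, per the abstract, is in how the instance weights are derived (from the base learners' mean error rates) rather than in this overall stream-then-vote structure.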


SAGE Open ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 215824402110581
Author(s):  
Shaikh Abdul Waheed ◽  
P. Sheik Abdul Khader

Earlier studies established the role of demographic and temperamental features (DTFs) in the adaptation of childhood stuttering. However, these studies fell short of examining the latent interrelationships among DTFs and did not utilize them in predicting this disorder. This research article examines the latent interrelationships among DTFs in relation to childhood stuttering. The purpose of the present study is also to analyze whether DTFs can be utilized to predict the likely risk of this speech disorder. Historical data on childhood stuttering was utilized for the experiments involved in this research. Structural Equation Modeling (SEM) was applied to examine latent interrelationships among DTFs in relation to stuttering. A predictive-analytics approach was employed to determine whether the DTFs of children can be utilized to predict the likely risk of childhood stuttering. SEM-based path analysis explored potential latent interrelationships among DTFs by separating them into background and intermediate categories. Utilizing the same set of DTFs, predictive models were able to classify children into stuttering and non-stuttering groups with optimal prediction accuracy. The outcomes of this study show how historical data related to stuttering can be utilized to offer healthcare solutions for individuals with a stuttering disorder. They also suggest that historical data on stuttering is a rich source of hidden trends and patterns concerning this disorder. These hidden trends and patterns can be captured by applying different types of structural and predictive modeling to understand the cause-and-effect relationships among variables in relation to stuttering. SEM utilizes the cause-and-effect relationships among variables to explore the latent interrelationships between them, while predictive modeling utilizes them to predict the possible risk of stuttering with optimal prediction accuracy.


Author(s):  
Bingchun Liu ◽  
Xiaogang Yu ◽  
Qingshan Wang ◽  
Shijie Zhao ◽  
Lei Zhang

NO2 pollution has a serious impact on people's production and daily life, and managing it is very difficult. Accurate prediction of NO2 concentration is of great significance for air-pollution management. In this paper, an NO2 concentration prediction model based on a long short-term memory neural network (LSTM) is constructed, with the daily NO2 concentration in Beijing as the prediction target and atmospheric pollutants and meteorological factors as the input indicators. Firstly, the parameters and architecture of the model are adjusted to obtain the optimal prediction model. Secondly, three different sets of input indicators are built on the basis of the optimal prediction model and fed into the model for learning. Finally, the impact of the different input indicators on the accuracy of the model is judged. The results show that the LSTM model has high application value in NO2 concentration prediction. Among the three input indicators, the maximum temperature and O3 improve the prediction accuracy, while the NO2 historical low-frequency data reduce it.
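To make the recurrence behind such a model concrete, here is a single-unit LSTM step written out in plain Python: the forget, input, and output gates decide how much of the cell state to keep, add, and expose at each time step. The gate weights and the input sequence below are arbitrary illustrative numbers, not a trained NO2 model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step with scalar input and state.

    W maps gate name -> (input weight, recurrent weight, bias).
    """
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    c = f * c_prev + i * g          # new cell state mixes old memory and input
    h = o * math.tanh(c)            # exposed hidden state
    return h, c

# Arbitrary illustrative weights; a real model learns these from data.
W = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [0.2, 0.4, 0.6]:   # e.g. a short normalized pollutant sequence
    h, c = lstm_step(x, h, c, W)
```

In the study's setting, each step's input would be a vector of that day's pollutant and meteorological indicators, and a final dense layer would map the hidden state to the next day's NO2 concentration.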

