Adaptive Prediction Method Based on Alternating Decision Forests with Considerations for Generalization Ability

2017 ◽ Vol 16 (3) ◽ pp. 384-391
Author(s): Shotaro Misawa, Kenta Mikawa, Masayuki Goto

2013 ◽ Vol 7 (3) ◽ pp. 683-685
Author(s): Anil Mishra, Savita Shiwani

Images are an important part of today's digital world. However, because of the large quantity of data needed to represent modern imagery, storing such data can be expensive, so work on efficient image storage (image compression) has the potential to reduce storage costs and enable new applications. Lossless image compression is used in medical, scientific, and professional video processing applications. Compression is the process of reducing data of a given size to a smaller size; storing and transmitting images in their original form can be problematic in terms of storage space and transmission speed, and compression makes both more efficient. In this paper we describe a new lossless adaptive-prediction-based algorithm for continuous-tone images, which exhibit spatial redundancy. Our approach is to develop a new backward-adaptive prediction technique, the Modified Gradient Adjusted Predictor (MGAP), to reduce spatial redundancy in an image. MGAP is based on the prediction method used in Context-based Adaptive Lossless Image Coding (CALIC). An adaptive selection method, which selects the predictor within each slope bin by minimum entropy, further improves compression performance.
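The abstract does not spell out MGAP's modification, but the predictor it builds on, CALIC's Gradient Adjusted Predictor (GAP), is well documented. Below is a minimal Python sketch of the classic GAP for one pixel; the neighbour layout and thresholds (80, 32, 8) follow the published CALIC heuristic, while the MGAP-specific parts (slope bins and minimum-entropy predictor selection) are omitted because the abstract does not describe them.

```python
import numpy as np

def gap_predict(img, r, c):
    """Classic GAP predictor from CALIC for pixel (r, c).
    Uses only causal neighbours (above / to the left), so a decoder
    can form the same prediction. Assumes r >= 2 and 2 <= c < width - 1."""
    W, WW = int(img[r, c - 1]), int(img[r, c - 2])
    N, NN = int(img[r - 1, c]), int(img[r - 2, c])
    NW = int(img[r - 1, c - 1])
    NE, NNE = int(img[r - 1, c + 1]), int(img[r - 2, c + 1])
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal activity
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical activity
    if dv - dh > 80:        # strong horizontal edge: copy the left pixel
        return float(W)
    if dh - dv > 80:        # strong vertical edge: copy the upper pixel
        return float(N)
    pred = (W + N) / 2 + (NE - NW) / 4
    if dv - dh > 32:        # weaker edges: blend toward W or N
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred

# A lossless coder entropy-codes the residual img[r, c] - gap_predict(...).
img = np.array([[10, 10, 12, 12],
                [10, 11, 12, 13],
                [11, 11, 13, 13],
                [11, 12, 13, 14]], dtype=np.uint8)
print(gap_predict(img, 2, 2))   # -> 12.0, against an actual value of 13
```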


2006 ◽ Vol 55 (4) ◽ pp. 1666
Author(s): Meng Qing-Fang, Zhang Qiang, Mu Wen-Ying

2014 ◽ Vol 945-949 ◽ pp. 2495-2498
Author(s): Fang Dai, Gao Hua Liao

At present, mines have realized only real-time monitoring of gas, not prediction of gas. Traditional prediction methods, such as subjective modeling and statistical prediction, have limitations. Because it can dynamically adjust the model parameters, an adaptive prediction method updates the prediction model in real time from the current data and the current prediction error, which makes it well suited to practical use. This paper treats gas emission as a chaotic time series, applies Volterra series prediction to it, and on this basis establishes a time-series prediction model. The results show that the method avoids both explicit phase-space reconstruction and neighborhood point searches, and runs in real time with very high efficiency.
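As a concrete illustration of the underlying technique, here is a minimal Python sketch of a second-order Volterra predictor adapted sample-by-sample with a normalized-LMS update, a common way to realize adaptive Volterra-series prediction. The memory length, step size, and the logistic-map test signal are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def volterra_features(window):
    """2nd-order Volterra input vector for a memory window:
    [bias, linear terms, all quadratic cross-products]."""
    quad = [window[i] * window[j]
            for i in range(len(window)) for j in range(i, len(window))]
    return np.concatenate(([1.0], window, quad))

def volterra_nlms_predict(x, memory=4, mu=0.5, eps=1e-6):
    """One-step-ahead adaptive prediction of series x.
    Returns predictions aligned with x[memory:]."""
    n_feat = 1 + memory + memory * (memory + 1) // 2
    w = np.zeros(n_feat)
    preds = []
    for n in range(memory, len(x)):
        u = volterra_features(x[n - memory:n])
        y = w @ u                         # predict x[n]
        e = x[n] - y                      # prediction error
        w += mu * e * u / (eps + u @ u)   # normalized-LMS weight update
        preds.append(y)
    return np.array(preds)

# Example on a chaotic logistic-map series (a stand-in for gas-emission data)
x = np.empty(500); x[0] = 0.4
for k in range(499):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])
p = volterra_nlms_predict(x)
print("RMSE:", np.sqrt(np.mean((p - x[4:]) ** 2)))
```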


Author(s): Li Mao, Deyu Qi, Weiwei Lin, Chaoyue Zhu

It is difficult to analyze the workload in complex cloud computing environments with a single prediction algorithm, as each algorithm has its own shortcomings. This paper proposes a self-adaptive prediction algorithm that combines the advantages of linear regression (LR) and a BP neural network to predict workloads in clouds. The main idea is to choose, for the coming workload, whichever of the two prediction methods is currently performing better. Experiments are conducted with workload traces from public cloud servers. The results show that the proposed algorithm predicts workloads with higher accuracy than either the BP neural network or LR alone. Furthermore, to apply the algorithm in a cloud data center, a dynamic cloud-resource scheduling architecture is designed to improve resource utilization and reduce energy consumption.
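A minimal sketch of the selection idea follows, with assumed details the abstract does not give: lagged workload values as features, a fixed recent validation window, and scikit-learn's LinearRegression and MLPRegressor standing in for LR and the BP network. Both models are fit on the history, their errors are compared on the most recent window, and the current winner forecasts the next value.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def lag_matrix(series, lags=5):
    """Turn a 1-D workload series into (X, y) pairs of lagged values."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    return X, series[lags:]

def adaptive_predict(series, lags=5, val_window=20):
    """Fit both models on the history, compare their error on the most
    recent `val_window` points, and let the winner predict the next value."""
    X, y = lag_matrix(np.asarray(series, dtype=float), lags)
    X_tr, y_tr = X[:-val_window], y[:-val_window]
    X_val, y_val = X[-val_window:], y[-val_window:]
    models = {
        "LR": LinearRegression(),
        "BP": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0),
    }
    errs = {}
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        errs[name] = np.mean((m.predict(X_val) - y_val) ** 2)
    best = min(errs, key=errs.get)   # the self-adaptive selection step
    next_x = np.asarray(series[-lags:], dtype=float).reshape(1, -1)
    return best, float(models[best].predict(next_x)[0])

# Toy usage: a noisy periodic CPU-load trace
t = np.arange(200)
load = 50 + 20 * np.sin(t / 8) + np.random.default_rng(0).normal(0, 2, 200)
print(adaptive_predict(load))
```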


2014 ◽ Vol 644-650 ◽ pp. 5341-5345
Author(s): Xuan Wang, Fa Zhang

In the last decade, RNA interference (RNAi) by small interfering RNAs (siRNAs) has become a hot topic in both molecular biology and bioinformatics. The success of RNAi gene silencing depends on the specificity of siRNAs for particular mRNA sequences. As a targeted gene can have thousands of potential siRNAs, finding the most efficient ones among them is a huge challenge. Previous approaches, such as rule scoring and machine learning, aim to optimize the selection of target siRNAs, but they show low accuracy or poor generalization ability when tested on new datasets. In this study, a siRNA efficacy prediction method using a BP neural network optimized by a genetic algorithm (BP-GA) is proposed. For more efficient siRNA candidate prediction, twenty rational design rules we defined were used to filter siRNA candidates and served as the input parameters of the neural network model. Furthermore, the network model was optimized by a genetic algorithm and by setting optimal training parameters. BP-GA was trained on 2431 siRNA records and tested on a new public dataset. Compared with existing rule-scoring and BP methods, BP-GA achieves higher prediction accuracy and better generalization ability.
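The abstract does not specify how the genetic algorithm is coupled to the BP network, so the sketch below shows only the GA half under stated assumptions: a simple GA (truncation selection, uniform crossover, Gaussian mutation) that directly evolves the flat weight vector of a small one-hidden-layer network, with random 20-feature inputs standing in for the paper's twenty design-rule features. The actual BP-GA presumably combines backpropagation training with GA-based parameter optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_predict(w, X, hidden=8):
    """One-hidden-layer tanh network; w packs all weights and biases."""
    n_in = X.shape[1]
    W1 = w[:n_in * hidden].reshape(n_in, hidden)
    b1 = w[n_in * hidden:n_in * hidden + hidden]
    W2, b2 = w[n_in * hidden + hidden:-1], w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def ga_train(X, y, hidden=8, pop=40, gens=60, sigma=0.1):
    """GA over flat weight vectors: truncation selection keeps the best
    half, children come from uniform crossover plus Gaussian mutation."""
    dim = X.shape[1] * hidden + hidden + hidden + 1
    population = rng.normal(0, 0.5, (pop, dim))
    for _ in range(gens):
        fitness = -np.array([np.mean((net_predict(w, X, hidden) - y) ** 2)
                             for w in population])
        elite = population[np.argsort(fitness)[-pop // 2:]]
        children = []
        for _ in range(pop - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            mask = rng.random(dim) < 0.5                        # crossover
            children.append(np.where(mask, a, b)
                            + rng.normal(0, sigma, dim))        # mutation
        population = np.vstack([elite, children])
    return max(population,
               key=lambda w: -np.mean((net_predict(w, X, hidden) - y) ** 2))

# Toy usage: random "design rule" features and synthetic efficacy targets
X = rng.random((120, 20)); y = X[:, :3].sum(axis=1) / 3
w = ga_train(X, y)
print("train MSE:", np.mean((net_predict(w, X) - y) ** 2))
```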

