Optimization of PBFT Algorithm Based on Improved C4.5

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Xiandong Zheng ◽  
Wenlong Feng ◽  
Mengxing Huang ◽  
Siling Feng

Aiming at problems of the PBFT algorithm in consortium blockchains, such as high communication overhead, low consensus efficiency, and random selection of the leader node, an optimized PBFT algorithm is proposed. Firstly, the algorithm improves C4.5 by introducing a weighted average information gain to overcome the mutual influence between conditional attributes and improve classification accuracy. Then the nodes are classified with the improved C4.5, and those with a high trust level are selected to form the main consensus group. Finally, an integral voting mechanism is introduced to determine the leader node. Experimental results show that, compared with the traditional PBFT algorithm, the improved PBFT algorithm greatly reduces the number of communication rounds, which effectively alleviates the excessive traffic that arises as the number of nodes grows in traditional PBFT, significantly reduces the probability of the leader node acting maliciously, and improves consensus efficiency.
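The abstract does not give the paper's exact weighting formula, but the underlying quantities are the standard C4.5 ones. A minimal sketch of entropy-based information gain together with a hypothetical attribute-weighted average (the weights here are illustrative placeholders, not the paper's scheme):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Information gain of splitting on the attribute at attr_index."""
    base = entropy(labels)
    n = len(labels)
    # partition the labels by the value of the chosen attribute
    parts = {}
    for row, y in zip(rows, labels):
        parts.setdefault(row[attr_index], []).append(y)
    conditional = sum(len(p) / n * entropy(p) for p in parts.values())
    return base - conditional

def weighted_average_gain(rows, labels, weights):
    """Hypothetical weighted average of per-attribute information gains."""
    gains = [information_gain(rows, labels, i) for i in range(len(weights))]
    return sum(w * g for w, g in zip(weights, gains)) / sum(weights)
```

Here attribute 0 of a row perfectly separating the classes yields the maximal gain, while a constant attribute yields zero, so the weighted average sits between them.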

2003 ◽  
Vol 17 (1) ◽  
pp. 1-14 ◽  
Author(s):  
Peggy A. Hite ◽  
John Hasseldine

This study analyzes a random selection of Internal Revenue Service (IRS) office audits from October 1997 to July 1998, the type of audit that concerns most taxpayers. Taxpayers engage paid preparers in order to avoid this type of audit and to avoid any resulting tax adjustments. The study examines whether there are more audit adjustments and penalty assessments on tax returns with paid-preparer assistance than on tax returns without it. By comparing the frequency of adjustments on IRS office audits, the study finds significantly fewer tax adjustments on paid-preparer returns than on self-prepared returns. Moreover, CPA-prepared returns resulted in fewer audit adjustments than non-CPA-prepared returns.


2021 ◽  
Vol 13 (8) ◽  
pp. 4236
Author(s):  
Tim Lu

The selection of advanced manufacturing technologies (AMTs) is an essential yet complex decision that requires careful consideration of various performance criteria. In real-world applications, there are cases in which observations are difficult to measure precisely, observations are represented as linguistic terms, or the data must be estimated. Since the growth of the engineering sciences has been the key driver of the increased utilization of AMTs, this paper develops a fuzzy network data envelopment analysis (DEA) for the selection of AMT alternatives, considering multiple decision-makers (DMs) and weight restrictions when the input and output data are represented as fuzzy numbers. By viewing the multiple DMs as a network structure, the data provided by each DM can be taken into account in evaluating the overall performance of AMT alternatives. In the solution process, we obtain the overall efficiency score and the individual DMs' efficiency scores of each AMT alternative simultaneously, and a relationship in which the former is a weighted average of the latter is also derived. Since the final evaluation results of AMTs are fuzzy numbers, a ranking procedure is employed to determine the most preferred one. An example is used to illustrate the applicability of the proposed methodology.
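As a small illustration of the fuzzy arithmetic involved, the sketch below takes the weighted average of triangular fuzzy numbers and defuzzifies by centroid for the final ranking step; the paper's full network-DEA model is considerably richer than this:

```python
def tfn_weighted_average(tfns, weights):
    """Weighted average of triangular fuzzy numbers (l, m, u), component-wise."""
    total = sum(weights)
    return tuple(
        sum(w * t[i] for w, t in zip(weights, tfns)) / total
        for i in range(3)
    )

def centroid(tfn):
    """Simple centroid defuzzification, used here to rank fuzzy scores."""
    return sum(tfn) / 3
```

Averaging two DMs' fuzzy efficiency scores, e.g. (0.6, 0.7, 0.8) and (0.8, 0.9, 1.0) with equal weights, gives (0.7, 0.8, 0.9), whose centroid 0.8 can then be compared across alternatives.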


2012 ◽  
Vol 22 (03) ◽  
pp. 1250007 ◽  
Author(s):  
PEDRO RODRÍGUEZ ◽  
MARÍA CECILIA RIVARA ◽  
ISAAC D. SCHERSON

A novel parallelization of the Lepp-bisection algorithm for triangulation refinement on multicore systems is presented. Randomization and wise use of the memory hierarchy are shown to substantially improve algorithm performance. Given a list of selected triangles to be refined, random selection of candidates together with pre-fetching of Lepp submeshes leads to a scalable and efficient multicore parallel implementation. The quality of the refinement is shown to be preserved.
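The abstract gives no implementation details, but the randomized candidate selection it describes might be sketched as follows, with each worker drawn a random triangle from the selected list so that concurrent workers are unlikely to touch overlapping Lepp submeshes (all names here are hypothetical):

```python
import random

def pick_candidates(selected_triangles, n_workers, seed=None):
    """Randomly assign one candidate triangle per worker.

    Random selection spreads workers across the mesh, reducing the
    chance that two workers refine overlapping Lepp submeshes and
    therefore contend on shared mesh data.
    """
    rng = random.Random(seed)
    k = min(n_workers, len(selected_triangles))
    return rng.sample(selected_triangles, k)
```

A deterministic seed is useful for reproducing a run; in production the default system entropy would be used instead.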


2014 ◽  
Vol 20 (2) ◽  
pp. 193-209 ◽  
Author(s):  
Guiwu Wei ◽  
Xiaofei Zhao

With respect to decision-making problems that involve probabilities, immediate probabilities, and information represented with linguistic labels, some new decision analysis approaches are proposed. Firstly, we develop three new aggregation operators: the generalized probabilistic 2-tuple weighted average (GP-2TWA) operator, the generalized probabilistic 2-tuple ordered weighted average (GP-2TOWA) operator, and the generalized immediate probabilistic 2-tuple ordered weighted average (GIP-2TOWA) operator. These operators combine the weighted average (WA) operator, the ordered weighted average (OWA) operator, linguistic information, probabilistic information, and immediate probabilistic information. They are quite useful because they can assess the uncertain information within the problem by using both linguistic labels and probabilistic information that reflects the attitudinal character of the decision maker. In these approaches, alternative appraisal values are calculated by the aggregation of 2-tuple linguistic information. Thus, the ranking of alternatives or the selection of the most desirable alternative(s) is obtained by the comparison of 2-tuple linguistic information. Finally, we give an illustrative example about the selection of strategies to verify the developed approach and to demonstrate its feasibility and practicality.
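The 2-tuple linguistic model represents an assessment as a pair (s_i, α): a linguistic term index plus a symbolic translation α in [-0.5, 0.5). A minimal sketch of the plain 2-tuple weighted average that underlies these operators (the generalized and probabilistic variants add parameterized means and probability weights on top of this core):

```python
# assumed 5-term linguistic scale, indices 0..4
TERMS = ["very poor", "poor", "fair", "good", "very good"]

def to_beta(two_tuple):
    """Convert (term index, symbolic translation) to a numeric value."""
    idx, alpha = two_tuple
    return idx + alpha

def from_beta(beta):
    """Convert a numeric value back to the nearest 2-tuple."""
    idx = round(beta)
    return (idx, beta - idx)

def twa(two_tuples, weights):
    """2-tuple weighted average in the Herrera-Martinez representation."""
    total = sum(weights)
    beta = sum(w * to_beta(t) for w, t in zip(weights, two_tuples)) / total
    return from_beta(beta)
```

For example, equally weighting (2, 0.2) ("fair", shifted up) and (4, -0.4) ("very good", shifted down) gives a value near "good" with a small negative translation, which can be compared directly against other aggregated 2-tuples for ranking.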


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kent McFadzien ◽  
Lawrence W. Sherman

Purpose – The purpose of this paper is to demonstrate a "maintenance pathway" for ensuring a low false negative rate in closing investigations unlikely to lead to a clearance (detection).
Design/methodology/approach – A randomised controlled experiment testing solvability factors for non-domestic cases of minor violence.
Findings – A random selection of 788 cases, of which 428 would otherwise have been screened out, was sent forward for full investigation. The number of cases actually detected was 22. A total of 19 of these were from the 360 recommended for allocation. This represents an improvement in accuracy over the original tests of the model three years earlier.
Research limitations/implications – This study shows how the safety of an investigative triage tool can be checked on a continuous basis for accuracy in predicting the cases unlikely to be solved if referred for full investigation.
Practical implications – This safety-check pathway means that many more cases can be closed after preliminary investigations, saving substantial time for working on cases more likely to yield a detection if sufficient time is put into them.
Social implications – More offenders may be caught and brought to justice by using triage with a safety backstop for accurate forecasting.
Originality/value – This is the first published study of a maintenance pathway based on a random selection of cases that would otherwise not have been investigated. If widely applied, it could yield far greater time for police to pursue high-harm, serious violence.


2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Yunna Wu ◽  
Lei Qin ◽  
Chuanbo Xu ◽  
Shaoyu Ji

Site selection for a waste-to-energy (WtE) plant is critically important across the whole life cycle. Some research has addressed WtE plant site selection, but a serious problem remains: the Not In My Back Yard (NIMBY) effect. To address it, an improved multigroup VIKOR method is proposed to choose the optimal site and compromise sites. In the proposed method, public satisfaction is fully considered: the public is invited as an evaluation group rather than being represented merely by general indicators of public acceptance. First, an elaborate criteria system is built to evaluate site options comprehensively, and the weights of the criteria are identified by the Analytic Hierarchy Process (AHP) method. Then, interval 2-tuple linguistic information is adopted to assess the ratings for the established criteria. The interval 2-tuple linguistic ordered weighted averaging (ITL-OWA) operator is utilized to aggregate the opinions of the evaluation committee, while the opinions of the public are aggregated using a weighted average operator. Finally, a case from south China is presented to show the computational procedure and to demonstrate the effectiveness of the proposed method. Last but not least, a sensitivity analysis is conducted by comparing the results under different weights of the evaluation group assessments.
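The ITL-OWA operator extends the classic OWA operator to interval 2-tuple linguistic values. The crisp OWA core, in which the weights are applied to the reordered (descending) arguments rather than to particular sources, can be sketched as:

```python
def owa(values, weights):
    """Ordered weighted average: weights attach to rank positions.

    Values are sorted in descending order first, so the weights express
    an attitude (optimistic weights emphasize the largest inputs,
    pessimistic weights the smallest), independent of which source
    supplied which value.
    """
    ordered = sorted(values, reverse=True)
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, ordered)) / total
```

With weights (0.5, 0.3, 0.2) the operator leans toward the higher assessments; reversing the weight vector would lean toward the lower ones, which is the usual way OWA encodes decision-maker attitude.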


2012 ◽  
Vol 2 (1) ◽  
pp. 3
Author(s):  
David Alan Rhoades ◽  
Paul G. Somerville ◽  
Felipe Dimer de Oliveira ◽  
Hong Kie Thio

The Every Earthquake a Precursor According to Scale (EEPAS) long-range earthquake forecasting model has been shown to be informative in several seismically active regions, including New Zealand, California and Japan. In previous applications of the model, the tectonic setting of earthquakes has been ignored. Here we distinguish crustal, plate interface, and slab earthquakes and apply the model to earthquakes with magnitude M≥4 in the Japan region from 1926 onwards. The target magnitude range is M≥6; the fitting period is 1966-1995; and the testing period is 1996-2005. In forecasting major slab earthquakes, it is optimal to use only slab and interface events as precursors. In forecasting major interface events, it is optimal to use only interface events as precursors. In forecasting major crustal events, it is optimal to use only crustal events as precursors. For the smoothed-seismicity component of the EEPAS model, it is optimal to use slab and interface events for earthquakes in the slab, interface events only for earthquakes on the interface, and crustal and interface events for crustal earthquakes. The optimal model parameters indicate that the precursor areas for slab earthquakes are relatively small compared to those for earthquakes in other tectonic categories, and that the precursor times and precursory earthquake magnitudes for crustal earthquakes are relatively large. The optimal models fit the learning data sets better than the raw EEPAS model, with an average information gain per earthquake of about 0.4. The average information gain is similar in the testing period, although it is higher for crustal earthquakes and lower for slab and interface earthquakes than in the learning period. These results show that earthquake interactions are stronger between earthquakes of similar tectonic types and that distinguishing tectonic types improves forecasts by enhancing the depth resolution where tectonic categories of earthquakes are vertically separated. However, when depth resolution is ignored, the model formed by aggregating the optimal forecasts for each tectonic category performs no better than the raw EEPAS model.
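The "average information gain per earthquake" reported here is, in essence, a mean log-likelihood ratio between the candidate model and a reference model, evaluated at the target earthquakes. A simplified sketch (the full EEPAS statistic also accounts for the models' total expected rates, which this omits):

```python
import math

def mean_information_gain(model_rates, reference_rates):
    """Mean log rate ratio (nats per earthquake) of model vs. reference.

    Each entry is the forecast rate density at the location, time and
    magnitude of one target earthquake; a positive result means the
    model concentrated more probability on the events that occurred.
    """
    n = len(model_rates)
    return sum(math.log(m / r)
               for m, r in zip(model_rates, reference_rates)) / n
```

For instance, a model that doubles the reference rate at one of two target events and matches it at the other scores ln(2)/2 ≈ 0.35 nats per earthquake, comparable in size to the gains quoted above.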

