weighted error
Recently Published Documents


TOTAL DOCUMENTS: 47 (five years: 10)

H-INDEX: 9 (five years: 1)

2021, Vol 27 (1), pp. 47
Author(s): Wahid Luthfi, Surian Pinem

VALIDATION OF SRAC CODE SYSTEM FOR NEUTRONIC PARAMETERS CALCULATION OF THE PWR MOX/UO2 CORE BENCHMARK.
Accurate determination of neutronic parameter values is an important part of assessing reactor safety. This study focuses on the validation of the SRAC code system for the calculation of the neutronic parameters of a PWR (Pressurized Water Reactor) core. The MOX/UO2 Core Benchmark was chosen because it is used by several researchers as a reference core for code validation in the determination of the neutronic parameters of a reactor core. The neutronic parameters calculated include the critical boron concentration, the delayed neutron fraction, and the Power Peaking Factor (PPF) together with its axial and radial distributions. Compared with the reference data, the critical boron concentration calculated with the SRAC code system differs by 22.5 ppm. The differences in power per fuel assembly, expressed as the power-weighted error (PWE) and error-weighted error (EWE), are 2.93% and 3.94%, respectively. The maximum difference between the calculated and reference axial PPF reaches 4.57%. The SRAC results are also consistent with those of other code packages. These results indicate that the SRAC code system is still sufficiently accurate for the calculation of the neutronic parameters of the PWR core benchmark, and it can therefore be used to calculate the neutronic parameters of PWR cores, especially those loaded with MOX (mixed oxide) fuel.
Keywords: neutronic parameters, critical boron concentration, power peaking factor, SRAC code system.
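The PWE and EWE quoted above are aggregate measures of the assembly power error. As a rough illustration only (one common convention, not necessarily the exact definitions of the OECD/NEA MOX/UO2 benchmark), here is a minimal Python sketch that weights each assembly's relative power error by the reference assembly power (PWE) and by the error itself (EWE); the sample power values are made up:

```python
import numpy as np

def assembly_power_errors(p_calc, p_ref):
    """Assumed forms of the power-weighted (PWE) and error-weighted (EWE)
    assembly power errors, returned in percent."""
    p_calc = np.asarray(p_calc, dtype=float)
    p_ref = np.asarray(p_ref, dtype=float)
    e = np.abs(p_calc - p_ref) / p_ref                 # relative error per assembly
    pwe = 100.0 * np.sum(p_ref * e) / np.sum(p_ref)    # weighted by reference power
    ewe = 100.0 * np.sum(e ** 2) / np.sum(e)           # weighted by the error itself
    return pwe, ewe

# Illustrative (made-up) normalized assembly powers
p_ref = [1.10, 0.95, 1.02, 0.93]
p_calc = [1.13, 0.93, 1.05, 0.89]
print(assembly_power_errors(p_calc, p_ref))
```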


Author(s): Hoda Nikpour, Agnar Aamodt

Abstract: This paper presents the inference and reasoning methods in BNCreek, a Bayesian-supported knowledge-intensive case-based reasoning (CBR) system. The inference and reasoning process in this system combines three methods: semantic network inference and CBR are employed to handle the difficulties of reasoning in uncertain domains, while Bayesian network inference is employed to make the process more accurate. An experiment is conducted in oil well drilling, a complex and uncertain application domain. The system is evaluated against expert estimations and compared with seven other corresponding systems. The normalized discounted cumulative gain (NDCG) as a rank-based metric, together with the weighted error (WE) and root-square error (RSE) as statistical metrics, are employed to evaluate different aspects of the system's capabilities. The results show the efficiency of the developed inference and reasoning methods.
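NDCG is a standard rank-based metric; the exact WE and RSE definitions used in BNCreek's evaluation are not spelled out here, so the following minimal Python sketch uses generic assumed forms (a weighted mean absolute deviation and a root of summed squared deviations) when comparing system scores against expert estimations:

```python
import numpy as np

def ndcg(relevances_in_system_order, k=None):
    """Standard NDCG: DCG of the system's ranking divided by the ideal DCG."""
    rel = np.asarray(relevances_in_system_order, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = np.sum(rel * discounts)
    ideal = np.sum(np.sort(rel)[::-1] * discounts)
    return dcg / ideal if ideal > 0 else 0.0

def weighted_error(pred, target, weights):
    """Assumed WE form: weighted mean absolute deviation."""
    pred, target, weights = map(np.asarray, (pred, target, weights))
    return np.sum(weights * np.abs(pred - target)) / np.sum(weights)

def root_square_error(pred, target):
    """Assumed RSE form: root of the summed squared deviations."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return np.sqrt(np.sum((pred - target) ** 2))

# Toy comparison: expert relevances listed in the order the system ranked the cases
expert = [0.9, 0.7, 0.4, 0.1]
system = [0.8, 0.75, 0.3, 0.2]
print(ndcg(expert), weighted_error(system, expert, expert), root_square_error(system, expert))
```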


Author(s): Yuanman Li, Jiantao Zhou, Junyang Chen, Jinyu Tian, Li Dong, ...

Author(s): H. Amini, S. Mehrdad

Abstract. As with all infrastructural works, preparing a map directly requires working from the whole to the part. First, a framework of coordinated points that can serve as base points for subsidiary measurements must be established, on which the various surveying tasks rely. Through the adjustment solution, the observation errors in determining these stable points should be propagated among all the observations. In the past, classical methods were used because of the lack of facilities able to perform numerous calculations in a short time. In this project, we analyzed the accuracy of traditional (classical) error propagation methods in comparison with Least Squares, using simulated observational data of different accuracies. Error ellipses were then drawn from the outputs of the different methods and used to compare their accuracies. The Bowditch method reproduced the Least Squares results in many cases, while the Transit method generally showed poorer accuracy and a dependence on the direction of the adjustments. The Bowditch method was found to approach, or even exceed, the accuracy of Least Squares when increasing. All methods performed better when the angular and linear observations were of the same order of accuracy. Moreover, the Doubly-braced Quadrilateral method and Least Squares with constant weights were of equal accuracy; however, the true-weighted error propagation method outperformed the other methods.
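As a concrete example of the kind of classical adjustment the study compares against, the Bowditch (compass) rule distributes the traverse closure error over the legs in proportion to their lengths. The following is a minimal Python sketch under that textbook definition, not the authors' simulation setup:

```python
import numpy as np

def bowditch_adjust(dx, dy):
    """Bowditch (compass-rule) adjustment of a closed traverse.

    dx, dy : departure/latitude components of each traverse leg.
    The misclosure is distributed over the legs in proportion to leg length.
    """
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    lengths = np.hypot(dx, dy)
    total = lengths.sum()
    mis_x, mis_y = dx.sum(), dy.sum()          # closure error in x and y
    dx_adj = dx - mis_x * lengths / total      # corrections proportional to leg length
    dy_adj = dy - mis_y * lengths / total
    return dx_adj, dy_adj

# A closed traverse that should return to its start but misses by a small amount
dx = [100.02, -0.03, -99.97, 0.01]
dy = [0.01, 80.04, -0.02, -80.01]
ax, ay = bowditch_adjust(dx, dy)
print(ax.sum(), ay.sum())   # both sums are ~0 after adjustment
```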


Entropy, 2020, Vol 22 (7), pp. 771
Author(s): Piotr Oziablo, Dorota Mozyrska, Małgorzata Wyrwas

In this paper, we discuss the implementation and tuning algorithms of a variable-, fractional-order Proportional–Integral–Derivative (PID) controller based on the Grünwald–Letnikov difference definition. All simulations are executed for a third-order plant with a delay. The results of a unit step response for all described implementations are presented in graphical and tabular form. As the qualitative criteria, we use three different error measures: the summation of squared error (SSE), the summation of squared time-weighted error (SSTE) and the summation of squared time-squared weighted error (SST2E). Besides these three error measures, the obtained results are additionally evaluated on the basis of the overshoot and rise time of the output signals achieved by the systems with the designed controllers.
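A minimal Python sketch of the three tuning criteria in commonly used discrete-time forms; it assumes the weighting is by the sample time t_k and its square (the paper's exact discretization and example response are not reproduced here):

```python
import numpy as np

def step_response_criteria(error, dt=1.0):
    """Assumed discrete-time forms of the three criteria:
    SSE   = sum e_k^2
    SSTE  = sum t_k * e_k^2       (time-weighted)
    SST2E = sum t_k^2 * e_k^2     (time-squared-weighted)
    """
    e = np.asarray(error, dtype=float)
    t = np.arange(e.size) * dt
    sse = np.sum(e ** 2)
    sste = np.sum(t * e ** 2)
    sst2e = np.sum(t ** 2 * e ** 2)
    return sse, sste, sst2e

# Error of a unit step response of a hypothetical closed loop (illustrative only)
t = np.arange(0, 10, 0.01)
y = 1 - np.exp(-t) * np.cos(2 * t)          # made-up response
print(step_response_criteria(1.0 - y, dt=0.01))
```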


2020, Vol 34 (04), pp. 3105-3112
Author(s): Afshin Abdi, Faramarz Fekri

In distributed training of deep models, the transmission volume of stochastic gradients (SG) imposes a bottleneck on scaling up the number of processing nodes. On the other hand, the existing methods for compression of SGs have two major drawbacks. First, due to the increase in the overall variance of the compressed SG, the hyperparameters of the learning algorithm must be readjusted to ensure the convergence of the training; further, the convergence rate of the resulting algorithm is still adversely affected. Second, for those approaches in which the compressed SG values are biased, there is no guarantee of learning convergence, and thus error feedback is often required. We propose Quantized Compressive Sampling (QCS) of SG, which addresses the above two issues while achieving an arbitrarily large compression gain. We introduce two variants of the algorithm, Unbiased-QCS and MMSE-QCS, and show their superior performance with respect to other approaches. Specifically, we show that for the same number of communication bits, the convergence rate is improved by a factor of 2 relative to the state of the art. Next, we propose to improve the convergence rate of the distributed training algorithm via a weighted error feedback. Specifically, we develop and analyze a method that both controls the overall variance of the compressed SG and prevents the staleness of the updates. Finally, through simulations, we validate our theoretical results and establish the superior performance of the proposed SG compression in the distributed training of deep models. Our simulations also demonstrate that the proposed compression method substantially expands the range of step-size values for which the learning algorithm converges.
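To make the weighted error feedback idea concrete, here is a minimal Python sketch of gradient compression with a weighted residual loop; the compressor is a simple top-k stand-in rather than the paper's QCS quantizer, and the residual weight beta is an assumed parameter, not the weighting derived in the paper:

```python
import numpy as np

def topk_compress(v, k):
    """Stand-in compressor: keep the k largest-magnitude entries (not QCS)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class WeightedErrorFeedback:
    """Sketch of error feedback with a weight beta on the accumulated residual."""
    def __init__(self, dim, beta=0.9):
        self.residual = np.zeros(dim)
        self.beta = beta

    def step(self, grad, k):
        corrected = grad + self.beta * self.residual   # re-inject weighted residual
        compressed = topk_compress(corrected, k)       # what would be transmitted
        self.residual = corrected - compressed         # error kept for the next round
        return compressed

ef = WeightedErrorFeedback(dim=8, beta=0.9)
g = np.random.randn(8)
print(ef.step(g, k=2))
```

Weighting the residual (rather than carrying it over unchanged, as in plain error feedback) is one way to limit how stale corrections accumulate across rounds.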


2019, Vol 105 (4), pp. 657-667
Author(s): Sungmok Hwang

This study proposes a sound source localization method using binaural input signals. The method is based on a head-related transfer function (HRTF) database and the interaural transfer function (ITF) obtained from two measured input signals. An algorithm to reduce the effect of background noise on the localization performance in a noisy environment was adopted in the proposed localization method. Weighted error functions (WEFs), defined using the ITF and the ratio of the HRTFs for the two ears, were used with a special frequency weighting function derived to reduce the effect of noise and to give the WEF a physical meaning. Computer simulations confirmed that the weighting function can effectively reduce the effect of background noise on the localization performance even when the noise power is very high. Localization tests in an actual room confirmed that both the azimuth and elevation angles of a sound source can be estimated simultaneously with high accuracy. In particular, the front-back and up-down confusions, which are critical limitations of conventional localization methods, could be resolved using two input signals.
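A minimal Python sketch of the general idea: estimate the direction by minimizing a frequency-weighted error between the measured ITF and the HRTF ratio of each candidate direction. The WEF form, the uniform weighting, and the toy HRTF database below are assumptions for illustration, not the paper's derived weighting function:

```python
import numpy as np

def localize(itf_measured, hrtf_left, hrtf_right, weights):
    """Pick the candidate direction whose HRTF ratio best matches the measured ITF.

    itf_measured : complex array over frequency (right/left spectrum ratio)
    hrtf_left, hrtf_right : complex arrays of shape (n_directions, n_freq)
    weights : real frequency-weighting function emphasizing reliable bands
    Returns the index of the best-matching direction (assumed WEF form).
    """
    ratio = hrtf_right / hrtf_left                       # HRTF ratio per direction
    diff = np.abs(ratio - itf_measured[None, :]) ** 2    # spectral mismatch
    wef = np.sum(weights[None, :] * diff, axis=1)        # weighted error per direction
    return int(np.argmin(wef))

# Toy database: 4 candidate directions, 16 frequency bins (illustrative only)
rng = np.random.default_rng(0)
hl = rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))
hr = rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))
true_dir = 2
itf = hr[true_dir] / hl[true_dir] + 0.01 * rng.standard_normal(16)
print(localize(itf, hl, hr, weights=np.ones(16)))   # expected: 2
```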

