Performance Improvement for Multi-recycling HTGR Incorporated with MA burning

2021 ◽  
Vol 2048 (1) ◽  
pp. 012026
Author(s):  
Y Fukaya ◽  
M Goto ◽  
X L Yan

Abstract. The multi-recycling HTGR has been investigated by JAEA in order to reduce the environmental burden and to improve the proliferation resistance of Pu. A previous study found that all actinoids except neptunium, which is not problematic from the viewpoint of toxicity or proliferation, can be recycled. However, the cycle length is slightly shorter than that of a uranium-fueled core because of the fertile TRU that accumulates in the fuel composition. In the present study, a Pu-recycling HTGR is designed to incorporate MA burning. As a result, the cycle length is increased by approximately 15% compared with the TRU multi-recycling core, and if the MA burner can be realized with an IFR, the cost is decreased by 0.14 yen/kWh.

2017 ◽  
Vol 79 (4) ◽  
Author(s):  
Suharjito Suharjito ◽  
Adrianus B. Kurnadi

Databases for Online Transaction Processing (OLTP) applications are used by almost every corporation that has adopted computerisation to support its day-to-day business operations. Compression in the storage or file-system layer has not been widely adopted for OLTP databases because of the concern that it might degrade database performance. OLTP compression in the database layer is available commercially, but its significant licence cost reduces the savings that compression delivers. In this research, transparent file-system compression with the LZ4, LZJB and ZLE algorithms was tested to improve the performance of an OLTP application. Using Swingbench as the benchmark tool and Oracle Database 12c, the results indicated that on an OLTP workload LZJB was the most effective compression algorithm, with a performance improvement of up to 49% and a consistent reduction of maximum response time and CPU utilisation overhead, while LZ4 was the algorithm with the highest compression ratio and ZLE the one with the lowest CPU utilisation overhead. In terms of compression ratio, LZ4 delivered the highest value, 5.32, followed by LZJB at 4.92 and ZLE at 1.76. Furthermore, it was found that there is indeed a risk of reduced performance and/or an increased maximum response time.
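Neither LZJB nor ZLE is exposed by Python's standard library, but the compression-ratio metric reported above (original size divided by compressed size) can be illustrated with zlib as a stand-in; the sample data here is purely hypothetical:

```python
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Ratio of original size to compressed size (higher = better)."""
    compressed = zlib.compress(data, level)
    return len(data) / len(compressed)

# Highly repetitive data, like many OLTP rows, compresses well;
# random-looking data barely compresses at all.
sample = b"OLTP row data OLTP row data " * 100
ratio = compression_ratio(sample)
```

The same measurement loop, pointed at real table segments on an LZ4-, LZJB- or ZLE-compressed file system, is what yields figures like the 5.32 / 4.92 / 1.76 ratios quoted above.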


Author(s):  
Chandra K. Jaggi ◽  
Anuj Sharma ◽  
Reena Jain

This chapter introduces an economic order quantity inventory model under the condition of permissible delay in payments in a fuzzy environment. All the parameters of the model, excluding the permissible delay period and the cycle length, are taken to be trapezoidal fuzzy numbers. The arithmetic operations are defined under the function principle. The cost function is defuzzified using the signed distance method and then solved to obtain the optimal replenishment period. A numerical example is presented to show the validity of the model, followed by a sensitivity analysis.
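As a sketch of the defuzzification step described above: the signed distance of a trapezoidal fuzzy number (a, b, c, d) from the fuzzy origin reduces to the average of its four defining points, which yields crisp inputs for the classical EOQ formula. The numbers below are hypothetical and omit the permissible-delay terms of the full model:

```python
import math

def signed_distance(a: float, b: float, c: float, d: float) -> float:
    """Signed distance of the trapezoidal fuzzy number (a, b, c, d)
    from the fuzzy origin: the mean of the four defining points."""
    return (a + b + c + d) / 4.0

# Hypothetical trapezoidal fuzzy demand and ordering cost.
D = signed_distance(900, 950, 1050, 1100)   # crisp demand: 1000
K = signed_distance(90, 95, 105, 110)       # crisp ordering cost: 100
h = 2.0                                     # crisp holding cost per unit
eoq = math.sqrt(2 * D * K / h)              # classical EOQ on the crisp values
```

In the chapter's model the defuzzified cost function, not the plain EOQ, is minimized, but the signed-distance reduction shown here is the same.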


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
Fan Deng ◽  
Ping Chen ◽  
Li-Yong Zhang ◽  
Xian-Qing Wang ◽  
Sun-De Li ◽  
...  

In conventional centralized authorization models, the evaluation performance of the policy decision point (PDP) decreases markedly as the number of rules embodied in a policy grows. Aiming to improve the evaluation performance of the PDP, a distributed policy evaluation engine called XDPEE is presented. In this engine, the single PDP of the centralized authorization model is replaced by multiple PDPs. A policy is decomposed into multiple subpolicies, each with fewer rules, using a decomposition method that balances the cost of the subpolicies deployed to each PDP. Policy decomposition is the key problem in improving the evaluation performance of the PDPs. A greedy algorithm with O(n lg n) time complexity for policy decomposition is constructed. In experiments, the policies of the LMS, VMS, and ASMS from real applications are each decomposed into multiple subpolicies using the greedy algorithm. Policy decomposition guarantees that the cost of the subpolicies deployed to each PDP is equal or approximately equal. Experimental results show that (1) the policy decomposition method improves the evaluation performance of the PDPs effectively and (2) the evaluation time of the PDPs decreases as the number of PDPs grows.
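The paper's exact decomposition algorithm is not reproduced here, but a standard greedy load-balancing heuristic with the same O(n lg n) flavor — sort the rules by evaluation cost, then repeatedly assign the next rule to the least-loaded PDP — sketches the idea (the rule costs and PDP count below are made up):

```python
import heapq

def decompose(rule_costs, num_pdps):
    """Greedy split of policy rules across PDPs so that the total
    cost per PDP is balanced; O(n lg n) comes from the sort."""
    # Min-heap of (current load, pdp id, assigned rule costs).
    heap = [(0.0, i, []) for i in range(num_pdps)]
    heapq.heapify(heap)
    for cost in sorted(rule_costs, reverse=True):
        load, i, rules = heapq.heappop(heap)   # least-loaded PDP
        rules.append(cost)
        heapq.heappush(heap, (load + cost, i, rules))
    return [rules for _, _, rules in sorted(heap, key=lambda t: t[1])]

# Six rules, two PDPs: both subpolicies end up with total cost 9.
subpolicies = decompose([5, 4, 3, 3, 2, 1], 2)
```

The guarantee stated above — per-PDP costs equal or approximately equal — is exactly what this least-loaded assignment aims for.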


2019 ◽  
Vol 39 (4) ◽  
pp. 414-420
Author(s):  
Jorge Pérez-Martín ◽  
Iñigo Bermejo ◽  
Francisco J. Díez

Background. Several methods, such as the half-cycle correction and the life-table method, were developed to attenuate the error introduced in Markov models by the discretization of time. Elbasha and Chhatwal have proposed alternative “corrections” based on numerical integration techniques. They present an example whose results suggest that the trapezoidal rule, which is equivalent to the half-cycle correction, is not as accurate as Simpson’s 1/3 and 3/8 rules. However, they did not take into consideration the impact of discontinuities. Objective. To propose a method for evaluating Markov models with discontinuities. Design. Applying the trapezoidal rule, we derive a method that consists of adjusting the model by setting the cost at each point of discontinuity to the mean of the left and right limits of the cost function. We then take from the literature a model with a cycle length of 1 year and a discontinuity in the cost function and compare our method with other “corrections,” using as the gold standard an equivalent model with a cycle length of 1 day. Results. As expected, for this model, the life-table method is more accurate than assuming that transitions occur at the beginning or the end of cycles. The application of numerical integration techniques without taking the discontinuity into account causes large errors. The model with averaged cost values yields very small errors, especially for the trapezoidal and the 1/3 Simpson rules. Conclusion. In the case of discontinuities, we recommend applying the trapezoidal rule on an averaged model because this method has a mathematical justification, and in our empirical evaluation, it was more accurate than the sophisticated 3/8 Simpson rule.
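The proposed correction can be sketched numerically. Assuming a made-up per-cycle cost stream that drops from 100 to 40 at one cycle boundary, the trapezoidal rule (equivalent to the half-cycle correction) is applied after replacing the value at the discontinuity with the mean of its left and right limits:

```python
def trapezoid_total(costs):
    """Trapezoidal-rule total of a per-cycle cost stream:
    half weight on the two endpoints (the half-cycle correction)."""
    return 0.5 * costs[0] + sum(costs[1:-1]) + 0.5 * costs[-1]

# Hypothetical cost function: 100 per cycle before t = 3, 40 after,
# evaluated at cycles t = 0..6, so t = 3 is the point of discontinuity.
left, right = 100.0, 40.0
costs = [100.0, 100.0, 100.0, 40.0, 40.0, 40.0, 40.0]

# Proposed adjustment: at the discontinuity, use the mean of the
# left and right limits of the cost function.
costs[3] = (left + right) / 2.0   # 70.0
total = trapezoid_total(costs)
```

Without the adjustment, the rule integrates a value that represents only one side of the jump, which is the source of the large errors reported above.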


2003 ◽  
Vol 13 (04) ◽  
pp. 575-587 ◽  
Author(s):  
HOLGER BISCHOF ◽  
SERGEI GORLATCH ◽  
EMANUEL KITZELMANN

Skeletons are reusable, parameterized program components with well-defined semantics and pre-packaged, efficient parallel implementations. This paper develops a new, provably cost-optimal implementation of the DS (double-scan) skeleton for programming divide-and-conquer algorithms. Our implementation is based on a novel data structure called the plist (pointed list); the implementation's performance is estimated using an analytical model. We demonstrate the use of the DS skeleton by parallelizing a tridiagonal system solver and report experimental results for its MPI implementation on a Cray T3E and a Linux cluster: they confirm the performance improvement achieved by the cost-optimal implementation and show that it is predicted well by our performance model.


2018 ◽  
Author(s):  
Alexey Ruzhnikov ◽  
Jose Alzate ◽  
Rudra Singh ◽  
Khalid Al-Wahedi ◽  
Rashid AL-Kindi

2007 ◽  
Vol 08 (02) ◽  
pp. 119-132
Author(s):  
SATOSHI FUJITA ◽  
AKIRA OHTSUBO ◽  
MASAYA MITO

In this paper, we propose three techniques to improve the cost/performance of the skip graph recently proposed by Aspnes and Shah. The skip graph, a distributed data structure that efficiently supports find, insert, and delete operations on keys drawn from a totally ordered set, consists of N nodes, each of which is connected with exactly log₂ N nodes determined by a set of random binary vectors called membership vectors. We extend the construction of the skip graph in two directions: 1) a subgraph of the skip graph that realizes a graceful degradation of routing performance when the number of neighbors falls below log₂ N, and 2) a supergraph of the skip graph that realizes a significant performance improvement when the number of neighbors grows beyond log₂ N. The performance of these extended graphs is evaluated analytically.
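A minimal sketch of the membership-vector construction described above (random bits only; none of the proposed extensions is implemented): at level i, nodes whose membership vectors share the same first i bits form one sorted list, so each node belongs to roughly log₂ N lists.

```python
import random

def build_levels(keys, bits=8, seed=1):
    """Group nodes into the linked lists of a skip graph: at level i,
    nodes sharing the first i bits of their random membership vector
    form one list, kept sorted by key."""
    rng = random.Random(seed)
    mv = {k: tuple(rng.randint(0, 1) for _ in range(bits)) for k in keys}
    levels = []
    for i in range(bits + 1):
        groups = {}
        for k in sorted(keys):
            groups.setdefault(mv[k][:i], []).append(k)
        levels.append(groups)
    return levels

# Level 0 is a single sorted list containing every node; higher levels
# split it into progressively smaller lists.
levels = build_levels(list(range(16)))
```

Routing in the real structure walks from the highest shared level downward, which is why adding or removing neighbors relative to log₂ N changes the cost/performance trade-off studied in the paper.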


2019 ◽  
Vol 35 (S1) ◽  
pp. 55-55
Author(s):  
Yogesh Gurav ◽  
Bhavani Shankara Bagepally ◽  
Montarat Thavorncharoensap ◽  
Usa Chaikledkaew

Introduction. Due to the epidemiological transition, a rise in hepatitis A outbreaks among adults in the state of Kerala, India has been noted. This has intensified the need for hepatitis A vaccination (HAV), but evidence regarding the cost effectiveness of HAV, which is essential to guide policy decisions, is lacking. This study was undertaken to evaluate the cost effectiveness of HAV among adults in Kerala state. Methods. To determine the cost effectiveness of HAV from a societal and a payer perspective, a Markov model was constructed with a cycle length of two months. The lifetime costs and outcomes of HAV and no vaccination were compared using a discount rate of 3 percent. Data for the model input parameters of cost, coverage, and effectiveness were derived from the published literature. One-way and probabilistic sensitivity analyses were applied. A threshold based on the per capita gross domestic product (GDP) was used (1 GDP = INR 127,702.48 [USD 1,886.03]). Results. The incremental cost-effectiveness ratios for both the societal and payer perspectives were negative, indicating that HAV was dominant, being less costly and more effective than no vaccination. The discount rates and utility values for adults with HAV were the most sensitive parameters. Conclusions. An HAV strategy would be cost saving, compared with no vaccination, in the Kerala state of India.
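The dominance result reported above can be illustrated with an incremental cost-effectiveness ratio (ICER) computation; the costs and effects below are made-up numbers, not the study's data:

```python
def icer(cost_new, eff_new, cost_old, eff_old):
    """ICER of a new strategy versus a comparator. A negative
    incremental cost together with a positive incremental effect
    means the new strategy is dominant (cheaper and more effective)."""
    d_cost = cost_new - cost_old
    d_eff = eff_new - eff_old
    return d_cost / d_eff, (d_cost < 0 and d_eff > 0)

# Hypothetical lifetime figures: vaccination is cheaper and more effective.
ratio, dominant = icer(120_000, 21.5, 150_000, 21.0)
```

When a strategy is dominant, as HAV was here from both perspectives, the ICER is not compared against the GDP-based threshold at all: the choice is cost saving outright.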


2018 ◽  
Vol 7 (3.27) ◽  
pp. 440
Author(s):  
S H. Jamshak ◽  
M Dev Anand ◽  
S B. Akshay ◽  
S Arun ◽  
J Prajeev ◽  
...  

Redesigning a system means modifying an existing system to reduce its disadvantages and to improve its features so that more of the desired output is obtained. In this work, the existing conventional heat exchanger is redesigned as a plate heat exchanger. The conventional heat exchanger has major disadvantages: it consumes more space, has a high cost, and is difficult to maintain. The major advantage of a plate heat exchanger over a conventional one is that the liquids are exposed to a much larger surface area because they spread out over the plates. It is a specialized design well suited to transferring heat between low-pressure liquids. The plates provide an extremely large surface area, which allows the fastest possible heat transfer. The plate material, gasket material, chevron angle, and surface enlargement angle are studied. A plate heat exchanger is designed accordingly, its cost is evaluated and compared with that of existing heat exchangers, and the design is validated by flow analysis.

