Calcium based sorbent calcination and sintering reaction models overview

2018 ◽  
Vol 72 (6) ◽  
pp. 329-339
Author(s):  
Ivan Tomanovic ◽  
Srdjan Belosevic ◽  
Aleksandar Milicevic ◽  
Nenad Crnomarkovic

Several models describing the reactions of pulverized sorbent with pollutant gases have been developed over the years. In this paper, we present a detailed overview of available models for direct furnace injection of pulverized calcium sorbent suitable for potential application in CFD codes, with respect to implementation difficulty and demand for computational resources. Depending on the model, the accuracy of results, data output, and required computational power may vary. Some authors model the calcination reaction separately, combined with a sintering model, and model sulfation afterwards. Other authors assume calcination to be instantaneous and focus the modelling effort on the sulfation reaction, adding sintering effects as a parameter in the efficiency coefficient. Simple models quantify the overall reaction effects, while more complex models attempt to describe and explain the reactions inside the particle through different approaches to modelling the particle's internal structure.
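As an illustration of the simpler class of models mentioned above, the following sketch (an illustrative assumption, not one of the reviewed models) integrates a first-order calcination conversion together with a first-order decay of the CaO specific surface area due to sintering; the rate constants and surface-area values are hypothetical placeholders.

```python
import numpy as np

def calcination_with_sintering(t_end=2.0, dt=1e-3,
                               k_calc=3.0,   # 1/s, hypothetical calcination rate constant
                               k_sint=0.8,   # 1/s, hypothetical sintering rate constant
                               s0=60.0,      # m^2/g, initial specific surface area of CaO
                               s_min=5.0):   # m^2/g, asymptotic sintered surface area
    """Integrate dX/dt = k_calc (1 - X) and dS/dt = -k_sint (S - S_min)."""
    n = int(t_end / dt)
    x, s = 0.0, s0
    history = []
    for i in range(n):
        x += dt * k_calc * (1.0 - x)       # calcination conversion
        s += dt * (-k_sint * (s - s_min))  # sintering-driven loss of surface area
        history.append((i * dt, x, s))
    return np.array(history)

if __name__ == "__main__":
    hist = calcination_with_sintering()
    print(f"final conversion X = {hist[-1, 1]:.3f}, "
          f"surface area S = {hist[-1, 2]:.1f} m^2/g")
```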


Author(s):  
Marco Seiz ◽  
Philipp Offenhäuser ◽  
Stefan Andersson ◽  
Johannes Hötzer ◽  
Henrik Hierl ◽  
...  

Abstract: With ever-increasing computational power, larger computational domains are employed and the data output grows accordingly. Writing this data to disk can become a significant part of the runtime if done serially. Even if the output is done in parallel, e.g., via MPI I/O, there are many user-space parameters for tuning the performance. This paper focuses on the available parameters for the Lustre file system and the Cray MPICH implementation of MPI I/O. Experiments were conducted on the Cray XC40 Hazel Hen using a Cray Sonexion 2000 Lustre file system, varying the core count, the block size and the striping configuration. Based on these parameters, heuristics for the striping configuration in terms of core count and block size were determined, yielding up to a 32-fold improvement in write rate compared to the default. This corresponds to 85 GB/s out of the peak bandwidth of 202.5 GB/s. The heuristics are shown to be applicable to a small test program as well as a complex application.
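A minimal sketch of the kind of user-space tuning discussed above, using mpi4py: a collective write with Lustre striping hints passed through an MPI Info object. The hint names (striping_factor, striping_unit) are standard ROMIO/Lustre hints, but the values shown are placeholders, not the heuristics derived in the paper.

```python
# Collective MPI I/O write with Lustre striping hints (sketch; values are placeholders).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

block = np.full(4 * 1024 * 1024 // 8, rank, dtype=np.float64)  # 4 MiB per rank

info = MPI.Info.Create()
info.Set("striping_factor", "16")                 # number of OSTs to stripe over (placeholder)
info.Set("striping_unit", str(4 * 1024 * 1024))   # stripe size in bytes (placeholder)

fh = MPI.File.Open(comm, "output.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY, info)
offset = rank * block.nbytes
fh.Write_at_all(offset, block)   # collective write, one contiguous block per rank
fh.Close()
info.Free()
```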



Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3472 ◽  
Author(s):  
Yuan Wu ◽  
Xiangxu Chen ◽  
Jiajun Shi ◽  
Kejie Ni ◽  
Liping Qian ◽  
...  

Blockchain has emerged as a decentralized and trustable ledger for recording and storing digital transactions. The mining process of Blockchain, however, imposes a heavy computational workload on miners to solve the proof-of-work puzzle (i.e., a series of hashing computations), which is prohibitive from the perspective of mobile terminals (MTs). Advanced multi-access mobile edge computing (MEC), which enables the MTs to offload part of the computational workload (for solving the proof-of-work) to nearby edge servers (ESs), provides a promising approach to address this issue. By offloading the computational workload via multi-access MEC, the MTs can effectively increase their probability of success when participating in the mining game and gaining the consequent reward (i.e., winning the bitcoin). However, as compensation to the ESs which provide the computational resources, the MTs need to pay the corresponding resource-acquisition costs. Thus, to investigate the trade-off between obtaining computational resources from the ESs (for solving the proof-of-work) and paying the consequent cost, we formulate an optimization problem in which the MTs determine the computational resources they acquire from different ESs, with the objective of maximizing the MTs' social net-reward in the mining process while maintaining fairness among the MTs. In spite of the non-convexity of the formulated problem, we exploit its layered structure and propose efficient distributed algorithms for the MTs to individually determine their optimal computational resources acquired from different ESs. Numerical results are provided to validate the effectiveness of the proposed algorithms and the performance of the proposed multi-access MEC for Blockchain.
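A toy numerical sketch of the resource-acquisition trade-off described above (not the authors' distributed algorithm): each MT buys computation from the ESs, the net reward is a logarithmic utility (a common proxy for fairness) minus a linear payment, and ES capacities bound the totals; here the problem is solved centrally with SciPy. All prices and capacities are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

n_mt, n_es = 3, 2
price = np.array([[0.5, 0.8],
                  [0.6, 0.4],
                  [0.7, 0.7]])        # hypothetical per-unit resource prices
capacity = np.array([4.0, 3.0])       # hypothetical ES capacities

def neg_social_net_reward(x_flat):
    x = x_flat.reshape(n_mt, n_es)
    reward = np.sum(np.log1p(x.sum(axis=1)))   # log utility in total acquired resources
    cost = np.sum(price * x)
    return -(reward - cost)

# capacity constraint for every ES: column sum must not exceed capacity[j]
constraints = [{"type": "ineq",
                "fun": lambda x_flat, j=j: capacity[j] - x_flat.reshape(n_mt, n_es)[:, j].sum()}
               for j in range(n_es)]
bounds = [(0.0, None)] * (n_mt * n_es)

res = minimize(neg_social_net_reward, x0=np.full(n_mt * n_es, 0.1),
               bounds=bounds, constraints=constraints)
print(res.x.reshape(n_mt, n_es).round(3), -res.fun)
```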



Author(s):  
Rafael Nogueras ◽  
Carlos Cotta

Computational environments emerging from the pervasiveness of networked devices offer a plethora of opportunities and challenges. The latter arise from their dynamic, inherently volatile nature that tests the resilience of algorithms running on them. Here we consider the deployment of population-based optimization algorithms on such environments, using the island model of memetic algorithms for this purpose. These memetic algorithms are endowed with self-★ properties that give them the ability to work autonomously in order to optimize their performance and to react to the instability of computational resources. The main focus of this work is analyzing the performance of these memetic algorithms when the underlying computational substrate is not only volatile but also heterogeneous in terms of the computational power of each of its constituent nodes. To this end, we use a simulated environment that allows experimenting with different volatility rates and heterogeneity scenarios (that is, different distributions of computational power among computing nodes), and we study different strategies for distributing the search among nodes. We observe that the addition of self-scaling and self-healing properties makes the memetic algorithm very robust to both system instability and computational heterogeneity. Additionally, a strategy based on distributing single islands on each computational node is shown to perform globally better than placing many such islands on each of them (either proportionally to their computing power or subject to an intermediate compromise).
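A minimal sketch of an island-model evolutionary algorithm on a volatile substrate, in the spirit of the setup described above (illustrative only; the paper's memetic algorithm adds local search and self-scaling/self-healing policies on top of a scheme like this, and the failure rate here is a hypothetical placeholder).

```python
import random

def fitness(ind):
    return sum(ind)                        # toy OneMax objective

def new_individual(n=32):
    return [random.randint(0, 1) for _ in range(n)]

def evolve(pop, generations=10):
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        cut = random.randrange(len(a))
        child = a[:cut] + b[cut:]          # one-point crossover
        i = random.randrange(len(child))
        child[i] ^= 1                      # bit-flip mutation
        worst = min(range(len(pop)), key=lambda k: fitness(pop[k]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return pop

islands = [[new_individual() for _ in range(20)] for _ in range(8)]
for epoch in range(50):
    # volatile substrate: a node may fail and restart with fresh individuals
    for idx in range(len(islands)):
        if random.random() < 0.05:         # hypothetical 5% failure rate per epoch
            islands[idx] = [new_individual() for _ in range(20)]
        islands[idx] = evolve(islands[idx])
    # ring migration: the best individual replaces the worst of the next island
    for idx, isl in enumerate(islands):
        best = max(isl, key=fitness)
        dest = islands[(idx + 1) % len(islands)]
        worst = min(range(len(dest)), key=lambda k: fitness(dest[k]))
        dest[worst] = best[:]

print(max(fitness(ind) for isl in islands for ind in isl))
```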



Author(s):  
Mamta Bisht ◽  
Richa Gupta

Script recognition is a necessary preliminary step for text recognition. In the deep learning era, this task has two essential requirements: a large labelled dataset for training and the computational resources to train models. When either is limited, alternative methods are needed. This motivates transfer learning, in which knowledge established by a model previously trained on a benchmark dataset is reused on another, smaller dataset for another task, saving computational power because only a fraction of the model's parameters needs to be trained. Here we study two pre-trained models and fine-tune them for script classification tasks. Firstly, the VGG-16 pre-trained model is fine-tuned on the publicly available CVSI-15 and MLe2e datasets for script recognition. Secondly, a well-performing model for the Devanagari handwritten character dataset is adopted and fine-tuned on the Kaggle Devanagari numeral dataset for numeral recognition. The performance of the proposed fine-tuned models depends on how similar the target dataset is to the original dataset, and it is analyzed with widely used optimizers.
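A minimal Keras sketch of the VGG-16 fine-tuning setup described above: the convolutional base is frozen and only a new classification head is trained. The number of script classes, the image size and the dataset pipeline are placeholders; loading CVSI-15 or MLe2e is not shown.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 10          # placeholder number of script classes

# ImageNet-pretrained convolutional base, frozen so only the new head is trained
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds: placeholder datasets
```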



2017 ◽  
Author(s):  
Cherry May R. Mateo ◽  
Dai Yamazaki ◽  
Hyungjun Kim ◽  
Adisorn Champathong ◽  
Jai Vaze ◽  
...  

Abstract. Global-scale River Models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and to run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs at finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application at finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; the Nash–Sutcliffe Efficiency coefficient decreased by more than 35 % between simulation results at 10 km and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations produce excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuate these effects by maintaining flow connectivity and flow capacity between floodplains across the spatial resolutions. While a regional-scale flood was chosen as a test case, these findings are universal and can be extended to global-scale simulations. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
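For reference, the Nash–Sutcliffe Efficiency (NSE) used above to compare simulated and observed discharge can be computed as in the sketch below; the discharge values shown are placeholders, not data from the study.

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([820.0, 950.0, 1400.0, 2100.0, 1800.0])   # placeholder discharges (m^3/s)
sim = np.array([800.0, 990.0, 1350.0, 2000.0, 1900.0])
print(f"NSE = {nash_sutcliffe(sim, obs):.3f}")
```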



Author(s):  
Roberto Pierdicca ◽  
Emanuele Frontoni ◽  
Maria Paola Puggioni ◽  
Eva Savina Malinverni ◽  
Marina Paolanti

Augmented and virtual reality have proved to be valuable solutions for conveying content in a more appealing and interactive way. Given the improvement of mobile and smart devices in terms of both usability and computational power, content can now be conveyed with a level of realism never reached in the past. Despite the tremendous number of studies presenting new, fascinating applications for the augmentation of ancient goods and artifacts, few papers focus on the real effect these tools have on learning. Within the framework of the SmartMarca project, this chapter focuses on assessing the potential of AR/VR applications specifically designed for cultural heritage. Tests were conducted on classrooms of teenagers, to whom different learning approaches were presented in order to evaluate the effectiveness of using these technologies in the education process. The chapter argues for the necessity of developing new tools that enable users to become producers of content for AR/VR experiences.



Author(s):  
Glenn Harvel ◽  
Wendy Hardman

Nuclear Engineering Education has seen a surge in activity over the past 10 years in Canada, due in part to a nuclear renaissance. The Nuclear Industry workforce is also aging significantly and requires a substantial turnover of staff due to the retirements expected in the next few years. The end result is that more students need to be prepared for work in all aspects of the Nuclear Industry. The traditional training model used for nuclear engineering education has been an option within an existing undergraduate program such as Chemical Engineering, Engineering Physics, or Mechanical Engineering, with advanced training in graduate school. The education model was mostly lecture style with a small number of experimental laboratories, owing to the small number of research reactors that could be used for experimentation. While the traditional education model has worked well in the past, significantly more advanced technologies are available today that can be used to enhance learning in the classroom. Most of the advancement in nuclear education has been through the use of computers and simulation-related tasks. These have included the use of industry codes or simpler tools for analysis of the complex models used in the Nuclear Industry. While effective, these tools address the analytical portion of the program and do not address many of the other skills needed by nuclear engineers. In this work, a set of tools that can be used to augment or replace the traditional lecture method is examined. These tools are Mediasite, Adobe Connect, Elluminate, and Camtasia. All four tools have recording capabilities that allow students to experience the exchange of information in different ways. Students now have more options in how they obtain and share information: they can receive information in class, review it later at home or in transit, or view and participate live from a remote location. These options allow for more flexibility in the delivery of material. The purpose of this paper is to compare recent experiences with each of these tools in providing Nuclear Engineering Education and to determine the various constraints and impacts on delivery.



2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Naived George Eapen ◽  
Debabrata Samanta ◽  
Manjit Kaur ◽  
Jehad F. Al-Amri ◽  
Mehedi Masud

The increase in computational power in recent years has opened a new door for image processing techniques. Three-dimensional object recognition, identification, pose estimation, and mapping are becoming popular. The need to map real-world objects into a three-dimensional spatial representation is growing rapidly, especially considering the leap made in the past decade in virtual reality and augmented reality. This paper discusses an algorithm to convert an array of captured images into estimated 3D coordinates of their external mappings. Elementary methods for generating three-dimensional models are also discussed. This framework will help the community estimate the three-dimensional coordinates of a convex-shaped object from a series of two-dimensional images. The built model could be further processed to increase its resemblance to the input object in terms of shape, contour, and texture.
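A minimal sketch of the image-to-3D-coordinate step described above, recovering 3D points from two 2D views by triangulation with OpenCV. The camera matrices and matched image points are placeholders, and a full pipeline would first estimate the camera poses, e.g. from feature matches; this is not the authors' algorithm.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])   # placeholder camera intrinsics

# Two camera poses: identity and a small baseline along x (placeholders).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

pts1 = np.array([[300.0, 220.0], [340.0, 260.0]]).T   # matched pixels in view 1 (2xN)
pts2 = np.array([[310.0, 220.0], [352.0, 260.0]]).T   # matched pixels in view 2 (2xN)

pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)     # homogeneous 4xN result
pts3d = (pts4d[:3] / pts4d[3]).T                      # Euclidean 3D coordinates
print(pts3d)
```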



Machine learning has become an integral part of our day-to-day life in recent years, and its ease of use has improved greatly over the past decade. There are various ways to make models work on smaller devices. A modest method to adapt any machine learning algorithm to smaller devices is to provide the output of large, complex models as input to smaller models that can easily be deployed on mobile phones. We provide a framework in which the large model also learns domain knowledge, integrated as first-order logic rules, and explicitly transfers that knowledge into the smaller model by training both models simultaneously. This can be achieved by transfer learning, where the knowledge learned by one model is used to teach the other. Domain knowledge integration is the most critical part here; it can be done using constraint principles, whereby the scope of the data is reduced based on the stated constraints. One of the best representations of domain knowledge is logic rules, where the knowledge is encoded as predicates. This framework provides a way to integrate human knowledge into deep neural networks that can easily be deployed on any device.
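A minimal teacher-student sketch of the idea above: a small student network is trained to match both the true labels and the soft outputs of a larger teacher. The logic-rule integration step of the described framework is not reproduced here, and the architectures, temperature and mixing weight are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, 5))  # large model
student = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 5))    # small, deployable model

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature, alpha = 2.0, 0.5          # placeholder distillation hyperparameters

x = torch.randn(64, 20)                # placeholder batch of features
y = torch.randint(0, 5, (64,))         # placeholder hard labels

for step in range(100):
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # soft-target loss: match the teacher's tempered output distribution
    soft_loss = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                         F.softmax(teacher_logits / temperature, dim=1),
                         reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, y)
    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(loss))
```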



Geophysics ◽  
1984 ◽  
Vol 49 (10) ◽  
pp. 1586-1595 ◽  
Author(s):  
W. C. Chew ◽  
S. Barone ◽  
B. Anderson ◽  
C. Hennessy

This paper presents the calculation of the diffraction of axisymmetric borehole waves by bed boundary discontinuities. The bed boundary is assumed to be horizontal and the inhomogeneities to be axially symmetric. In such a geometry, an axially symmetric source will produce only axially symmetric waves. Since the borehole is an open structure, the mode spectrum consists of a discrete part as well as a continuum. The scattering of a continuum of waves by bed boundaries is difficult to treat. The approach used in the past in treating this class of problem has been approximate in nature, or highly numerical, such as the finite‐element method. We present here a systematic way to approximate the continuum of modes by discrete modes. After discretization, the scattering problem can be treated simply. Since the approach is systematic, it allows derivation of the solution to any desired degree of accuracy in theory; but in practice, it is limited by the computational resources available. We also show that our approach is variational and satisfies both the reciprocity theorem and energy conservation.


