computational element
Recently Published Documents

TOTAL DOCUMENTS: 12 (FIVE YEARS: 2)
H-INDEX: 2 (FIVE YEARS: 0)

2020 ◽  
Vol 3 (2) ◽  
pp. 164-180
Author(s):  
Ehsan Mousavi Khaneghah ◽  
Araz R. Aliev ◽  

Resource discovery in Exascale systems must accommodate the dynamic nature of every element involved in the discovery process. When dynamic and interactive events occur in the accountable computational element, they create challenges for the activities related to resource discovery, such as continuing the response to a request, granting access rights, and allocating the resource to the process. Without management and control of these dynamic and interactive events in the accountable computational element, the resource discovery activities will fail. In this paper, we first examine the function of resource discovery in the accountable computational element. We then analyze how dynamic and interactive events affect the resource discovery function in that element. The purpose of this paper is to examine the use of traditional resource discovery in Exascale distributed systems and to identify the factors that the resource discovery management function must consider in order to remain applicable in such systems.
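To make the stages discussed above concrete, the following is a minimal Python sketch (not from the paper) of a discovery handler that re-checks the accountable computational element for dynamic or interactive events before continuing the response, granting access rights, and allocating the resource; all class and function names are hypothetical.

```python
from enum import Enum, auto

class Stage(Enum):
    RESPOND = auto()
    GRANT_ACCESS = auto()
    ALLOCATE = auto()

class DynamicEvent(Exception):
    """Raised when the accountable element changes state mid-discovery."""

class AccountableElement:
    """Hypothetical stand-in for the element answering a discovery request."""
    def __init__(self):
        self.dynamic_event_pending = False

    def check(self):
        if self.dynamic_event_pending:
            raise DynamicEvent("element left the system or changed capability")

def discover_resource(element, request):
    """Run the discovery stages, re-checking the element before each one.

    Without such checks (the 'lack of management' case in the abstract),
    a dynamic event occurring mid-way makes the whole activity fail.
    """
    for stage in (Stage.RESPOND, Stage.GRANT_ACCESS, Stage.ALLOCATE):
        element.check()                       # detect dynamic/interactive events
        print(f"{stage.name} for request {request!r}")
    return "resource-handle"

if __name__ == "__main__":
    elem = AccountableElement()
    print(discover_resource(elem, "need 64 cores"))

    elem.dynamic_event_pending = True         # simulate a dynamic event
    try:
        discover_resource(elem, "need 64 cores")
    except DynamicEvent as e:
        print("discovery aborted:", e)
```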


Neuromorphic computing is a non-von Neumann architecture, also referred to as an artificial neural network, that allows an electronic system to function in a manner similar to the human brain. In this paper we develop a neural core architecture analogous to that of the human brain. Each neural core has its own computational element (the neuron), memory to store information, and a local clock generator for synchronous operation of the neuron, along with an asynchronous input-output port and its port controller. The neuron model used here is a tailored version of IBM TrueNorth's neuron block. Our design methodology combines synchronous and asynchronous circuits in order to build an event-driven neural network core. We first simulated the design in Neuroph Studio to calculate the weights and bias values and then used these weights for the hardware implementation. With this we successfully demonstrated the operation of the neural core on an XOR application. The design was written in VHDL and simulated in the Xilinx ISE software.
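The paper implements the core in VHDL with weights obtained from Neuroph Studio; as a language-neutral illustration of the XOR demonstration, here is a small Python sketch of a 2-2-1 network of threshold neurons. The weights and thresholds are hand-picked illustrative values, not those produced by Neuroph Studio.

```python
def neuron(inputs, weights, threshold):
    """Simple threshold (step-activation) neuron."""
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

def xor_core(x1, x2):
    h_or  = neuron((x1, x2), (1.0, 1.0), 0.5)       # fires if either input fires
    h_and = neuron((x1, x2), (1.0, 1.0), 1.5)       # fires only if both fire
    return neuron((h_or, h_and), (1.0, -1.0), 0.5)  # OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_core(a, b))
```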


Author(s):  
Stefan Leschka ◽  
Clemens Krautwald ◽  
Hocine Oumeraci

Tsunami propagation and inundation are commonly simulated using large-scale depth-averaged models. In such models, the quadratic friction law with a selected Manning’s coefficient is generally applied to account for the effect of bottom surface roughness in each computational element. Buildings and tree vegetation in coastal areas are usually smaller than the computational element size. Using empirical Manning’s coefficients to account for such large objects is not physically sound and, particularly in tsunami inundation modelling, this may result in large uncertainties. Therefore, an improved understanding of the processes associated with the hydraulic resistance of the so-called macro-roughness elements (MRE) is required. Relevant parameters such as shape, height and arrangement of the MRE should be investigated through laboratory experiments or numerical tests using a well-validated three-dimensional CFD model. Given the correlation of such parameters to the MRE-induced hydraulic resistance, empirical formulae were developed and directly implemented as sink terms in depth-averaged numerical solvers such as non-linear shallow-water (NLSW) models.
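As an illustration of what such a sink term looks like in a depth-averaged solver, the sketch below contrasts the classical Manning (quadratic) friction sink with a generic drag-type MRE sink. The drag coefficient and frontal-area density are placeholders for the empirical coefficients the study aims to derive, not values from the paper.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def manning_sink(u, h, n=0.03):
    """Classical quadratic (Manning) friction sink per unit mass [m/s^2]."""
    return -G * n**2 * u * np.abs(u) / h**(4.0 / 3.0)

def mre_sink(u, c_d=1.0, a_frontal=0.1):
    """Generic macro-roughness drag sink per unit mass [m/s^2].

    c_d and a_frontal (frontal area per unit volume, 1/m) stand in for the
    empirical coefficients derived from lab experiments or CFD simulations.
    """
    return -0.5 * c_d * a_frontal * u * np.abs(u)

# Example: momentum sinks in one computational element
u, h = 2.0, 1.5   # depth-averaged velocity [m/s], water depth [m]
print("Manning sink:", manning_sink(u, h))
print("MRE sink:    ", mre_sink(u))
```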


10.29007/t48n ◽  
2018 ◽  
Author(s):  
Claudio Angione ◽  
Giovanni Carapezza ◽  
Jole Costanza ◽  
Pietro Lio ◽  
Giuseppe Nicosia

If Turing were a first-year graduate student interested in computers, he would probably migrate into the field of computational biology. During his studies, he presented a work on a mathematical and computational model of the morphogenesis process, in which chemical substances react together. Moreover, a protein can be thought of as a computational element, i.e. a processing unit able to transform an input into an output signal. Thus, in a biochemical pathway, an enzyme reads the amount of reactants (substrates) and converts them into products. In this work, we consider the biochemical pathway in unicellular organisms (e.g. bacteria) as a living computer, and we are able to program it in order to obtain desired outputs. The genome sequence is thought of as an executable code specified by a set of commands in a sort of ad-hoc low-level programming language. Each combination of genes is coded as a string of bits $y \in \{0,1\}^L$, each of which represents a gene set. By turning off a gene set, we turn off the chemical reactions associated with it. Through an optimal executable code stored in the "memory" of bacteria, we are able to simultaneously maximise the concentration of two or more metabolites of interest. Finally, we use Robustness Analysis and a new Sensitivity Analysis method to investigate both the fragility of the computation carried out by bacteria and the most important entities in the mathematical relations used to model them.
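A minimal sketch of the encoding described above: each bit of $y \in \{0,1\}^L$ switches a gene set (and its associated reactions) on or off, and an objective built from the resulting metabolite concentrations is maximised. The evaluate_metabolites function is a toy placeholder for the pathway model, and the exhaustive search with a scalarised objective stands in for the paper's multi-objective optimisation.

```python
import itertools

L = 4  # number of gene sets (illustrative)

def evaluate_metabolites(y):
    """Toy placeholder for the pathway model: returns the concentrations of
    two metabolites of interest as a function of which gene sets are active.
    (In the paper this would be a pathway/flux simulation, not a formula.)
    """
    active = sum(y)
    m1 = active * 1.0 - 0.5 * y[0]           # toy dependence on the knockouts
    m2 = (L - active) * 0.8 + 0.3 * y[3]
    return m1, m2

best, best_score = None, float("-inf")
for y in itertools.product((0, 1), repeat=L):   # exhaustive search, feasible for small L
    m1, m2 = evaluate_metabolites(y)
    score = m1 + m2                             # simple scalarisation of the two objectives
    if score > best_score:
        best, best_score = y, score

print("best gene-set string:", best, "score:", best_score)
```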


2017 ◽  
Author(s):  
Birgit Kriener ◽  
Rishidev Chaudhuri ◽  
Ila R. Fiete

Identifying the maximal element (max,argmax) in a set is a core computational element in inference, decision making, optimization, action selection, consensus, and foraging. Running sequentially through a list of N fluctuating items takes N log(N) time to accurately find the max, prohibitively slow for large N. The power of computation in the brain is ascribed in part to its parallelism, yet it is theoretically unclear whether leaky and noisy neurons can perform a distributed computation that cuts the required time of a serial computation by a factor of N, a benchmark for parallel computation. We show that conventional winner-take-all neural networks fail the parallelism benchmark and in the presence of noise altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or re-scaling as the number of options N varies, the nWTA network converges N times faster than the serial strategy at equal accuracy, saturating the parallelism benchmark. The nWTA network self-adjusts integration time with task difficulty to maintain fixed accuracy without parameter change. Finally, the circuit generically exhibits Hick's law for decision speed. Our work establishes that distributed computation that saturates the parallelism benchmark is possible in networks of noisy, finite-memory neurons.
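A schematic Python sketch of the nWTA idea described above: leaky, noisy rate units with self-excitation and shared inhibition, where a second threshold nonlinearity gates each unit's contribution to the inhibition so that weakly active units do not suppress the others. Parameters and the simple Euler integration are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def nwta(inputs, theta=0.5, w_self=1.1, w_inh=0.6, tau=1.0,
         dt=0.01, sigma=0.05, t_max=50.0):
    """Schematic nWTA dynamics with a gating threshold on inhibition."""
    n = len(inputs)
    x = np.zeros(n)
    for _ in range(int(t_max / dt)):
        r = np.maximum(x, 0.0)                 # rectified firing rates
        gated = np.where(r > theta, r, 0.0)    # 2nd nonlinearity: only strongly
        inhibition = w_inh * gated.sum()       # active units contribute inhibition
        drive = inputs + w_self * r - inhibition
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
        x += dt * (-x + drive) / tau + noise
    return int(np.argmax(x))

b = np.array([0.40, 0.60, 0.45, 0.50])         # noisy evidence for 4 options
print("winner:", nwta(b))                      # the largest input should usually win
```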


2012 ◽  
pp. 1903-1923
Author(s):  
Ali Dogru ◽  
Pinar Senkul ◽  
Ozgur Kaya

The amazing evolution fuelled by the introduction of the computational element has already changed our lives and continues to do so. Initially, the fast advancement in hardware partially enabled an appreciation of the potency of software. This meant that engineers had to gain a better command of this field, which was crucial to the solution of current and future problems and requirements. However, software development has been reported as not adequate or mature enough. Intelligence can help close this gap. This chapter introduces the historical and modern aspects of software engineering from an artificial intelligence perspective. An illustrative example is also included that demonstrates a rule-based approach to the development of fault management systems.
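As a flavour of what a rule-based fault management approach can look like, here is a small Python sketch in which each rule pairs a condition on the monitored system state with a corrective action; the rules, state fields, and actions are invented for illustration and are not the chapter's example.

```python
# A minimal rule-based fault manager: each rule pairs a condition on the
# monitored system state with a corrective action (all made up for illustration).
RULES = [
    {
        "name": "overheating",
        "condition": lambda s: s["temperature_c"] > 90,
        "action": lambda s: "throttle CPU and increase fan speed",
    },
    {
        "name": "link-down",
        "condition": lambda s: not s["link_up"],
        "action": lambda s: "switch traffic to backup link",
    },
    {
        "name": "low-disk",
        "condition": lambda s: s["disk_free_gb"] < 5,
        "action": lambda s: "rotate logs and alert operator",
    },
]

def manage_faults(state):
    """Fire every rule whose condition matches the current state."""
    return [(r["name"], r["action"](state)) for r in RULES if r["condition"](state)]

if __name__ == "__main__":
    state = {"temperature_c": 95, "link_up": True, "disk_free_gb": 3}
    for name, action in manage_faults(state):
        print(f"fault '{name}': {action}")
```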


Author(s):  
Georg Pingen ◽  
David Meyer

The design of flow channels and surfaces to promote maximum heat transfer is of great importance, for example, in electronic cooling applications. In order to design surfaces optimized for maximum heat transfer under known flow conditions, a coupled thermal-fluid topology optimization approach based on the thermal lattice Boltzmann method (LBM) is introduced. Based on prior hydrodynamic topology optimization work, a Brinkman type porosity model is used. Every computational element/LBM node is varied continuously from fluid to solid, as traditionally done in fluidic topology optimization. This allows the formation of new boundaries and the generation of new designs. In addition, the present approach varies the thermal diffusivity from that of a fluid to that of a solid, permitting topology optimization for heat transfer applications while considering the thermal properties of both fluid and structure. The formulation of the optimization problem and sensitivity analysis is discussed and illustrated for a 2D example applicable to electronic cooling.
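To illustrate the kind of material interpolation such an approach relies on, the sketch below shows a commonly used Brinkman (inverse-permeability) interpolation together with a simple fluid-to-solid interpolation of thermal diffusivity, both as functions of a per-node design variable. The specific functional forms and constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# gamma = 1 -> fluid, gamma = 0 -> solid, varied continuously per LBM node.
# q controls the convexity of the Brinkman interpolation; alpha_max is the
# maximum inverse permeability. All values here are illustrative only.

def brinkman_alpha(gamma, alpha_max=1e4, q=0.1):
    """Inverse-permeability (Brinkman) interpolation: ~0 in fluid, large in solid.
    The resulting body force in the momentum equation is F = -alpha * u."""
    return alpha_max * q * (1.0 - gamma) / (q + gamma)

def thermal_diffusivity(gamma, k_fluid=0.15, k_solid=100.0):
    """Linear interpolation of thermal diffusivity between solid and fluid."""
    return k_solid + gamma * (k_fluid - k_solid)

for g in np.linspace(0.0, 1.0, 5):
    print(f"gamma={g:.2f}  alpha={brinkman_alpha(g):9.2f}  k={thermal_diffusivity(g):7.2f}")
```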

