GERARD: GEneral RApid Resolution of Digital Mazes Using a Memristor Emulator


Physics ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 1-11
Author(s):  
Pablo Dopazo ◽  
Carola de Benito ◽  
Oscar Camps ◽  
Stavros G. Stavrinides ◽  
Rodrigo Picos

Memristive technology is a promising game-changer in computers and electronics. In this paper, a system for exploring the optimal paths through a maze, built on a memristor-based setup, is developed and implemented on an FPGA (field-programmable gate array) device. A digital emulator is used as the memristor. In the proposed approach, the memristor acts as a delay element, and the test graph is configured as a memristor network. A parallel algorithm is then applied, successfully reducing computing time and increasing the system's efficiency. The proposed system is simple, easy to scale up, and capable of implementing different graph configurations. The operation of the algorithm is first checked in the MATLAB (matrix laboratory) programming environment and then exported to two different Intel FPGAs: a DE0-Nano board and an Arria 10 GX 220 FPGA. In both cases, reliable results are obtained quickly and conveniently, even for a 300 × 300 node maze.
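The delay-element idea can be sketched in software: a wavefront that spends one time step per node reaches every node in order of its path length, so the first arrival at the goal marks a shortest path. The sketch below is an illustrative breadth-first wavefront over a grid maze, not the authors' FPGA design or memristor emulator; the function name and grid encoding are assumptions.

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first wavefront over a maze grid (0 = open, 1 = wall).

    Software analogue of the delay-network idea: the wavefront
    reaches each node in order of path length, so the first arrival
    at `goal` identifies a shortest path.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}  # also serves as the visited set
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk parent pointers back to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable
```

On hardware, every frontier cell advances simultaneously in one clock period, which is where the parallel speedup of the memristor-network formulation comes from.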


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Ming Cai ◽  
Yifan Yao ◽  
Haibo Wang

The complexity of 3D buildings and road networks makes the simulation of urban noise both difficult and significant. To address this computational complexity, a systematic methodology for computing urban traffic noise maps under complex 3D building environments on a supercomputer is presented. A parallel algorithm that controls the compute nodes of the supercomputer is designed. Moreover, a rendering method is provided to visualize the noise map, and a strategy for obtaining a real-time dynamic noise map is elaborated. Two efficiency experiments are implemented. The first examines the scalability of the parallel algorithm under various numbers of compute nodes and various computing scales: the computing speed grows with the number of compute nodes, and a larger computing scale leads to higher computing efficiency. The second compares the computing speed of a supercomputer with that of a normal computer; a computing node of Tianhe-2 is found to be six times faster than a normal computer. Finally, the traffic-noise suppression effect of buildings is analyzed: building groups are found to have an obvious shielding effect on traffic noise.
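Because each receiver point of a noise map can be evaluated independently, the map is a natural fit for sharding across compute nodes. The sketch below illustrates that decomposition pattern only; the propagation model is a toy placeholder (the paper's model traces paths through the 3D building scene), and worker threads stand in for supercomputer nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def noise_at(receiver):
    """Toy stand-in for the per-receiver propagation model.

    Returns a level in dB that falls off with distance from the
    origin; purely illustrative, not the paper's acoustic model.
    """
    x, y = receiver
    return 70.0 - 0.1 * (x + y)

def compute_noise_map(receivers, workers=4):
    """Shard the receiver list across workers, one result per receiver.

    The same structure applies when the workers are supercomputer
    nodes rather than local threads.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(noise_at, receivers))
```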


Author(s):  
Rui Zou ◽  
Sourabh Bhattacharya

In this work, we analyze approximations of capture sets [1] for a visibility-based pursuit-evasion game. In contrast to the capture problem, in our problem the pursuer tries to maintain a line-of-sight with the evader in free space. We extend the concept of the U set, initially proposed in [2] for holonomic players, to the scenario in which the pursuer is non-holonomic. The problem of computing the U set is reduced to that of computing time-optimal paths for the non-holonomic vehicles to an arbitrary line. We characterize the primitives of time-optimal paths for the Dubins vehicle, the Reeds-Shepp car, and a differential-drive robot. Based on these primitives, we construct the optimal paths and provide an algorithm to compute the U set.


2001 ◽  
Vol 11 (01) ◽  
pp. 125-138 ◽  
Author(s):  
H. MONGELLI ◽  
S. W. SONG

Given a text and a pattern, the problem of pattern matching consists of determining all the positions of the text where the pattern occurs. When the text and the pattern are matrices, the matching is termed bidimensional. There are variations of this problem in which we allow matching against a modified pattern; the modification we allow here is that the pattern can be scaled. We propose a new parallel algorithm for this problem under the CGM (Coarse Grained Multicomputer) model. The algorithm requires local computing time linear in the input, linear memory, and only one communication round, during which at most a linear amount of data is exchanged. To the best of our knowledge, there are no other parallel algorithms for the bidimensional pattern matching problem with scaling in the literature. The proposed algorithm was implemented in C using the PVM interface and executed on a Parsytec PowerXplorer parallel machine. The experimental results were very promising and showed significant speedups.
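To make the problem statement concrete, the sketch below checks whether a pattern scaled by an integer factor k (each pattern cell blown up to a k × k block) occurs at a given position of a 2D text. This is a naive sequential reference check, assuming integer scaling factors; it is not the authors' CGM-parallel algorithm, and the function names are hypothetical.

```python
def occurs_scaled(text, pattern, i, j, k):
    """True if `pattern`, scaled by integer factor k, occurs in the
    2D `text` with its top-left corner at row i, column j.

    Scaling by k means every pattern cell (r, c) must cover the
    k x k block of text cells whose indices floor-divide to (r, c).
    """
    pr, pc = len(pattern), len(pattern[0])
    if i + pr * k > len(text) or j + pc * k > len(text[0]):
        return False  # scaled pattern would overrun the text
    for r in range(pr * k):
        for c in range(pc * k):
            if text[i + r][j + c] != pattern[r // k][c // k]:
                return False
    return True

def find_scaled(text, pattern, k):
    """All top-left positions where the k-scaled pattern occurs."""
    return [(i, j)
            for i in range(len(text))
            for j in range(len(text[0]))
            if occurs_scaled(text, pattern, i, j, k)]
```

Under the CGM model, work of this kind is partitioned so that each of the p processors handles a coarse block of the text and the single communication round exchanges only the block boundaries.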


2011 ◽  
Vol 08 (03) ◽  
pp. 597-609 ◽  
Author(s):  
Y. T. ZHOU ◽  
Z. H. HE ◽  
Z. G. WU

An adaptive parallel algorithm for hierarchical clustering based on the PRAM model is presented. Several approaches are devised to produce the optimized clustered data set: data preprocessing based on the "90-10" rule to decrease the size of the data set, a parallel algorithm that progressively builds Euclidean minimum spanning trees on the complete graph, and an algorithm that determines the split strategies and handles memory conflicts. The data set is clustered under the conflict-free-memory, lowest-cost, and weakest PRAM-EREW model. n data points are clustered in O((λn)²/p) time (0.1 ≤ λ ≤ 0.3) by this algorithm using p processors (1 ≤ p ≤ n/log(n)). The parallel hierarchical clustering algorithm based on the PRAM model is adaptive and free of memory conflicts. The computing time can be significantly reduced when the original input data is effectively preprocessed with the improved preprocessing methods presented in this paper.
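The connection between Euclidean minimum spanning trees and hierarchical clustering can be made concrete with the classic single-linkage construction: build the MST of the complete Euclidean graph, then cut its k−1 longest edges, leaving k connected components as the clusters. The sketch below is a sequential reference version (Prim's algorithm plus union-find), not the paper's PRAM-EREW algorithm; function names are hypothetical.

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph.
    Returns the n-1 MST edges as (dist, u, v) tuples."""
    n = len(points)
    in_tree = [False] * n
    best = [(math.inf, -1)] * n  # (cheapest distance into tree, via node)
    best[0] = (0.0, 0)
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i][0])
        in_tree[u] = True
        if best[u][1] != u:  # skip the root's self-edge
            edges.append((best[u][0], best[u][1], u))
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v][0]:
                    best[v] = (d, u)
    return edges

def single_linkage(points, k):
    """Cut the k-1 longest MST edges; the connected components of
    what remains are the k single-linkage clusters. Returns one
    representative label per point."""
    keep = sorted(mst_edges(points))[:len(points) - k]
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for _, u, v in keep:
        parent[find(u)] = find(v)
    return [find(i) for i in range(len(points))]
```

The parallel version in the paper distributes exactly this MST construction across the p EREW processors, which is where the O((λn)²/p) bound comes from after preprocessing shrinks n by the factor λ.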


Author(s):  
M.A. O'Keefe ◽  
Sumio Iijima

We have extended the multi-slice method of computing many-beam lattice images of perfect crystals to calculations for imperfect crystals using the artificial superlattice approach. Electron waves scattered from faulted regions of crystals are distributed continuously in reciprocal space, and all these waves interact dynamically with each other to give diffuse scattering patterns. In the computation, this continuous distribution can be sampled only at a finite number of regularly spaced points in reciprocal space, so finer sampling gives an improved approximation. A larger cell also allows us to defocus the objective lens further before adjacent defect images overlap and produce spurious computational Fourier images. However, smaller cells allow us to sample the direct-space cell more finely; since the two-dimensional arrays in our program are limited to 128 × 128 and the sampling interval should be less than 1/2 Å (and preferably only 1/4 Å), superlattice sizes are limited to 40 to 60 Å. Apart from finding a compromise superlattice cell size, computing time must be conserved.
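The cell-size limit follows directly from the stated numbers: a fixed 128 × 128 array covering a cell at a given real-space sampling interval can span at most (array points × interval) per side. A minimal arithmetic sketch (the helper name is ours, not from the paper):

```python
def max_cell_size(array_points, sampling_interval):
    """Largest superlattice cell side (Å) a square array of
    `array_points` samples can cover at `sampling_interval` (Å)."""
    return array_points * sampling_interval

# 128 points at the 0.5 Å limit cap the cell at 64 Å per side;
# at the preferred 0.25 Å sampling the cap drops to 32 Å.
# The quoted 40-60 Å working range sits between these two bounds,
# trading sampling fineness against defect separation.
```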


Author(s):  
P.-F. Staub ◽  
C. Bonnelle ◽  
F. Vergand ◽  
P. Jonnard

Characterizing dimensionally and chemically nanometric structures, such as surface segregation or interface phases, can be performed efficiently using electron probe (EP) techniques at very low excitation conditions, i.e., small incident energies (0.5 < E0 < 5 keV) and low incident overvoltages (1 < U0 < 1.7). In such extreme conditions, classical analytical EP models are generally pushed to their validity limits in terms of accuracy and physical consistency, and Monte Carlo simulations are not convenient as routine tools because of their cost in computing time. In this context, we have developed an intermediate procedure, called IntriX, in which the ionization depth distributions Φ(ρz) are numerically reconstructed by integrating basic macroscopic physical parameters describing the electron beam/matter interaction, all of which are available in pre-established analytical forms. IntriX's procedure consists of dividing the ionization depth distribution into three separate contributions:

