single processor
Recently Published Documents


TOTAL DOCUMENTS: 208 (five years: 21)

H-INDEX: 18 (five years: 1)

2021
Author(s): Cyrille Mascart, Gilles Scarella, Patricia Reynaud-Bouret, Alexandre Muzy

We present a new algorithm, based on a random model, for efficiently simulating large brain neuronal networks. Model parameters (mean firing rate, number of neurons, synaptic connection probability and postsynaptic duration) are easy to calibrate against real experimental data. Under a time-asynchrony assumption, both the computational and memory complexities are proved to be theoretically linear in the number of neurons. These results are experimentally validated by sequential simulations of millions of neurons and billions of synapses in a few minutes on a single-processor desktop computer.
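
No code accompanies the abstract; the Python sketch below illustrates, under stated assumptions, the kind of event-driven loop that makes time asynchrony pay off: a priority queue of next-spike times means each event touches only the firing neuron and its targets, and fixed-degree adjacency lists keep memory linear in the number of neurons. The Poisson firing model, parameters, and function names are illustrative, not the authors' algorithm.

import heapq
import random

def simulate(n_neurons, out_degree, rate, t_end, seed=0):
    # Hypothetical event-driven spiking loop (not the paper's code).
    # Each neuron fires as a Poisson process with the given mean rate.
    rng = random.Random(seed)
    # Sparse connectivity: a fixed out-degree keeps memory linear in
    # the number of neurons.
    targets = [[rng.randrange(n_neurons) for _ in range(out_degree)]
               for _ in range(n_neurons)]
    # Priority queue of (next spike time, neuron): O(log n) per event,
    # and each event touches only the firing neuron and its targets.
    queue = [(rng.expovariate(rate), i) for i in range(n_neurons)]
    heapq.heapify(queue)
    spike_count = 0
    while queue:
        t, i = heapq.heappop(queue)
        if t > t_end:
            break
        spike_count += 1
        for j in targets[i]:
            pass  # the postsynaptic effect on neuron j would be applied here
        heapq.heappush(queue, (t + rng.expovariate(rate), i))
    return spike_count

print(simulate(n_neurons=10_000, out_degree=100, rate=10.0, t_end=0.1))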


2021
Author(s): Lubomir Jirasek

A two-step partitioning algorithm for FE meshes is proposed in this work for the purpose of saving time. A direct method based on the concept of 'separateness' was applied first, followed by a partition optimization process using a Genetic Algorithm (GA). A total of 9 applications were evaluated to demonstrate the robustness, versatility, and effectiveness of the partitioning algorithm with respect to interface node count and subdomain load balance. Beyond this, a wingbox optimization problem was solved on a single processor using a GA to demonstrate the possible time savings of the method. With a 30% decrease in compute time observed, the proposed partitioning algorithm can be considered a success.
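
The abstract outlines the second step (GA-based partition refinement against interface node count and load balance) without implementation detail. The Python sketch below is a minimal, generic GA over element labels with a fitness combining those two terms; the chromosome encoding, operators, and penalty weights are assumptions for illustration, not the work's method.

import random

def ga_partition(adjacency, n_parts, pop_size=30, generations=200, seed=0):
    # Hypothetical GA refinement of a mesh partition (not the work's code).
    # Chromosome: one subdomain label per element.
    rng = random.Random(seed)
    n = len(adjacency)

    def fitness(labels):
        # Penalize interface edges and subdomain load imbalance; the
        # weight of 2 on imbalance is an arbitrary modelling choice.
        interface = sum(1 for i in range(n) for j in adjacency[i]
                        if labels[i] != labels[j])
        sizes = [labels.count(p) for p in range(n_parts)]
        return interface + 2 * (max(sizes) - min(sizes))

    pop = [[rng.randrange(n_parts) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.3:             # mutation: relabel one element
                child[rng.randrange(n)] = rng.randrange(n_parts)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Tiny demo: an 8-element ring, split into 2 subdomains.
ring = [[(i - 1) % 8, (i + 1) % 8] for i in range(8)]
print(ga_partition(ring, n_parts=2, pop_size=10, generations=50))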


Author(s): Goodhead T. Abraham, Evans F. Osaisai, Nicholas S. Dienagha

As Grid computing continues to make inroads into different spheres of our lives and multicore computers become ubiquitous, leveraging the gains of multicore computers for the scheduling of Grid jobs becomes a necessity. Most Grid schedulers remain sequential in nature and are inadequate for meeting the growing data and processing needs of the Grid. Also, the leakage of Moore’s dividend continues, as most computing platforms still depend on the underlying hardware for increased performance. Leveraging the Grid for the data challenge of the future requires a shift away from the traditional sequential method. This work extends the work of [1] on a quadcore system. A random method was used to group machines and the total processing power of the machines in each group was computed; a size-proportional-to-speed method was then used to estimate the share of jobs to allocate to each machine group. The MinMin scheduling algorithm was implemented within the groups to schedule a range of jobs while varying the number of groups and threads. The experiment was executed on a single-processor system and on a quadcore system. Significant improvement was achieved using the group method on the quadcore system compared to ordinary MinMin on the quadcore. We also find significant performance improvement as the number of groups increases. Finally, we find that the MinMin algorithm itself also gained marginally from the quadcore system, meaning that it too is scalable.
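
The abstract does not include code, but the grouping scheme it describes is concrete enough to sketch. The following Python sketch (sequential, for clarity) shows random machine grouping, speed-proportional job allocation, and MinMin within each group; all names and the demo numbers are illustrative assumptions, not the authors' implementation, and a real version would run the per-group schedulers on separate threads or cores.

import random

def minmin(jobs, machines):
    # Classic MinMin: repeatedly commit the job/machine pair with the
    # earliest completion time. jobs are sizes; machines are speeds.
    ready = [0.0] * len(machines)
    remaining = list(jobs)
    while remaining:
        finish, size, m = min((ready[m] + size / machines[m], size, m)
                              for size in remaining
                              for m in range(len(machines)))
        ready[m] = finish
        remaining.remove(size)
    return ready  # per-machine finish times; max(ready) is the makespan

def grouped_minmin(jobs, machines, n_groups, seed=0):
    # Sketch of the described scheme: random machine groups, jobs split
    # in proportion to each group's total speed, MinMin inside each group.
    rng = random.Random(seed)
    idx = list(range(len(machines)))
    rng.shuffle(idx)
    groups = [idx[g::n_groups] for g in range(n_groups)]
    total_speed = sum(machines)
    schedules, start = [], 0
    for gi, group in enumerate(groups):
        if gi == n_groups - 1:
            batch = jobs[start:]          # last group takes the remainder
        else:
            share = sum(machines[m] for m in group) / total_speed
            take = round(share * len(jobs))
            batch, start = jobs[start:start + take], start + take
        schedules.append(minmin(batch, [machines[m] for m in group]))
    return schedules

# Demo: 10 jobs, 4 machines of differing speeds, 2 groups.
print(grouped_minmin([5, 3, 8, 2, 7, 4, 6, 1, 9, 2],
                     [1.0, 2.0, 1.5, 0.5], n_groups=2))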


2021
Author(s): Ad Stoffelen, Gert-Jan Marseille, Weicheng Ni, Alexis Mouche, Federica Polverari, ...

How strongly the wind blows in a hurricane proves a difficult question to answer, but one with far-reaching consequences for satellite meteorology, weather forecasting and hurricane advisories. Moreover, huge year-to-year variability in the extremes challenges evidence for changing hurricane climatology in a changing climate. Tropical circulation conditions, such as El Niño and the Madden-Julian Oscillation, are associated with the large year-to-year variability, and their link to climate change is poorly understood, though of great societal interest. Since hurricanes are sparsely sampled, satellite instruments are in principle very useful for monitoring climate change. However, their stability over time in quality and quantity (sampling) needs to be guaranteed. Moreover, to use the longest possible satellite record, satellite instrument intercalibration of the extremes is needed [6]. This applies both to a single instrument using a single processor version (calibration, Quality Control, Geophysical Model Function, retrieval) for change detection over typically a decade, and to the use of overlapping single-instrument/single-processor series for climate analyses. Currently, systematic inconsistencies in the extremes exist, as illustrated within the European Union (EU) Copernicus Climate Change Windstorm Information Service (C3S WISC*) and European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) C-band High and Extreme-Force Speeds (CHEFS^) projects. Besides the scatterometers ERS, QuikScat, ASCAT and OSCAT, these instrument series may be extended to passive microwave wind instruments from 1979, if proven reliable at the extremes.

In the EUMETSAT CHEFS project, KNMI, ICM and IFREMER worked with international colleagues to improve the detection of hurricane-force winds. To calibrate the diverse available satellite, airplane and model winds, in-situ wind speed references are needed. Unfortunately, these prove rather inconsistent in the wind speed range of 15 to 25 m/s, casting doubt on the higher winds too. However, dropsondes are used operationally as a reference at high and extreme winds in nowcasting, and in the European Space Agency (ESA) project MAXSS, satellite intercalibration based on dropsondes is further investigated to serve this community. From a scientific point of view, however, should we perhaps put more confidence in the moored buoy references? This would favor accuracy in drag parameterizations and in physical modelling and observation of the extremes. This dilemma will be presented to initiate a discussion with the international community gathered at EGU '21.

* Windstorm Information Service: https://wisc.climate.copernicus.eu/

^ C-band High and Extreme-Force Speeds: https://www.eumetsat.int/chefs


2021
Vol 31 (3)
Author(s): Ajay Jasra, Kody J. H. Law, Deng Lu

Abstract: We consider the problem of estimating a parameter $\theta \in \Theta \subseteq \mathbb{R}^{d_\theta}$ associated with a Bayesian inverse problem. Typically one must resort to a numerical approximation of the gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e. the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation in terms of the original stochastic model of interest, but can also be used in stochastic gradient algorithms, which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but also of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to that of multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm is highly amenable to parallel computation.
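
The unbiasedness property described here is commonly obtained via randomized-multilevel (Rhee-Glynn-type) debiasing of a hierarchy of discretizations. The Python sketch below shows that generic single-term construction, not the authors' specific estimator; the level distribution, the truncation, and the grad_at_level stub are illustrative assumptions.

import random

def unbiased_gradient(grad_at_level, max_level=15, base=2.0, seed=0):
    # Single-term randomized-multilevel (Rhee-Glynn-type) debiasing sketch,
    # not the authors' estimator. grad_at_level(l) must return a gradient
    # estimate under discretization level l that converges as l grows.
    rng = random.Random(seed)
    # Geometric level probabilities p_l proportional to base**(-l); the
    # truncation at max_level is a simplification for this sketch (a fully
    # unbiased estimator puts mass on all levels).
    weights = [base ** (-l) for l in range(max_level + 1)]
    total = sum(weights)
    probs = [w / total for w in weights]
    level = rng.choices(range(max_level + 1), weights=probs)[0]
    # E[(G_L - G_{L-1}) / p_L] telescopes to the exact gradient, and the
    # variance is finite when the level increments decay fast enough.
    increment = grad_at_level(level)
    if level > 0:
        increment -= grad_at_level(level - 1)
    return increment / probs[level]

# Demo with a scalar stand-in: the level-l "gradient" 1 - 2**-l has limit 1.
estimates = [unbiased_gradient(lambda l: 1.0 - 2.0 ** (-l), seed=s)
             for s in range(10_000)]
print(sum(estimates) / len(estimates))  # close to 1 (up to truncation)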


Author(s): Alexander Kostin

A very fast scheduling system is proposed and experimentally investigated. The system consists of a job shop manager and dynamic models of machines. A schedule is created in the course of close cooperation with the machine models, which generate driving events for the scheduler. The system is implemented with a new class of extended Petri nets and runs in the environment of the Petri-net tool WINSIM. The scheduler creates a schedule sequentially, without any form of enumerative search. To investigate the scheduler's performance, a large number of experiments were conducted using a few strategies. Due to a unique mechanism for monitoring triggering events in the Petri net, the developed scheduler runs at least hundreds of times faster than any known single-processor job shop scheduler.


Author(s): Natalia S. Grigoreva

The problem of minimizing the maximum delivery time while scheduling tasks on a single processor is a classical combinatorial optimization problem. Each task u_i must be processed without interruption for t(u_i) time units on the machine, which can process at most one task at a time. Each task u_i has a release time r(u_i), when the task becomes ready for processing, and a delivery time q(u_i); its delivery begins immediately after its processing has been completed. The objective is to minimize the time by which all jobs are delivered. In the Graham notation this problem is denoted by 1|r_j, q_j|C_max; it has many applications and is NP-hard in the strong sense. The problem is useful in solving flowshop and job shop scheduling problems. The goal of this article is to propose a new 3/2-approximation algorithm for problem 1|r_j, q_j|C_max which runs in O(n log n) time. An example is provided which shows that the bound of 3/2 is tight. To compare the effectiveness of the proposed algorithms, randomly generated problems with up to 5000 tasks were tested.
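
For context, the classic baseline for 1|r_j, q_j|C_max is Schrage's extended Jackson's rule, an O(n log n) heuristic with worst-case ratio 2, on which several known 3/2-approximation algorithms build. The Python sketch below implements that baseline, not the article's new algorithm.

import heapq

def schrage(tasks):
    # Extended Jackson's rule for 1|r_j, q_j|C_max (illustrative baseline,
    # not the article's 3/2-approximation). tasks: list of (r, t, q) triples
    # with release time r, processing time t, delivery time q. Among the
    # released tasks, always run the one with the largest delivery time.
    # Returns the makespan: the maximum of completion time + delivery time.
    by_release = sorted(tasks)                 # sorted by release time r
    ready = []                                 # max-heap on q via negation
    time, makespan, i = 0, 0, 0
    while i < len(by_release) or ready:
        # Release everything available by the current time.
        while i < len(by_release) and by_release[i][0] <= time:
            r, t, q = by_release[i]
            heapq.heappush(ready, (-q, t))
            i += 1
        if not ready:                          # idle until the next release
            time = by_release[i][0]
            continue
        neg_q, t = heapq.heappop(ready)
        time += t
        makespan = max(makespan, time + (-neg_q))
    return makespan

# Example with three tasks (r, t, q); prints 13.
print(schrage([(0, 4, 5), (1, 2, 7), (3, 3, 2)]))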


2021
Vol 51 (5)
Author(s): Andre Rozemberg Peixoto Simões, Charles Frederick Nicholson, Janderson Damaceno dos Reis, Roberto Max Protil, André Luiz Julien Ferraz, ...

ABSTRACT: The current study explores variables associated with the loyalty of dairy farmers to dairy processors in the Brazilian context. A multivariate discrete choice (Logit) model and alternative formulations assess the associations between loyalty metrics and farm and processor characteristics for a sample of 32 dairy farmers in 16 municipalities in the Zona da Mata region of Minas Gerais. Twenty-two dairy processors were identified as milk buyers in the area studied, but each farmer indicated that they could sell to an average of five alternative milk buyers. Farmers’ attributes such as production scale and technological level are not statistically significantly associated with loyalty in this sample. The current milk price paid to farmers in our sample is not associated with increased loyalty (sales to a single processor for 6 or more years) in any of the estimated models, although further research on this impact is merited to inform buyer-pricing policy. Variables associated with increased loyalty include payment of premiums for quality, farmers’ years of experience, and cooperation among farmers in the purchase of inputs. Delayed payment is associated with reduced loyalty. We could not determine the effect on loyalty of participation in technical assistance programs offered by processors, because all farmers in our sample received free university-provided technical assistance. The payment of a premium based on milk volume was also not associated with loyalty. The small size of our sample limits the ability to generalize our results, but it provides exploratory results that facilitate future investigation.

