MATSim-T

Author(s):  
Michael Balmer ◽  
Marcel Rieser ◽  
Konrad Meister ◽  
David Charypar ◽  
Nicolas Lefebvre ◽  
...  

Micro-simulations for transport planning are becoming increasingly important in traffic simulation, traffic analysis, and traffic forecasting. In recent decades, the shift from aggregated data to more detailed, individual-based, complex data (e.g. GPS tracking), together with continuously growing computer performance at a fixed price level, has made it possible to apply microscopic models to large-scale planning regions. This chapter presents such a micro-simulation. The work is part of the research project MATSim (Multi Agent Transport Simulation, http://matsim.org). The focus here lies on design and implementation issues as well as on the computational performance of different parts of the system. Based on a study of Swiss daily traffic – about 2.3 million individuals using motorized individual transport and producing about 7.1 million trips, assigned to a Swiss network model with about 60,000 links, simulated and optimized fully time-dynamically for a complete workday – it is shown that the system can generate those traffic patterns in about 36 hours of computation time.
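The abstract does not spell out the optimization loop; as a purely illustrative sketch (not the actual MATSim code or API), the kind of iterative simulate-score-replan cycle an agent-based transport simulation typically runs could look as follows, with all names and the stub travel-time model being assumptions:

```python
import random

class Agent:
    """A synthetic traveler with a daily plan: a list of (origin, destination, departure_time)."""
    def __init__(self, agent_id, plan):
        self.agent_id = agent_id
        self.plan = plan
        self.score = None

def simulate(agents, network):
    """Execute all plans on the network; here a stub returning random travel times (minutes)."""
    return {a.agent_id: sum(random.uniform(5, 30) for _ in a.plan) for a in agents}

def replan(agent):
    """Mutate the plan slightly, e.g. shift departure times (stand-in for real replanning)."""
    agent.plan = [(o, d, t + random.randint(-5, 5)) for (o, d, t) in agent.plan]

def run(agents, network, iterations=10, replan_fraction=0.1):
    """Iterate: execute plans, score them, let a fraction of agents adapt their plans."""
    for _ in range(iterations):
        travel_times = simulate(agents, network)
        for a in agents:
            a.score = -travel_times[a.agent_id]          # shorter total travel time scores higher
        for a in random.sample(agents, int(replan_fraction * len(agents))):
            replan(a)
    return agents

agents = [Agent(i, [("home", "work", 7 * 60), ("work", "home", 17 * 60)]) for i in range(100)]
run(agents, network=None)
```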

2013 ◽  
Vol 433-435 ◽  
pp. 1853-1856
Author(s):  
Ting Hong Zhao ◽  
Peng Fei Zhang ◽  
Hui Min Hou

Irrigation district informatization is an effective way to improve management and to rationally allocate and effectively utilize irrigation water resources. Addressing characteristics such as the large volume of monitoring data, complex data types, strict real-time requirements, and strong spatial correlation, this paper combines Multi-Agent theory with an irrigation district information system and uses a GSM network as the system's communication layer, establishing an agricultural irrigation district information system based on Multi-Agent and GSM. The system exploits the intelligence of individual agents and the communication and coordination capabilities of the Multi-Agent system to provide comprehensive technical support for irrigation management and decision making.
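The abstract gives no implementation details; as a rough, purely illustrative sketch of the multi-agent pattern it describes (field monitoring agents reporting to a management agent over some transport such as GSM), one might structure the agents as below. All class names, thresholds, and the in-memory channel are assumptions; a real deployment would replace the channel with actual GSM messaging.

```python
import json
from queue import Queue

class Channel:
    """Stand-in for the GSM link; a real system would wrap a modem or SMS gateway here."""
    def __init__(self):
        self.queue = Queue()
    def send(self, message: dict):
        self.queue.put(json.dumps(message))
    def receive(self) -> dict:
        return json.loads(self.queue.get())

class MonitoringAgent:
    """Collects local measurements (soil moisture, flow rate, ...) and reports them."""
    def __init__(self, agent_id, channel):
        self.agent_id = agent_id
        self.channel = channel
    def report(self, soil_moisture, flow_rate):
        self.channel.send({"agent": self.agent_id,
                           "soil_moisture": soil_moisture,
                           "flow_rate": flow_rate})

class ManagementAgent:
    """Aggregates reports and makes a simple irrigation decision."""
    def __init__(self, channel, moisture_threshold=0.25):
        self.channel = channel
        self.moisture_threshold = moisture_threshold
    def decide(self, n_reports):
        reports = [self.channel.receive() for _ in range(n_reports)]
        dry = [r["agent"] for r in reports if r["soil_moisture"] < self.moisture_threshold]
        return {"irrigate": dry}

channel = Channel()
MonitoringAgent("field-1", channel).report(soil_moisture=0.18, flow_rate=3.2)
MonitoringAgent("field-2", channel).report(soil_moisture=0.40, flow_rate=2.9)
print(ManagementAgent(channel).decide(n_reports=2))   # {'irrigate': ['field-1']}
```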


2018 ◽  
Author(s):  
Pavel Pokhilko ◽  
Evgeny Epifanovsky ◽  
Anna I. Krylov

Using a single-precision floating-point representation reduces the data size and computation time by a factor of two relative to the double precision conventionally used in electronic structure programs. For large-scale calculations, such as those encountered in many-body theories, the reduced memory footprint alleviates memory and input/output bottlenecks. The reduced data size can lead to additional gains due to improved parallel performance on CPUs and various accelerators. However, using single precision can potentially reduce the accuracy of computed observables. Here we report an implementation of coupled-cluster and equation-of-motion coupled-cluster methods with single and double excitations in single precision. We consider both the standard implementation and one using Cholesky decomposition or the resolution-of-the-identity representation of the electron-repulsion integrals. Numerical tests illustrate that when single precision is used in correlated calculations, the loss of accuracy is insignificant, and a pure single-precision implementation can be used for computing energies, analytic gradients, excited states, and molecular properties. In addition to pure single-precision calculations, our implementation allows one to follow a single-precision calculation with clean-up iterations, fully recovering double-precision results while retaining significant savings.
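As a quick illustration of the factor-of-two data-size argument (not code from the paper; the tensor size is purely illustrative), compare the memory footprint of the same four-index array stored in double and single precision:

```python
import numpy as np

# A mock 4-index tensor, loosely analogous in shape to two-electron integrals for a small basis.
n = 60
t_double = np.random.rand(n, n, n, n)        # float64 by default
t_single = t_double.astype(np.float32)       # single-precision copy

print(t_double.nbytes / 1e6, "MB in double precision")   # ~103.7 MB
print(t_single.nbytes / 1e6, "MB in single precision")   # ~51.8 MB

# Worst-case element-wise representation error introduced by the downcast:
print(np.max(np.abs(t_double - t_single.astype(np.float64))))
```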


2021 ◽  
Vol 22 (5) ◽  
pp. 2659
Author(s):  
Gianluca Costamagna ◽  
Giacomo Pietro Comi ◽  
Stefania Corti

In the last decade, different research groups in the academic setting have developed induced pluripotent stem cell-based protocols to generate three-dimensional, multicellular, neural organoids. Their use to model brain biology, early neural development, and human diseases has provided new insights into the pathophysiology of neuropsychiatric and neurological disorders, including microcephaly, autism, Parkinson’s disease, and Alzheimer’s disease. However, the adoption of organoid technology for large-scale drug screening in the industry has been hampered by challenges with reproducibility, scalability, and translatability to human disease. Potential technical solutions to expand their use in drug discovery pipelines include Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) to create isogenic models, single-cell RNA sequencing to characterize the model at a cellular level, and machine learning to analyze complex data sets. In addition, high-content imaging, automated liquid handling, and standardized assays represent other valuable tools toward this goal. Though several open issues still hamper the full implementation of the organoid technology outside academia, rapid progress in this field will help to prompt its translation toward large-scale drug screening for neurological disorders.


2019 ◽  
Vol 17 (06) ◽  
pp. 947-975 ◽  
Author(s):  
Lei Shi

We investigate distributed learning with a coefficient-based regularization scheme under the framework of kernel regression methods. Compared with classical kernel ridge regression (KRR), the algorithm under consideration does not require the kernel function to be positive semi-definite and hence provides a simple paradigm for designing indefinite kernel methods. The distributed learning approach partitions a massive data set into several disjoint subsets and then produces a global estimator by averaging the local estimators trained on each subset. Easily constructed partitions and running the algorithm on each subset in parallel lead to a substantial reduction in computation time compared with the standard approach of running the original algorithm on the entire sample. We establish the first minimax-optimal rates of convergence for the distributed coefficient-based regularization scheme with indefinite kernels. We thus demonstrate that, compared with distributed KRR, the algorithm under consideration is more flexible and effective for regression problems with large-scale data sets.
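A minimal sketch of the divide-and-conquer idea described here, assuming an l2 coefficient-based regularization scheme (the exact scheme and kernel in the paper may differ): because the penalty acts on the coefficient vector rather than on an RKHS norm, the kernel matrix need not be positive semi-definite, and the global estimator is simply the average of the local ones.

```python
import numpy as np

def indefinite_kernel(X, Y):
    """Example kernel that is not positive semi-definite (difference of two Gaussians)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2) - 0.5 * np.exp(-d2 / 4.0)

def local_estimator(X, y, lam):
    """Coefficient-based l2 regularization: alpha = argmin ||y - K a||^2 + lam * n * ||a||^2."""
    K = indefinite_kernel(X, X)
    n = len(y)
    alpha = np.linalg.solve(K.T @ K + lam * n * np.eye(n), K.T @ y)
    return X, alpha

def distributed_fit(X, y, m_subsets, lam):
    """Partition the data and fit a local estimator on each subset (could run in parallel)."""
    parts = np.array_split(np.random.permutation(len(y)), m_subsets)
    return [local_estimator(X[idx], y[idx], lam) for idx in parts]

def distributed_predict(models, X_test):
    """Global estimator = average of the local estimators."""
    preds = [indefinite_kernel(X_test, Xs) @ a for Xs, a in models]
    return np.mean(preds, axis=0)

# Toy usage on a synthetic regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(2000)
models = distributed_fit(X, y, m_subsets=10, lam=1e-3)
X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
print(distributed_predict(models, X_test))
```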


Symmetry ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 631
Author(s):  
Chunyang Hu

In this paper, deep reinforcement learning (DRL) and knowledge transfer are used to achieve effective control of learning agents for confrontation in multi-agent systems. First, a multi-agent Deep Deterministic Policy Gradient (DDPG) algorithm with parameter sharing is proposed to achieve multi-agent confrontation decision-making. During training, information from the other agents is introduced into the critic network to improve the confrontation strategy. The parameter-sharing mechanism can also reduce the overhead of experience storage. In the DDPG algorithm, we use four neural networks to generate real-time actions and Q-value estimates, respectively, and use a momentum mechanism to optimize the training process and accelerate the convergence of the neural networks. Second, this paper introduces an auxiliary controller using a policy-based reinforcement learning (RL) method to assist the decision-making of the game agent. In addition, an effective reward function is used to help agents balance losses between the enemy side and their own. Furthermore, this paper also uses a knowledge transfer method to extend the learning model to more complex scenes and improve the generalization of the proposed confrontation model. Two confrontation decision-making experiments are designed to verify the effectiveness of the proposed method. In a small-scale task scenario, the trained agent successfully learns to fight against competitors and achieves a good winning rate. For large-scale confrontation scenarios, the knowledge transfer method can gradually improve the decision-making level of the learning agent.
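A minimal PyTorch sketch of the parameter-sharing idea described above (illustrative only; the network sizes, the SGD-with-momentum optimizer, and the exact critic input are assumptions, not the paper's configuration): every agent acts through one shared actor, while the critic also sees the other agents' observations and actions, and target copies of both networks give the four networks mentioned in the abstract.

```python
import torch
import torch.nn as nn

class SharedActor(nn.Module):
    """One actor network reused by every agent (parameter sharing)."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Critic conditioned on all agents' observations and actions (centralized training)."""
    def __init__(self, obs_dim, act_dim, n_agents):
        super().__init__()
        joint = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, all_obs, all_acts):
        return self.net(torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=1))

n_agents, obs_dim, act_dim = 3, 8, 2
actor = SharedActor(obs_dim, act_dim)
critic = CentralCritic(obs_dim, act_dim, n_agents)

# Target copies of actor and critic (four networks in total), as in standard DDPG.
target_actor = SharedActor(obs_dim, act_dim)
target_critic = CentralCritic(obs_dim, act_dim, n_agents)
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())

# Momentum-based optimization of the training process (assumed hyper-parameters).
actor_opt = torch.optim.SGD(actor.parameters(), lr=1e-3, momentum=0.9)
critic_opt = torch.optim.SGD(critic.parameters(), lr=1e-3, momentum=0.9)

obs = torch.randn(16, n_agents, obs_dim)   # a batch of joint observations
acts = actor(obs)                          # the shared actor acts for every agent
q = critic(obs, acts)                      # centralized Q-value per joint state
print(acts.shape, q.shape)                 # torch.Size([16, 3, 2]) torch.Size([16, 1])
```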


Author(s):  
Vinay Sriram ◽  
David Kearney

High-speed infrared (IR) scene simulation is used extensively in defense and homeland security to test the sensitivity of IR cameras and the accuracy of IR threat detection and tracking algorithms commonly used in IR missile approach warning systems (MAWS). A typical MAWS requires an input scene rate of over 100 scenes/second. Infrared scene simulations typically take 32 minutes to simulate a single IR scene that accounts for the effects of atmospheric turbulence, refraction, optical blurring, and charge-coupled device (CCD) camera electronic noise on a Pentium 4 (2.8 GHz) dual-core processor [7]. Thus, in IR scene simulation, the processing power of modern computers is a limiting factor. In this paper we report our research on accelerating IR scene simulation using high-performance reconfigurable computing. We constructed a multi Field Programmable Gate Array (FPGA) hardware acceleration platform and accelerated a key computationally intensive IR algorithm on it. We succeeded in reducing the computation time of IR scene simulation by over 36%. This research serves as a unique case study in accelerating large-scale defense simulations using a high-performance multi-FPGA reconfigurable computer.
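A back-of-the-envelope check of the figures quoted above (purely arithmetic, not taken from the paper) shows what the reported reduction implies relative to the real-time requirement:

```python
baseline_minutes = 32        # CPU time per scene quoted for the Pentium 4 baseline [7]
reduction = 0.36             # "over 36%" reduction reported for the FPGA platform

accelerated_minutes = baseline_minutes * (1 - reduction)
print(f"~{accelerated_minutes:.1f} minutes per scene after acceleration")   # ~20.5 minutes

required_scene_rate = 100                        # scenes/second needed by a typical MAWS
print(f"real-time target: {1000 / required_scene_rate:.0f} ms per scene")   # 10 ms
```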

