high performance computers
Recently Published Documents


TOTAL DOCUMENTS

255
(FIVE YEARS 35)

H-INDEX

21
(FIVE YEARS 2)

2022 ◽  
Vol 15 (1) ◽  
pp. 63
Author(s):  
Natarajan Arul Murugan ◽  
Artur Podobas ◽  
Davide Gadioli ◽  
Emanuele Vitali ◽  
Gianluca Palermo ◽  
...  

Drug discovery is the most expensive, time-consuming, and challenging undertaking in biopharmaceutical companies; it aims at the identification and optimization of lead compounds from large chemical libraries. The lead compounds should bind a disease-associated target with high affinity and specificity and, in addition, should have favorable pharmacodynamic and pharmacokinetic properties (grouped as ADMET properties). Overall, drug discovery is a multivariable optimization problem and can be carried out on supercomputers using a reliable scoring function, which is a measure of the binding affinity or inhibition potential of a drug-like compound. The major problem is that the number of compounds in the chemical spaces is huge, making computational drug discovery very demanding. However, it is cheaper and less time-consuming when compared to experimental high-throughput screening. As the problem is to find the most stable (global) minima for numerous protein–ligand complexes (on the order of 10^6 to 10^12), parallel implementations of in silico virtual screening can be exploited to complete drug discovery in affordable time. In this review, we discuss such implementations of parallelization algorithms in virtual screening programs. The nature of different scoring functions and search algorithms is discussed, together with a performance analysis of several docking software packages ported to high-performance computing architectures.
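The parallelization the review surveys rests on one observation: each compound in the library can be scored independently. A minimal sketch, assuming a purely hypothetical `score` function in place of a real docking code:

```python
from concurrent.futures import ThreadPoolExecutor

def score(ligand):
    # Hypothetical stand-in for a docking scoring function: a real code
    # would search ligand poses against the target and return an
    # estimated binding free energy (lower = tighter predicted binding).
    return -(sum(map(ord, ligand)) % 100) / 10.0

def screen(library, top_k=3, workers=4):
    """Score every compound independently and keep the top_k best hits.

    Each evaluation is independent, so the loop is embarrassingly
    parallel; threads here stand in for the MPI ranks or GPU workers
    a production virtual screen would use.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(score, library))
    return sorted(zip(scores, library))[:top_k]
```

Because no worker depends on another's result, throughput scales with the number of workers until the ranking step, which touches all scores, becomes the bottleneck.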


2021 ◽  
Vol 11 (6) ◽  
Author(s):  
Agastya P. Bhati ◽  
Shunzhou Wan ◽  
Dario Alfè ◽  
Austin R. Clyde ◽  
Mathis Bode ◽  
...  

The race to meet the challenges of the global pandemic has served as a reminder that the existing drug discovery process is expensive, inefficient and slow. A major bottleneck lies in screening the vast number of potential small molecules to shortlist lead compounds for antiviral drug development. New opportunities to accelerate drug discovery lie at the interface between machine learning methods (in this case developed for linear accelerators) and physics-based methods. The two in silico methods each have their own advantages and limitations which, interestingly, complement each other. Here, we present an innovative infrastructural development that combines both approaches to accelerate drug discovery. The scale of the resulting workflow is such that it depends on supercomputing to achieve extremely high throughput. We have demonstrated the viability of this workflow for the study of inhibitors of four COVID-19 target proteins, and our ability to perform the required large-scale calculations to identify lead antiviral compounds through repurposing on a variety of supercomputers.
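The complementary pairing described above is commonly organized as a funnel: a cheap ML surrogate ranks the whole library, and the expensive physics-based method re-scores only a shortlist. A toy sketch (both scoring functions here are hypothetical stand-ins, not the paper's models):

```python
def surrogate_score(compound_id):
    # Hypothetical cheap ML surrogate: in a real workflow this role is
    # played by a trained model that scores millions of molecules fast.
    return (compound_id * 37) % 100

def physics_score(compound_id):
    # Hypothetical expensive physics-based estimate (e.g. a free-energy
    # calculation); modeled here as a perturbed copy of the surrogate.
    return (compound_id * 37) % 100 + compound_id % 3

def funnel(candidates, keep_fraction=0.1):
    # Stage 1: rank everything with the cheap surrogate.
    ranked = sorted(candidates, key=surrogate_score)
    shortlist = ranked[: max(1, int(len(ranked) * keep_fraction))]
    # Stage 2: re-rank only the shortlist with the expensive method.
    return sorted(shortlist, key=physics_score)
```

The design choice is the keep fraction: a smaller shortlist saves supercomputer time but risks discarding compounds the surrogate mis-ranks.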


2021 ◽  
Vol 11 (18) ◽  
pp. 8291
Author(s):  
Piotr Cybulski ◽  
Zbigniew Zieliński

Widespread access to low-cost, high-performance computing allows for the increased computerization of everyday life. However, high-performance computers alone cannot meet the demands of systems such as the Internet of Things or multi-agent robotic systems. For this reason, modern design methods are needed to develop new projects and extend existing ones. Because of the high interest in this subject, many methodologies for designing such systems have been developed; none of them, however, can be considered the default to which others are compared. Any useful methodology must provide tools, versatility, and the capability to verify its results. This paper presents an algorithm for verifying the correctness of multi-agent systems modeled as tracking bigraphical reactive systems and for checking whether a behavior policy for the agents meets non-functional requirements. The memory complexity of the methods used to construct behavior policies is also discussed, and a few ways to reduce it are proposed. Detailed examples of algorithm usage are presented, involving non-functional requirements on the time and safety of behavior policy execution.
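The paper's verification runs over tracking bigraphical reactive systems; as a much simpler analogue, the same two requirement classes (safety and timing) can be checked by exhaustively traversing a behavior policy flattened to a state graph. Everything below is a hypothetical illustration, not the authors' algorithm:

```python
# Hypothetical behavior policy, flattened to a directed acyclic graph of
# agent states; each edge carries a worst-case execution time (seconds).
POLICY = {
    "start": [("scan", 2), ("move", 4)],
    "scan":  [("move", 3)],
    "move":  [("drop", 2)],
    "drop":  [],
}

def verify(policy, start, unsafe, time_bound):
    """Check two non-functional requirements by exhaustive traversal:
    no unsafe state is reachable, and the worst-case execution time of
    any complete run stays within time_bound."""
    worst = 0
    stack = [(start, 0)]
    while stack:
        state, elapsed = stack.pop()
        if state in unsafe:
            return False          # safety requirement violated
        if not policy[state]:     # terminal state: a complete run
            worst = max(worst, elapsed)
        for nxt, duration in policy[state]:
            stack.append((nxt, elapsed + duration))
    return worst <= time_bound    # timing requirement
```

In this toy policy the worst-case run is start→scan→move→drop at 7 seconds, so the policy verifies against a 7-second bound but not a 6-second one.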


Author(s):  
Ignacio Martinez-Alpiste ◽  
Gelayol Golcarenarenji ◽  
Qi Wang ◽  
Jose Maria Alcaraz-Calero

Machine learning algorithms based on convolutional neural networks (CNNs) have recently been explored in a myriad of object detection applications. Nonetheless, many devices with limited computation resources and strict power consumption constraints are not suitable for running such algorithms, which are designed for high-performance computers. Hence, a novel smartphone-based architecture intended for portable and constrained systems is designed and implemented to run CNN-based object recognition in real time and with high efficiency. The system is designed and optimised by integrating best-in-class components from state-of-the-art machine learning platforms, including OpenCV, TensorFlow Lite, and Qualcomm Snapdragon, informed by empirical testing and evaluation of each candidate framework in a comparable scenario with a highly demanding neural network. The final system, prototyped by combining the strengths of these frameworks, provides a new machine learning-based object recognition execution environment embedded in a smartphone, with advantageous performance compared with the previous frameworks.


2021 ◽  
Vol 263 (3) ◽  
pp. 3854-3860
Author(s):  
Qichen Tan ◽  
Haoyu Bian ◽  
Siyang Zhong ◽  
Xin Zhang

The operation of the rapidly growing fleet of unmanned aerial vehicles (UAVs) and the promising urban aerial mobility (UAM) sector could have a significant noise impact on the environment. In this work, we developed a cloud-based noise simulator to efficiently assess the environmental impact of UAM and UAVs. The noise sources and long-distance propagation are computed by propeller noise prediction models and an advanced Gaussian beam tracing method, respectively, on local high-performance computers. Users can define the working conditions and vehicle layer through a platform with a user-friendly graphical interface. In addition, the noise level distribution at observers of interest, such as buildings, can be visualized. By employing advanced interpolation methods or autonomous learning algorithms, the computations are accelerated so that the noise distributions can be displayed during the flights of the vehicles. To better measure the noise impact on human perception, various noise metrics are output for further analysis. By conducting virtual flights in the simulator, the noise impact of different vehicles in each flight state and atmospheric condition can be predicted, which will facilitate low-noise flights for both UAVs and UAM.
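The propagation step the simulator accelerates can be illustrated, in a drastically simplified straight-ray form, by combining spherical spreading loss with linear atmospheric absorption; the paper's Gaussian beam tracing additionally bends rays through the stratified atmosphere. The absorption coefficient below is an illustrative value, not one from the paper:

```python
import math

def received_level(source_level_db, distance_m, absorption_db_per_m=0.005):
    # Straight-ray simplification: spherical spreading loss (re 1 m)
    # plus linear atmospheric absorption along the path.
    spreading = 20.0 * math.log10(max(distance_m, 1.0))
    return source_level_db - spreading - absorption_db_per_m * distance_m

def noise_map(source_level_db, observers):
    # observers: {name: slant distance in metres} -> received level in dB
    return {name: received_level(source_level_db, r)
            for name, r in observers.items()}
```

Because each observer's level depends only on its own path, a map over many building-facade observers parallelizes in the same way the cloud backend distributes its computations.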


Author(s):  
S. Majid Nazemi ◽  
Antonio Pappaterra ◽  
Willem Verleysen ◽  
Bart Vandevelde ◽  
Fabian Neugebauer ◽  
...  

Author(s):  
Esther Andrés-Pérez ◽  
Carlos Paulete-Periáñez

Computational fluid dynamics (CFD) simulations are nowadays used intensively in the aeronautical industry to analyse the aerodynamic performance of different aircraft configurations within a design process. These simulations reduce the time and cost compared to wind tunnel experiments or flight tests. However, for complex configurations, CFD simulations may still take several hours on high-performance computers to deliver results. For this reason, surrogate models are starting to be considered as substitutes for the CFD tool that can deliver reasonable predictions. This paper presents a review of surrogate regression models for aerodynamic coefficient prediction, in particular the prediction of lift and drag coefficients. To compare the behaviour of the regression models, three different aeronautical configurations have been used: a NACA0012 airfoil, an RAE2822 airfoil, and a 3D DPW wing. These databases are also freely provided to the scientific community to allow other researchers to make further comparisons with other methods.
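The idea of a regression surrogate can be shown in miniature: fit a cheap model to a handful of expensive samples, then query the model instead of the solver. The sketch below trains a one-variable least-squares fit on synthetic lift data from thin-airfoil theory (CL = 2πα, α in radians), standing in for samples from a CFD database; it is not one of the paper's models:

```python
import math

def fit_linear(xs, ys):
    # Ordinary least squares for a one-variable linear surrogate.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Synthetic "training data": thin-airfoil theory CL = 2*pi*alpha,
# a stand-in for lift coefficients sampled from CFD runs.
alphas = [math.radians(a) for a in range(-4, 9, 2)]
lift_coeffs = [2.0 * math.pi * a for a in alphas]
slope, intercept = fit_linear(alphas, lift_coeffs)
```

Once fitted, predicting CL at a new angle of attack is a single multiply-add rather than an hours-long simulation, which is the entire appeal of the surrogate approach.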


2021 ◽  
Author(s):  
Laura Mansfield ◽  
Peer Nowack ◽  
Apostolos Voulgarakis

In order to make predictions of how the climate would respond to changes in global and regional emissions, we typically run simulations on Global Climate Models (GCMs) with perturbed emissions or concentration fields. These simulations are highly expensive and often require high-performance computers. Machine Learning (ML) can provide an alternative approach that estimates the climate response to various emissions quickly and cheaply.

We will present a Gaussian process emulator capable of predicting the global map of the temperature response to different types of emissions (both greenhouse gases and aerosol pollutants), trained on a carefully designed set of simulations from a GCM. This particular work involves making short-term predictions on 5-year timescales but can be linked to an emulator from previous work that predicts on decadal timescales. We can also examine the uncertainties associated with predictions to find out where the method could benefit from increased training data. This is a particularly useful asset when constructing emulators for complex models, such as GCMs, where obtaining training runs is costly.
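The key property the abstract leans on — a predictive variance that flags where more training runs would help — falls directly out of Gaussian process regression. A toy 1D GP with an RBF kernel, written from the textbook formulas rather than the authors' emulator:

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential (RBF) kernel.
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(train_x, train_y, test_x, noise=1e-8):
    """Posterior mean and variance of a zero-mean GP at test_x."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(train_x)] for i, a in enumerate(train_x)]
    weights = solve(K, train_y)                      # K^{-1} y
    k_star = [rbf(test_x, a) for a in train_x]
    mean = sum(ks * w for ks, w in zip(k_star, weights))
    v = solve(K, k_star)                             # K^{-1} k_*
    var = rbf(test_x, test_x) - sum(ks * vi for ks, vi in zip(k_star, v))
    return mean, max(var, 0.0)
```

Near the training points the variance collapses toward zero; far from them it rises toward the kernel's prior variance, which is exactly the signal used to decide where extra GCM runs would pay off.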


2021 ◽  
Author(s):  
Martin Schreiber

Running simulations on high-performance computers faces new challenges due to, for example, stagnating or even decreasing per-core speeds. This imposes new restrictions, and therefore challenges, on solving PDEs within a particular time frame in the strong-scaling case. Here, disruptive mathematical reformulations that, for example, exploit additional degrees of parallelism along the time dimension have gained increasing interest over the last two decades.

This talk will cover various examples of our current research on (parallel-in-)time integration methods in the context of weather and climate simulations, such as the rational approximation of exponential integrators and the multi-level time integration of spectral deferred corrections (PFASST), as well as other methods.

These methods are realized and studied with numerics similar to those used by the European Centre for Medium-Range Weather Forecasts (ECMWF). Our results motivate further investigation for operational weather/climate systems in order to cope with the hardware-imposed restrictions of future supercomputer architectures.

I gratefully acknowledge contributions from Jed Brown, Francois Hamon, Terry S. Haut, Richard Loft, Michael L. Minion, Pedro S. Peixoto, Nathanaël Schaeffer, and Raphael Schilling.
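The classic illustration of parallelism along the time dimension is the Parareal iteration, which methods such as PFASST build upon: a cheap coarse propagator sweeps serially while expensive fine propagations over each time slice run concurrently. A sketch on the toy ODE y' = -y (the propagators and step counts are illustrative choices, not those of the talk):

```python
import math

def f(y):
    return -y  # test ODE y' = -y, exact solution y0 * exp(-t)

def coarse(y, t0, t1):
    # One explicit Euler step over the whole slice (cheap propagator).
    return y + (t1 - t0) * f(y)

def fine(y, t0, t1, substeps=100):
    # Many small explicit Euler steps (expensive, accurate propagator).
    dt = (t1 - t0) / substeps
    for _ in range(substeps):
        y = y + dt * f(y)
    return y

def parareal(y0, t_grid, iterations=5):
    n = len(t_grid) - 1
    # Initial guess from the serial coarse propagator.
    U = [y0]
    for i in range(n):
        U.append(coarse(U[i], t_grid[i], t_grid[i + 1]))
    for _ in range(iterations):
        # The fine sweeps over the slices are independent of each other,
        # so on a real machine they run in parallel across ranks.
        F = [fine(U[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        G = [coarse(U[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        new = [y0]
        for i in range(n):
            g_new = coarse(new[i], t_grid[i], t_grid[i + 1])
            new.append(g_new + F[i] - G[i])  # Parareal correction
        U = new
    return U
```

After at most as many iterations as there are time slices, the iterate reproduces the serial fine solution exactly; the payoff comes when it converges in far fewer iterations than that.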

