GPU-accelerated low-latency real-time searches for gravitational waves from compact binary coalescence

2012 ◽  
Vol 29 (23) ◽  
pp. 235018 ◽  
Author(s):  
Yuan Liu ◽  
Zhihui Du ◽  
Shin Kee Chung ◽  
Shaun Hooper ◽  
David Blair ◽  
...  
2020 ◽  
Vol 101 (8) ◽  
Author(s):  
Kyungmin Kim ◽  
Tjonnie G. F. Li ◽  
Rico K. L. Lo ◽  
Surabhi Sachdev ◽  
Robin S. H. Yuen

2013 ◽  
Vol 22 (11) ◽  
pp. 1360011 ◽  
Author(s):  
Linqing Wen ◽  
Qi Chu

With the first detection of gravitational waves expected in the next decade, increasing effort is being directed toward electromagnetic follow-up observations of gravitational-wave events. In this paper, I discuss the prospects for real-time detection and source localization of gravitational waves from neutron star–neutron star or neutron star–black hole binary coalescences before merger. Several low-latency search pipelines are already under intensive development with the aim of providing real-time detection of these events. Fast-responding and/or wide-field electromagnetic telescopes will also be available to help catch the electromagnetic or particle flashes that may occur during or immediately after merger. It has been shown that the advanced LIGO-Virgo detector network can detect a few coalescence events per year tens of seconds before merger. However, most of these events will be poorly localized on the sky by the existing gravitational-wave detector network, making follow-up observations aimed at catching events around the merger time extremely challenging for astronomical telescopes. A larger network including the planned detectors in Japan and India will play an important role in improving the angular resolution, making prompt follow-up observations much more realistic. A new detector in the Southern Hemisphere, AIGO, would contribute significantly further in this respect.
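The low-latency search pipelines mentioned above are built around matched filtering of detector strain against waveform templates. The sketch below is purely illustrative: the "chirp" template, noise model, and amplitudes are toy stand-ins, not any pipeline's actual template bank or detector data.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                       # sample rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)

# Toy chirp template: frequency sweeping upward, loosely mimicking an inspiral.
template = np.sin(2 * np.pi * (50 * t + 100 * t**2))
template /= np.linalg.norm(template)

# Simulated data: unit white noise with the template injected at a known offset.
data = rng.normal(0, 1, 2 * fs)
offset = 3000
data[offset:offset + template.size] += 10 * template

# Cross-correlate via the FFT, as frequency-domain pipelines do; the
# correlation peak marks the candidate coalescence time.
n = data.size
corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template, n)), n)
snr = np.abs(corr)              # rough SNR time series (unit-noise assumption)

print("peak sample:", int(np.argmax(snr)))
```

In a real streaming pipeline this correlation is computed continuously over a bank of templates, which is what makes GPU acceleration and low latency central concerns.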


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3715
Author(s):  
Ioan Ungurean ◽  
Nicoleta Cristina Gaitan

In the design and development of fog computing solutions for the Industrial Internet of Things (IIoT), we need to take into account the characteristics of the industrial environment that must be met, including low latency, predictability, bounded response time, and hard real-time operation. A starting point may be the reference fog architecture released by the OpenFog Consortium (now part of the Industrial Internet Consortium), but it has a high level of abstraction and does not define how to integrate fieldbuses and devices into the fog system. The biggest challenges in the design and implementation of fog solutions for the IIoT are therefore the diversity of fieldbuses and devices used in the industrial field and ensuring compliance with all constraints in terms of real-time operation, low latency, and predictability. This paper proposes a fog-node solution that addresses these issues and integrates industrial fieldbuses. For practical implementation, there are specialized systems-on-chip (SoCs) that provide support for real-time communication with fieldbuses through specialized coprocessors and peripherals. In this paper, we describe the implementation of the fog node on a system based on the Xilinx Zynq UltraScale+ MPSoC ZU3EG A484 SoC.
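The constraints the abstract lists (fixed polling period, bounded latency, predictability) can be made concrete with a minimal acquisition-loop sketch. Everything here is an assumption for illustration: `read_fieldbus()` stands in for a coprocessor-backed fieldbus read, and the 10 ms period and 5 ms deadline are arbitrary, not the paper's design.

```python
import time
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float
    value: float

def read_fieldbus() -> float:
    # Placeholder for a real fieldbus read (e.g. delegated to a real-time
    # coprocessor on an MPSoC); returns a dummy process value here.
    return 42.0

def acquisition_loop(period_s: float, deadline_s: float, iterations: int):
    """Poll the fieldbus at a fixed period and count deadline overruns."""
    samples, overruns = [], 0
    next_release = time.monotonic()
    for _ in range(iterations):
        start = time.monotonic()
        samples.append(Sample(start, read_fieldbus()))
        if time.monotonic() - start > deadline_s:
            overruns += 1            # read took longer than its deadline
        next_release += period_s     # absolute releases avoid period drift
        time.sleep(max(0.0, next_release - time.monotonic()))
    return samples, overruns

samples, overruns = acquisition_loop(period_s=0.01, deadline_s=0.005,
                                     iterations=20)
print(len(samples), "samples,", overruns, "overruns")
```

On a general-purpose OS a loop like this only approximates periodicity; delivering it predictably is exactly why the paper offloads fieldbus communication to specialized coprocessors.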


Author(s):  
Olivier Jaubert ◽  
Javier Montalt‐Tordera ◽  
Dan Knight ◽  
Gerry J. Coghlan ◽  
Simon Arridge ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 689
Author(s):  
Tom Springer ◽  
Elia Eiroa-Lledo ◽  
Elizabeth Stevens ◽  
Erik Linstead

As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the "edge". The realization of machine learning and deep learning is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of these constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework designed to provide a dynamic, on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. Such mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified that the most time-critical applications, such as the control tasks, maintained low-latency, deterministic behavior even under off-nominal conditions.
The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
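The schedulability analysis described above can be sketched with standard fixed-priority response-time analysis over randomly generated task sets. This is a generic textbook technique (rate-monotonic priorities, deadline equal to period), not the paper's actual framework; task counts and utilization ranges are illustrative assumptions.

```python
import math
import random

def response_time_analysis(tasks):
    """tasks: list of (C, T) pairs, C = worst-case execution time,
    T = period = deadline. Rate-monotonic priorities (shorter period =
    higher priority). Returns True iff every task meets its deadline."""
    tasks = sorted(tasks, key=lambda ct: ct[1])
    for i, (C, T) in enumerate(tasks):
        r = C
        while True:
            # Interference from all higher-priority tasks within window r.
            interference = sum(math.ceil(r / Tj) * Cj for Cj, Tj in tasks[:i])
            r_next = C + interference
            if r_next > T:
                return False          # deadline miss
            if r_next == r:
                break                 # fixed point: worst-case response time
            r = r_next
    return True

random.seed(1)
trials = 200
schedulable = 0
for _ in range(trials):
    taskset = []
    for _ in range(5):
        T = random.randint(10, 100)
        C = max(1, int(T * random.uniform(0.05, 0.15)))  # ~5-15% utilization
        taskset.append((C, T))
    schedulable += response_time_analysis(taskset)
print(f"{schedulable}/{trials} random task sets schedulable")
```

Running an analysis like this offline, then confirming the admitted control tasks hold their deadlines on the target RTOS, mirrors the validate-then-verify flow the abstract describes.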

