Sparse Measurements: Recently Published Documents

Total documents: 101 (last five years: 38)
H-index: 13 (last five years: 3)

2021, Vol. 12
Author(s): Felix Meister, Tiziano Passerini, Chloé Audigier, Èric Lluch, Viorel Mihalef, ...

Electroanatomic mapping is the gold standard for the assessment of ventricular tachycardia. Acquiring high-resolution electroanatomic maps is technically challenging and may require interpolation methods to obtain dense measurements. These methods, however, cannot recover activation times in the entire biventricular domain. This work investigates the use of graph convolutional neural networks to estimate biventricular activation times from sparse measurements. Our method is trained on more than 15,000 synthetic examples of realistic ventricular depolarization patterns generated by a computational electrophysiology model. Using geometries sampled from a statistical shape model of biventricular anatomy, diverse wave dynamics are induced by randomly sampling scar and border-zone distributions, locations of initial activation, and tissue conduction velocities. Once trained, the method accurately reconstructs biventricular activation times in left-out synthetic simulations, with a mean absolute error of 3.9 ms ± 4.2 ms at a sampling density of one measurement per cm². The total activation time is matched with a mean error of 1.4 ms ± 1.4 ms. Errors decrease significantly in all heart zones as the number of samples increases. Without retraining, the network is further evaluated on two datasets: (1) an in-house dataset comprising four ischemic porcine hearts with dense endocardial activation maps; and (2) the CRT-EPIGGY19 challenge data comprising endo- and epicardial measurements of 5 infarcted and 6 non-infarcted swine. In both setups the neural network recovers biventricular activation times with a mean absolute error of less than 10 ms, even when only a subset of endocardial measurements is provided as input. Furthermore, we present a simple approach to suggest new measurement locations in real time based on the estimated uncertainty of the graph network predictions. Model-guided selection of measurement locations reduces by 40% the number of measurements required by a random sampling strategy while achieving the same prediction error. In all tested scenarios, the proposed approach estimates biventricular activation times with performance comparable to or better than a personalized computational model, and with significant runtime advantages.
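
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a graph convolutional network that regresses a per-node activation time on a biventricular mesh and also outputs a per-node log-variance, which could drive the uncertainty-based suggestion of new measurement locations described above. The input features, depth, and use of the predicted variance as the uncertainty signal are illustrative assumptions.

```python
# Minimal sketch, assuming each mesh node carries two input features
# (measured activation time, or 0 if unmeasured, plus a 0/1 "measured" flag)
# and that adj_norm is the symmetrically normalized mesh adjacency matrix.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph convolution: normalized adjacency times node features, then a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        return torch.relu(self.lin(adj_norm @ x))


class ActivationTimeGCN(nn.Module):
    """Predicts an activation time and a log-variance for every mesh node."""
    def __init__(self, in_dim=2, hidden=64, layers=4):
        super().__init__()
        dims = [in_dim] + [hidden] * layers
        self.convs = nn.ModuleList(GCNLayer(a, b) for a, b in zip(dims[:-1], dims[1:]))
        self.head = nn.Linear(hidden, 2)  # column 0: activation time, column 1: log-variance

    def forward(self, x, adj_norm):
        for conv in self.convs:
            x = conv(x, adj_norm)
        out = self.head(x)
        return out[:, 0], out[:, 1]


def suggest_next_measurement(model, x, adj_norm, measured_mask):
    """Return the index of the unmeasured node with the largest predicted variance."""
    with torch.no_grad():
        _, log_var = model(x, adj_norm)
    log_var[measured_mask] = float("-inf")  # never re-sample already measured nodes
    return int(log_var.argmax())
```

In use, the network would be queried after each new acquisition, with the freshly measured node added to the input features before the next location is suggested.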


Information, 2021, Vol. 12 (10), pp. 419
Author(s): Hualing Lin, Qiubi Sun

Accurately predicting the volatility of financial asset prices and understanding how it evolves has profound theoretical and practical significance for financial market risk early warning, asset pricing, and investment portfolio design. Traditional methods suffer from substandard prediction performance or from gradient optimization problems. This paper proposes a novel volatility prediction method based on sparse multi-head attention (SP-M-Attention). The model discards the two-dimensional time-and-space modeling strategy of classic deep learning models and instead embeds a sparse multi-head attention module in the network. Its main advantages are that (i) it exploits the inherent parallelism of the multi-head attention mechanism, (ii) it reduces computational complexity through sparse measurements and feature compression of volatility, and (iii) it avoids the gradient problems caused by long-range propagation and is therefore better suited than traditional methods to analyzing long time series. Finally, the article empirically evaluates the effectiveness of the proposed method on real datasets from major financial markets. Experimental results show that the proposed model surpasses all benchmark models on every dataset. This finding can aid financial risk management and the optimization of investment strategies.
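
As a rough illustration of what a sparse multi-head attention module can look like, the sketch below keeps only the top-k attention scores per query over a long returns series before the softmax, then feeds the last time step into a small volatility head. The top-k rule, the dimensions, and the single-step forecast head are assumptions, not the paper's exact SP-M-Attention design.

```python
# Minimal sketch, assuming a univariate returns series of shape (batch, seq_len, 1).
import torch
import torch.nn as nn


class SparseMultiHeadAttention(nn.Module):
    def __init__(self, d_model=64, n_heads=4, top_k=32):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.dk, self.top_k = n_heads, d_model // n_heads, top_k
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                                   # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.h, self.dk).transpose(1, 2) for z in (q, k, v))
        scores = q @ k.transpose(-2, -1) / self.dk ** 0.5   # (batch, heads, t, t)
        # sparsify: keep only the top-k scores per query, mask out the rest
        kth = scores.topk(min(self.top_k, t), dim=-1).values[..., -1:]
        scores = scores.masked_fill(scores < kth, float("-inf"))
        y = scores.softmax(dim=-1) @ v                      # (batch, heads, t, dk)
        return self.out(y.transpose(1, 2).reshape(b, t, -1))


class VolatilityPredictor(nn.Module):
    """Embed returns, apply sparse attention, forecast next-step volatility."""
    def __init__(self, d_model=64):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        self.attn = SparseMultiHeadAttention(d_model)
        self.head = nn.Linear(d_model, 1)

    def forward(self, returns):
        h = self.attn(self.embed(returns))
        return self.head(h[:, -1])                          # (batch, 1) volatility forecast
```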


Author(s): Christine Lodberg Hvas, Anne-Mette Hvas

Mortality after aneurysmal subarachnoid hemorrhage (aSAH) is increased by rebleeding and delayed cerebral ischemia (DCI). A range of assays evaluating the dynamic process of blood coagulation, from activation of clotting factors to fibrinolysis, has emerged, and a comprehensive review of hemostasis and fibrinolysis following aSAH may reveal treatment targets. We conducted a systematic review of the existing literature assessing coagulation and fibrinolysis following aSAH but prior to treatment. PubMed, Embase, and Web of Science were searched on November 18, 2020, without time boundaries. In total, 45 original studies were incorporated into this systematic review, divided into studies presenting data only from conventional or quantitative assays (n = 22) and studies employing dynamic assays (n = 23). Data from conventional or quantitative assays indicated increased platelet activation, whereas dynamic assays detected platelet dysfunction possibly related to an increased risk of rebleeding. Secondary hemostasis was activated in conventional, quantitative, and dynamic assays, and this was related to poor neurological outcome and mortality. Studies systematically investigating fibrinolysis were sparse. Measurements from conventional or quantitative assays, as well as from dynamic fibrinolysis assays, revealed conflicting results, with normal or increased lysis, and changes were not associated with outcome. In conclusion, dynamic assays were able to detect reduced platelet function not revealed by conventional or quantitative assays. Activation of secondary hemostasis was found in both dynamic and nondynamic assays, while changes in fibrinolysis were not convincingly demonstrated by either dynamic or conventional and quantitative assays. Hence, from a mechanistic point of view, desmopressin to prevent rebleeding and heparin to prevent DCI may hold potential as therapeutic options. As changes in fibrinolysis were not convincingly demonstrated and were not related to outcome, the use of tranexamic acid prior to aneurysm closure is not supported by this review.


2021
Author(s): Kang Liang, Yefang Jiang, Fan-Rui Meng

Nitrogen (N) is one of the major pollutants of aquatic ecosystems. One of the key steps toward efficient N reduction management at the watershed scale is accurate quantification of the N load. High-frequency monitoring of stream water N concentration is uncommon, and this has largely been the limiting factor for accurate estimation of N loading worldwide; N loads have therefore often been estimated from sparse measurements. The objective of this study was to investigate the performance of the physically based SWAT (Soil and Water Assessment Tool) model and three commonly used regression methods, namely LI (linear interpolation), WRTDS (Weighted Regressions on Time, Discharge, and Season), and LOADEST (LOAD ESTimator), for estimating nitrate load from sparse measurements through a case study in an agricultural watershed in eastern Canada. The range of daily nitrate load for SWAT and LOADEST was 0.05–1.29 and 0.14–1.35 t day⁻¹, compared with 0.13–13.08 and 0.15–16.75 t day⁻¹ for LI and WRTDS, respectively. Mean daily nitrate load estimated by the four methods followed the order WRTDS > LI > LOADEST > SWAT. The large discrepancies occurred mainly during the non-growing season, for which little observational data were available. Because the regression methods use concentration data from dry seasons to estimate concentrations in wet seasons, they are likely to overestimate the nitrate load for wet seasons. The results of this study shed new light on nitrate load estimation under different levels of data availability. When water quality measurements are limited, policy makers and researchers are likely to benefit from using hydrological models such as SWAT for constituent load estimation. However, selecting the most appropriate method for load estimation should be seen as a dynamic process, and case-by-case evaluation is required, especially when only sparsely measured data are available. As agri-environmental water quality issues become more pressing, it is critical that data collection strategies encompassing seasonal variation in streamflow and nitrate concentration be employed in regions like Atlantic Canada.
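
For reference, the LI (linear interpolation) estimate discussed above can be written in a few lines: sparse concentration samples are interpolated onto the daily discharge record and multiplied by flow, with a unit conversion to t day⁻¹. The pandas-based layout below is an illustrative assumption, not the study's code.

```python
# Minimal sketch, assuming both series are indexed by dates (DatetimeIndex).
import pandas as pd


def li_daily_nitrate_load(daily_flow: pd.Series, sparse_conc: pd.Series) -> pd.Series:
    """
    daily_flow  : daily discharge in m^3/s
    sparse_conc : nitrate concentration in mg/L on the (sparse) sampling dates
    returns     : estimated daily nitrate load in t/day
    """
    # interpolate the sparse concentrations onto the daily flow dates
    conc_daily = (
        sparse_conc.reindex(daily_flow.index.union(sparse_conc.index))
        .interpolate(method="time")
        .reindex(daily_flow.index)
    )
    # mg/L equals g/m^3, so flow * conc gives g/s; * 86400 s/day -> g/day; / 1e6 -> t/day
    return daily_flow * conc_daily * 86400 / 1e6
```

Because unsampled wet-season days simply inherit concentrations interpolated from dry-season samples, the sensitivity of this method to when samples are taken, noted above, is easy to see.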


Author(s): Vimal Kumar A R, Shankar Coimbatore Subramanian, Rajesh Rajamani

This study uses a low-density solid-state flash lidar to estimate the trajectories of road vehicles for vehicle collision avoidance applications. Low-density flash lidars are inexpensive compared with the commonly used radars and point-cloud lidars, and they have recently attracted the attention of vehicle manufacturers. However, tracking road vehicles with the sparse data provided by such sensors is challenging because of the few reflected measurement points obtained. In this paper, these challenges in the use of low-density flash lidars are identified, and estimation algorithms to address them are presented. A method that uses the amplitude information provided by the sensor for better localization of targets is evaluated through both physics-based simulations and experiments. A two-step hierarchical clustering algorithm is then employed to group multiple detections from a single object into one measurement, which is associated with the corresponding object using a Joint Integrated Probabilistic Data Association (JIPDA) algorithm. A Kalman filter estimates the longitudinal and lateral motion variables, and the results show that good tracking, especially in the lateral direction, can be achieved with the proposed algorithm despite the sparse measurements provided by the sensor.
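
The sketch below illustrates, under simplifying assumptions, two of the building blocks named above: grouping raw flash-lidar detections into per-object centroids with a single hierarchical-clustering pass (the paper uses a two-step scheme plus JIPDA association), and a constant-velocity Kalman filter that tracks one associated object's longitudinal and lateral motion. The noise levels, clustering threshold, and time step are illustrative, not the paper's values.

```python
# Minimal sketch; detections are (x, y) points in the ego-vehicle frame.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage


def cluster_detections(points: np.ndarray, max_gap: float = 1.0) -> np.ndarray:
    """Group raw detections into objects and return one (x, y) centroid per cluster."""
    if len(points) < 2:
        return points
    labels = fcluster(linkage(points, method="single"), t=max_gap, criterion="distance")
    return np.array([points[labels == c].mean(axis=0) for c in np.unique(labels)])


class ConstantVelocityKF:
    """State [x, y, vx, vy]; the measurement is the associated cluster centroid (x, y)."""
    def __init__(self, dt=0.1, meas_std=0.3, accel_std=2.0):
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)
        self.R = np.eye(2) * meas_std ** 2
        self.Q = np.eye(4) * (accel_std * dt) ** 2

    def step(self, z):
        # predict with the constant-velocity motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the centroid measurement z = (x, y)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x  # estimated longitudinal/lateral position and velocity
```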

