observational dataset
Recently Published Documents

Total documents: 36 (last five years: 10)
H-index: 10 (last five years: 1)

2021 ◽  
pp. 1-40

Abstract In this study, we compiled a high-quality, in situ observational dataset to evaluate snow depth simulations from 22 CMIP6 models across high-latitude regions of the Northern Hemisphere over the period 1955–2014. Simulated snow depths have low accuracy (RMSE = 17–36 cm) and are biased high, exceeding the observed baseline (1976–2005) by 18 ± 16 cm on average across the study area. Spatial climatological patterns based on observations are only modestly reproduced by the models (NRMSDs of 0.77 ± 0.20). Observed snow depth during the cold season increased by about 2.0 cm over the study period, approximately 11% relative to the baseline. The models produce decreasing snow depth trends that contradict the observations, even though they all indicate a precipitation increase during the cold season. The modeled snow depths are insensitive to precipitation but too sensitive to air temperature; these inaccurate sensitivities could explain the discrepancies between the observed and simulated snow depth trends. Based on our findings, we recommend caution when using and interpreting simulated changes in snow depth and their associated impacts.
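Model-observation comparisons of this kind reduce to a few summary statistics. A minimal sketch with hypothetical snow-depth values (not the CMIP6 or station data from the study), using one common convention for the normalized RMSD:

```python
import numpy as np

# Hypothetical station-mean snow depths (cm): observed baseline vs. one model.
obs = np.array([30.0, 45.0, 25.0, 60.0, 40.0])   # in situ observations
sim = np.array([48.0, 70.0, 35.0, 85.0, 55.0])   # model output (made-up)

bias = np.mean(sim - obs)                  # positive => model biased high
rmse = np.sqrt(np.mean((sim - obs) ** 2))  # accuracy of simulated depths
# One common normalization for comparing spatial patterns:
nrmsd = rmse / np.std(obs)
```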


2021 ◽  
Vol 133 (6) ◽  
Author(s):  
Alessio Del Vigna ◽  
Linda Dimare ◽  
Davide Bracali Cioci

Abstract The interest in the problem of small asteroids observed shortly before a deep close approach or an impact with the Earth has grown considerably in recent years. Since the observational dataset for such objects is very limited, they require dedicated orbit determination and hazard assessment methods. The currently available systems are based on systematic ranging, a technique that provides a two-dimensional manifold of orbits compatible with the observations, the so-called Manifold Of Variations. In this paper we first review the Manifold Of Variations method, then show how this set of virtual asteroids can be used to predict the impact location of short-term impactors, and compare the results with those of existing methods.
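Systematic ranging scans a two-dimensional grid of topocentric range and range-rate; each grid point, joined with the observed angles, defines a candidate orbit. A minimal sketch of that idea, with a stand-in chi-square in place of a real astrometric fit (all values hypothetical, not the authors' implementation):

```python
import numpy as np

# Grid of topocentric range (au) and range-rate (au/day).
rho = np.linspace(0.001, 0.1, 50)
rho_dot = np.linspace(-0.02, 0.02, 50)
R, RD = np.meshgrid(rho, rho_dot)

# Stand-in for the astrometric fit: a chi-square surface with a minimum
# inside the grid (a real system computes residuals against observations).
chi2 = ((R - 0.03) / 0.01) ** 2 + ((RD - 0.005) / 0.004) ** 2

sigma = 3.0                        # keep orbits within 3-sigma of the best fit
manifold = chi2 <= sigma ** 2      # boolean mask: the set of compatible orbits
print(f"{manifold.sum()} of {manifold.size} grid orbits are compatible")
```

Each `True` cell plays the role of a virtual asteroid; propagating all of them forward yields the impact-location prediction discussed in the paper.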


2021 ◽  
Author(s):  
Jing Yuan ◽  
Le Zhou ◽  
Qiao Wang ◽  
Dehe Yang ◽  
Zeren Zima ◽  
...  

<p>Lightning whistler waves, an important tool for geospace exploration, can be found in the vast amount of electromagnetic satellite data. In recent years, with the development of computer vision and deep learning, several advanced algorithms have been developed to automatically identify lightning whistler waves in the massive archived data of electromagnetic satellites. However, these algorithms cannot automatically extract the dispersion coefficients of lightning whistlers (DCW). Since the DCW depend on the propagation path of the lightning and on the geospace environment, they are extremely important for further geospace exploration.</p><p>We propose an algorithm that automatically extracts the dispersion coefficients of lightning whistlers: (1) apply a two-second time window to the SCM VLF data from the ZH-1 satellite to obtain segmented data; (2) generate the time-frequency profile (TFP) of each segmented waveform with a band-pass filter and a short-time Fourier transform with 94% overlap; (3) annotate the ground-truth whistlers with rectangular boxes on each time-frequency image to construct the training dataset; (4) build the YOLOV3 deep neural network and set the training parameters; (5) feed the training dataset to YOLOV3 to train the whistler recognition model; (6) detect whistlers in unseen time-frequency images and extract each whistler area, delimited by a rectangular box, as a sub-image; (7) denoise the sub-image with the BM3D algorithm; (8) apply an adaptive threshold segmentation algorithm to the denoised sub-image to obtain a binary image in which black pixels represent the whistler trace and white pixels the background; (9) remove isolated points in the binary image with a morphological opening; (10) extract the lightning whistler trajectory region using connected-domain analysis; (11) convert the trajectory coordinates from (t, f) to (f<sup>-0.5</sup>, t); (12) following the Eckersley formula, which relates arrival time to frequency, fit a straight line to the converted trajectory coordinates by least squares and take its slope as the dispersion coefficient.</p><p>To evaluate the effectiveness of the proposed algorithm, we construct two datasets: a simulation set and an observational dataset. The simulation set is composed of 1000 lightning whistler trajectories generated according to the Eckersley formula. The observational dataset, containing 1000 actual single-trace lightning whistlers, is generated by collecting SCM VLF data from the ZH-1 satellite. The experimental results show that the mean-square error on the simulation set is below 2.8 × 10<sup>-4</sup>, and the mean-square error on the observational dataset is below 2.1054 × 10<sup>-3</sup>.</p><p>Keywords: ZH-1 Satellite, SCM, Lightning Whistler, YOLOV3, Dispersion Coefficients</p>
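Steps (11)-(12) amount to a straight-line fit in (f^-0.5, t) coordinates: Eckersley's law gives t(f) = t0 + D/sqrt(f), so the slope of the fitted line is the dispersion coefficient D. A minimal sketch on a synthetic noiseless trace (the dispersion coefficient and onset time below are made-up values, not ZH-1 measurements):

```python
import numpy as np

# Synthetic whistler trace following Eckersley's law: t = t0 + D / sqrt(f).
f = np.linspace(2e3, 10e3, 200)    # frequency samples, Hz
t = 0.1 + 40.0 / np.sqrt(f)        # arrival time of each frequency, s

x = 1.0 / np.sqrt(f)               # step (11): convert (t, f) -> (f^-0.5, t)
D_fit, t0_fit = np.polyfit(x, t, 1)  # step (12): least-squares line fit;
                                     # slope = dispersion coefficient D
```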


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Ying Liu ◽  
Nidan Qiao ◽  
Yuksel Altinel

Dynamic decision-making is essential in the clinical care of surgical patients. Reinforcement learning (RL) is a computational method for finding sequential optimal decisions among multiple suboptimal options. This review introduces RL's basic concepts, including its three basic components: the state, the action, and the reward. Most medical studies using reinforcement learning methods were trained on a fixed observational dataset. This paper also reviews the literature on existing practical applications of reinforcement learning methods, which can be further categorized as statistical RL studies and computational RL studies. The review proposes several potential areas where reinforcement learning could be applied in neurocritical and neurosurgical care, including sequential treatment strategies for intracranial tumors and traumatic brain injury, and intraoperative endoscope motion control. Limitations of reinforcement learning include the representation of the basic components, positivity violations, and validation methods.
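The three basic components are easiest to see in a toy tabular example. A minimal Q-learning sketch on a hypothetical three-state chain (purely illustrative, not any of the clinical settings reviewed):

```python
import numpy as np

# Tabular Q-learning on a 3-state chain: states 0..2, action 0 = "left",
# action 1 = "right"; reaching state 2 pays reward 1 and ends the episode.
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))        # action-value table
alpha, gamma, episodes = 0.5, 0.9, 500
rng = np.random.default_rng(0)

for _ in range(episodes):
    s = 0                                  # state: where the agent is
    while s != 2:
        a = rng.integers(n_actions)        # action: explore uniformly (toy policy)
        s_next = min(s + 1, 2) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 2 else 0.0    # reward: signal to be maximized
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The greedy policy learns to move right from every non-terminal state.
```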


2021 ◽  
Vol 9 (1) ◽  
pp. 147-171
Author(s):  
Evan T. R. Rosenman ◽  
Art B. Owen

Abstract The increasing availability of passively observed data has yielded a growing interest in “data fusion” methods, which involve merging data from observational and experimental sources to draw causal conclusions. Such methods often require a precarious tradeoff between the unknown bias in the observational dataset and the often-large variance in the experimental dataset. We propose an alternative approach, which avoids this tradeoff: rather than using observational data for inference, we use it to design a more efficient experiment. We consider the case of a stratified experiment with a binary outcome and suppose pilot estimates for the stratum potential outcome variances can be obtained from the observational study. We extend existing results to generate confidence sets for these variances, while accounting for the possibility of unmeasured confounding. Then, we pose the experimental design problem as a regret minimization problem subject to the constraints imposed by our confidence sets. We show that this problem can be converted into a concave maximization and solved using conventional methods. Finally, we demonstrate the practical utility of our methods using data from the Women’s Health Initiative.
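The variance-aware design intuition can be illustrated with classical Neyman allocation, the textbook rule that the paper's confidence-set, regret-minimization formulation generalizes (this sketch is the simple rule, not the authors' procedure; stratum sizes and pilot variances are made up):

```python
import numpy as np

# Neyman allocation: sample sizes proportional to N_k * sqrt(sigma2_k)
# minimize the variance of the stratified difference-in-means estimator.
N = np.array([1000, 2000, 500])        # stratum population sizes (hypothetical)
sigma2 = np.array([0.25, 0.16, 0.09])  # pilot variance estimates from the
                                       # observational study (hypothetical)
budget = 300                           # total experimental units available

weights = N * np.sqrt(sigma2)
n = np.round(budget * weights / weights.sum()).astype(int)
```

The paper replaces the point estimates `sigma2` with confidence sets that account for unmeasured confounding, then allocates to minimize worst-case regret over those sets.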


Universe ◽  
2020 ◽  
Vol 6 (11) ◽  
pp. 219
Author(s):  
Elena Fedorova ◽  
B.I. Hnatyk ◽  
V.I. Zhdanov ◽  
A. Del Popolo

3C 111 is a broad-line radio galaxy (BLRG) with signatures of both FSRQ and Sy1 in its X-ray spectrum. A significant X-ray observational dataset has been collected for it by INTEGRAL, XMM-Newton, Swift, Suzaku and others. The overall X-ray spectrum of 3C 111 shows signs of peculiarity, with a large high-energy cut-off value more typical of radio-quiet (RQ) AGN, probably due to jet contamination. Separating the jet counterpart of the X-ray spectrum of 3C 111 from the primary nuclear counterpart can answer the question of whether this nucleus is truly peculiar or whether the "peculiarity" is spurious, caused by a significant jet contribution. In view of this question, our aim is to estimate separately the accretion disk/corona and non-thermal jet emission in the 3C 111 X-ray spectra within different observational periods. To separate the disk/corona and jet contributions to the total continuum, we use the idea that the radio and X-ray spectra of the jet emission can be described by a simple power-law model with the same photon index. This additional information allows us to derive rather accurate values of these contributions. To test these results, we also consider relations between the nuclear continuum and the line emission.
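The decomposition idea can be sketched with a toy two-component continuum: a jet power law whose photon index is held fixed (as if taken from the radio spectrum) plus a cutoff power law for the disk/corona. All parameter values below are illustrative, not fits to 3C 111:

```python
import numpy as np

E = np.logspace(0, 2, 100)            # energy grid, 1-100 keV
Gamma_jet = 1.6                       # photon index fixed from radio data
Gamma_cor, E_cut = 1.9, 150.0         # corona index and high-energy cutoff, keV

jet = 2.0 * E ** (-Gamma_jet)                     # non-thermal jet component
corona = 5.0 * E ** (-Gamma_cor) * np.exp(-E / E_cut)  # disk/corona component
total = jet + corona

# Rough energy-weighted fraction of the continuum contributed by the jet:
jet_frac = (jet * E).sum() / (total * E).sum()
```

Fixing `Gamma_jet` from the radio side is what breaks the degeneracy: with the jet slope known, the fit to the total continuum constrains the normalizations of both components.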


2020 ◽  
Vol 33 (10) ◽  
pp. 4083-4094
Author(s):  
Lan Luan ◽  
Paul W. Staten ◽  
Chi O. Ao ◽  
Qiang Fu

Abstract The width of the tropical belt has been analyzed with a variety of metrics, often based on zonal-mean data from reanalyses. However, constraining the global and regional tropical width requires both a spatially resolving global observational dataset and an appropriate metric to take advantage of such data. The tropical tropopause break is arguably such a metric. This study aims to evaluate the performance of different reanalyses and metrics, with a focus on depicting the regional tropical belt width. We choose four distinct tropopause-break metrics derived from global positioning system radio occultation (GPS-RO) satellite data and four modern reanalyses (ERA-Interim, MERRA-2, JRA-55, and CFSR). We show that reanalyses generally reproduce the regional tropical tropopause break to within 10° of that in GPS-RO data, but that the tropical width is somewhat sensitive (within 4°) to how data are averaged zonally, moderately sensitive (within 10°) to the dataset resolution, and more sensitive (20° over the Northern Hemisphere Atlantic Ocean during June–August) to the choice of metric. Reanalyses capture the poleward displacement of the tropical tropopause break over land and the equatorward displacement over ocean during summertime, and the reverse during wintertime. Reanalysis-based tropopause breaks are also generally well correlated with those from GPS-RO, although CFSR reproduces 14-yr trends much more closely than the others (including ERA-Interim). However, it is hard to say which dataset best matches GPS-RO. We further find that the tropical tropopause break is representative of the subtropical jet latitude and of the Northern Hemisphere edge of the Hadley circulation in terms of year-to-year variations.
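A tropopause-break metric of the kind compared here can be illustrated on an idealized meridional profile: locate the latitude where the tropopause height drops most steeply from its tropical to its extratropical value. The profile below is a synthetic sigmoid, not GPS-RO data, and "steepest gradient" is just one of the several break definitions the study compares:

```python
import numpy as np

lat = np.linspace(0, 60, 121)   # latitude, degrees north (0.5-degree grid)

# Idealized zonal-mean tropopause height: ~16.5 km in the tropics, ~10 km
# poleward, with the transition centered near 30 deg N (hypothetical).
z_tp = 10.0 + 6.5 / (1.0 + np.exp((lat - 30.0) / 2.0))

# Break latitude = where the tropopause height decreases most steeply.
break_lat = lat[np.argmin(np.gradient(z_tp, lat))]
```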


2020 ◽  
Vol 40 (15) ◽  
pp. 6458-6472
Author(s):  
Peter Domonkos ◽  
John Coll ◽  
José Guijarro ◽  
Mary Curley ◽  
Elke Rustemeier ◽  
...  

2020 ◽  
Vol 35 (1) ◽  
pp. 149-168 ◽  
Author(s):  
Amanda Burke ◽  
Nathan Snook ◽  
David John Gagne II ◽  
Sarah McCorkle ◽  
Amy McGovern

Abstract In this study, we use machine learning (ML) to improve hail prediction by postprocessing numerical weather prediction (NWP) data from the new High-Resolution Ensemble Forecast system, version 2 (HREFv2). Multiple operational models and ensembles currently predict hail; however, ML models are more computationally efficient and do not require the physical assumptions associated with explicit predictions. Calibrating the ML-based predictions toward familiar forecaster output combines the higher skill of ML models with increased forecaster trust in the output. The observational dataset used to train and verify the random forest model is the Maximum Estimated Size of Hail (MESH), a Multi-Radar Multi-Sensor (MRMS) product. To build trust in the predictions, the ML-based hail predictions are calibrated using isotonic regression. The target datasets for isotonic regression include local storm reports and Storm Prediction Center (SPC) practically perfect data. Verification of the ML predictions indicates that the probability magnitudes output by the calibrated models closely resemble the day-1 SPC outlook and the practically perfect data. The ML model calibrated toward local storm reports exhibited similar or better skill than the uncalibrated predictions, while decreasing model bias. Increases in reliability and skill after calibration may increase forecaster trust in automated hail predictions.
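Isotonic regression fits a monotone non-decreasing function to the targets, typically via the pool-adjacent-violators algorithm. A self-contained sketch on made-up raw probabilities and targets (the operational calibration uses storm reports and SPC data, not these numbers):

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: monotone non-decreasing least-squares fit to y."""
    vals, cnts = [], []
    for v in map(float, y):
        vals.append(v)
        cnts.append(1)
        # Merge adjacent blocks (replacing them by their mean) while the
        # monotonicity constraint is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            tot = vals[-2] * cnts[-2] + vals[-1] * cnts[-1]
            cnt = cnts[-2] + cnts[-1]
            vals[-2:] = [tot / cnt]
            cnts[-2:] = [cnt]
    out = []
    for v, c in zip(vals, cnts):
        out.extend([v] * c)
    return np.array(out)

# Raw ML hail probabilities (already sorted ascending) and observed events.
raw = np.array([0.1, 0.2, 0.3, 0.4, 0.8])   # model output (hypothetical)
obs = np.array([0.0, 1.0, 0.0, 1.0, 1.0])   # storm-report targets (hypothetical)
calibrated = pava(obs)                       # monotone calibrated probabilities
```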

