Spatial evaluation of volcanic ash forecasts using satellite observations

2015 ◽  
Vol 15 (17) ◽  
pp. 24727-24749 ◽  
Author(s):  
N. J. Harvey ◽  
H. F. Dacre

Abstract. The decision to close airspace in the event of a volcanic eruption is based on hazard maps of predicted ash extent. These are produced using output from volcanic ash transport and dispersion (VATD) models. In this paper an objective metric to evaluate the spatial accuracy of VATD simulations relative to satellite retrievals of volcanic ash is presented. The metric is based on the fractions skill score (FSS). This measure of skill provides more information than traditional point-by-point metrics, such as the success index and Pearson correlation coefficient, as it takes into account the spatial scale over which skill is being assessed. The FSS determines the scale over which a simulation has skill and can differentiate between a "near miss" and a forecast that is badly misplaced. The idealised scenarios presented show that even simulations with considerable displacement errors have useful skill when evaluated over neighbourhood scales of 200–700 km². This method could be used to compare forecasts produced by different VATDs or using different model parameters, assess the impact of assimilating satellite-retrieved ash data and evaluate VATD forecasts over a long time period.
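
The FSS itself is straightforward to compute once the forecast and observed fields are reduced to binary ash/no-ash masks. Below is a minimal sketch, assuming regular gridded fields; the concentration threshold, field values and window sizes are made up for the example and are not taken from the paper.

```python
# Minimal fractions skill score (FSS) sketch for gridded binary ash fields.
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """FSS of a forecast field against observations for one
    neighbourhood window (in grid points)."""
    # Convert to binary exceedance fields
    fc = (forecast >= threshold).astype(float)
    ob = (observed >= threshold).astype(float)
    # Fraction of exceeding points within each neighbourhood
    fc_frac = uniform_filter(fc, size=window, mode="constant")
    ob_frac = uniform_filter(ob, size=window, mode="constant")
    mse = np.mean((fc_frac - ob_frac) ** 2)
    mse_ref = np.mean(fc_frac ** 2) + np.mean(ob_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Example of a "near miss": the same ash cloud displaced by a few grid cells
obs = np.zeros((100, 100))
obs[40:60, 40:60] = 1.0
fcst = np.roll(obs, shift=5, axis=1)              # displaced forecast
for w in (1, 5, 11, 21):
    print(w, round(fss(fcst, obs, 0.5, w), 3))    # skill grows with neighbourhood size
```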

2016 ◽  
Vol 16 (2) ◽  
pp. 861-872 ◽  
Author(s):  
N. J. Harvey ◽  
H. F. Dacre

Abstract. The decision to close airspace in the event of a volcanic eruption is based on hazard maps of predicted ash extent. These are produced using output from volcanic ash transport and dispersion (VATD) models. In this paper the fractions skill score (FSS) has been used for the first time to evaluate the spatial accuracy of VATD simulations relative to satellite retrievals of volcanic ash. This objective measure of skill provides more information than traditional point-by-point metrics, such as the success index and Pearson correlation coefficient, as it takes into account the spatial scale over which skill is being assessed. The FSS determines the scale over which a simulation has skill and can differentiate between a "near miss" and a forecast that is badly misplaced. The idealized scenarios presented show that even simulations with considerable displacement errors have useful skill when evaluated over neighbourhood scales of 200–700 km². This method could be used to compare forecasts produced by different VATDs or using different model parameters, assess the impact of assimilating satellite-retrieved ash data and evaluate VATD forecasts over a long time period.


Kerntechnik ◽  
2021 ◽  
Vol 86 (2) ◽  
pp. 152-163
Author(s):  
T.-C. Wang ◽  
M. Lee

Abstract In the present study, a methodology is developed to quantify the uncertainties of special model parameters of the integral severe accident analysis code MAAP5. Here, the in-vessel hydrogen production during a core melt accident at the Lungmen Nuclear Power Station of Taiwan Power Company, an advanced boiling water reactor, is analyzed. Sensitivity studies are performed to identify the parameters with an impact on the output parameter. For this, multiple MAAP5 calculations are performed with input combinations generated by Latin Hypercube Sampling (LHS). The results are analyzed to determine the 95th percentile value, at the 95% confidence level, of the amount of in-vessel hydrogen production. Based on the calculations, the default model options for IOXIDE and FGBYPA are recommended. The Pearson correlation coefficient (PCC) was used to determine the impact of model parameters on the target output parameters and showed that the three parameters TCLMAX, FCO and FOXBJ strongly influence in-vessel hydrogen generation. Suggested values for these three parameters are given.
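
The sampling-and-statistics part of this workflow can be illustrated independently of MAAP5. The sketch below uses a placeholder function in place of a MAAP5 run; the parameter names are taken from the abstract, but their ranges and the surrogate response are purely illustrative assumptions.

```python
# Minimal sketch: LHS sampling, a 95/95 estimate and PCC-based importance.
import numpy as np
from scipy.stats import qmc, pearsonr

rng = np.random.default_rng(1)
param_names = ["TCLMAX", "FCO", "FOXBJ"]          # uncertain inputs (illustrative ranges)
lower = np.array([2400.0, 0.1, 0.5])              # hypothetical lower bounds
upper = np.array([2600.0, 1.0, 1.5])              # hypothetical upper bounds

# Latin Hypercube Sampling: with 59 runs, Wilks' first-order formula lets the
# sample maximum serve as a one-sided 95th percentile / 95% confidence bound.
n_runs = 59
sampler = qmc.LatinHypercube(d=len(param_names), seed=1)
samples = qmc.scale(sampler.random(n=n_runs), lower, upper)

def hydrogen_mass(p):                             # stand-in for a MAAP5 calculation
    tclmax, fco, foxbj = p
    return (200.0 + 0.1 * (tclmax - 2400.0) + 50.0 * fco + 30.0 * foxbj
            + rng.normal(0.0, 5.0))

h2 = np.array([hydrogen_mass(p) for p in samples])
print("95/95 estimate of in-vessel H2 [kg]:", round(h2.max(), 1))

# Pearson correlation coefficients as a simple importance measure
for j, name in enumerate(param_names):
    r, _ = pearsonr(samples[:, j], h2)
    print(name, round(r, 2))
```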


2017 ◽  
Vol 12 (1) ◽  
Author(s):  
Ulrik B. Pedersen ◽  
Dimitrios-Alexios Karagiannis-Voules ◽  
Nicholas Midzi ◽  
Takafira Mduluza ◽  
Samson Mukaratirwa ◽  
...  

Temperature, precipitation and humidity are known to be important factors for the development of schistosome parasites as well as their intermediate snail hosts. Climate therefore plays an important role in determining the geographical distribution of schistosomiasis, and it is expected that climate change will alter distribution and transmission patterns. Reliable predictions of distribution changes and likely transmission scenarios are key to efficient schistosomiasis intervention planning. However, it is often difficult to assess the direction and magnitude of the impact on schistosomiasis induced by climate change, as well as the temporal transferability and predictive accuracy of the models, as prevalence data are often only available from one point in time. We evaluated potential climate-induced changes in the geographical distribution of schistosomiasis in Zimbabwe using prevalence data from two points in time, 29 years apart; to our knowledge, this is the first study investigating this over such a long time period. We applied historical weather data and matched prevalence data for two schistosome species (Schistosoma haematobium and S. mansoni). For each time period studied, a Bayesian geostatistical model was fitted to a range of climatic, environmental and other potential risk factors to identify significant predictors that could help us obtain spatially explicit schistosomiasis risk estimates for Zimbabwe. The observed general downward trend in schistosomiasis prevalence for Zimbabwe between 1981 and the period preceding a survey and control campaign in 2010 parallels a shift towards a drier and warmer climate. However, a statistically significant relationship between climate change and the change in prevalence could not be established.


2021 ◽  
Vol 26 (40) ◽  
Author(s):  
Jessica E Stockdale ◽  
Renny Doig ◽  
Joosung Min ◽  
Nicola Mulberry ◽  
Liangliang Wang ◽  
...  

Background: Many countries have implemented population-wide interventions to control COVID-19, with varying extent and success. Many jurisdictions have moved to relax measures, while others have intensified efforts to reduce transmission. Aim: We aimed to determine the time frame between a population-level change in COVID-19 measures and its impact on the number of cases. Methods: We examined how long it takes for there to be a substantial difference between the number of cases that occur following a change in COVID-19 physical distancing measures and those that would have occurred at baseline. We then examined how long it takes to observe this difference, given delays and noise in reported cases. We used a susceptible-exposed-infectious-removed (SEIR)-type model and publicly available data from British Columbia, Canada, collected between March and July 2020. Results: It takes 10 days or more before we expect a substantial difference in the number of cases following a change in COVID-19 control measures, but 20–26 days to detect the impact of the change in reported data. The time frames are longer for smaller changes in control measures and are impacted by testing and reporting processes, with delays reaching ≥ 30 days. Conclusion: The time until a change in control measures has an observed impact is longer than the mean incubation period of COVID-19 and the commonly used 14-day time period. Policymakers and practitioners should consider this when assessing the impact of policy changes. Rapid, consistent and real-time COVID-19 surveillance is important to minimise these time frames.
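
The underlying comparison can be illustrated with a toy SEIR model: run a baseline scenario and a scenario in which the transmission rate changes, then find the first day on which daily incidence differs substantially. This is only a minimal sketch; the parameter values, the 25% difference threshold and the resulting timing are illustrative and are not the estimates fitted to the British Columbia data.

```python
# Minimal SEIR sketch: time until a change in control produces a substantial
# difference in daily incidence relative to the baseline scenario.
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    s, e, i, r = y
    return [-beta * s * i,
            beta * s * i - sigma * e,
            sigma * e - gamma * i,
            gamma * i]

def daily_incidence(beta, days=120):
    y0 = [0.999, 0.0005, 0.0005, 0.0]                     # fractions of the population
    sol = solve_ivp(seir, (0, days), y0, args=(beta, 1 / 5.0, 1 / 6.0),
                    t_eval=np.arange(days + 1), rtol=1e-8)
    return -np.diff(sol.y[0])                             # new infections per day

baseline = daily_incidence(beta=0.35)                     # measures unchanged
changed = daily_incidence(beta=0.45)                      # measures relaxed at day 0
rel_diff = np.abs(changed - baseline) / baseline
days_over = np.nonzero(rel_diff > 0.25)[0]
print("first day with a >25% difference in incidence:",
      days_over[0] + 1 if days_over.size else "none within the horizon")
```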


Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 36
Author(s):  
Ilyass Moummad ◽  
Cyril Jaudet ◽  
Alexis Lechervy ◽  
Samuel Valable ◽  
Charlotte Raboutet ◽  
...  

Background: Magnetic resonance imaging (MRI) is predominant in the therapeutic management of cancer patients; unfortunately, patients have to wait a long time for an examination appointment. Therefore, new MRI devices include deep-learning (DL) solutions to save acquisition time. However, the impact of these algorithms on intensity and texture parameters has been poorly studied. The aim of this study was to evaluate the impact of resampling and denoising DL models on radiomics. Methods: A resampling and denoising DL model was developed on 14,243 T1 brain images from 1.5T-MRI. Radiomics were extracted from 40 brain metastases in 11 patients (2049 images). A total of 104 texture features of DL images were compared to the original images with a paired t-test, Pearson correlation and the concordance correlation coefficient (CCC). Results: Images from two-times-shorter acquisitions show strong disparities with the originals with respect to the radiomics, with significant differences and loss of correlation for 79.81% and 48.08% of features, respectively. Interestingly, the DL models restore the textures, with only 46.15% of parameters unstable and 25.96% with low CCC, and no difference for the first-order intensity parameters. Conclusions: Resampling and denoising DL models reconstruct low-resolution and noised MRI images acquired quickly into high-quality images. While fast MRI acquisition loses most of the radiomic features, DL models restore these parameters.
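
The feature-stability comparison used here combines three standard statistics. Below is a minimal sketch for a single radiomic feature, assuming two arrays of feature values (original vs. DL-reconstructed) over a set of lesions; the synthetic data and the stability rule (p > 0.05 and CCC ≥ 0.9) are illustrative choices, not the study's definitions.

```python
# Minimal sketch: paired t-test, Pearson r and Lin's CCC for one feature.
import numpy as np
from scipy.stats import ttest_rel, pearsonr

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(0)
original = rng.normal(10.0, 2.0, size=40)              # one feature over 40 lesions
reconstructed = original + rng.normal(0.0, 0.5, 40)    # DL-restored version

p_value = ttest_rel(original, reconstructed).pvalue    # paired t-test
r, _ = pearsonr(original, reconstructed)
ccc = concordance_cc(original, reconstructed)
stable = (p_value > 0.05) and (ccc >= 0.9)             # illustrative stability rule
print(round(p_value, 3), round(r, 3), round(ccc, 3), stable)
```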


2011 ◽  
Vol 24 (23) ◽  
pp. 6210-6226 ◽  
Author(s):  
S. Zhang

Abstract A skillful decadal prediction that foretells varying regional climate conditions over seasonal–interannual to multidecadal time scales is of societal significance. However, predictions initialized from the climate-observing system tend to drift away from observed states toward the imperfect model climate because of the model biases arising from imperfect model equations, numeric schemes, and physical parameterizations, as well as the errors in the values of model parameters. Here, a simple coupled model that simulates the fundamental features of the real climate system and a "twin" experiment framework are designed to study the impact of initialization and parameter optimization on decadal predictions. One model simulation is treated as "truth" and sampled to produce "observations" that are assimilated into other simulations to produce observation-estimated states and parameters. The degree to which the model forecasts based on different estimates recover the truth is an assessment of the impact of coupled initial shocks and parameter optimization on climate predictions of interest. The results show that coupled model initialization through coupled data assimilation, in which all coupled model components are coherently adjusted by observations, minimizes the initial coupling shocks and reduces the forecast errors on seasonal–interannual time scales. Model parameter optimization with observations effectively mitigates the model bias, thus constraining the model drift in long time-scale predictions. The coupled model state–parameter optimization greatly enhances the model predictability. While the length of valid "atmospheric" forecasts is extended fivefold, the decadal predictability of the "deep ocean" is almost doubled. The coherence of optimized model parameters and states is critical to improve the long time-scale predictions.
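
The core idea of the twin experiment, estimating both the state and a model parameter from observations drawn from a "truth" run, can be sketched with a one-variable toy model. This is only an illustration of the concept; the model, the nudging gains and the gradient-style parameter update are ad hoc choices, not the coupled model or assimilation scheme used in the study.

```python
# Minimal "twin experiment" sketch: recover a biased parameter by assimilating
# noisy observations generated from a truth run of the same toy model.
import numpy as np

rng = np.random.default_rng(2)
dt, forcing = 0.1, 1.0
a_true, a_hat = 0.5, 0.8            # true parameter vs. biased first guess
x_true, x_hat = 2.0, 1.0            # true state vs. biased initial state
obs_err, k_state, k_param = 0.05, 0.5, 0.2

for _ in range(500):
    # truth run and a synthetic observation sampled from it
    x_true += dt * (forcing - a_true * x_true)
    obs = x_true + rng.normal(0.0, obs_err)
    # forecast with the current state and parameter estimates
    x_prev = x_hat
    x_fore = x_prev + dt * (forcing - a_hat * x_prev)
    innov = obs - x_fore
    # state update (nudging) and parameter update (gradient-like step,
    # using d(x_fore)/d(a) = -dt * x_prev)
    x_hat = x_fore + k_state * innov
    a_hat -= k_param * innov * dt * x_prev

print("estimated parameter:", round(a_hat, 2), "(truth 0.5)")
```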


Avalanche forecasting is an important measure required for the safety of people residing in hilly regions. Snow avalanches are caused by changes in snow and weather conditions. The prominent changes that culminate in an avalanche can be given higher significance in the forecasting model by applying appropriate weights. These weights are decided by the forecaster, based on the relation of each weather parameter to avalanche occurrence, with the help of historical data. A method is proposed in the current work that can help remove this subjectivity by using correlation coefficients. The present work explores the use of the Pearson correlation coefficient, Spearman rank correlation coefficient and Kendall tau correlation coefficient to obtain the weighting factors for each parameter used for avalanche forecasting. These weighted parameters are then used in a cosine-similarity-based nearest-neighbour model for avalanche forecasting, as sketched below. Bias and Peirce's skill score are the performance measures used to evaluate the outcome of the experimental work.
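
The weighting idea translates directly into a small numerical recipe. The sketch below uses synthetic data: the parameter names, the use of |Pearson r| as the weight (Spearman or Kendall tau could be substituted) and k = 5 neighbours are all illustrative assumptions.

```python
# Minimal sketch: correlation-based weights feeding a weighted
# cosine-similarity nearest-neighbour avalanche forecast.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_days = 300
params = ["new_snow", "temperature", "wind_speed", "humidity"]
X = rng.normal(size=(n_days, len(params)))                 # historical weather parameters
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, n_days) > 1).astype(int)  # avalanche day?

# Objective weights: |correlation| of each parameter with avalanche occurrence
weights = np.array([abs(pearsonr(X[:, j], y)[0]) for j in range(len(params))])

def forecast(query, k=5):
    """Weighted cosine-similarity nearest-neighbour forecast."""
    xw, qw = X * weights, query * weights
    sims = xw @ qw / (np.linalg.norm(xw, axis=1) * np.linalg.norm(qw) + 1e-12)
    nearest = np.argsort(sims)[-k:]
    return y[nearest].mean()            # fraction of avalanche days among neighbours

print(dict(zip(params, weights.round(2))))
print("avalanche probability for today's conditions:", forecast(rng.normal(size=4)))
```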


2020 ◽  
Author(s):  
Jessica E Stockdale ◽  
Renny Doig ◽  
Joosung Min ◽  
Nicola Mulberry ◽  
Liangliang Wang ◽  
...  

Abstract Background: Many countries have implemented population-wide interventions such as physical distancing measures, in efforts to control COVID-19. The extent and success of such measures has varied. Many jurisdictions with declines in reported COVID-19 cases are moving to relax measures, while others are continuing to intensify efforts to reduce transmission. Aim: We aim to determine the time frame between a change in COVID-19 measures at the population level and the observable impact of such a change on cases. Methods: We examine how long it takes for there to be a substantial difference between the cases that occur following a change in control measures and those that would have occurred at baseline. We then examine how long it takes to detect a difference, given delays and noise in reported cases. We use changes in population-level (e.g., distancing) control measures informed by data and estimates from British Columbia, Canada. Results: We find that the time frames are long: it takes three weeks or more before we might expect a substantial difference in cases given a change in population-level COVID-19 control, and it takes slightly longer to detect the impacts of the change. The time frames are shorter (11-15 days) for dramatic changes in control, and they are impacted by noise and delays in the testing and reporting process, with delays reaching up to 25-40 days. Conclusion: The time until a change in broad control measures has an observed impact is longer than is typically understood, and is longer than the mean incubation period (the time between exposure and symptom onset) and the often used 14-day time period. Policy makers and public health planners should consider this when assessing the impact of policy change, and efforts should be made to develop rapid, consistent, real-time COVID-19 surveillance.


Author(s):  
Natalie J. Harvey ◽  
Nathan Huntley ◽  
Helen Dacre ◽  
Michael Goldstein ◽  
David Thomson ◽  
...  

Abstract. Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010 there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties on these predictions, and performing many simulations using these complex models is computationally expensive. In this paper a Bayes linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied to combine information from many evaluations of a computationally fast version of NAME with relatively few evaluations of a slower, more accurate, version. This approach is effective when it is not possible to run the accurate simulator many times and when there is also little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute most to the output uncertainty are the initial plume rise height, mass eruption rate, free tropospheric turbulence levels and precipitation threshold for wet deposition. This information can be used to inform future model development, observational campaigns and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into the parameterisation of atmospheric turbulence. Furthermore, it can also be used to inform the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the fast, less accurate version of NAME can provide useful information without needing the accurate version at all. This approach can easily be extended to other case studies, simulators or hazards.
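
The Bayes linear adjustment at the heart of an emulator can be written down in a few lines. Below is a minimal one-dimensional sketch, assuming a cheap toy function in place of a NAME run; the constant prior mean, the squared-exponential prior covariance and its length scale are illustrative modelling choices, not those used in the paper.

```python
# Minimal Bayes linear emulator sketch: adjust prior beliefs about simulator
# output at new inputs using a handful of "expensive" design runs.
import numpy as np

def simulator(x):                       # stand-in for a slow simulator run
    return np.sin(3 * x) + 0.5 * x

def cov(a, b, variance=1.0, length=0.3):
    """Squared-exponential prior covariance between input points."""
    return variance * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

x_design = np.linspace(0.0, 2.0, 8)             # a few expensive design runs
d = simulator(x_design)                         # simulator output D
prior_mean = 0.0                                # constant prior expectation

x_new = np.linspace(0.0, 2.0, 5)
var_d = cov(x_design, x_design) + 1e-6 * np.eye(len(x_design))
cov_yd = cov(x_new, x_design)

# Bayes linear adjusted expectation and variance:
#   E_D(y)   = E(y) + Cov(y, D) Var(D)^-1 (D - E(D))
#   Var_D(y) = Var(y) - Cov(y, D) Var(D)^-1 Cov(D, y)
adj_mean = prior_mean + cov_yd @ np.linalg.solve(var_d, d - prior_mean)
adj_var = np.diag(cov(x_new, x_new) - cov_yd @ np.linalg.solve(var_d, cov_yd.T))

for xi, m, v in zip(x_new, adj_mean, adj_var):
    print(f"x={xi:.2f}  emulator={m:+.3f}  simulator={simulator(xi):+.3f}  var={v:.4f}")
```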


2016 ◽  
Vol 67 (11) ◽  
pp. 1625 ◽  
Author(s):  
Jan Hesse ◽  
Jenni A. Stanley ◽  
Andrew G. Jeffs

Determining the impact of predators on juvenile spiny lobsters living on reefs is important for understanding recruitment processes that ultimately help determine the size of economically important lobster populations. The present study describes a novel approach for observing attempted predation on live juvenile spiny lobster (Jasus edwardsii) in situ, by presenting the lobster in a transparent container lit with infrared light to enable continuous monitoring, even at night, by video recording. This technique can be used to provide valuable information on overall relative predation pressure at comparative locations and habitats, as well as to identify potential predators, their mode of predation, and the timing of their predation activity. For example, predation attempts on juvenile J. edwardsii by the spotted wrasse (Notolabrus celidotus) were recorded only from 0500 to 1400 hours (daytime) and from 1900 to 2100 hours (dusk), whereas activity by the northern conger eel (Conger wilsoni) was observed only between 2100 and 0200 hours (nocturnal). This method of assessing predation of juvenile lobsters provides considerable advantages over previously used tethering methods, by allowing continuous observations over a long time period (≥ 24 h), including night time, while also eliminating experimental mortality of juvenile lobsters.

