model output — Recently Published Documents

Total documents: 850 (five years: 233)
H-index: 58 (five years: 4)

2022 · Vol 12 · Author(s): Ning Shi, Niyati Naudiyal, Jinniu Wang, Narayan Prasad Gaire, Yan Wu, ...

Meconopsis punicea is an iconic ornamental and medicinal plant whose natural habitat has degraded under global climate change, posing a serious threat to the future survival of the species. It is therefore critical to analyze the influence of climate change on the possible distribution of M. punicea for the conservation and sustainable utilization of this species. In this study, we used MaxEnt ecological niche modeling to predict the potential distribution of M. punicea under current and future climate scenarios in the southeastern margin region of the Qinghai-Tibet Plateau. Model projections under the current climate show that 16.8% of the study area is suitable habitat for M. punicea. However, future projections indicate a sharp decline in potential habitat under the 2050 and 2070 climate change scenarios. Soil type was the most important environmental variable in determining the habitat suitability of M. punicea, with a 27.75% contribution to model output. Temperature seasonality (16.41%), precipitation of the warmest quarter (14.01%), precipitation of the wettest month (13.02%), precipitation seasonality (9.41%), and annual temperature range (9.24%) also contributed significantly to model output. The mean elevation of suitable habitat for M. punicea is also likely to shift upward under most future climate change scenarios. This study provides vital information for the protection and sustainable use of medicinal species like M. punicea in the context of global environmental change. Our findings can aid in developing rational, broad-scale adaptation strategies for conservation and the management of ecosystem services in light of future climate change.
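Two of the abstract's headline numbers (fraction of the study area that is suitable, and the mean elevation of suitable cells) are simple summaries of a suitability raster. A minimal sketch of that post-processing step, with synthetic rasters and an arbitrary suitability threshold (the paper does not state its threshold):

```python
import numpy as np

def summarize_suitability(suitability, elevation, threshold=0.5):
    """Return (% of cells that are suitable, mean elevation of suitable cells).

    `threshold` is illustrative; MaxEnt studies typically derive it from the
    model's omission statistics rather than fixing it at 0.5.
    """
    suitable = suitability >= threshold
    pct = 100.0 * suitable.mean()
    mean_elev = float(elevation[suitable].mean()) if suitable.any() else float("nan")
    return float(pct), mean_elev

# Stand-in rasters: random suitability scores and an elevation grid (m).
rng = np.random.default_rng(0)
suit = rng.random((100, 100))
elev = rng.uniform(3000, 5000, (100, 100))
pct, mean_elev = summarize_suitability(suit, elev)
```

Comparing `mean_elev` between current and future scenario rasters is how an upward habitat shift like the one reported here would be quantified.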


Sensors · 2022 · Vol 22 (2) · pp. 454 · Author(s): German Sternharz, Jonas Skackauskas, Ayman Elhalwagy, Anthony J. Grichnik, Tatiana Kalganova, ...

This paper introduces a procedure to compare the functional behaviour of individual units of electronic hardware of the same type. The primary use case for this method is to estimate the functional integrity of an unknown device unit based on the behaviour of a known and proven reference unit. The method is based on the so-called virtual sensor network (VSN) approach, where the output quantity of a physical sensor measurement is replicated by a virtual model output. In the present study, this approach is extended to model the functional behaviour of electronic hardware with a neural network (NN) using long short-term memory (LSTM) layers to capture potential time dependence of the signals. The proposed method is illustrated and validated on measurements from a remote-controlled drone, which is operated with two variants of controller hardware: a reference controller unit and a malfunctioning counterpart. It is demonstrated that the presented approach successfully identifies and describes the unexpected behaviour of the test device. In the presented case study, the model outputs a signal sample prediction in 0.14 ms and achieves a reconstruction accuracy on the validation data with a root mean square error (RMSE) below 0.04 relative to the data range. In addition, three self-protection features (multidimensional boundary check, Mahalanobis distance, auxiliary autoencoder NN) are introduced to gauge the certainty of the VSN model output.
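Two of the quantities mentioned here are easy to make concrete: the RMSE relative to the data range, and a Mahalanobis-distance gate on model output. A sketch with synthetic signals (the thresholds and data are invented, not the paper's):

```python
import numpy as np

def nrmse(actual, predicted):
    """RMSE normalized by the range of the reference signal."""
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return float(rmse / (actual.max() - actual.min()))

def mahalanobis(x, mean, cov):
    """Distance of sample x from a fitted multivariate-normal envelope."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

rng = np.random.default_rng(1)

# A "good" virtual-sensor fit: prediction = reference plus small noise.
ref = np.sin(np.linspace(0.0, 10.0, 500))
pred = ref + rng.normal(0.0, 0.01, ref.size)
err = nrmse(ref, pred)  # comfortably below the 0.04 level cited above

# Mahalanobis self-protection: fit on nominal data, flag far-out samples.
train = rng.normal(0.0, 1.0, (1000, 3))
mu, cov = train.mean(axis=0), np.cov(train, rowvar=False)
d_in = mahalanobis(np.zeros(3), mu, cov)       # in-distribution sample
d_out = mahalanobis(np.full(3, 8.0), mu, cov)  # gross outlier
```

A threshold on the Mahalanobis distance (e.g. from a chi-squared quantile) would decide when the VSN output should not be trusted.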


2021 · Author(s): Jiajin Zhang, Hanqing Chao, Mannudeep K. Kalra, Ge Wang, Pingkun Yan

While various methods have been proposed to explain AI models, the trustworthiness of the generated explanations has received little examination. This paper reveals that such explanations can be vulnerable to subtle perturbations of the input and can produce misleading results. On the public CheXpert dataset, we demonstrate that specially designed adversarial perturbations can easily tamper with saliency maps, steering them toward a desired explanation while preserving the original model predictions. AI researchers, practitioners, and authoritative agencies in the medical domain should use caution when explaining AI models, because such an explanation could be irrelevant, misleading, or even adversarially manipulated without any change in the model output.
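The saliency maps in question are input gradients of the model's score. A toy illustration (not the paper's attack) of the two ingredients: gradient saliency of a small network, and the observation that a small input perturbation can leave the predicted class unchanged while the saliency map shifts. The network, weights, and perturbation are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.normal(0.0, 1.0, (16, 8))   # hidden-layer weights (made up)
w = 0.2 * rng.normal(0.0, 1.0, 16)  # output weights (made up)

def predict(x):
    """Score of a tiny ReLU network with a sigmoid output."""
    h = np.maximum(V @ x, 0.0)
    return 1.0 / (1.0 + np.exp(-(w @ h)))

def saliency(x, eps=1e-5):
    """Finite-difference gradient of the score w.r.t. each input feature."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (predict(x + e) - predict(x - e)) / (2.0 * eps)
    return g

x = rng.normal(0.0, 1.0, 8)
delta = 0.05 * rng.normal(0.0, 1.0, 8)      # small random perturbation
s0, s1 = saliency(x), saliency(x + delta)
same_class = (predict(x) > 0.5) == (predict(x + delta) > 0.5)
cos = float(s0 @ s1 / (np.linalg.norm(s0) * np.linalg.norm(s1)))
```

An actual attack would optimize `delta` (e.g. by gradient descent) to maximize the saliency change subject to keeping the prediction fixed; this sketch only shows the quantities involved.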


2021 · Author(s): Faiza Azam, Jethro Betcke, Marion Schroedter-Homscheidt, Mireille Lefevre, Yves-Marie Saint-Drenan, ...

The Copernicus Atmosphere Monitoring Service (CAMS) offers a solar radiation service (CRS) providing information on surface solar irradiance (SSI). The service is currently derived from Meteosat Second Generation (MSG) data, and the service evolution includes its extension to other parts of the globe. CRS provides clear-sky and all-sky time series that combine satellite data products with numerical model output from CAMS on aerosols, water vapour and ozone. These products are available from 2004 until yesterday. Regular quality control of input parameters, quarterly benchmarking against ground measurements and automatic consistency checks ensure the service quality.

Variability of surface solar irradiance on the 1-minute scale is of particular interest for solar energy applications. Variability classes can be defined from ground-based as well as satellite-based measurements. This study presents an evaluation of the CAMS CRS based on the eight variability classes derived from ground observations of direct normal irradiance (DNI) (Schroedter-Homscheidt et al., 2018). Such an analysis helps assess the impact of recent improvements in the derivation of all-sky irradiance under different cloud conditions.

References:

Schroedter-Homscheidt, M., S. Jung, M. Kosmale, 2018: Classifying ground-measured 1 minute temporal variability within hourly intervals for direct normal irradiances. Meteorol. Z. 27, 2, 160–179. DOI: 10.1127/metz/2018/0875.


2021 · Author(s): Beth Baribault, Anne Collins

Using Bayesian methods to apply computational models of cognitive processes, or Bayesian cognitive modeling, is an important new trend in psychological research. The rise of Bayesian cognitive modeling has been accelerated by the introduction of software such as Stan and PyMC3 that efficiently automates the Markov chain Monte Carlo (MCMC) sampling used for Bayesian model fitting. Unfortunately, Bayesian cognitive models can struggle to pass the computational checks required of all Bayesian models. If any failures are left undetected, inferences about cognition based on model output may be biased or incorrect. As such, Bayesian cognitive models almost always require troubleshooting before being used for inference. Here, we present a deep treatment of the diagnostic checks and procedures that are critical for effective troubleshooting but are often left underspecified in tutorial papers. After a conceptual introduction to Bayesian cognitive modeling and MCMC sampling, we outline the diagnostic metrics, procedures, and plots necessary to identify problems in model output, with an emphasis on how these requirements have recently been improved. Throughout, we explain how the most commonly encountered problems may be remedied with specific, practical solutions. We also introduce matstanlib, our MATLAB modeling support library, and demonstrate how it facilitates troubleshooting of an example hierarchical Bayesian model of reinforcement learning implemented in Stan. With this comprehensive guide to techniques for detecting, identifying, and overcoming problems in fitting Bayesian cognitive models, psychologists across subfields can more confidently build and use Bayesian cognitive models. All code is freely available from github.com/baribault/matstanlib.
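One of the core diagnostics such guides cover is the split-R-hat convergence statistic, which compares between-chain and within-chain variance. A minimal sketch of the classic formula (Stan's current diagnostic adds rank-normalization and folding on top of this, and matstanlib wraps it further):

```python
import numpy as np

def split_rhat(chains):
    """Split-R-hat for one parameter. `chains`: array (n_chains, n_draws)."""
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    # Split each chain in half so non-stationarity within a chain is visible.
    splits = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    m, n = splits.shape
    chain_means = splits.mean(axis=1)
    W = splits.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_plus = (n - 1) / n * W + B / n      # pooled variance estimate
    return float(np.sqrt(var_plus / W))

rng = np.random.default_rng(4)
good = rng.normal(0.0, 1.0, (4, 1000))                  # well-mixed chains
bad = good + np.array([[0.0], [0.0], [0.0], [5.0]])     # one stuck chain
```

Values near 1 indicate convergence; the shifted fourth chain in `bad` inflates the between-chain variance and pushes R-hat well above the usual 1.01 cutoff.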


2021 · Author(s): Sabine Robrecht, Robert Osinski, Ute Dauert, Andreas Lambert, Stefan Gilge, ...

Poor air quality endangers public health. Informing the public and taking short-term measures to improve air quality (e.g. traffic management) require an air quality forecast that is as accurate as possible and, particularly in urban areas, as highly spatially resolved as possible. Numerical air quality models generally have too coarse a spatial resolution for this task.

The goal of the "LQ-Warn" project is therefore to improve air quality forecasting, particularly with regard to exceedances of limit values. Based on the model results for air quality parameters from the Copernicus Atmosphere Monitoring Service (CAMS), two approaches are pursued. First, forecasts are computed with the regional chemical transport model "REM-CALGRID" (RCG), incorporating CAMS results and regional emission data. This achieves a high horizontal resolution of 2 km and yields hourly forecasts with lead times of up to 72 hours for various air pollutants, including NO₂, O₃, PM₁₀ and PM₂.₅. Second, the statistical post-processing method "Model Output Statistics" (MOS) is applied to compute point forecasts of the mass concentrations of NO₂, O₃, PM₁₀ and PM₂.₅ with lead times of up to 96 hours. Air-quality-related measurements, CAMS model results and meteorological parameters from the ECMWF numerical weather prediction model serve as predictors.

First results of the forecasts computed with these two approaches are presented, and the advantages and disadvantages of each method are highlighted. At the forecast points, the statistical post-processing method MOS achieves a significant reduction of the RMSE (root mean square error) compared with the forecasts of the numerical CAMS model, especially for the mass concentrations of O₃ and NO₂. Extending this clear improvement of the air quality forecast sensibly to the full spatial domain, however, remains a challenge. The RCG model, in contrast, shows a smaller improvement in forecast skill at individual forecast points than the MOS approach, but it provides temporally and spatially consistent forecasts at all model grid points, and small-scale concentration differences can be predicted much more realistically than with the CAMS forecasts thanks to the higher model resolution. A further goal of the LQ-Warn project is to combine the two approaches in order to exploit the advantages of both and to provide an accurate air quality forecast covering all of Germany.
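At its core, the MOS step is a regression from raw model output (plus auxiliary predictors) onto station observations, evaluated by the RMSE reduction it achieves. A minimal sketch with synthetic data; the operational MOS described above uses many more predictors and separate models per station and lead time:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000

# Synthetic "truth" (observed NO2 at a station, ug/m3), a biased raw
# model forecast, and one extra meteorological predictor (temperature).
truth = 40.0 + 10.0 * rng.normal(size=n)
raw = 1.3 * truth - 8.0 + rng.normal(0.0, 4.0, n)
temp = rng.normal(15.0, 5.0, n)

# MOS: ordinary least squares mapping predictors -> observation.
X = np.column_stack([np.ones(n), raw, temp])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
mos = X @ coef

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
rmse_raw, rmse_mos = rmse(raw, truth), rmse(mos, truth)
```

The fitted coefficients absorb the systematic bias and scaling error of the raw forecast, which is exactly the point-wise improvement the abstract reports; what regression alone cannot do is generalize that correction to locations without observations, hence the stated difficulty of extending MOS "to the full spatial domain".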


2021 · Vol 9 · Author(s): Donald N. Christie, Frank J. Peel, Gillian M. Apps, David "Stan" Stanbrook

The stratal architecture of deep-water minibasins is dominantly controlled by the interplay of two factors: structure growth and sediment supply. In this paper we explore the utility of a reduced-complexity, fast computational method (Onlapse-2D) to simulate stratal geometry, using a process of iteration to match the model output to available subsurface control (well logs and 3D seismic data). This approach was used to model the Miocene sediments in two intersecting lines of section in a complex minibasin in the deep-water Campeche Basin, offshore Mexico. A good first-pass match between model output and geological observations was obtained, allowing us to identify and separate the effects of two distinct phases of compressional folding and a longer-lasting episode of salt withdrawal/diapirism, and to determine the timing of these events. This modelling provides an indication of the relative contribution of background sedimentation (pelagic and hemipelagic) vs. sediment-gravity-flow deposition (e.g. turbidites) within each layer of the model. The inferred timing of the compressional events derived from the model is consistent with other geological observations within the basin. The process of iteration towards a best-fit model leaves significant but local residual mismatches at several levels in the stratigraphy; these correspond to surfaces with anomalous negative (erosional) or positive (constructive depositional) palaeotopography. We label these mismatch surfaces "informative discrepancies" because the magnitude of the mismatch allows us to estimate the geometry and magnitude of the local seafloor topography. Reduced-complexity simulation is shown to be a useful and effective approach, which, when combined with an existing seismic interpretation, provides insight into the geometry and timing of controlling processes, indicates the nature of the sediments (background vs. sediment-gravity-flow) and aids in the identification of key erosional or constructional surfaces within the stratigraphy.
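The essence of a reduced-complexity fill model is that sediment ponds in the structural lows, producing onlap against the basin flanks. A much-simplified, invented sketch of that single ingredient (Onlapse-2D itself is more sophisticated; the bisection fill rule and all numbers here are illustrative):

```python
import numpy as np

def fill_lowest(floor, volume, dx=1.0):
    """Deposit `volume` of sediment by flooding the lowest parts of a 1-D
    basin-floor profile up to a flat level (onlap geometry)."""
    lo, hi = float(floor.min()), float(floor.max()) + volume
    level = lo
    for _ in range(60):  # bisection on the flat fill level
        level = 0.5 * (lo + hi)
        v = np.clip(level - floor, 0.0, None).sum() * dx
        if v < volume:
            lo = level
        else:
            hi = level
    return np.maximum(floor, level)

x = np.linspace(-1.0, 1.0, 201)
dx = float(x[1] - x[0])
floor = 100.0 * x ** 2                    # synclinal minibasin floor (elevation)
top = fill_lowest(floor, volume=50.0, dx=dx)
filled = top > floor + 1e-9               # cells that received sediment
```

Iterating this step while deepening the floor between steps (structure growth) builds a layered fill; matching the resulting layer geometry against well and seismic picks is the calibration loop the paper describes.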


2021 · Vol 8 · Author(s): Sophie Landon, Oliver Chalkley, Gus Breese, Claire Grierson, Lucia Marucci

Whole-cell modelling is a rapidly expanding field with many applications in lab experiment design and predictive drug testing. Although whole-cell model output contains a wealth of information, it is complex and high-dimensional and thus hard to interpret. Here, we present an analysis pipeline that combines machine learning, dimensionality reduction, and network analysis to interpret and visualise metabolic reaction fluxes from a set of single-gene knockouts simulated in the Mycoplasma genitalium whole-cell model. We found that the reaction behaviours show trends that correlate with phenotypic classes of the simulation output, highlighting particular cellular subsystems that malfunction after gene knockouts. From a graphical representation of the metabolic network, we saw that there is a set of reactions that can be used as markers of a phenotypic class, showing their importance within the network. Our analysis pipeline can support the understanding of the complexity of in silico cells without detailed knowledge of the constituent parts, which can help to understand the effects of gene knockouts and, as whole-cell models become more widely built and used, aid genome design.
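The dimensionality-reduction stage of such a pipeline can be sketched in a few lines: project a (simulations × reactions) flux matrix onto its leading principal components and check how much variance the low-dimensional embedding retains. The data below are synthetic, with a planted two-dimensional "phenotype" structure; the actual pipeline works on whole-cell simulation output and adds clustering and network analysis on top:

```python
import numpy as np

def pca(fluxes, k=2):
    """Return (scores in the first k components, fraction of variance kept)."""
    centered = fluxes - fluxes.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    return centered @ Vt[:k].T, float(var[:k].sum() / var.sum())

rng = np.random.default_rng(6)
latent = rng.normal(0.0, 1.0, (300, 2))      # two hidden phenotype axes
mixing = rng.normal(0.0, 1.0, (2, 40))       # how axes express in 40 "fluxes"
fluxes = latent @ mixing + rng.normal(0.0, 0.1, (300, 40))
scores, explained = pca(fluxes, k=2)
```

When knockout phenotype classes separate in the `scores` plane, as the abstract reports for the real model, the component loadings point to the reactions that drive the separation.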


2021 · Vol 156 (A4) · Author(s): N. Hifi, N. Barltrop

This paper applies a newly developed methodology to calibrate the corrosion model within a structural reliability analysis. The methodology combines data from experience (measurements and expert judgment) with prediction models to adjust the structural reliability models. Two corrosion models published in the literature are used to demonstrate the calibration technique: one model is used to predict future degradation, and the second to represent the recorded inspection data. The results of the calibration process are presented and discussed.
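A toy version of the calibration idea: treat the published corrosion-rate model as a Gaussian prior and update it with inspection measurements via a conjugate normal-normal update. The paper's actual methodology also folds in expert judgment and full reliability models; all numbers below are invented for illustration:

```python
import numpy as np

def calibrate(prior_mean, prior_var, measurements, meas_var):
    """Posterior (mean, variance) of a corrosion rate: Gaussian prior from
    the prediction model, Gaussian likelihood from inspection data."""
    n = len(measurements)
    post_var = 1.0 / (1.0 / prior_var + n / meas_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(measurements) / meas_var)
    return float(post_mean), float(post_var)

prior_mean, prior_var = 0.10, 0.02 ** 2            # model-predicted rate, mm/yr
inspections = np.array([0.14, 0.15, 0.13, 0.16])   # measured rates, mm/yr
post_mean, post_var = calibrate(prior_mean, prior_var, inspections, 0.03 ** 2)
```

The posterior mean lands between the model prediction and the inspection average, and its variance shrinks below the prior's, which is the qualitative behaviour any such calibration scheme should exhibit.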

