A software framework for optimizing the design of spaceborne hyperspectral imager architectures

Author(s):  
Adam Erickson ◽  
Benjamin Poulter ◽  
David Thompson ◽  
Gregory Okin ◽  
Shawn Serbin ◽  
...  

<p>Quantifying the capacity, and uncertainty, of proposed spaceborne hyperspectral imagers to retrieve atmospheric and surface state information is necessary to optimize future satellite architectures for their science value. Given the vast potential joint trade-and-environment space, modeling key ‘globally representative’ points in this <em>n</em>-dimensional space is a practical way to improve computational tractability. Given guidance from policy instruments such as the NASA Decadal Survey and its recommended Designated Observables (DOs), the downselect process can be viewed as a constrained multi-objective optimization. The need to simulate imager architecture performance to support the downselect has motivated the development of new mathematical models for estimating radiometric and retrieval uncertainties under conditions analogous to real-world environments. These goals can be met with recent advances that integrate mature atmospheric inversion approaches, such as Optimal Estimation (OE) with joint atmospheric-surface state estimation (Thompson et al. 2018), with the EnMAP end-to-end simulation tool, EeteS (Segl et al. 2012), which utilizes OE for inversions. While surface-reflectance and retrieval simulation models are normally run in isolation in local computing environments, we extend these tools to enable uncertainty quantification in new representative environments, and thereby increase the robustness of the downselect process, by providing an advanced simulation model to the broader hyperspectral imaging community as software-as-a-service (SaaS). Here, we describe and demonstrate our instrument modeling web service and the corresponding hyperspectral traceability analysis (HyperTrace) library for Python. The modeling service and underlying HyperTrace OE library are deployed on the NASA DISCOVER high-performance computing (HPC) infrastructure. 
An intermediate HTTP server communicates between the FTP and HTTP servers, providing persistent archival of model inputs and outputs for subsequent meta-analyses. To facilitate community participation, users simply transfer a folder containing ENVI-format hyperspectral imagery and a corresponding JSON metadata file to the FTP server, from which it is pulled to a NASA DISCOVER server for processing; statistical, graphical, and ENVI-formatted results are subsequently returned to the FTP server, where they are available for users to download. This activity provides an expanded capability for estimating the science values of architectures under consideration for NASA’s Surface Biology and Geology Designated Observable.</p>
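The submission workflow described above (a folder of ENVI imagery plus a JSON metadata file, pushed to an FTP server) can be sketched client-side as follows. The host name, credentials, folder layout, and metadata fields here are illustrative assumptions, not the actual HyperTrace service interface.

```python
# Sketch of a client-side submission to an FTP-based processing service.
# Host, credentials, and metadata fields are hypothetical.
import json
from ftplib import FTP
from pathlib import Path

def write_metadata(folder: Path, meta: dict) -> Path:
    """Write the JSON metadata file that accompanies the ENVI imagery."""
    path = folder / "metadata.json"
    path.write_text(json.dumps(meta, indent=2))
    return path

def submit(folder: Path, host: str, user: str, password: str) -> None:
    """Upload every file in `folder` (ENVI image, header, and metadata)."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.mkd(folder.name)
        ftp.cwd(folder.name)
        for f in sorted(folder.iterdir()):
            with f.open("rb") as fh:
                ftp.storbinary(f"STOR {f.name}", fh)

# Build a local submission folder (no network access needed for this part).
folder = Path("hypertrace_submission")
folder.mkdir(exist_ok=True)
meta = {"sensor": "proposed_imager", "interleave": "bil", "wavelength_units": "nm"}
meta_path = write_metadata(folder, meta)
```

Calling `submit(folder, host, user, password)` would then transfer the folder; results would later appear on the same server for download.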

1998 ◽  
Vol 4 (1) ◽  
pp. 1-19 ◽  
Author(s):  
G. Zuccaro ◽  
I. Elishakoff ◽  
A. Baratta

The paper presents a novel approach to predicting the response of earthquake-excited structures. The earthquake excitation is expanded in a series of deterministic functions, and the coefficients of the series are represented as a point in N-dimensional space. Each available accelerogram at a given site is then represented as a point in this space, modeling the available fragmentary historical data. The minimum-volume ellipsoid containing all the points is constructed, and ellipsoidal models of uncertainty pertinent to earthquake excitation are developed. The maximum response of a structure subjected to the earthquake excitation, within the ellipsoidal model of the latter, is then determined. This procedure of determining the least favorable response was termed antioptimization in the literature (Elishakoff, 1991). It appears that, under the inherent uncertainty of earthquake excitation, antioptimization analysis is a viable alternative to the stochastic approach.
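The minimum-volume enclosing ellipsoid construction described above can be sketched with Khachiyan's iterative algorithm. This is a generic illustration, not the authors' implementation.

```python
import numpy as np

def min_volume_ellipsoid(points, tol=1e-5):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid.

    Returns (c, A) such that (x - c)^T A (x - c) <= 1 (approximately, up to
    the tolerance) for every input point.
    """
    P = np.asarray(points, dtype=float)      # n points, d dimensions
    n, d = P.shape
    Q = np.column_stack([P, np.ones(n)]).T   # lifted (d+1) x n matrix
    u = np.full(n, 1.0 / n)                  # weights over the points
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        # M[i] = Q_i^T X^{-1} Q_i for each point i
        M = np.einsum("ij,ji->i", Q.T @ np.linalg.inv(X), Q)
        j = np.argmax(M)
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u                               # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return c, A

# Square of "accelerogram points" plus one interior point.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [1.0, 0.5]])
c, A = min_volume_ellipsoid(pts)
```

The antioptimization step would then maximize the structural response over all excitations inside this ellipsoid.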


Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1501
Author(s):  
Juan Ruiz-Rosero ◽  
Gustavo Ramirez-Gonzalez ◽  
Rahul Khanna

There are many tools for simulating traffic and routes in public transport systems, using different simulation models (macroscopic, microscopic, and mesoscopic). Unfortunately, these tools are limited when simulating a complete public transport system including all of its buses and routes (up to 270 for the London Underground). The processing times for these types of simulation grow unmanageably, since all the variables required to simulate the system behavior consistently and reliably must be included. In this paper, we present a new simulation model for public transport routes, called Masivo. It runs the public transport stops’ operations concurrently as OpenCL work items on a multi-core high-performance platform. The performance results of Masivo show a speed-up factor of 10.2 compared with the simulator model running on one compute unit, and a speed-up factor of 278 compared with the validation simulator. The real-time factor achieved was 3050 times faster than the 10 h simulated duration, for a public transport system of 300 stops, 2400 buses, and 456,997 passengers.
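The reported timing figures imply concrete wall-clock durations, which this small check derives; the wall-clock times are inferred from the stated factors, not reported directly in the abstract.

```python
# Back-of-the-envelope check of the reported Masivo timing figures.
simulated_hours = 10
real_time_factor = 3050          # simulated time / wall-clock time
validation_speedup = 278         # Masivo vs. the validation simulator

# 36,000 simulated seconds at a 3050x real-time factor:
wall_clock_seconds = simulated_hours * 3600 / real_time_factor   # ~11.8 s

# Implied wall-clock time of the validation simulator for the same run:
validation_seconds = wall_clock_seconds * validation_speedup     # ~55 min
```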


2020 ◽  
Vol 189 (8) ◽  
pp. 861-869 ◽  
Author(s):  
Chuan Hong ◽  
Rui Duan ◽  
Lingzhen Zeng ◽  
Rebecca A Hubbard ◽  
Thomas Lumley ◽  
...  

Abstract Funnel plots have been widely used to detect small-study effects in the results of univariate meta-analyses. However, there is no existing visualization tool that is the counterpart of the funnel plot in the multivariate setting. We propose a new visualization method, the galaxy plot, which can simultaneously present the effect sizes of bivariate outcomes and their standard errors in a 2-dimensional space. We illustrate the use of the galaxy plot with 2 case studies, including a meta-analysis of hypertension trials with studies from 1979–1991 (Hypertension. 2005;45(5):907–913) and a meta-analysis of structured telephone support or noninvasive telemonitoring with studies from 1966–2015 (Heart. 2017;103(4):255–257). The galaxy plot is an intuitive visualization tool that can aid in interpreting results of multivariate meta-analysis. It preserves all of the information presented by separate funnel plots for each outcome while elucidating more complex features that may only be revealed by examining the joint distribution of the bivariate outcomes.
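The galaxy-plot idea of placing studies in the plane of their bivariate effect sizes can be sketched numerically. The convention below (ellipse semi-axes proportional to the inverse standard errors, so precise studies appear larger) is an assumption for illustration, not necessarily the paper's exact encoding, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 20                                                   # number of studies
y1 = rng.normal(0.3, 0.2, k)                             # effect on outcome 1
y2 = rng.normal(-0.1, 0.2, k)                            # effect on outcome 2
se1 = rng.uniform(0.05, 0.3, k)                          # standard errors
se2 = rng.uniform(0.05, 0.3, k)

# Assumed convention: each study becomes an ellipse centred at (y1, y2)
# with semi-axes proportional to 1/SE on each axis.
a, b = 0.02 / se1, 0.02 / se2

theta = np.linspace(0, 2 * np.pi, 100)
ellipses = [(y1[i] + a[i] * np.cos(theta), y2[i] + b[i] * np.sin(theta))
            for i in range(k)]
```

Each `(x, y)` pair in `ellipses` can be passed directly to a 2-D plotting routine to render one study's glyph.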


2018 ◽  
Vol 74 (12) ◽  
pp. 1129-1168 ◽  
Author(s):  
Rana Ashkar ◽  
Hassina Z. Bilheux ◽  
Heliosa Bordallo ◽  
Robert Briber ◽  
David J. E. Callaway ◽  
...  

The scattering of neutrons can be used to provide information on the structure and dynamics of biological systems on multiple length and time scales. Pursuant to a National Science Foundation-funded workshop in February 2018, recent developments in this field are reviewed here, as well as future prospects that can be expected given recent advances in sources, instrumentation and computational power and methods. Crystallography, solution scattering, dynamics, membranes, labeling and imaging are examined. For the extraction of maximum information, the incorporation of judicious specific deuterium labeling, the integration of several types of experiment, and interpretation using high-performance computer simulation models are often found to be particularly powerful.


2019 ◽  
Author(s):  
Evan C Carter

Meta-analysis represents the promise of cumulative science: that each successive study brings us greater understanding of a given phenomenon. As such, meta-analyses are highly influential and growing in popularity. However, there are well-known threats to the validity of meta-analytic results; processes such as publication bias and questionable research practices can cause researchers to massively overestimate the evidence in support of a claim. Many statistical methods exist to correct for such bias, but no single method has been found to be robust in all realistic conditions. Here, I describe a method that merges statistical simulation and deep learning to achieve an unprecedented level of robust meta-analytic estimation in the face of numerous forms of bias and other historically problematic conditions. Furthermore, the resulting estimator, called DeepMA, has the unique property that it can easily evolve: as new conditions requiring robustness are identified, DeepMA can be re-trained to maintain high performance. Given the weaknesses that have been identified for meta-analysis, the current consensus is that it should serve as simply another data point rather than residing at the top of the hierarchy of evidence. The approach I describe, however, holds the potential to eliminate these weaknesses, possibly solidifying meta-analysis as the platinum standard in scientific debate.
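A toy simulation (not DeepMA itself) illustrates the problem the abstract describes: when publication favors significant positive results, the naive meta-analytic mean overestimates a true null effect.

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.0
n_per_study = 50
k = 2000                                  # simulated studies

se = 1.0 / np.sqrt(n_per_study)           # standard error of each estimate
estimates = rng.normal(true_effect, se, size=k)
z = estimates / se

# Publication bias: significant positive results are always published;
# everything else is published only 10% of the time.
published = (z > 1.96) | (rng.random(k) < 0.1)

naive = estimates[published].mean()       # biased upward by selection
full = estimates.mean()                   # close to the true effect of 0
```

Bias-correction methods (and, per the abstract, DeepMA) aim to recover something close to `full` when only the `published` subset is observed.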


2019 ◽  
Vol 26 (1) ◽  
pp. 338-346
Author(s):  
Stefan Czypionka ◽  
Frank Kienhöfer

Abstract The wheel of a passenger vehicle must be designed to be both safe and light. Despite the tremendous potential of carbon fibre as an automotive material, owing to its high strength and low weight, carbon fibre reinforced plastics (CFRPs) remain uncommon in vehicle wheels. Manufacturing and testing CFRP prototypes is expensive, so it is advantageous to develop simulation models for composite weight reduction; such models can provide insight into how lighter CFRP wheels can be designed. This study presents the design development of a CFRP wheel for a high-performance roadster; the CFRP wheel is offered by an automotive manufacturer as a high-performance option in place of aluminium wheels. Finite element (FE) simulations were initially conducted assuming an isotropic material. This initial model was used to eliminate stress concentrations and to design and manufacture an initial CFRP wheel, which weighs 6.8 kg compared with 8.1 kg for the original aluminium wheel. This initial design passed the dynamic cornering fatigue test (the most stringent strength test for wheels). Thereafter, the wheel was instrumented with strain gauges and a bending moment was applied to the hub using a custom-built test rig. The rig produced a static load equivalent to the dynamic cornering fatigue test (in which the applied bending moment varies sinusoidally) and allowed the deflection of the load arm to be measured. The experimentally measured strains agreed well with an FE model that includes the CFRP laminate properties. Two alternative laminate options were simulated using the FE model; both showed an increase in stiffness and a calculated weight reduction. This study shows that an aluminium wheel for a high-performance roadster can be redesigned in CFRP to be 16% lighter, and that using an FE model a further 152 g weight reduction is possible (an 18% total weight reduction compared with the aluminium wheel).
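The quoted percentages follow directly from the reported masses, as this short check confirms.

```python
# Check of the reported weight-reduction percentages.
aluminium_kg = 8.1
cfrp_kg = 6.8
further_saving_kg = 0.152

# (8.1 - 6.8) / 8.1 ~ 0.16  -> the 16% initial reduction
initial_reduction = (aluminium_kg - cfrp_kg) / aluminium_kg

# With the further 152 g saving: (8.1 - 6.648) / 8.1 ~ 0.18 -> 18% total
total_reduction = (aluminium_kg - (cfrp_kg - further_saving_kg)) / aluminium_kg
```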


2008 ◽  
Vol 5 (2) ◽  
Author(s):  
Yudong Sun ◽  
Steve McKeever

Summary Biomolecular modelling provides computational, simulation-based methods for investigating biological processes from the quantum-chemical to the cellular level. Modelling such microscopic processes requires an atomic description of the biological system and is conducted in fine timesteps; consequently, the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. Integrating diverse biomolecular models requires converting molecular data between the different data representations of the different models. This data conversion is often non-trivial, requires extensive human input, and is inevitably error-prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is built around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.
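The pattern of using an XML intermediate to bridge molecular data formats can be sketched as follows. The element and attribute names below are illustrative assumptions, not the actual BioSimML schema.

```python
# Sketch: convert a hypothetical XML molecular record to plain-text XYZ format.
import xml.etree.ElementTree as ET

biosimml_like = """
<molecule name="water">
  <atom element="O" x="0.000" y="0.000" z="0.000"/>
  <atom element="H" x="0.757" y="0.586" z="0.000"/>
  <atom element="H" x="-0.757" y="0.586" z="0.000"/>
</molecule>
"""

def to_xyz(xml_text: str) -> str:
    """Render the XML intermediate as an XYZ file: count, title, atom lines."""
    root = ET.fromstring(xml_text)
    atoms = root.findall("atom")
    lines = [str(len(atoms)), root.get("name", "")]
    for a in atoms:
        lines.append(f'{a.get("element")} {a.get("x")} {a.get("y")} {a.get("z")}')
    return "\n".join(lines)

xyz = to_xyz(biosimml_like)
```

A second converter in the opposite direction (format to XML) would complete the hub-and-spoke interoperability the abstract describes, avoiding pairwise converters between every pair of formats.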


2020 ◽  
Author(s):  
Jorge Macias ◽  
Manuel J. Castro ◽  
Marc de la Asunción ◽  
José Manuel González-Vida ◽  
Carlos Sánchez-Linares ◽  
...  

<p>Tsunami simulation in the framework of Tsunami Early Warning Systems (TEWS) is a fairly recent achievement, but it is still limited in the size of the problem and restricted to tsunami wave propagation. Faster Than Real Time (FTRT) tsunami simulations require greatly improved and highly efficient computational methods to achieve extremely fast and effective calculations, and HPC facilities have the role of pushing this efficiency to the maximum possible while drastically reducing computational times. Putting these two ingredients together is the aim of Pilot Demonstrator 2 (PD2) in the ChEESE project. This PD will comprise both earthquake and landslide sources. Earthquake tsunami generation is somewhat simpler than landslide tsunami generation, as landslide-generated tsunamis depend on the landslide dynamics, which necessitates coupling dynamic landslide simulation models to the tsunami propagation. In both cases, FTRT simulations in several contexts and configurations are the final aim of this pilot.</p><p>The objectives of our work in the ChEESE project include achieving unprecedented FTRT tsunami computations with existing models and investigating their scalability limits; increasing the size of the problems, by increasing spatial resolution and/or producing longer simulations while still computing FTRT, to deal with problems and resolutions never handled before; developing a TEWS that includes inundation for a particular target coastal zone; running the numerous scenarios required for PTHA (PD7) and PTF (PD8), an aim unattainable at present; and including more physics in shallow-water models to account for dispersive effects.</p><p>Up to now, the two European tsunami flagship codes selected by the ChEESE project (Tsunami-HySEA and Landslide-HySEA) have been audited and their efficiency further improved. 
The improved code versions have been tested in three European Tier-0 HPC facilities: BSC (Spain), CINECA (Italy), and Piz Daint (Switzerland), using up to 32 NVIDIA GPUs (P100 and V100) for scaling purposes. Computing times have been drastically reduced, and a PTF study composed of around 10,000 scenarios (4 nested grids, 12 M cells, 8-hour simulations) has been computed in 6 days of wall-clock time on the 64 GPUs available to us at the BSC.</p><p><strong>Acknowledgements</strong>. This research has been partially supported by the Spanish Government research project <strong>MEGAFLOW</strong> (RTI2018-096064-B-C21), Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech, and the ChEESE project (EU Horizon 2020, grant agreement Nº 823844), https://cheese-coe.eu/</p>
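The reported PTF campaign figures imply a per-GPU throughput, which this short check derives; the per-scenario time is inferred from the stated totals, not reported directly.

```python
# Implied throughput of the reported PTF study (~10,000 scenarios,
# 64 GPUs, 6 days of wall-clock time).
scenarios = 10_000
gpus = 64
days = 6

scenarios_per_gpu_day = scenarios / (gpus * days)        # ~26 per GPU per day
minutes_per_scenario = 24 * 60 / scenarios_per_gpu_day   # ~55 min per 8 h run
```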


2016 ◽  
Author(s):  
Inti Pelupessy ◽  
Ben van Werkhoven ◽  
Arjen van Elteren ◽  
Jan Viebahn ◽  
Adam Candy ◽  
...  

Abstract. In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). This framework aims to provide a homogeneous environment for existing and newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, OMUSE facilitates the design of numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales. Rapid development of simulation models is made possible through simple high-level scripts, with the low-level core of the abstraction designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides, and reproducibility is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP. We discuss the types of couplings that can be implemented using OMUSE and present example applications that demonstrate the efficient and relatively straightforward model initialisation and coupling within OMUSE, including the concurrent use of data analysis tools on a running model. We also give examples of multi-scale and multi-physics simulations, embedding a regional ocean model in a global ocean model and coupling a surface wave propagation model with a coastal circulation model.
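The coupling pattern that a unified model interface enables can be sketched as below. The class and method names are illustrative assumptions, not the actual OMUSE API; the point is that two codes exposing the same interface can be advanced in lock-step coupling intervals while fields are exchanged between them.

```python
# Schematic of interface-based model coupling (names are hypothetical).

class ModelInterface:
    """Common interface every wrapped model code exposes."""
    def __init__(self, dt):
        self.time = 0.0
        self.dt = dt
        self.state = {}

    def evolve_until(self, t_end):
        while self.time < t_end:
            self.step()
            self.time += self.dt

    def step(self):
        raise NotImplementedError

class WaveModel(ModelInterface):
    def step(self):
        # Stand-in for a surface wave propagation code.
        self.state["wave_height"] = 1.0 + 0.1 * self.time

class CirculationModel(ModelInterface):
    def step(self):
        # Uses the forcing copied in from the wave model by the coupler.
        forcing = self.state.get("wave_height", 0.0)
        self.state["surface_stress"] = 0.01 * forcing

# Coupling loop: advance both models to each coupling time and exchange
# fields through their common interface.
waves = WaveModel(dt=0.5)
ocean = CirculationModel(dt=0.25)
for t in [1.0, 2.0, 3.0]:
    waves.evolve_until(t)
    ocean.state["wave_height"] = waves.state["wave_height"]
    ocean.evolve_until(t)
```

The same loop shape works regardless of which codes sit behind the interface, which is what makes cross-verification and model swapping straightforward.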

