Data assimilation using adaptive, non-conservative, moving mesh models

2019 ◽  
Vol 26 (3) ◽  
pp. 175-193 ◽  
Author(s):  
Ali Aydoğdu ◽  
Alberto Carrassi ◽  
Colin T. Guider ◽  
Chris K. R. T. Jones ◽  
Pierre Rampal

Abstract. Numerical models solved on adaptive moving meshes have become increasingly prevalent in recent years. Motivating problems include the study of fluids in a Lagrangian frame and the presence of highly localized structures such as shock waves or interfaces. In the former case, Lagrangian solvers move the nodes of the mesh with the dynamical flow; in the latter, mesh resolution is increased in the proximity of the localized structure. Mesh adaptation can include remeshing, a procedure that adds or removes mesh nodes according to specific rules reflecting constraints in the numerical solver. In this case, the number of mesh nodes will change during the integration and, as a result, the dimension of the model's state vector will not be conserved. This work presents a novel approach to the formulation of ensemble data assimilation (DA) for models with this underlying computational structure. The challenge lies in the fact that remeshing entails a different state space dimension across members of the ensemble, thus impeding the usual computation of consistent ensemble-based statistics. Our methodology adds one forward and one backward mapping step before and after the ensemble Kalman filter (EnKF) analysis, respectively. This mapping takes all the ensemble members onto a fixed, uniform reference mesh where the EnKF analysis can be performed. We consider a high-resolution (HR) and a low-resolution (LR) fixed uniform reference mesh, whose resolutions are determined by the remeshing tolerances. This way the reference meshes embed the model numerical constraints and are also upper and lower uniform meshes bounding the resolutions of the individual ensemble meshes. Numerical experiments are carried out using 1-D prototypical models, the Burgers and Kuramoto–Sivashinsky equations, and both Eulerian and Lagrangian synthetic observations. While the HR strategy generally outperforms that of LR, their skill difference can be reduced substantially by an optimal tuning of the data assimilation parameters. The LR case is appealing in high dimensions because of its lower computational burden. Lagrangian observations are shown to be very effective in that fewer of them are able to keep the analysis error at a level comparable to that of the more numerous observers in the Eulerian case. This study is motivated by the development of suitable EnKF strategies for 2-D models of sea ice that are numerically solved on a Lagrangian mesh with remeshing.
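The forward-map / analysis / backward-map cycle described in this abstract can be sketched in a few lines of 1-D code. The sketch below assumes linear interpolation for both mesh mappings and a bare-bones stochastic EnKF; the mesh sizes, observation operator, and noise levels are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each ensemble member lives on its own non-uniform mesh with its own node count.
member_meshes = [np.sort(rng.uniform(0, 1, size=n)) for n in (40, 55, 47)]
member_states = [np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
                 for x in member_meshes]

# Fixed uniform reference mesh (its resolution would be set by a remeshing tolerance).
x_ref = np.linspace(0, 1, 64)

# Forward mapping: interpolate every member onto the reference mesh so that all
# members share one state dimension and ensemble statistics become consistent.
E = np.stack([np.interp(x_ref, x, s) for x, s in zip(member_meshes, member_states)])

# A minimal stochastic EnKF analysis on the reference mesh (4 Eulerian observations).
H = np.zeros((4, x_ref.size)); H[np.arange(4), [5, 20, 40, 60]] = 1.0
R = 0.01 * np.eye(4)
y = H @ np.sin(2 * np.pi * x_ref)                    # synthetic observations
Xp = E - E.mean(axis=0)                              # ensemble perturbations
P = Xp.T @ Xp / (E.shape[0] - 1)                     # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
E_a = E + (y + rng.multivariate_normal(np.zeros(4), R, E.shape[0]) - E @ H.T) @ K.T

# Backward mapping: interpolate each analysis member back onto its own mesh.
analyses = [np.interp(x, x_ref, ea) for x, ea in zip(member_meshes, E_a)]
```

In the paper the reference mesh additionally respects the remeshing tolerances (the HR/LR choice); here it is simply a uniform grid finer than the member meshes.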


2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Paul Krause

Abstract. For dealing with dynamical instability in predictions, numerical models should be provided with accurate initial values on the attractor of the dynamical system they generate. A discrete control scheme is presented to this end for trailing variables of an evolutive system of ordinary differential equations. The Influence Sampling (IS) scheme adapts sample values of the trailing variables to input values of the determining variables in the attractor. The optimal IS scheme has affordable cost for large systems. In discrete data assimilation runs conducted with the Lorenz 1963 equations and a nonautonomous perturbation of the Lorenz equations whose dynamics shows on-off intermittency, the optimal IS was compared to the straightforward insertion method and the Ensemble Kalman Filter (EnKF). With these unstable systems the optimal IS increases by one order of magnitude the maximum spacing between insertion times that the insertion method can handle, and performs comparably to the EnKF when the EnKF converges. While the EnKF converges for sample sizes greater than or equal to 10, the optimal IS scheme does so from sample size 1. This occurs because the optimal IS scheme stabilizes the individual paths of the Lorenz 1963 equations within data assimilation processes.
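For context, the straightforward insertion baseline that the optimal IS scheme is compared against can be sketched as follows: at each insertion time the determining variable x of the Lorenz 1963 system is overwritten with its observed value, and the trailing variables y, z are left to the model. This is only a hedged illustration of the baseline (the IS scheme itself is not reproduced here), with arbitrary initial conditions and insertion spacing:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Reference ("truth") trajectory providing the observations of x.
truth = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], dense_output=True,
                  rtol=1e-9, atol=1e-9)

dt_insert = 0.1                       # spacing between insertion times
state = np.array([5.0, -5.0, 20.0])   # deliberately wrong initial condition
for t0 in np.arange(0, 20, dt_insert):
    state[0] = truth.sol(t0)[0]       # direct insertion of the observed x
    seg = solve_ivp(lorenz, (t0, t0 + dt_insert), state, rtol=1e-9, atol=1e-9)
    state = seg.y[:, -1]

# With frequent insertions the trailing variables y, z typically synchronize
# with the truth; widen dt_insert and the scheme eventually fails.
err = np.abs(state - truth.sol(20.0))
```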


2018 ◽  
Vol 30 (02) ◽  
pp. 1850017
Author(s):  
Gang Wang ◽  
Chi Yu ◽  
Ying Wang

Nasal obstruction has frequently been associated with obstructive sleep apnea-hypopnea syndrome (OSAHS). Based on CT data of three Chinese patients acquired before and after surgery, 3D numerical models of the upper airway and the soft palate were established. Inspiration and expiration were modeled using the fluid–structure interaction (FSI) method. Airflow characteristics such as velocity and pressure drop, together with the displacement distribution of the soft palate, were selected for comparison. The distribution of airflow in the upper airway and the motion of the soft palate were analyzed quantitatively before and after surgery. The results show that the coupling between the upper and lower parts of the upper airway is complex and depends on each patient's individual airway anatomy. Whether OSAHS can be treated through nasal surgery alone depends primarily on whether the improved ventilation in the upper part of the upper airway has a beneficial effect on the lower part (especially the velopharyngeal region). The simulation results agree with the polysomnography (PSG) monitoring results and the patients' presenting complaints. Moreover, the mechanical parameters obtained by numerical simulation can account for the improvement of OSAHS symptoms after surgery, providing quantitative, referential evidence for further study of the role of nasal structure in OSAHS and of the effect of nasal surgery on OSAHS.


2020 ◽  
Author(s):  
Wen Shao ◽  
Mohan Yi ◽  
Jinyuan Tang ◽  
Siyuan Sun

Abstract. The considerable heat-treatment-induced runout value in the end face of the automobile main reducer gear is always dimensionally out of tolerance. It directly affects the dimensional accuracy and grade of carburized and hardened gears, as well as post-quenching manufacturing costs. In this study, three-dimensional numerical models were developed to simulate the carburizing-quenching process of the gear based on multi-field coupling theory, using the DEFORM software. The results indicated that the nonuniform cooling rate of the gear, caused by the asymmetry of the web structure, was the determinant of the gear's severe deformation. A novel method was therefore proposed to minimize the heat-treatment-induced runout value. It was found that this value could be effectively controlled by the addition of a compensation ring and the support of a rod structure. Further experiments showed that the average runout values of the gear end face before and after the proposed heat treatment method were about 0.023 mm and 0.059 mm, respectively, in good agreement with the simulated results. The novel approach proposed in this study led to a reduction of the gear runout value by 70.0%-76.9% compared to that of the original heat treatment process, and may serve as a practical and economical way to predict and minimize heat-treatment-induced distortion in drive gears.


2019 ◽  
Author(s):  
Shane Timmons

Encouraging consumers to switch to lower-rate mortgages is important both for the individual consumer’s finances and for functioning competitive markets, but switching rates are low. Given the complexity of mortgages, one potential regulatory intervention that may increase switching rates is to provide independent advice on how to select good mortgage products and how to navigate the switching process. Working with a government consumer protection agency, we conducted an experiment with mortgage-holders to test whether such advice alters perceptions of switching. The experiment tested how (i) the attributes of the offer, (ii) perceptions about the switching process, (iii) individual feelings of competence and (iv) comprehension of the product affect willingness to switch to better offers, both before and after reading the official advice. The advice made consumers more sensitive to interest rate decreases, especially at longer terms. It also increased consumers’ confidence in their ability to select good offers. Overall, the findings imply that advice from policymakers can change perceptions and increase switching rates. Moreover, the experiment demonstrates how lab studies can contribute to behaviourally-informed policy development.


2003 ◽  
Vol 128 (1) ◽  
pp. 17-26 ◽  
Author(s):  
David J. Kay ◽  
Richard M. Rosenfeld

OBJECTIVE: The goal was to validate the SN-5 survey as a measure of longitudinal change in health-related quality of life (HRQoL) for children with persistent sinonasal symptoms. DESIGN AND SETTING: We conducted a before-and-after study of 85 children aged 2 to 12 years in a metropolitan pediatric otolaryngology practice. Caregivers completed the SN-5 survey at entry and at least 4 weeks later. The survey included 5 symptom-cluster items covering the domains of sinus infection, nasal obstruction, allergy symptoms, emotional distress, and activity limitations. RESULTS: Good test-retest reliability (R = 0.70) was obtained for the overall SN-5 score and the individual survey items (R ≥ 0.58). The mean baseline SN-5 score was 3.8 (SD, 1.0) of a maximum of 7.0, with higher scores indicating poorer HRQoL. All SN-5 items had adequate correlation (R ≥ 0.36) with external constructs. The mean change in SN-5 score after routine clinical care was 0.88 (SD, 1.19), with an effect size of 0.74, indicating good responsiveness to longitudinal change. The change scores correlated appropriately with changes in related external constructs (R ≥ 0.42). CONCLUSIONS: The SN-5 is a valid, reliable, and responsive measure of HRQoL for children with persistent sinonasal symptoms, suitable for use in outcomes studies and routine clinical care.
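The reported effect size is consistent with the standardized response mean, i.e., the mean change score divided by the standard deviation of the change scores (reconstructed from the figures in the abstract):

```python
# Standardized response mean from the reported change statistics.
mean_change = 0.88   # mean change in SN-5 score after routine clinical care
sd_change = 1.19     # SD of the change scores
effect_size = mean_change / sd_change
print(round(effect_size, 2))  # 0.74, indicating good responsiveness
```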


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Linda T. Betz ◽  
Nora Penzel ◽  
Lana Kambeitz-Ilankovic ◽  
Marlene Rosen ◽  
...  

Abstract. Recent life events have been implicated in the onset and progression of psychosis. However, psychological processes that account for the association are yet to be fully understood. Using a network approach, we aimed to identify pathways linking recent life events and symptoms observed in psychosis. Based on previous literature, we hypothesized that general symptoms would mediate between recent life events and psychotic symptoms. We analyzed baseline data of patients at clinical high risk for psychosis and with recent-onset psychosis (n = 547) from the Personalised Prognostic Tools for Early Psychosis Management (PRONIA) study. In a network analysis, we modeled links between the burden of recent life events and all individual symptoms of the Positive and Negative Syndrome Scale before and after controlling for childhood trauma. To investigate the longitudinal associations between burden of recent life events and symptoms, we analyzed multiwave panel data from seven timepoints up to month 18. Corroborating our hypothesis, burden of recent life events was connected to positive and negative symptoms through general psychopathology, specifically depression, guilt feelings, anxiety and tension, even after controlling for childhood trauma. Longitudinal modeling indicated that on average, burden of recent life events preceded general psychopathology in the individual. In line with the theory of an affective pathway to psychosis, recent life events may lead to psychotic symptoms via heightened emotional distress. Life events may be one driving force of unspecific, general psychopathology described as characteristic of early phases of the psychosis spectrum, offering promising avenues for interventions.


2021 ◽  
Vol 47 (2) ◽  
pp. 1-28
Author(s):  
Goran Flegar ◽  
Hartwig Anzt ◽  
Terry Cojean ◽  
Enrique S. Quintana-Ortí

The use of mixed precision in numerical algorithms is a promising strategy for accelerating scientific applications. In particular, the adoption of specialized hardware and data formats for low-precision arithmetic in high-end GPUs (graphics processing units) has motivated numerous efforts to carefully reduce the working precision in order to speed up computations. For algorithms whose performance is bound by memory bandwidth, the idea of compressing their data before (and after) memory accesses has received considerable attention. One idea is to store an approximate operator, such as a preconditioner, in lower than working precision, hopefully without impacting the algorithm's output. We realize the first high-performance implementation of an adaptive precision block-Jacobi preconditioner, which selects the precision format used to store the preconditioner data on the fly, taking into account the numerical properties of the individual preconditioner blocks. We implement the adaptive block-Jacobi preconditioner as production-ready functionality in the Ginkgo linear algebra library, considering not only the precision formats that are part of the IEEE standard but also customized formats that tailor the length of the exponent and significand to the characteristics of the preconditioner blocks. Experiments run on a state-of-the-art GPU accelerator show that our implementation offers attractive runtime savings.
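The core idea can be sketched compactly: inspect each Jacobi block's conditioning and choose the cheapest IEEE storage format that is unlikely to destroy it, then convert back to working precision when the preconditioner is applied. The thresholds, block structure, and helper names below are illustrative only; Ginkgo's production implementation is considerably more refined and also supports non-IEEE formats:

```python
import numpy as np

def choose_storage_dtype(block, cond_thresholds=(1e2, 1e4)):
    """Pick a storage precision for one Jacobi block from its conditioning.

    Well-conditioned blocks tolerate aggressive rounding and can be stored in
    half precision; badly conditioned blocks stay in full precision.
    (Thresholds are illustrative, not those used by Ginkgo.)
    """
    cond = np.linalg.cond(block)
    if cond < cond_thresholds[0]:
        return np.float16
    if cond < cond_thresholds[1]:
        return np.float32
    return np.float64

def apply_block_jacobi(inv_blocks, r):
    """Apply the (inverted, precision-truncated) block preconditioner to r."""
    out, i = np.empty_like(r), 0
    for inv_block in inv_blocks:
        n = inv_block.shape[0]
        # Promote back to working precision for the actual arithmetic.
        out[i:i + n] = inv_block.astype(np.float64) @ r[i:i + n]
        i += n
    return out

rng = np.random.default_rng(1)
diag_blocks = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(4)]
# Invert each block in working precision, then truncate the storage adaptively.
stored = [np.linalg.inv(b).astype(choose_storage_dtype(b)) for b in diag_blocks]
z = apply_block_jacobi(stored, rng.standard_normal(12))
```

The memory savings come from `stored` holding narrow dtypes; since block-Jacobi application is bandwidth-bound, reading fewer bytes per block translates into runtime savings.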


2021 ◽  
Vol 11 (7) ◽  
pp. 2917
Author(s):  
Madalina Rabung ◽  
Melanie Kopp ◽  
Antal Gasparics ◽  
Gábor Vértesy ◽  
Ildikó Szenthe ◽  
...  

The embrittlement of two types of nuclear pressure vessel steel, 15Kh2NMFA and A508 Cl.2, was studied using two different methods of magnetic nondestructive testing: micromagnetic multiparameter microstructure and stress analysis (3MA-X8) and magnetic adaptive testing (MAT). The microstructure and mechanical properties of reactor pressure vessel (RPV) materials are modified due to neutron irradiation; this material degradation can be characterized using magnetic methods. For the first time, the progressive change in material properties due to neutron irradiation was investigated on the same specimens, before and after neutron irradiation. A correlation was found between magnetic characteristics and neutron-irradiation-induced damage, regardless of the type of material or the applied measurement technique. The results of the individual micromagnetic measurements proved their suitability for characterizing the degradation of RPV steel caused by simulated operating conditions. A calibration/training procedure was applied on the merged outcome of both testing methods, producing excellent results in predicting transition temperature, yield strength, and mechanical hardness for both materials.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Daniel Duncan

Abstract. Advances in sociophonetic research have resulted in features once sorted into discrete bins now being measured continuously. This has implied a shift in what sociolinguists view as the abstract representation of the sociolinguistic variable. When measured discretely, variation is variation in selection: one variant is selected for production, and factors influencing language variation and change influence the frequency at which variants are selected. Measured continuously, variation is variation in execution: speakers have a single target for production, which they approximate with varying success. This paper suggests that both approaches can and should be considered in sociophonetic analysis. To that end, I offer the use of hidden Markov models (HMMs) as a novel approach to finding speakers' multiple targets within continuous data. Using the lot vowel among whites in Greater St. Louis as a case study, I compare 2-state and 1-state HMMs constructed at the individual speaker level. Ten of fifty-two speakers' production is shown to involve the regular use of distinct fronted and backed variants of the vowel. This finding illustrates HMMs' capacity to allow us to consider variation as both variant selection and execution, making them a useful tool in the analysis of sociophonetic data.
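The 2-state versus 1-state comparison can be illustrated with a minimal hand-rolled Baum-Welch fit, assuming 1-D Gaussian emissions and synthetic F2-like data in place of the St. Louis measurements (all numbers below are invented for illustration; the paper's actual model specification may differ):

```python
import numpy as np
from scipy.stats import norm

def fit_gaussian_hmm(x, k, n_iter=50):
    """Tiny Baum-Welch for a 1-D Gaussian HMM with k states."""
    T = len(x)
    A = np.full((k, k), 1.0 / k)                   # transition probabilities
    pi = np.full(k, 1.0 / k)                       # initial state distribution
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out initial means
    sd = np.full(k, np.std(x))
    for _ in range(n_iter):
        B = norm.pdf(x[:, None], mu, sd)           # emission likelihoods (T, k)
        alpha = np.empty((T, k)); c = np.empty(T)  # scaled forward pass
        alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[t]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta = np.empty((T, k)); beta[-1] = 1.0    # scaled backward pass
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)  # state posteriors
        xi = alpha[:-1, :, None] * A * (B[1:] * beta[1:])[:, None, :] / c[1:, None, None]
        A = xi.sum(axis=0); A /= A.sum(axis=1, keepdims=True)
        pi = gamma[0]
        mu = (gamma * x[:, None]).sum(axis=0) / gamma.sum(axis=0)
        sd = np.sqrt((gamma * (x[:, None] - mu) ** 2).sum(axis=0) / gamma.sum(axis=0))
    return mu, np.log(c).sum()                     # state means, log-likelihood

# Hypothetical F2 values (Hz) for a speaker mixing fronted and backed variants.
rng = np.random.default_rng(2)
f2 = rng.permutation(np.concatenate([rng.normal(1150, 60, 80),
                                     rng.normal(1450, 60, 40)]))
mu1, ll1 = fit_gaussian_hmm(f2, 1)
mu2, ll2 = fit_gaussian_hmm(f2, 2)
# BIC penalizes the 2-state model's extra parameters (2 means + 2 SDs +
# 2 free transition probabilities + 1 free initial probability = 7, vs 2).
bic = lambda ll, p: p * np.log(len(f2)) - 2 * ll
two_targets = bic(ll2, 7) < bic(ll1, 2)            # True -> two distinct targets
```

A speaker for whom `two_targets` holds is producing two distinct variants (variant selection); one for whom the 1-state model wins is approximating a single target (execution).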

