unconstrained model
Recently Published Documents

TOTAL DOCUMENTS: 37 (FIVE YEARS: 8)
H-INDEX: 10 (FIVE YEARS: 1)

2021 ◽  
Vol 154 (A2) ◽  
Author(s):  
G J Macfarlane ◽  
M R Renilson ◽  
T Turner

In order to provide data to assist in developing and validating a numerical code to simulate the flooding immediately following damage, scale model experiments were conducted on a fully constrained model to investigate progressive flooding through a complex series of internal compartments within a generic destroyer-type hull form. A 3.268-metre-long model of a generic destroyer hull form with a simplified, typical internal arrangement was constructed to cover the configuration of greatest interest. A very rapid damage-opening scenario was simulated by rupturing a taut membrane covering an opening. The model was instrumented to measure the water levels and air pressures in various compartments. In addition, video footage of the flooding process was obtained from both inside and outside the model. Previous work by Macfarlane et al. (2010) presented the results for the unconstrained model. This paper reports the outcomes of the experimental program in which the model was fully constrained in all six degrees of freedom. Firstly, tests were conducted in calm water with damage-opening extents ranging from 50% to 100%. When the damage opening was only 50%, the rate of rise of water in each compartment was only marginally slower than in the 100% damage extent case. Secondly, the calm-water results were compared against results from tests in regular beam seas. A ‘set-up’ of water inside each of the compartments on the 2nd Deck was found during the wave tests. As a result, the mean equilibrium water level in each compartment in the regular beam sea cases is noticeably higher than in the equivalent calm-water case, particularly for the two compartments on the port side, away from the damage. Finally, analysis of the data from further calm-water and beam-sea tests suggests that a similar result also occurs when the model is fixed at various non-zero heel angles.


2021 ◽  
Author(s):  
Min Tao ◽  
Xiao-Ping Zhang

<div>In this paper, we carry out a unified study of L_1 over L_2 sparsity-promoting models, which are widely used in the regime of coherent dictionaries for recovering sparse nonnegative/arbitrary signals. First, we provide exact recovery conditions for both the constrained and the unconstrained models for a broad set of signals. Next, we prove the solution existence of these L_{1}/L_{2} models under the assumption that the null space of the measurement matrix satisfies the $s$-spherical section property. Then, by deriving an analytical solution for the proximal operator of the L_{1}/L_{2} objective with a nonnegativity constraint, we develop a new alternating-direction-method-of-multipliers-based method (ADMM$_p^+$) to solve the unconstrained model. We establish its global convergence to a d-stationary solution (the sharpest type of stationary point) and its local linear convergence under certain conditions. Numerical simulations on two specific applications confirm the superiority of ADMM$_p^+$ over state-of-the-art methods in sparse recovery. ADMM$_p^+$ reduces computational time by about $95\%\sim99\%$ while achieving much higher accuracy than the commonly used scaled gradient projection method for the wavelength misalignment problem.</div>
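As a quick illustration of why the L_1/L_2 ratio promotes sparsity, the following sketch (illustrative values only, not the paper's ADMM$_p^+$ algorithm) compares the ratio for a sparse and a dense vector; by Cauchy–Schwarz the ratio of a k-sparse vector is at most sqrt(k), and the ratio is invariant to rescaling:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# A 5-sparse vector versus a fully dense Gaussian vector (hypothetical data).
sparse = np.zeros(n)
sparse[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
dense = rng.normal(size=n)

def l1_over_l2(x):
    """L1/L2 ratio: a scale-invariant sparsity surrogate in [1, sqrt(n)]."""
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

# The ratio is small for the sparse vector (at most sqrt(5) ~ 2.24 here)
# and large for the dense one; rescaling either vector changes neither value.
print(l1_over_l2(sparse), l1_over_l2(dense))
```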



2021 ◽  
Vol 12 (3) ◽  
pp. 1-16
Author(s):  
Yukai Shi ◽  
Sen Zhang ◽  
Chenxing Zhou ◽  
Xiaodan Liang ◽  
Xiaojun Yang ◽  
...  

Non-parallel text style transfer has attracted increasing research interest in recent years. Despite successes in transferring style with the encoder-decoder framework, current approaches still lack the ability to preserve the content, and even the logic, of original sentences, mainly due to the large unconstrained model space or overly simplified assumptions on the latent embedding space. Since language itself is an intelligent product of humans with certain grammars and has a limited rule-based model space by its nature, relieving this problem requires reconciling the model capacity of deep neural networks with the intrinsic model constraints imposed by human linguistic rules. To this end, we propose a method called Graph Transformer–based Auto-Encoder, which models a sentence as a linguistic graph and performs feature extraction and style transfer at the graph level, to maximally retain the content and the linguistic structure of original sentences. Quantitative experiment results on three non-parallel text style transfer tasks show that our model outperforms state-of-the-art methods in content preservation, while achieving comparable performance on transfer accuracy and sentence naturalness.


Author(s):  
Marco Corazza ◽  
Giacomo di Tollo ◽  
Giovanni Fasano ◽  
Raffaele Pesenti

Abstract In this paper we propose a hybrid metaheuristic based on Particle Swarm Optimization, which we tailor to a portfolio selection problem. To motivate and apply our hybrid metaheuristic, we reformulate the portfolio selection problem as an unconstrained problem, by means of penalty functions in the framework of exact penalty methods. Our metaheuristic is hybrid in that it adaptively updates the penalty parameters of the unconstrained model during the optimization process. In addition, it iteratively refines its solutions to reduce possible infeasibilities. We also report a numerical case study. Our hybrid metaheuristic appears to perform better than the corresponding Particle Swarm Optimization solver with constant penalty parameters. It performs similarly to two corresponding Particle Swarm Optimization solvers whose penalty parameters are determined by a REVAC-based tuning procedure and an irace-based one, respectively, but on average it requires less than 4% of the computational time of the latter procedures.
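The penalty-function reformulation described above can be sketched on a toy minimum-variance portfolio (hypothetical covariance data, and a plain PSO with a fixed penalty parameter rather than the authors' adaptive hybrid):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 4-asset covariance matrix (hypothetical data).
A = rng.normal(size=(4, 4))
cov = A @ A.T + 4.0 * np.eye(4)

def penalized(w, eps=1e-3):
    """Exact-penalty objective: portfolio variance plus (1/eps) times the
    violation of the budget (sum w = 1) and no-short-selling (w >= 0) constraints."""
    violation = abs(np.sum(w) - 1.0) + np.sum(np.maximum(-w, 0.0))
    return w @ cov @ w + violation / eps

def pso(f, dim, n_particles=40, iters=300, seed=2):
    """Minimal Particle Swarm Optimization with fixed inertia/attraction weights."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

w = pso(penalized, dim=4)  # near-feasible weights: sum close to 1, roughly nonnegative
```

The large fixed weight 1/eps makes constraint violations dominate the objective, so the swarm is driven toward the feasible set; the authors' contribution is precisely to adapt such parameters during the run instead of fixing them.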


Author(s):  
Peter Gangl ◽  
Kevin Sturm ◽  
Michael Neunteufel ◽  
Joachim Schöberl

Abstract In this paper, we present a framework for automated shape differentiation in the finite element software . Our approach combines the mathematical Lagrangian approach for differentiating PDE-constrained shape functions with the automated differentiation capabilities of . The user can decide which degree of automatisation is required, thus allowing for either a more custom-like or black-box–like behaviour of the software. We discuss the automatic generation of first- and second-order shape derivatives for unconstrained model problems as well as for more realistic problems that are constrained by different types of partial differential equations. We consider linear as well as nonlinear problems and also problems which are posed on surfaces. In numerical experiments, we verify the accuracy of the computed derivatives via a Taylor test. Finally, we present first- and second-order shape optimisation algorithms and illustrate them for several numerical optimisation examples ranging from nonlinear elasticity to Maxwell’s equations.
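The Taylor test mentioned above has a simple generic form: if df is the exact directional derivative, the remainder f(x + t d) - f(x) - t df must shrink like t^2. A sketch on an ordinary two-variable function (f and d are made up for illustration, standing in for a shape functional and a perturbation field):

```python
import numpy as np

def f(x):
    return np.sin(x[0]) * x[1] ** 2

def df(x, d):
    """Analytic directional derivative of f at x in direction d."""
    grad = np.array([np.cos(x[0]) * x[1] ** 2, 2.0 * np.sin(x[0]) * x[1]])
    return grad @ d

x = np.array([0.7, 1.3])
d = np.array([0.4, -0.2])

# Halving the step t should divide the first-order Taylor remainder by ~4,
# i.e. the observed convergence order should be ~2 if df is correct.
ts = [1e-2 / 2 ** k for k in range(5)]
rem = [abs(f(x + t * d) - f(x) - t * df(x, d)) for t in ts]
orders = [np.log2(rem[k] / rem[k + 1]) for k in range(len(rem) - 1)]
```

A buggy derivative would leave a first-order term in the remainder and the observed order would drop to ~1, which is what the test is designed to catch.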


2020 ◽  
Author(s):  
Julia M. Haaf ◽  
Jeffrey N. Rouder

The most prominent goal when conducting a meta-analysis is to estimate the true effect size across a set of studies. This approach is problematic whenever the analyzed studies are inconsistent, i.e., some studies show an effect in the predicted direction while others show no effect, and still others show an effect in the opposite direction. In the case of such an inconsistency, the average effect may be the product of a mixture of mechanisms. The first question in any meta-analysis should therefore be whether all studies show an effect in the same direction. To tackle this question, a model with multiple ordinal constraints is proposed: one constraint for each study in the set. This "every study" model is compared to a set of alternative models, such as an unconstrained model that predicts effects in both directions. If the ordinal constraints hold, one underlying mechanism may suffice to explain the results from all studies. A major implication is then that average effects become interpretable. We illustrate the model-comparison approach using Carbajal et al.'s (2020) meta-analysis of the familiar-word-recognition effect, show how predictor analyses can be incorporated into the approach, and provide R code for interested researchers. As is common in meta-analysis, only surface statistics (such as effect size and sample size) are available from each study, and the modeling approach can be adapted to suit these conditions.
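Under an encompassing-prior view, the "every study" model is the unconstrained model truncated to all-positive effects, and its Bayes factor against the unconstrained model reduces to a ratio of posterior to prior mass on the constraint. A deliberately simplified sketch (independent normal posteriors instead of the authors' hierarchical model; effect sizes and standard errors are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surface statistics: observed effect and standard error per study.
effects = np.array([0.30, 0.22, 0.41, 0.15, 0.27])
ses = np.array([0.10, 0.12, 0.15, 0.09, 0.11])

# Simplified unconstrained model: independent Normal(effect_i, se_i^2)
# posterior draws for each study's true effect.
draws = rng.normal(effects, ses, size=(100_000, len(effects)))

# Posterior mass of "every study shows a positive effect" ...
post_mass = np.mean(np.all(draws > 0.0, axis=1))
# ... versus its prior mass under a sign-symmetric prior on each effect.
prior_mass = 0.5 ** len(effects)

# Bayes factor of the constrained ("every study") model vs. the unconstrained one.
bayes_factor = post_mass / prior_mass
```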


2018 ◽  
Vol 31 (10) ◽  
pp. 3959-3978 ◽  
Author(s):  
Nadine Goris ◽  
Jerry F. Tjiputra ◽  
Are Olsen ◽  
Jörg Schwinger ◽  
Siv K. Lauvset ◽  
...  

The North Atlantic is one of the major sinks for anthropogenic carbon in the global ocean. Improved understanding of the underlying mechanisms is vital for constraining future projections, which presently have high uncertainties. To identify some of the causes behind this uncertainty, this study investigates the North Atlantic’s anthropogenically altered carbon uptake and inventory, that is, changes in carbon uptake and inventory due to rising atmospheric CO2 and climate change (abbreviated as Cant-uptake and Cant-inventory). Focus is set on an ensemble of 11 Earth system models and their simulations of a future with high atmospheric CO2. Results show that the model spread in the Cant-uptake originates in middle and high latitudes. Here, the annual cycle of oceanic pCO2 reveals inherent model mechanisms that are responsible for different model behavior: while it is SST-dominated for models with a low future Cant-uptake, it is dominated by deep winter mixing and biological production for models with a high future Cant-uptake. Models with a high future Cant-uptake show efficient carbon sequestration and hence store a large fraction of their contemporary North Atlantic Cant-inventory below 1000-m depth, while the opposite is true for models with a low future Cant-uptake. Constraining the model ensemble with observation-based estimates of carbon sequestration and summer oceanic pCO2 anomalies yields a later flattening of the Cant-uptake than previously estimated. This result highlights the need to depart from the concept of unconstrained model ensembles in order to reduce uncertainties associated with future projections.


2018 ◽  
Author(s):  
Joseph R. Mihaljevic ◽  
Carlos M. Polivka ◽  
Constance J. Mehmel ◽  
Chentong Li ◽  
Vanja Dukic ◽  
...  

Abstract A key assumption of models of infectious disease is that population-scale spread is driven by transmission between host individuals at small scales. This assumption, however, is rarely tested, likely because observing disease transmission between host individuals is non-trivial for many infectious diseases. Quantifying the transmission of insect baculoviruses at a small scale is, in contrast, straightforward. We fit a disease model to data from baculovirus epizootics (= epidemics in animals) at the scale of whole forests, while using prior parameter distributions constructed from branch-scale experiments. Our experimentally constrained model fits the large-scale data very well, supporting the role of small-scale transmission mechanisms in baculovirus epizootics. We further compared our experimentally based model to an unconstrained model that ignores our experimental data, serving as a proxy for models that include large-scale mechanisms. This analysis supports our hypothesis that small-scale mechanisms are important, especially individual variability in host susceptibility to the virus. Comparison of transmission rates between the two models, however, suggests that large-scale mechanisms increase transmission relative to our experimental estimates. Our study shows that both small-scale and large-scale mechanisms drive forest-wide epizootics of baculoviruses, and that synthesizing mathematical models with data collected across scales is key to understanding the spread of infectious disease.
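One standard way to encode individual variability in host susceptibility is to give hosts gamma-distributed susceptibility, which turns the homogeneous escape probability exp(-nu*P) into a power law. The sketch below is a stand-alone illustration of that effect, not the authors' fitted epizootic model; parameter values are hypothetical:

```python
import numpy as np

def frac_uninfected(P, nu, C):
    """Fraction of hosts escaping infection after exposure to pathogen density P.
    nu: mean transmission rate; C: coefficient of variation of host
    susceptibility (C = 0 recovers the homogeneous exp(-nu*P))."""
    if C == 0:
        return np.exp(-nu * P)
    return (1.0 + C**2 * nu * P) ** (-1.0 / C**2)

P = np.linspace(0.0, 10.0, 6)
homog = frac_uninfected(P, nu=0.5, C=0.0)
hetero = frac_uninfected(P, nu=0.5, C=1.5)
# With heterogeneity, highly susceptible hosts are infected first, so a larger
# fraction escapes at any positive pathogen density than in the homogeneous case.
```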

