Ablaut pattern extension as partial regularization strategy in German and Luxembourgish

Author(s): Jessica Nowak
2011, Vol 25 (3), pp. 928-939
Author(s): P.A.G. Zavala, W. De Roeck, K. Janssens, J.R.F. Arruda, P. Sas, ...
2020, pp. 107754632094469
Author(s): Xijun Ye, Chudong Pan

Unknown initial conditions can affect the accuracy of identified dynamic forces, and measuring them directly is relatively difficult. This study proposes a sparse regularization-based method for identifying forces while accounting for the influence of unknown initial conditions, which are embedded in a classical governing equation of force identification. The key idea is to introduce a concomitant mapping matrix that expresses the initial conditions in a reasonable way. First, a dictionary is introduced for expanding the dynamic forces. Then, the concomitant mapping matrix is formulated from free-vibration responses, that is, the structural responses observed after the structure is excited by each atom of the force dictionary. A sparse regularization strategy is applied to solve the ill-conditioned equation, and the force-identification problem is converted into an optimization problem that can be solved with a one-step strategy. Numerical simulations verify the feasibility and effectiveness of the proposed method, and the results show its applicability and robustness for force reconstruction and moving-force identification.
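The following is a minimal sketch of the kind of sparse solve the abstract outlines, under stated assumptions: a matrix H_D standing in for the system response to the force-dictionary atoms, a matrix V standing in for the concomitant mapping of unknown initial conditions, and a generic ℓ1-regularized (ISTA) solver in place of the paper's specific one-step strategy. All matrices are random placeholders, not the structural model from the paper.

```python
# Sketch of sparse-regularization force identification with unknown initial conditions.
# H_D, V, and the data are illustrative stand-ins (assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_atoms, n_init = 200, 50, 4

H_D = rng.standard_normal((n_samples, n_atoms))   # responses to force-dictionary atoms (assumed)
V   = rng.standard_normal((n_samples, n_init))    # concomitant mapping matrix (assumed)
A   = np.hstack([H_D, V])                         # combined ill-conditioned operator

true_coef = np.zeros(n_atoms + n_init)
true_coef[[3, 17, 42]] = [1.0, -0.5, 0.8]         # sparse force expansion
true_coef[n_atoms:] = 0.1                         # small initial-condition terms
y = A @ true_coef + 0.01 * rng.standard_normal(n_samples)   # simulated measured responses

# Generic l1-regularized solve via ISTA (iterative soft-thresholding).
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(A.shape[1])
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = x - step * grad
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold

force_coefficients, initial_condition_terms = x[:n_atoms], x[n_atoms:]
```

In this reading, the sparse penalty acts on the force-expansion coefficients and the initial-condition terms jointly; the paper's exact treatment of the two blocks may differ.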


2005, Vol 44 (05), pp. 674-686
Author(s): B. Pfeifer, M. Seger, C. Hintermüller, F. Hanser, R. Modre, ...

Summary. Objective: The computer model-based computation of the cardiac activation sequence in humans has recently been the subject of successful clinical validation. The method is of potential interest for guiding ablation therapy of arrhythmogenic substrates; however, computation times of almost an hour are unattractive in a clinical setting. The objective is therefore to develop a method that performs the computation within a few minutes of run time. Methods: The computationally most expensive part is the product of the lead field matrix with a matrix containing the source pattern on the cardiac surface. The particular biophysical properties of both matrices are exploited to speed up this operation by more than an order of magnitude. A conjugate gradient optimizer was implemented in C++ to compute the activation map. Results: The software was tested on synthetic and clinical data. The increase in speed with respect to the previously used Fortran 77 implementation was a factor of 30 at comparable quality of the results. As an additional finding, the coupled regularization strategy, originally introduced to save computation time, also reduced the sensitivity of the method to the choice of the regularization parameter. Conclusions: As shown for data from a WPW patient, the developed software can deliver diagnostically valuable information in a much shorter time than current clinical routine methods. Its main application could be the localization of focal arrhythmogenic substrates.
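For orientation, here is a generic, matrix-free illustration of the central computation described above: a Tikhonov-regularized least-squares problem in which the lead field matrix maps cardiac sources to body-surface potentials, solved with conjugate gradients. It does not reproduce the paper's coupled regularization or the structure-exploiting speedup; the lead field matrix and data are random placeholders.

```python
# Generic regularized inverse problem for cardiac sources (illustration only,
# not the paper's coupled scheme); solved matrix-free with conjugate gradients.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)
n_electrodes, n_sources = 64, 500
L = rng.standard_normal((n_electrodes, n_sources))          # lead field matrix (placeholder)
s_true = rng.standard_normal(n_sources)
b = L @ s_true + 0.05 * rng.standard_normal(n_electrodes)   # simulated body-surface map

lam = 1e-2   # Tikhonov regularization parameter

def matvec(s):
    # Normal equations (L^T L + lam I) s, applied without forming L^T L explicitly,
    # so the expensive lead-field products dominate each CG iteration.
    return L.T @ (L @ s) + lam * s

A = LinearOperator((n_sources, n_sources), matvec=matvec)
s_hat, info = cg(A, L.T @ b, maxiter=200)
```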


Author(s): J Q Li, J Chen, C Yang, G M Dong

Sound field visualization is a helpful design and analysis tool for studying sound radiation and dispersion problems. It helps in understanding noise transmission mechanisms, monitoring environmental noise, evaluating sound quality, and even diagnosing machinery faults from mechanical noise. The well-known near-field acoustic holography is an accurate sound field visualization technique, but its strict measurement requirements and the need for a very large number of microphones limit its wider application. To visualize the sound field with a small number of measurement microphones, this study attempts to regenerate the radiated field using the wave superposition algorithm. The approach is based on the equivalent source principle: the sound field radiated by an arbitrarily shaped radiator is replaced by distributed point sources (monopoles or dipoles) constrained inside the actual source surface. To suppress the adverse effect of measurement noise, Tikhonov regularization is combined with the wave superposition algorithm to give an accurate solution. Numerical simulations based on a two-pulse-ball model were performed to evaluate the accuracy of the combined wave superposition and Tikhonov regularization algorithm. In addition, an integrated sound field visualization system was designed and implemented, with functions for acoustic signal acquisition and processing, sound field reconstruction, and results visualization. The performance of the system was tested in a semi-anechoic chamber using two sound boxes to simulate the sound sources. In practical measurements there are phase mismatches between the microphone channels, and reconstruction performed directly on such uncalibrated data gives erroneous results, so a calibration procedure is applied to eliminate them. Experimental results indicate that the phase mismatches between channels decrease to 0.1° after calibration. Both the numerical simulations and the experiments accurately reconstructed the exterior sound field of the models, showing that the wave superposition algorithm combined with Tikhonov regularization can accurately reconstruct the exterior sound field of radiators and providing a basis for its practical application. This sound field visualization system will make an operator's experimental work much easier.
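A minimal sketch of wave superposition with Tikhonov regularization, assuming free-space monopole equivalent sources: source strengths are fitted to microphone pressures and then used to predict the exterior field. The geometry, frequency, and source strengths are illustrative placeholders, not the paper's two-pulse-ball or sound-box setups.

```python
# Equivalent-source (wave superposition) reconstruction with Tikhonov regularization.
# Geometry and data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
k = 2 * np.pi * 500 / 343.0                        # wavenumber at 500 Hz in air

def green(src, rec):
    """Free-space monopole Green's function matrix (receivers x sources)."""
    r = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

sources = 0.05 * rng.standard_normal((20, 3))      # equivalent sources inside the radiator
mics    = rng.standard_normal((32, 3)) * 0.5 + 1.0 # measurement microphone positions
field   = rng.standard_normal((64, 3)) * 0.8 + 1.5 # exterior reconstruction points

q_true = rng.standard_normal(20) + 1j * rng.standard_normal(20)
G = green(sources, mics)
p_meas = G @ q_true + 0.01 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))

# Tikhonov-regularized least squares for the equivalent source strengths q.
lam = 1e-3
q_hat = np.linalg.solve(G.conj().T @ G + lam * np.eye(20), G.conj().T @ p_meas)

p_reconstructed = green(sources, field) @ q_hat    # reconstructed exterior sound field
```

The regularization parameter lam would in practice be chosen by a criterion such as L-curve or generalized cross-validation rather than fixed as here.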


Data, 2022, Vol 7 (1), pp. 10
Author(s): Davide Buffelli, Fabio Vandin

Graph Neural Networks (GNNs) rely on the graph structure to define an aggregation strategy in which each node updates its representation by combining information from its neighbours. A known limitation of GNNs is that, as the number of layers increases, information gets smoothed and squashed and node embeddings become indistinguishable, negatively affecting performance. Practical GNN models therefore employ only a few layers and leverage the graph structure only through limited, small neighbourhoods around each node. Inevitably, such GNNs do not capture information that depends on the global structure of the graph. While several works have studied the limitations and expressivity of GNNs, the question of whether practical applications on graph-structured data require global structural knowledge remains unanswered. In this work, we address this question empirically by giving several GNN models access to global information and observing the impact on downstream performance. Our results show that global information can in fact provide significant benefits for common graph-related tasks. We further identify a novel regularization strategy that leads to an average accuracy improvement of more than 5% on all considered tasks.
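One simple way to give a GNN layer access to global information, sketched below under assumptions (the paper's models and its regularization strategy are not reproduced), is to concatenate a graph-level readout to every node's features before the usual neighbourhood aggregation. Dense adjacency is used purely to keep the example self-contained.

```python
# Toy GNN layer that mixes local neighbourhood aggregation with a global readout.
import torch
import torch.nn as nn

class GlobalAwareGNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # input = node features + mean of neighbour features + global mean readout
        self.lin = nn.Linear(3 * in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = adj @ x / deg                               # mean over neighbours
        glob = x.mean(dim=0, keepdim=True).expand_as(x)     # global readout, broadcast to nodes
        return torch.relu(self.lin(torch.cat([x, neigh, glob], dim=-1)))

# Usage: 5 nodes with 8 features on a ring graph.
x = torch.randn(5, 8)
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
out = GlobalAwareGNNLayer(8, 16)(x, adj)   # shape (5, 16)
```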


Sensors, 2021, Vol 21 (16), pp. 5586
Author(s): Yi-Tun Lin, Graham D. Finlayson

Spectral reconstruction (SR) algorithms attempt to recover hyperspectral information from RGB camera responses. The most common metric for evaluating the performance of SR algorithms has recently been the Mean Relative Absolute Error (MRAE), an ℓ1 relative error (also known as percentage error). Unsurprisingly, the leading algorithms based on Deep Neural Networks (DNN) are trained and tested using the MRAE metric. In contrast, the much simpler regression-based methods (which actually can work tolerably well) are trained to optimize a generic Root Mean Square Error (RMSE) and then tested in MRAE. Another issue with the regression methods is that, because the linear systems in SR are large and ill-posed, they must be solved with regularization. However, the regularization has hitherto been applied at the spectrum level, whereas in MRAE the errors are measured per wavelength (i.e., per spectral channel) and then averaged. This paper has two aims: first, to reformulate the simple regressions so that they minimize a relative error metric during training (we formulate both ℓ2 and ℓ1 relative-error variants, the latter being MRAE); and second, to adopt a per-channel regularization strategy. Together, our modifications to how the regressions are formulated and solved lead to up to a 14% improvement in mean performance and up to 17% in worst-case performance (measured with MRAE). Importantly, our best result narrows the gap between the regression approaches and the leading DNN model to around 8% in mean accuracy.
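A hedged sketch of the two ingredients named above: the MRAE metric (per-wavelength relative absolute error, then averaged) and a per-channel Tikhonov (ridge) regression from RGB to spectra, where each spectral channel gets its own regularization weight. The data, sensitivities, and per-channel lambdas are synthetic placeholders, and the relative-error training reformulation itself is not reproduced here.

```python
# MRAE metric and per-channel ridge regression from RGB to spectra (illustrative).
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_channels = 1000, 31
spectra = np.abs(rng.standard_normal((n_pixels, n_channels))) + 0.1  # ground-truth spectra
rgb = spectra @ np.abs(rng.standard_normal((n_channels, 3)))          # simulated camera responses

def mrae(pred, truth):
    """Mean Relative Absolute Error: l1 relative error per wavelength, averaged."""
    return np.mean(np.abs(pred - truth) / truth)

# One 3-vector of regression weights per spectral channel, each with its own lambda.
lambdas = np.full(n_channels, 1e-2)     # placeholder per-channel regularization weights
W = np.zeros((3, n_channels))
for c in range(n_channels):
    A = rgb.T @ rgb + lambdas[c] * np.eye(3)
    W[:, c] = np.linalg.solve(A, rgb.T @ spectra[:, c])

print("MRAE:", mrae(rgb @ W, spectra))
```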


Author(s): Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, ...

We propose a novel training methodology, Concept Group Learning (CGL), that encourages the training of interpretable CNN filters by partitioning the filters in each layer into concept groups, each of which is trained to learn a single visual concept. We achieve this through a novel regularization strategy that forces filters in the same group to be active in similar image regions for a given layer. We additionally use a regularizer that encourages a sparse weighting of the concept groups in each layer, so that a few concept groups can carry greater importance than others. We quantitatively evaluate CGL's model interpretability using standard interpretability evaluation techniques and find that our method increases interpretability scores in most cases. Qualitatively, we compare the image regions that are most active under filters learned with CGL versus filters learned without it, and find that the CGL activation regions concentrate more strongly around semantically relevant features.
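The sketch below is one plausible reading of the two regularizers described above, not the exact CGL loss: a within-group penalty that pushes filters in the same concept group to produce similar spatial activation maps, plus an ℓ1 sparsity term on hypothetical per-group importance weights.

```python
# Illustrative concept-group regularizer (an interpretation, not the paper's exact loss).
import torch

def concept_group_penalty(feats, group_ids, group_weights, l1_coef=1e-3):
    """feats: (B, C, H, W) feature maps; group_ids: (C,) concept-group index per filter."""
    penalty = feats.new_zeros(())
    for g in group_ids.unique():
        fg = feats[:, group_ids == g]                      # (B, Cg, H, W) filters in group g
        mean_map = fg.mean(dim=1, keepdim=True)            # group's average activation map
        penalty = penalty + ((fg - mean_map) ** 2).mean()  # filters should agree spatially
    sparsity = l1_coef * group_weights.abs().sum()         # few groups carry most importance
    return penalty + sparsity

# Usage: 16 filters split into 4 concept groups on an 8x8 feature map.
feats = torch.randn(2, 16, 8, 8)
group_ids = torch.arange(16) // 4
group_weights = torch.randn(4, requires_grad=True)
loss = concept_group_penalty(feats, group_ids, group_weights)
```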

