regularization technique
Recently Published Documents


TOTAL DOCUMENTS

209
(FIVE YEARS 58)

H-INDEX

16
(FIVE YEARS 3)

2021 ◽  
Vol 47 (6) ◽  
Author(s):  
F. Guillén-González ◽  
M. A. Rodríguez-Bellido ◽  
D. A. Rueda-Gómez

Abstract: We consider the following repulsive-productive chemotaxis model: find u ≥ 0, the cell density, and v ≥ 0, the chemical concentration, satisfying $$ \left\{ \begin{array}{l} \partial_t u - {\Delta} u - \nabla\cdot (u\nabla v)=0 \ \ \text{ in }\ {\Omega},\ t>0,\\ \partial_t v - {\Delta} v + v = u^p \ \ \text{ in }\ {\Omega},\ t>0, \end{array} \right. \qquad (1) $$ with p ∈ (1, 2) and ${\Omega }\subseteq \mathbb {R}^{d}$ a bounded domain (d = 1, 2, 3), endowed with non-flux boundary conditions. By using a regularization technique, we prove the existence of global-in-time weak solutions of (1), which are regular and unique for d = 1, 2. Moreover, we propose two fully discrete finite element (FE) nonlinear schemes: the first defined in the variables (u, v) on structured meshes, and the second using the auxiliary variable σ = ∇v and defined on general meshes. We prove some unconditional properties for both schemes, such as mass conservation, solvability, energy stability, and approximate positivity. Finally, we compare the behavior of these schemes with the classical FE backward Euler scheme through several numerical simulations and draw some conclusions.
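The mass-conservation property claimed for the schemes above can be illustrated with a minimal 1D explicit finite-difference sketch of system (1) — not the paper's finite element schemes. The grid size, time step, exponent p, and initial data below are all illustrative assumptions:

```python
import numpy as np

# Minimal 1D explicit sketch of the repulsive-productive chemotaxis system:
#   du/dt = Lap(u) + div(u grad v),   dv/dt = Lap(v) - v + u^p,
# with zero-flux (Neumann) boundaries. Illustrative parameters only.
N, L, p = 100, 1.0, 1.5
dx, dt = L / N, 1e-5
x = np.linspace(0.0, L, N)
u = 1.0 + 0.5 * np.cos(np.pi * x)   # cell density, u >= 0
v = np.ones(N)                      # chemical concentration

def lap(w):
    """Discrete Laplacian with zero-flux (Neumann) boundaries."""
    w_ext = np.pad(w, 1, mode="edge")
    return (w_ext[2:] - 2.0 * w + w_ext[:-2]) / dx**2

for _ in range(1000):
    dvdx = np.diff(v) / dx                  # v_x at cell interfaces
    flux = 0.5 * (u[1:] + u[:-1]) * dvdx    # u * v_x at interfaces
    div = np.diff(np.pad(flux, 1)) / dx     # div(u grad v), zero flux at walls
    u = u + dt * (lap(u) + div)
    v = v + dt * (lap(v) - v + u**p)

# the zero-flux discretization conserves the total mass of u
print(u.sum() * dx)
```

Because both the Laplacian stencil and the chemotactic flux vanish at the boundary, the discrete total mass of u stays at its initial value, mirroring the unconditional mass conservation proved for the FE schemes.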


2021 ◽  
pp. 1-39
Author(s):  
Laurent Bonnasse-Gahot ◽  
Jean-Pierre Nadal

Abstract: Classification is one of the major tasks that deep learning is successfully tackling. Categorization is also a fundamental cognitive ability. A well-known perceptual consequence of categorization in humans and other animals, categorical perception, is notably characterized by within-category compression and between-category separation: two items, close in input space, are perceived as closer if they belong to the same category than if they belong to different categories. Elaborating on experimental and theoretical results in cognitive science, we study categorical effects in artificial neural networks. We combine a theoretical analysis that makes use of mutual and Fisher information quantities with a series of numerical simulations on networks of increasing complexity. These formal and numerical analyses provide insight into the geometry of the neural representation in deep layers, with expansion of space near category boundaries and contraction far from them. We investigate categorical representation using two complementary approaches: one mimics experiments in psychophysics and cognitive neuroscience by means of morphed continua between stimuli of different categories, while the other introduces a categoricality index that, for each layer in the network, quantifies the separability of the categories at the neural population level. We show on both shallow and deep neural networks that category learning automatically induces categorical perception. We further show that the deeper the layer, the stronger the categorical effects. As an outcome of our study, we propose a coherent view of the efficacy of different heuristic practices of the dropout regularization technique. More generally, our view, which finds echoes in the neuroscience literature, insists on the differential impact of noise in any given layer depending on the geometry of the neural representation being learned, that is, on how this geometry reflects the structure of the categories.
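A categoricality-style index of the kind described above can be sketched in a few lines. The definition below (between-category scatter over total scatter) is our simplified reading, not the paper's exact formula, and the "shallow" and "deep" activations are synthetic stand-ins:

```python
import numpy as np

# Toy categoricality-style index for one layer's activations: the ratio of
# between-category scatter to total scatter, so values near 1 mean the
# categories are well separated at the population level.
def categoricality(acts, labels):
    grand = acts.mean(axis=0)
    total = ((acts - grand) ** 2).sum()
    between = sum(
        (labels == c).sum() * ((acts[labels == c].mean(axis=0) - grand) ** 2).sum()
        for c in np.unique(labels)
    )
    return between / total

# Synthetic "shallow" vs "deep" activations for two categories: the deep
# layer separates the category means more and has less within-class noise.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
shallow = rng.normal(0.0, 1.0, (200, 8))
shallow[:, 0] += labels                 # weak separation
deep = rng.normal(0.0, 0.3, (200, 8))
deep[:, 0] += 3.0 * labels              # strong separation
print(categoricality(shallow, labels), categoricality(deep, labels))
```

On this toy data the index is markedly higher for the "deep" representation, mirroring the paper's finding that deeper layers show stronger categorical effects.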


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1255
Author(s):  
Yuheng Bu ◽  
Weihao Gao ◽  
Shaofeng Zou ◽  
Venugopal V. Veeravalli

It has been reported in many recent works on deep model compression that the population risk of a compressed model can be even better than that of the original model. In this paper, an information-theoretic explanation for this population risk improvement phenomenon is provided by jointly studying the decrease in the generalization error and the increase in the empirical risk that results from model compression. It is first shown that model compression reduces an information-theoretic bound on the generalization error, which suggests that model compression can be interpreted as a regularization technique to avoid overfitting. The increase in empirical risk caused by model compression is then characterized using rate distortion theory. These results imply that the overall population risk could be improved by model compression if the decrease in generalization error exceeds the increase in empirical risk. A linear regression example is presented to demonstrate that such a decrease in population risk due to model compression is indeed possible. Our theoretical results further suggest a way to improve a widely used model compression algorithm, i.e., Hessian-weighted K-means clustering, by regularizing the distance between the clustering centers. Experiments with neural networks are provided to validate our theoretical assertions.
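The compression algorithm named above can be sketched as weighted clustering of the weights. In the hedged numpy sketch below, each weight carries a diagonal-Hessian importance, and the `lam` shrinkage term on each center is one illustrative way to regularize the cluster centers — not necessarily the regularizer the authors propose:

```python
import numpy as np

# Sketch of Hessian-weighted k-means for weight quantization: weight w_i
# carries an importance h_i (a diagonal-Hessian proxy); the lam * c^2 penalty
# on each center is an illustrative stand-in for center regularization.
def hessian_weighted_kmeans(w, h, k=8, lam=0.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            m = assign == j
            if m.any():
                # argmin_c  sum_i h_i (w_i - c)^2 + lam * c^2
                centers[j] = (h[m] * w[m]).sum() / (h[m].sum() + lam)
    assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
    return centers[assign]   # quantized weights (at most k distinct values)

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, 1000)     # original weights
h = rng.uniform(0.1, 1.0, 1000)    # per-weight curvature estimates
wq = hessian_weighted_kmeans(w, h, k=8, lam=0.5)
print(np.abs(w - wq).mean())       # average quantization distortion
```

Replacing the weights by at most k shared centers is what trades a small empirical-risk increase (the distortion printed above) for the generalization benefit the paper quantifies.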


Axioms ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 232
Author(s):  
Akhmed Dzhabrailov ◽  
Yuri Luchko ◽  
Elina Shishkina

In this paper, we treat a convolution-type operator called the generalized Bessel potential. Our main result is the derivation of two different forms of its inversion. The first inversion is provided in terms of an approximative inverse operator using the method of an improving multiplier. The second one employs the regularization technique for the divergent integrals in the form of the appropriate segments of the Taylor–Delsarte series.


2021 ◽  
Author(s):  
Christian Tinauer ◽  
Stefan Heber ◽  
Lukas Pirpamer ◽  
Anna Damulina ◽  
Reinhold Schmidt ◽  
...  

Deep neural networks are increasingly used for neurological disease classification from MRI, but the networks' decisions are not easily interpretable by humans. Heat mapping by deep Taylor decomposition revealed that (potentially misleading) image features even outside of the brain tissue are crucial for the classifier's decision. We propose a regularization technique to train convolutional neural network (CNN) classifiers using relevance-guided heat maps calculated online during training. The method was applied to T1-weighted MR images from 128 subjects with Alzheimer's disease (mean age = 71.9 ± 8.5 years) and 290 control subjects (mean age = 71.3 ± 6.4 years). The developed relevance-guided framework achieves higher classification accuracies than conventional CNNs; more importantly, it relies on fewer, more relevant, and physiologically plausible voxels within brain tissue. Additionally, preprocessing effects from skull stripping and registration are mitigated, rendering the approach practically useful in deep learning neuroimaging studies. By clarifying the decision mechanisms underlying CNNs, these results challenge the notion that unprocessed T1-weighted brain MR images in standard CNNs yield higher classification accuracy in Alzheimer's disease than atrophy alone.
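One way to read the relevance-guided regularizer described above is as an extra loss term that penalizes heat-map relevance falling outside the brain mask. The sketch below is a conceptual stand-in — the 2×2 "image", the weight `lam`, and the loss form are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Conceptual sketch: alongside the classification loss, penalize heat-map
# relevance located outside the brain mask, nudging the classifier toward
# physiologically plausible voxels. All values here are toy stand-ins.
def relevance_guided_loss(cls_loss, heatmap, brain_mask, lam=0.1):
    outside = np.abs(heatmap) * (1 - brain_mask)   # relevance outside the brain
    return cls_loss + lam * outside.sum()

heatmap = np.array([[0.4, 0.8], [0.6, 0.1]])       # toy relevance map
mask = np.array([[0, 1], [1, 1]])                  # 1 = brain tissue
print(relevance_guided_loss(0.5, heatmap, mask))   # 0.5 + 0.1 * 0.4
```

During training, minimizing this combined loss pushes the network to place its relevance inside brain tissue, which is the behavior the relevance-guided framework reports.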


Author(s):  
D.J.Samatha Naidu ◽  
G.Hima Bindu

Network function virtualization (NFV) is an advanced technology in the current landscape. We consider online virtual network function (VNF) scaling in a cloud datacenter under multi-resource constraints and formulate it as a mathematical model. A novel ILP scaling algorithm is proposed that works based on the regularization technique and dependent rounding.


2021 ◽  
Vol 11 (17) ◽  
pp. 7774
Author(s):  
Laura-Maria Dogariu ◽  
Jacob Benesty ◽  
Constantin Paleologu ◽  
Silviu Ciochină

Efficiently solving a system identification problem represents an important step in numerous important applications. In this framework, some of the most popular solutions rely on the Wiener filter, which is widely used in practice. Moreover, it also represents a benchmark for other related optimization problems. In this paper, new insights into the regularization of the Wiener filter are provided, which is a must in real-world scenarios. A proper regularization technique is of great importance, especially in challenging conditions, e.g., when operating in noisy environments and/or when only a low quantity of data is available for the estimation of the statistics. Different regularization methods are investigated in this paper, including several new solutions that fit very well for the identification of sparse and low-rank systems. Experimental results support the theoretical developments and indicate the efficiency of the proposed techniques.
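The baseline regularization the abstract refers to can be sketched directly: the Wiener filter solves R h = p, and a Tikhonov-style diagonal loading δI is one standard way to keep the solve well posed when R is estimated from few, noisy samples. The system length, noise level, and δ scaling below are illustrative assumptions:

```python
import numpy as np

# Regularized Wiener filter sketch for system identification:
#   h_hat = (R + delta * I)^{-1} p,
# with R and p estimated from a finite record of input/output samples.
rng = np.random.default_rng(0)
Lh, n = 8, 200
h_true = rng.normal(0.0, 1.0, Lh)       # unknown system impulse response
x = rng.normal(0.0, 1.0, n)             # input signal
d = np.convolve(x, h_true)[:n] + 0.01 * rng.normal(0.0, 1.0, n)  # noisy output

# delayed-input data matrix: column k holds x delayed by k samples
X = np.column_stack([np.r_[np.zeros(k), x[: n - k]] for k in range(Lh)])
R = X.T @ X / n                         # estimated input correlation matrix
p = X.T @ d / n                         # estimated cross-correlation vector

delta = 1e-3 * np.trace(R) / Lh         # regularization scaled to signal power
h_hat = np.linalg.solve(R + delta * np.eye(Lh), p)
print(np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))  # misalignment
```

Scaling δ to the average signal power (trace(R)/L) is one common heuristic; the paper's contribution lies in regularization choices better suited to sparse and low-rank systems than this plain diagonal loading.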


2021 ◽  
pp. 105678952110392
Author(s):  
De-Cheng Feng ◽  
Xiaodan Ren

This paper presents a comprehensive analysis of the mesh-dependency issue for both plain concrete and reinforced concrete (RC) members under uniaxial loading. The detailed mechanisms for each case are first derived, and the analytical and numerical strain energies for concrete in the different cases are compared to explain the phenomenon of mesh-dependency. It is found that mesh-dependency is relieved or even eliminated as the reinforcing ratio increases. Meanwhile, the concept of a critical reinforcing ratio is proposed to identify the corresponding boundary of mesh-dependency for RC members. To verify these findings, several illustrative examples are performed and discussed. Finally, to overcome the mesh-dependency issue for RC members with lower reinforcing ratios, we propose a unified regularization method that modifies the stress-strain relations of both steel and concrete based on strain energy equivalence. The method is also applied to the illustrative examples for validation, and the numerical results indicate that it obtains objective results for cases with different meshes and reinforcing ratios.


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1943
Author(s):  
Wanbing Zou ◽  
Song Cheng ◽  
Luyuan Wang ◽  
Guanyu Fu ◽  
Delong Shang ◽  
...  

In terms of memory footprint and computing speed, binary neural networks (BNNs) have great advantages in power-aware deployment applications, such as AIoT edge terminals, wearable and portable devices, etc. However, the networks' binarization process inevitably brings considerable information loss, which in turn leads to accuracy deterioration. To tackle these problems, we start our analysis from the perspective of information theory and work to improve the networks' information capacity. Based on the analyses, our work makes two primary contributions. The first is a newly proposed median loss (ML) regularization technique, which makes the distribution of binary weights more even and consequently greatly increases the information capacity of BNNs. The second is the batch median of activations (BMA) method, which raises the entropy of the activations by subtracting a median value and simultaneously lowers the quantization error by computing separate scaling factors for the positive and negative activations. Experimental results show that the proposed methods, applied to ResNet-18 and ResNet-34, outperform the Bi-Real baseline by 1.3% and 0.9% Top-1 accuracy, respectively, on ImageNet 2012. The storage cost and computational complexity increments of the proposed ML and BMA are minor and negligible. Additionally, comprehensive experiments show that our methods can be embedded into current popular BNN architectures with accuracy improvements and negligible overhead.
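The batch-median idea behind BMA can be sketched in a few lines. The version below is our simplified reading of the method, not the paper's exact procedure: subtracting the batch median makes the sign bits near-maximally informative (about half positive, half negative), and separate scaling factors for the two sides reduce quantization error:

```python
import numpy as np

# Simplified batch-median-of-activations (BMA) sketch: center on the batch
# median (raising sign-bit entropy), then binarize with separate positive-
# and negative-side scaling factors.
def bma_binarize(a):
    a = a - np.median(a)                # centering raises sign-bit entropy
    pos = a > 0
    alpha_p = a[pos].mean() if pos.any() else 0.0      # positive-side scale
    alpha_n = a[~pos].mean() if (~pos).any() else 0.0  # negative-side scale
    return np.where(pos, alpha_p, alpha_n)             # two-level activations

acts = np.random.default_rng(0).normal(1.0, 2.0, 1000)  # shifted toy batch
q = bma_binarize(acts)
print(np.unique(q).size, (q > 0).mean())  # two levels, balanced signs
```

Even for this strongly shifted batch, the median split leaves the binary code carrying close to one full bit per activation, which is the entropy argument the abstract appeals to.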

