proper subspace
Recently Published Documents

TOTAL DOCUMENTS: 14 (FIVE YEARS: 4)
H-INDEX: 4 (FIVE YEARS: 0)

Symmetry, 2021, Vol. 13 (12), pp. 2318
Author(s): Mariia Martsinkiv, Andriy Zagorodnyuk

This paper is devoted to studying approximations of symmetric continuous functions by symmetric analytic functions on a Banach space X with a symmetric basis. We obtain some positive results for the case when X admits a separating polynomial, using a symmetrization operator. However, even in this case, there is a counterexample because the symmetrization operator is well defined only on a narrow, proper subspace of the space of analytic functions on X. For X = c0, we introduce ε-slice G-analytic functions that behave like G-analytic functions at points x ∈ c0 whose coordinates are all greater than ε, and we prove a theorem on approximations of uniformly continuous functions on c0 by ε-slice G-analytic functions.
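As background for the symmetrization operator mentioned above (a standard finite-dimensional construction, not a definition taken from the paper), one can average a function of n variables over all coordinate permutations:

\[
(\mathcal{S}f)(x_1,\dots,x_n) \;=\; \frac{1}{n!}\sum_{\sigma \in S_n} f\bigl(x_{\sigma(1)},\dots,x_{\sigma(n)}\bigr),
\]

so that Sf is symmetric and Sf = f whenever f already is. The obstruction described in the abstract is that, for an infinite symmetric basis, the analogous averaging is well defined only on a narrow, proper subspace of the analytic functions on X.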


2021, Vol. 24 (6), pp. 1797-1830
Author(s): Chenkuan Li

The objective of this paper is, for the first time, to extend the fractional Laplacian (−Δ)^s u(x) over the space C_k(ℝ^n) (which contains S(ℝ^n) as a proper subspace) for all s > 0 and s ≠ 1, 2, …, based on the normalization in distribution theory, Pizzetti’s formula and surface integrals in ℝ^n. We further present two theorems showing that our extended fractional Laplacian is continuous at the end points 1, 2, … . Two illustrative examples are provided to demonstrate computational techniques for obtaining the fractional Laplacian using special functions, Cauchy’s residue theorem and integral identities. An application to defining the Riesz derivative in the classical sense at odd numbers is also considered at the end.
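For orientation (a standard formula, not part of the paper's extension), the classical fractional Laplacian for 0 < s < 1 is the singular integral

\[
(-\Delta)^{s} u(x) \;=\; C_{n,s}\,\mathrm{P.V.}\!\int_{\mathbb{R}^{n}} \frac{u(x)-u(y)}{\lvert x-y\rvert^{\,n+2s}}\,dy,
\qquad 0 < s < 1,
\]

with C_{n,s} a positive normalizing constant; the paper's contribution is to extend this operator to all non-integer s > 0 and to the larger space C_k(ℝ^n).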


2021, Vol. 2021, pp. 1-8
Author(s): Jianwei Lu, Guohua Zhou, Jiaqun Zhu, Lei Xue

Facial makeup significantly changes the perceived appearance of the face and reduces the accuracy of face recognition. To adapt to the application of smart cities, in this study we introduce a novel joint subspace and low-rank coding method for makeup face recognition. To exploit more discriminative information from face images, we use feature projection to find a proper subspace and learn a discriminative dictionary in that subspace. In addition, we impose a low-rank constraint in the dictionary learning. We then design a joint learning framework and use an iterative optimization strategy to obtain all parameters simultaneously. Experiments on a real-world dataset achieve good performance and demonstrate the validity of the proposed method.
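As a rough, hypothetical illustration of a joint subspace-plus-low-rank coding objective (the abstract does not spell out the authors' exact formulation), one common form couples a learned projection P, a dictionary D and a coding matrix A with a nuclear-norm low-rank penalty:

\[
\min_{P,\,D,\,A}\ \lVert P X - D A \rVert_{F}^{2} \;+\; \lambda_{1}\lVert A \rVert_{*} \;+\; \lambda_{2}\lVert A \rVert_{1}
\quad \text{s.t. } P P^{\top} = I,
\]

typically solved by alternating updates of P, D and A, in the spirit of the iterative optimization strategy described above.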


Author(s): Zhe Xue, Junping Du, Dawei Du, Wenqi Ren, Siwei Lyu

Incomplete view information often results in failure cases for conventional multi-view methods. To address this problem, we propose a Deep Correlated Predictive Subspace Learning (DCPSL) method for incomplete multi-view semi-supervised classification. Specifically, we integrate semi-supervised deep matrix factorization, correlated subspace learning, and multi-view label prediction into a unified framework to jointly learn the deep correlated predictive subspace and the multi-view shared and private label predictors. DCPSL is able to learn a proper subspace representation that is suitable for class-label prediction, which further improves classification performance. Extensive experimental results on various practical datasets demonstrate that the proposed method performs favorably against the state-of-the-art methods.
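For context only (a generic sketch, not the authors' exact model), semi-supervised deep matrix factorization decomposes each view X_v through a cascade of factor matrices, with the deepest factor H_v serving as the representation passed to the label predictors:

\[
X_{v} \;\approx\; W_{v}^{(1)} W_{v}^{(2)} \cdots W_{v}^{(m)} H_{v}, \qquad v = 1,\dots,V,
\]

and in DCPSL the per-view representations are further coupled through a correlated common subspace together with the shared and private label predictors mentioned above.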


Author(s): Qi Han

Using a regular Borel measure μ ⩾ 0, we derive a proper subspace of the commonly used Sobolev space D^1(ℝ^N) when N ⩾ 3. The space resembles the standard Sobolev space H^1(Ω) when Ω is a bounded region with a compact Lipschitz boundary ∂Ω. An equivalence characterization and an example are provided that guarantee that this space is compactly embedded into L^1(ℝ^N). In addition, as an application, we prove an existence result for positive solutions to an elliptic equation in ℝ^N that involves the Laplace operator with the critical Sobolev nonlinearity, or with a general nonlinear term of subcritical, superlinear growth. We also briefly discuss the compact embedding of this space into L^p(ℝ^N) when N ⩾ 2 and 2 ⩽ p ⩽ N.
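For reference (standard facts, not results of the paper), the critical Sobolev exponent behind the critical Sobolev nonlinearity mentioned above is

\[
2^{*} \;=\; \frac{2N}{N-2}, \qquad N \geqslant 3,
\]

and a nonlinear term of subcritical, superlinear growth is one whose growth lies strictly between the linear rate and the critical power 2^{*} − 1.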


2013, Vol. 2013, pp. 1-8
Author(s): Zhe-Ming Zheng, Hui-Sheng Ding, Gaston M. N’Guérékata

Several interesting and new properties of weighted pseudo almost periodic functions are established. Firstly, we obtain an equivalent definition of weighted pseudo almost periodic functions, which shows a close relationship between asymptotically almost periodic functions and weighted pseudo almost periodic functions; secondly, we prove that the space of asymptotically almost periodic functions is always a proper subspace of the space of weighted pseudo almost periodic functions; thirdly, we show that in some cases the space of weighted pseudo almost periodic functions equals the classical space of pseudo almost periodic functions.
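To fix notation (this is the standard definition from the literature, not a result of the paper), a weighted pseudo almost periodic function is a sum f = g + φ of an almost periodic function g and a weighted ergodic perturbation φ satisfying

\[
\lim_{r\to\infty} \frac{1}{\int_{-r}^{r}\rho(t)\,dt}\int_{-r}^{r} \lVert \varphi(t)\rVert\,\rho(t)\,dt \;=\; 0,
\]

where ρ > 0 is the weight; taking ρ ≡ 1 recovers the classical pseudo almost periodic functions referred to in the last assertion.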


2009, Vol. 19 (04), pp. 241-252
Author(s): GIULIANO GROSSI

The Hopfield neural network (HNN) is a nonlinear computational model successfully applied to finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly affecting the network performance. In the neural model analyzed here, we combine a penalty strategy and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, thus avoiding oscillatory behavior or asymptotically unstable convergence. The stochastic dynamics helps prevent the network from falling into shallow local minima of the energy function, i.e., minima quite far from the global optimum. Hence, for a given fixed network topology, the desired final distribution over the states can be reached by carefully modulating this process. The model uses pseudo-Boolean functions to express both the problem constraints and the cost function; a combination of these two functions is then interpreted as the energy of the neural network. A wide variety of NP-hard problems fall into the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function, that is, a function easily derived from a closed algebraic expression representing the constraint structure and easy (polynomial time) to maximize. We show the asymptotic convergence properties of this model, characterizing its state-space distribution at thermal equilibrium in terms of a Markov chain, and give evidence of its ability to find high-quality solutions on benchmarks and randomly generated instances of two specific problems taken from computational graph theory.
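As an illustration of the general mechanism (a minimal sketch under generic assumptions, not the specific model or update rule of the paper), the following Python code combines a quadratic pseudo-Boolean cost, a penalty term for constraint violations, and a temperature-controlled stochastic bit-flip acceptance of Hopfield/Gibbs type:

```python
import numpy as np

def energy(x, Q_cost, Q_pen, mu):
    # Quadratic pseudo-Boolean energy: cost term plus weighted penalty term.
    return float(x @ Q_cost @ x + mu * (x @ Q_pen @ x))

def stochastic_hopfield(Q_cost, Q_pen, steps=10000, T0=1.0, mu0=0.1, seed=0):
    # Q_cost and Q_pen are hypothetical symmetric n x n matrices encoding the
    # cost and the constraint penalty; states are binary vectors in {0, 1}^n.
    rng = np.random.default_rng(seed)
    n = Q_cost.shape[0]
    x = rng.integers(0, 2, size=n).astype(float)   # random binary start state
    for t in range(steps):
        T = T0 / (1.0 + t / 100.0)                 # cooling schedule (assumed form)
        mu = mu0 * (1.0 + t / 100.0)               # gradually increasing penalty weight
        i = rng.integers(n)                        # pick one unit to update
        x_new = x.copy()
        x_new[i] = 1.0 - x_new[i]                  # propose a single bit flip
        dE = energy(x_new, Q_cost, Q_pen, mu) - energy(x, Q_cost, Q_pen, mu)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            x = x_new
    return x
```

Here lower energy corresponds to better, feasible states; the penalty weight grows and the temperature shrinks over time, mirroring the gradual lead towards feasible solutions and the stochastic escape from shallow minima described above.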


2007, Vol. 06 (02), pp. 245-257
Author(s): PAWEŁ GŁADKI, MURRAY MARSHALL

First counterexamples are given to a basic question raised in [10]. The paper considers the space of orderings (X,G) of the function field of a real irreducible conic [Formula: see text] over the field ℚ of rational numbers. It is shown that the pp conjecture fails to hold for such a space of orderings when [Formula: see text] has no rational points. In this case, it is shown that the pp conjecture "almost holds" in the sense that, if a pp formula holds on each finite subspace of (X,G), then it holds on each proper subspace of (X,G). For pp formulas which are product-free and 1-related, the pp conjecture is known to be true, at least if the stability index is finite [11]. The counterexamples constructed here are the simplest sort of pp formulas which are not product-free and 1-related.


2006, Vol. 11 (1), pp. 48-82
Author(s): Sandra Forte, Maurizio Vianello

A linear function defined on the space of elasticity tensors is a restricted invariant under a group of rotations G if it has an invariant restriction to a proper subspace which is larger than the set left fixed by the action of G itself. A necessary and sufficient condition for a function to be a restricted invariant is given using concepts related to isotypic decomposition, Haar integration and G-dependence. The result is applied to characterize isotropic and transversely isotropic restricted invariants.
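For concreteness (standard definitions only, not results from the paper), a rotation Q acts on an elasticity tensor C componentwise, and ordinary invariance of f under G requires f(Q ⋆ C) = f(C) for all Q in G and all C; restricted invariance, as defined above, asks only that this identity hold for C ranging over some proper subspace larger than the fixed-point set of G:

\[
(Q \star C)_{ijkl} \;=\; Q_{ip}\,Q_{jq}\,Q_{kr}\,Q_{ls}\,C_{pqrs}
\quad\text{(summation over repeated indices)}.
\]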


2001, Vol. 160 (2-3), pp. 169-182
Author(s): Simona Franceschini, Anna Lorenzini
