Local Convexity of Balls

Author(s):  
Parisa Hariri ◽  
Riku Klén ◽  
Matti Vuorinen
2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Abbas ◽  
Ahmad Abd Majid ◽  
Jamaludin Md. Ali

We present a smooth and visually pleasant display of 2D data when it is convex, a contribution that improves on existing methods and yields more accurate results. A local convexity-preserving interpolant for convex data is developed using a C2 rational cubic spline, which involves three families of shape parameters in its representation. Data-dependent sufficient constraints are imposed on a single shape parameter to conserve the inherited shape feature of the data. The remaining two shape parameters are used to modify the convex curve and obtain a visually pleasing curve according to industrial demand. The scheme is tested through several numerical examples, showing that it is local, computationally economical, and visually pleasing.
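The data-dependent constraints above presuppose a test for whether the data are convex in the first place. A minimal sketch of that test (my illustration, not the authors' scheme): convex data have non-decreasing first divided differences.

```python
# Convex data satisfy d_0 <= d_1 <= ... for the divided differences
# d_i = (y_{i+1} - y_i) / (x_{i+1} - x_i). Convexity-preserving schemes
# impose their shape-parameter constraints only where this holds.

def slopes(x, y):
    """First divided differences of the data."""
    return [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]

def is_convex_data(x, y, tol=1e-12):
    """True if the data points lie on a convex polyline."""
    d = slopes(x, y)
    return all(d[i] <= d[i + 1] + tol for i in range(len(d) - 1))

# Samples of x^2 are convex; a zig-zag is not.
xs = [0.0, 1.0, 2.0, 3.0]
print(is_convex_data(xs, [v * v for v in xs]))   # True
print(is_convex_data(xs, [0.0, 1.0, 0.5, 2.0]))  # False
```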


2008 ◽  
Vol 50 (2) ◽  
pp. 271-288
Author(s):  
HELGE GLÖCKNER

Abstract: The General Curve Lemma is a tool of infinite-dimensional analysis that enables refined studies of differentiability properties of maps between real locally convex spaces. In this article, we generalize the General Curve Lemma in two ways. First, we remove the condition of local convexity in the real case. Second, we adapt the lemma to the case of curves in topological vector spaces over ultrametric fields.


2020 ◽  
Author(s):  
Sorush Niknamian

Point cloud data reconstruction is the basis of point cloud data processing, and the reconstruction effect has a great impact on applications. To address the low precision, large error, and high time consumption of current scattered point cloud reconstruction algorithms, a new reconstruction algorithm for scattered point cloud data based on local convexity is proposed in this paper. First, according to surface variation based on the local outlier factor (SVLOF), the noise points of the point cloud are divided into near outliers and far outliers and filtered out during preprocessing. On this basis, the local-convexity algorithm is improved: a locally connected point set is constructed in place of triangulation to analyze the relationships between neighboring points, and a connected-component identification method is used for data reconstruction. Experimental results show that the proposed method reconstructs scattered point cloud data accurately, with high precision, small error, and low time consumption.
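The filtering step rests on the idea that an outlier's neighborhood covariance spreads off the local surface. A simplified stand-in for SVLOF (my sketch, not the paper's exact formulation) scores each point by the surface variation of its k nearest neighbors:

```python
import numpy as np

def surface_variation(points, k=8):
    """Per-point surface variation lam_min / (l1 + l2 + l3) of the
    covariance of the k nearest neighbors. Flat neighborhoods score
    near 0; isolated noise points score high."""
    pts = np.asarray(points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    sv = np.empty(len(pts))
    for i in range(len(pts)):
        nbr = pts[np.argsort(d2[i])[:k + 1]]      # self + k neighbors
        lam = np.sort(np.linalg.eigvalsh(np.cov(nbr.T)))
        sv[i] = lam[0] / max(lam.sum(), 1e-12)
    return sv

# A nearly flat grid plus one far outlier: the outlier scores highest.
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
cloud = np.c_[grid, 0.01 * rng.standard_normal(len(grid))]
cloud = np.vstack([cloud, [2.0, 2.0, 5.0]])       # far outlier
print(int(np.argmax(surface_variation(cloud))))   # index of the outlier
```

Thresholding this score would then separate near and far outliers before reconstruction.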


2017 ◽  
Vol 10 (3) ◽  
pp. 348-354 ◽  
Author(s):  
王雅男 WANG Ya-nan ◽  
王挺峰 WANG Ting-feng ◽  
田玉珍 TIAN Yu-zhen ◽  
孙涛 SUN Tao

1975 ◽  
Vol 27 (6) ◽  
pp. 1378-1383 ◽  
Author(s):  
Marilyn Breen

Let S be a subset of R^d. A point x in S is a point of local convexity of S if and only if there is some neighborhood N of x such that, if y, z ∈ N ∩ S, then [y, z] ⊆ S. If S fails to be locally convex at some point q in S, then q is called a point of local nonconvexity (lnc point) of S.
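A concrete illustration of the definition (my example, not Breen's): in the set S = {(x, y) : x ≤ 0 or y ≤ 0}, the origin is an lnc point, since every neighborhood of it contains points y, z ∈ S whose segment [y, z] leaves S.

```python
def in_S(p):
    """Membership in the L-shaped set S (complement of the open
    first quadrant)."""
    return p[0] <= 0 or p[1] <= 0

def segment_in_S(a, b, steps=100):
    """Sample the segment [a, b] and test containment in S."""
    return all(
        in_S(((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1]))
        for t in (i / steps for i in range(steps + 1))
    )

# Two points of S, one on each arm, arbitrarily close to the origin:
eps = 1e-3
y, z = (-eps, 2 * eps), (2 * eps, -eps)
print(in_S(y) and in_S(z))   # True: both endpoints lie in S
print(segment_in_S(y, z))    # False: [y, z] crosses the open quadrant
```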


Author(s):  
Stanislav Fort ◽  
Adam Scherlis

We explore the loss landscape of fully-connected and convolutional neural networks using random, low-dimensional hyperplanes and hyperspheres. Evaluating the Hessian, H, of the loss function on these hypersurfaces, we observe (1) an unusual excess of positive eigenvalues of H, and (2) a large value of Tr(H)/||H|| at a well-defined range of configuration-space radii, corresponding to a thick, hollow, spherical shell we refer to as the Goldilocks zone. We observe this effect for fully-connected neural networks over a range of network widths and depths on the MNIST and CIFAR-10 datasets with the ReLU and tanh non-linearities, and a similar effect for convolutional networks. Using our observations, we demonstrate a close connection between the Goldilocks zone, measures of local convexity/prevalence of positive curvature, and the suitability of a network initialization. We show that the high and stable accuracy reached when optimizing on random, low-dimensional hypersurfaces is directly related to the overlap between the hypersurface and the Goldilocks zone, and as a corollary demonstrate that the notion of intrinsic dimension is initialization-dependent. We note that common initialization techniques initialize neural networks in this particular region of unusually high convexity/prevalence of positive curvature, and offer a geometric intuition for their success. Furthermore, we demonstrate that initializing a neural network at a number of points and selecting for high measures of local convexity such as Tr(H)/||H||, the number of positive eigenvalues of H, or low initial loss leads to statistically significantly faster training on MNIST. Based on our observations, we hypothesize that the Goldilocks zone contains an unusually high density of suitable initialization configurations.
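Both convexity measures named above can be computed directly from a Hessian's eigenvalues. A hedged sketch (||H|| is taken here as the Frobenius norm, which is my assumption, not the paper's stated choice):

```python
import numpy as np

def convexity_measures(H):
    """Fraction of positive eigenvalues of H, and Tr(H)/||H||_F,
    both computed from the eigenvalues of the symmetric matrix H."""
    lam = np.linalg.eigvalsh(H)
    frac_pos = np.mean(lam > 0)
    ratio = lam.sum() / np.linalg.norm(lam)   # Tr(H) / ||H||_F
    return frac_pos, ratio

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
H_sym = (A + A.T) / 2            # random symmetric: ~half eigenvalues > 0
H_pos = H_sym + 20 * np.eye(50)  # shifted: all eigenvalues > 0

f1, r1 = convexity_measures(H_sym)
f2, r2 = convexity_measures(H_pos)
print(round(float(f2), 2))  # 1.0: the shifted Hessian is positive definite
print(r2 > r1)              # True: excess positive curvature raises Tr/||H||
```

Selecting initializations with high values of either quantity is then a ranking problem over candidate starting points.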


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1192 ◽  
Author(s):  
Mihai Niculiță

Archaeological topography identification from high-resolution DEMs (Digital Elevation Models) is a current method that is used with high success in archaeological prospecting of wide areas. I present a methodology through which burial mounds (tumuli) can be identified from LiDAR (Light Detection And Ranging) DEMs. This methodology uses geomorphometric and statistical methods to identify burial mound candidates with high accuracy. As a first step, peaks, defined as local elevation maxima, are found. In the second step, local convexity watershed segments and their seeds are compared with the positions of the local peaks, and the peaks that coincide with, or lie in the vicinity of, local convexity segment seeds are selected. The local convexity segments corresponding to these selected peaks are then fed to a Random Forest algorithm, together with shape descriptors and descriptive statistics of geomorphometric variables, in order to build a classification model. Multiple approaches to tuning and selecting the proper training dataset, settings, and variables were tested. The model was validated both on the full dataset used for training and on an external dataset, in order to test the usability of the method for other areas with a similar geomorphological and archaeological setting. The validation was performed against manually mapped and field-checked burial mounds from two neighboring study areas of 100 km2 each. The results show that by training the Random Forest on a dataset composed of between 75% and 100% of the segments corresponding to burial mounds and ten times as many non-burial-mound segments selected using Latin hypercube sampling, 93% of the burial mound segments from the external dataset are identified. There are 42 false positive cases that need to be checked, while two burial mound segments are missed.
The method shows great promise for burial mound detection over wider areas, requiring the delineation of only a certain number of tumuli for model training.
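The first step of the pipeline, peak finding, can be sketched minimally. This assumes the simplest possible definition (a cell strictly higher than its 8 neighbors); the paper's geomorphometric treatment is far richer than this toy:

```python
import numpy as np

def local_maxima(dem):
    """Return (row, col) of cells strictly higher than all 8 neighbors."""
    z = np.asarray(dem, dtype=float)
    pad = np.pad(z, 1, constant_values=-np.inf)   # border cells compare to -inf
    peak = np.ones_like(z, dtype=bool)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            shifted = pad[1 + dr:1 + dr + z.shape[0],
                          1 + dc:1 + dc + z.shape[1]]
            peak &= z > shifted
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(peak))]

# Toy DEM: two mounds on a flat plain.
dem = np.zeros((7, 7))
dem[2, 2] = 3.0   # mound 1
dem[5, 4] = 2.0   # mound 2
print(local_maxima(dem))   # [(2, 2), (5, 4)]
```

These candidate cells would then be cross-checked against local convexity segment seeds before classification.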

