map grids: Recently Published Documents

TOTAL DOCUMENTS: 9 (five years: 1)
H-INDEX: 2 (five years: 0)

2019 ◽  Vol 1 ◽  pp. 1-2
Author(s): Tinghua Ai ◽  Yingzhe Lei

Abstract. The past few decades have seen the development of automatic feature labelling, since manual label placement is time- and labour-consuming. Emerging techniques such as volunteered geographic information (VGI) collection make label placement more complex, with many features competing for limited space, especially for points of interest (POI). To improve the quality and efficiency of point feature labelling, a large body of research has focused on issues such as position models, assessment criteria and optimization methods. Most of this research uses vector-based methods, while raster-based methods are less common: vector-based methods make it easy to define features and labels but run into computational complexity for features with high density, whereas raster-based methods are faster and more flexible, though it is harder to represent features and labels precisely on the map grid. Because hexagonal partitioning is rarely used in raster-based methods compared with the commonly used square partitioning, and hexagons are potentially useful for their oblique sides and isotropic orientations, hexagonal grids are used in this research to investigate better point feature labelling approaches.

A new raster-based method is proposed to produce high-quality label placement for POI in dense areas. Labels are placed on a hexagonal map grid on the principle that one Chinese character occupies one hexagon unit, with the relationship h = ((√3 + 1)/2) a, where h is the side length of a hexagon unit and a is the size of a Chinese character. Since hexagonal grids come in flat-topped and pointy-topped variants, which lead to different orientations, split hexagons are introduced to extend the number of label orientations from 6 to 8 on pointy-topped grids. A hexagon is partitioned into a 'left' part and a 'right' part, and a split hexagon is the combination of a 'left' part and a 'right' part taken from two neighbouring hexagons, as shown in figure 1. Every hexagon on the grid then has four occupancy states: not-occupied (0,0), half-occupied (0,1) or (1,0), and both-occupied (1,1). Based on these concepts, specific definitions are given for how labels are represented on the hexagonal map grid, including the length, orientation, writing direction, character orientation and position of the labels.

The approach first arranges the labels of POI with different combinations of label orientations while pursuing coherence as much as possible, through rasterization of the vector data, POI grouping and computation of initial schemes. Every POI in the same group has the same label orientation, and each POI group may have several admissible orientations, which makes the initial schemes diverse. A second positioning algorithm then handles overlaps (labels with POI, labels with labels) and improves the overall quality of the labelling. The algorithm uses position changing and label turning, which allow a label to move around its POI and, when necessary, change orientation to avoid collisions. The quality of the labels in a closed block is assessed in three respects: preferential orientation, occlusion and spaciousness.

POI data were taken from restaurant, hotel and shop facilities, and figure 2 shows an example of the label placement results obtained with this method. The results show good orientation consistency of the labels, and occlusions are reduced to a minimum, though several label-label occlusions remain because of the limited space. Compared with a vector-based method, the approach performs better at maintaining map legibility, aesthetics and harmony.
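For illustration, the following is a minimal Python sketch of the character-to-hexagon sizing rule h = ((√3 + 1)/2) a and the split-hexagon occupancy states described above; the function and class names are hypothetical and not taken from the paper.

from enum import Enum
from math import sqrt

def hexagon_side_from_char_size(a: float) -> float:
    """Side length h of a hexagon unit holding one Chinese character of size a,
    following h = ((sqrt(3) + 1) / 2) * a."""
    return (sqrt(3) + 1) / 2 * a

class SplitHexOccupancy(Enum):
    """Occupancy of the (left, right) halves of a hexagon cell."""
    NOT_OCCUPIED = (0, 0)
    LEFT_HALF = (1, 0)
    RIGHT_HALF = (0, 1)
    BOTH = (1, 1)

# example: hexagon side length for a 12 pt character
print(hexagon_side_from_char_size(12.0))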


2018 ◽  Vol 46 (5) ◽  pp. 401-411
Author(s): Dennis Edler ◽  Julian Keil ◽  Anne-Kathrin Bestgen ◽  Lars Kuchinke ◽  Frank Dickmann

Author(s): Y. Tian ◽  S. Zhang ◽  W. Du ◽  J. Chen ◽  H. Xie ◽  ...

Models based on physical principles or semi-empirical parameterizations have been used to compute firn density, which is essential for studying surface processes on the Antarctic ice sheet. However, such parameterizations of surface snow density often struggle to describe detailed local characteristics. In this study we propose to generate a surface density map for East Antarctica from all available field observations. Because the observations are non-uniformly distributed over East Antarctica, obtained by different methods, and temporally inhomogeneous, they are first used to build an initial density map with a grid size of 30 × 30 km², in which the observations are averaged at a temporal scale of five years. We then construct an observation matrix whose columns correspond to the map grids and whose rows correspond to the temporal scale. If a site has no density value for a period, the entry is set to 0 in the matrix. To capture the main spatial and temporal information of the surface snow density matrix, we adopt the Empirical Orthogonal Function (EOF) method to decompose the observation matrix and retain only the first few lower-order modes, because these modes already contain most of the information in the observation matrix. The matrix, however, contains many zeros (missing entries); we handle these with a matrix completion algorithm and then derive the time series of surface snow density at each observation site. Finally, we obtain the surface snow density by multiplying the modes, interpolated by kriging, with the corresponding amplitudes of the modes. A comparative analysis is carried out between our surface snow density map and model results. The details are presented in the paper.
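As a rough illustration of the EOF truncation and matrix completion step described above, the Python sketch below iterates between a truncated SVD (the leading EOF modes) and filling the missing (zero) entries with the low-rank reconstruction; the fill-and-iterate loop and all names are assumptions made for illustration, not the authors' exact algorithm.

import numpy as np

def complete_with_truncated_eof(obs, n_modes=3, n_iter=50):
    """Fill zero (missing) entries of a grid-by-epoch matrix by iterating:
    SVD -> keep the leading modes -> copy the low-rank values into the gaps."""
    missing = obs == 0.0
    filled = obs.astype(float).copy()
    # start the missing cells from the column (epoch) means of the observed data
    observed = np.where(missing, np.nan, filled)
    col_means = np.nan_to_num(np.nanmean(observed, axis=0))
    filled[missing] = np.broadcast_to(col_means, filled.shape)[missing]
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes, :]
        filled[missing] = low_rank[missing]  # observed entries stay fixed
    return filled

# toy example: 5 map grids x 4 five-year epochs, two missing values
rng = np.random.default_rng(0)
density = 350.0 + 30.0 * rng.random((5, 4))
density[1, 2] = 0.0
density[4, 0] = 0.0
print(complete_with_truncated_eof(density, n_modes=2))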


2014 ◽  Vol 11 (3) ◽  pp. 506-514
Author(s): Željko Hećimović ◽  Robert Župan ◽  Tea Duplančić-Leder

2011 ◽  Vol 464 ◽  pp. 596-599
Author(s): Bo Xiang ◽  Lu Ling An ◽  Jin Hu Sun ◽  Lai Shui Zhou

The authors present a relief segmentation method for point cloud models and address problems such as how to store the point cloud data, how to obtain the final contour, how to define the Snakes energy terms, and how to derive a region from its contour. First, the point cloud data are resampled using a Z-MAP grid data structure. An initial contour is then drawn interactively, and the total energy of the contour is computed and iteratively minimized to move the contour to its energy-minimizing position. Finally, the contour is discretized into points, which are mapped onto the Z-MAP grid as projection points; from these projection points the region is obtained.
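A minimal Python sketch of a Z-MAP style resampling step like the one described above, under the assumption that each grid cell stores the maximum Z of the points falling into it; the function name and the max-Z rule are illustrative and not taken from the paper.

import numpy as np

def zmap_grid(points: np.ndarray, cell_size: float) -> np.ndarray:
    """Resample an (N, 3) point cloud onto a regular XY grid, keeping one
    height value (here the maximum Z) per cell."""
    xy_min = points[:, :2].min(axis=0)
    idx = np.floor((points[:, :2] - xy_min) / cell_size).astype(int)
    grid = np.full(idx.max(axis=0) + 1, np.nan)
    for (i, j), z in zip(idx, points[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z  # empty cells stay NaN
    return grid

# toy example: 1000 random points resampled onto a 10 x 10 grid
cloud = np.random.default_rng(1).random((1000, 3))
print(zmap_grid(cloud, cell_size=0.1).shape)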


1955 ◽  Vol 19 (1) ◽  pp. 156
Author(s): James T. Tanner

1936 ◽  Vol 3 (20) ◽  pp. 322-325
Author(s): J. L. Winterbotham
