Automated summit spot height generation for modern topographic mapping

2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Samantha T. Arundel ◽  
Arthur Chan

Abstract. Spot elevations published on historical U.S. Geological Survey (USGS) topographic maps were established as needed to enhance the information imparted by the quadrangle's contours. In addition to a number of other features such as road intersections, section corners, lakes, and water wells, labels were routinely placed on mountain peaks, local high points, passes, and saddles. While some elevations were established through field survey triangulation, many were "dropped" during photogrammetric stereocompilation. The average accuracy, which varied with method, terrain, and map vintage, was on the order of ±10 feet.

Today, inexpensive consumer devices deliver comparable accuracy using the Global Positioning System (GPS), and professional equipment can achieve even better accuracies. These methods have replaced the field- and funding-intensive traditional triangulation methods in the geographic sciences. However, because GPS measurements require visiting the feature location, a national dataset containing high-accuracy spot elevations has not yet been created, and no government agency is currently mandated to create one. Consequently, modern US Topo maps are devoid of mountain peak or other spot elevations.

Still, US Topo map users continue to demand the display of spot heights, particularly mountain peak elevations. As lidar data are collected through the 3D Elevation Program (3DEP), the source data for higher-accuracy spot heights grow. A pilot study was therefore conducted to evaluate the feasibility of automatically generating elevation values at summits named in the Board on Geographic Names' Geographic Names Information System (GNIS), using 3DEP data. As 3DEP incorporates more lidar data, these values should become increasingly accurate.

The first step in the automation process involved "snapping" GNIS summits to the highest and most accurate nearby point (pixel) in the 3DEP one-third arc-second seamless dataset (1/3 a-s). Development of this step was completed in a prior study and will be reported elsewhere. In Step 2, a process similar to Step 1 was implemented to identify the highest pixel in the 3DEP one-meter dataset (1-m) within an area defined by a one-pixel buffer around the snapped pixel. After the highest pixel in the 1-m dataset was found, the same process was repeated on the lidar point cloud dataset (LPC). In the latter case, where more than one lidar point fell within the buffered area, as was true in most instances, the highest point was chosen.

Where the next-higher-resolution data were unavailable, the summit spot elevation was set to the value obtained from the lower-resolution layer. Hence, if 1-m data were unavailable but LPC data did exist, the program in its current state was unable to use the higher-resolution data, because the 1-m data are employed to drive the software to a more precise location of the summit; without that intermediate step the method fails to identify the correct LPC point. Work in progress aims to overcome this limitation, which is particularly important until the 1-m dataset is populated for the entire nation. The National Geospatial Program estimates that lidar acquisition will be complete by 2023. Processing from acquisition to update of the 1-m dataset, including quality assessment and revisions, currently requires between two and three years.

Resulting elevation values are compared to those published by Peakbagger.com and TOPOZONE.com. Preliminary results from 40 summits indicate that values derived from lidar are generally higher, whereas those populated from the 1/3 a-s dataset are generally lower. A thorough understanding of these relationships will require the evaluation of more points.
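To make the snap-and-refine logic concrete, the following is a minimal sketch of the two raster steps, assuming access through rasterio; the file inputs, the ~15 m search half-width standing in for the one-pixel 1/3 a-s buffer, and the function names are illustrative assumptions, not the authors' production code, and the analogous LPC step is only noted in a comment.

```python
# Hedged sketch of the summit-refinement steps described above. Buffer geometry,
# paths, and rasterio-based access are assumptions for illustration.
import numpy as np
import rasterio


def highest_pixel_near(dem_path, x, y, half_width_m):
    """Return (x, y, z) of the highest DEM cell within half_width_m of (x, y)."""
    with rasterio.open(dem_path) as dem:
        band = dem.read(1)
        row, col = dem.index(x, y)
        half_px = max(int(round(half_width_m / dem.res[0])), 1)
        r0, r1 = max(row - half_px, 0), min(row + half_px + 1, band.shape[0])
        c0, c1 = max(col - half_px, 0), min(col + half_px + 1, band.shape[1])
        window = band[r0:r1, c0:c1]
        dr, dc = np.unravel_index(np.argmax(window), window.shape)
        r_max, c_max = r0 + dr, c0 + dc
        x_max, y_max = dem.xy(r_max, c_max)
        return x_max, y_max, float(band[r_max, c_max])


def summit_elevation(gnis_x, gnis_y, coarse_dem, fine_dem=None):
    """Step 1: snap the GNIS point to the highest nearby 1/3 a-s cell.
    Step 2: refine within a buffered window of the 1-m DEM when it exists;
    otherwise fall back to the coarse value, as the paper describes.
    The analogous LPC step would pick the highest point in the same buffer."""
    x, y, z = highest_pixel_near(coarse_dem, gnis_x, gnis_y, half_width_m=15.0)
    if fine_dem is not None:
        x, y, z = highest_pixel_near(fine_dem, x, y, half_width_m=15.0)
    return x, y, z
```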

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ghulam Mustafa ◽  
Muhammad Usman ◽  
Lisu Yu ◽  
Muhammad Tanvir Afzal ◽  
Muhammad Sulaiman ◽  
...  

Abstract. Every year, around 28,100 journals publish 2.5 million research publications. Search engines, digital libraries, and citation indexes are used extensively to search these publications. When a user submits a query, it returns a large number of documents, of which only a few are relevant. Due to inadequate indexing, the resulting documents are largely unstructured. Publicly known systems mostly index research papers by keywords rather than by subject hierarchy. Numerous methods reported for performing single-label classification (SLC) or multi-label classification (MLC) are based on content and metadata features. Content-based techniques yield better results because of the richness of their features, but their drawback is that the full text is unavailable in most cases. The use of metadata-based parameters, such as title, keywords, and general terms, acts as an alternative to content. However, existing metadata-based techniques show low accuracy because they rely on traditional statistical measures, such as bag-of-words (BOW), TF, and TF-IDF, to express textual properties in quantitative form; these measures may not capture the semantic context of words. Existing MLC techniques also require a specified threshold value, for which domain knowledge is necessary, to map articles into predetermined categories. The objective of this paper is to overcome these limitations of SLC and MLC techniques. To capture the semantic and contextual information of words, the suggested approach leverages the Word2Vec paradigm for textual representation. The suggested model determines threshold values through rigorous data analysis, obviating the need for domain expertise. Experimentation is carried out on two datasets from the field of computer science (JUCS and ACM). In comparison to current state-of-the-art methodologies, the proposed model performed well. Experiments yielded average accuracies of 0.86 (JUCS) and 0.84 (ACM) for SLC, and 0.81 (JUCS) and 0.80 (ACM) for MLC. On both datasets, the proposed SLC model improved accuracy by up to 4%, while the proposed MLC model improved it by up to 3%.
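As a rough illustration of the metadata representation and threshold-based assignment described in the abstract, the sketch below averages Word2Vec vectors over a document's title and keyword tokens and compares the result with category centroids; the gensim hyperparameters, the tiny example corpus, the centroid construction, and the mean-similarity cut-off are assumptions standing in for the authors' data-driven threshold analysis.

```python
# Hedged sketch of Word2Vec-based SLC/MLC over metadata tokens; settings and
# the data-driven threshold (mean similarity here) are illustrative assumptions.
import numpy as np
from gensim.models import Word2Vec


def doc_vector(tokens, wv):
    """Average the word vectors of the tokens that are in the vocabulary."""
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)


def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


# Train word vectors on tokenized titles/keywords (toy corpus for illustration).
corpus = [["semantic", "text", "classification"], ["lidar", "point", "cloud"]]
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, epochs=20)

# Category centroids built from labelled training documents (illustrative).
categories = {"information_systems": corpus[0], "remote_sensing": corpus[1]}
centroids = {c: doc_vector(toks, model.wv) for c, toks in categories.items()}


def classify(tokens, multi_label=False):
    d = doc_vector(tokens, model.wv)
    sims = {c: cosine(d, v) for c, v in centroids.items()}
    if not multi_label:
        return max(sims, key=sims.get)                 # SLC: single best category
    threshold = np.mean(list(sims.values()))           # assumed data-driven cut-off
    return [c for c, s in sims.items() if s >= threshold]  # MLC: all above threshold
```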


Author(s):  
Lingli Zhu ◽  
Juha Hyyppä ◽  
Juho-pekka Virtanen ◽  
Xiaowei Yu ◽  
Harri Kaartinen

This paper investigated building data from multispectral and single-photon Lidar systems. Multispectral datasets from the individual channels and from fused channels were explored. The multispectral and single-photon Lidar data were compared across multiple aspects: data acquisition geometry, number of echoes, intensity, density, resolution, data defects, noise level, and absolute and relative accuracy. In addition, we explored the performance of the multispectral and single-photon data for roof plane detection on eight buildings with complex or stylized roof forms, to investigate the suitability of these data for 3D building reconstruction. The building data from the single-photon and multispectral Lidar systems were evaluated against reference building vector data with an accuracy of better than 5 cm. The advantages and disadvantages of both technologies and their applications in the urban building environment are discussed.


GEOMATICA ◽  
2015 ◽  
Vol 69 (3) ◽  
pp. 271-284
Author(s):  
Xuebin Wei ◽  
Xiaobai Yao

Light Detection and Ranging (LiDAR) has become an important data source in urban modelling. Traditional methods of LiDAR data processing for building detection require high-spatial-resolution data and sophisticated algorithms. Aerial photos, on the other hand, provide continuous spectral information on buildings. However, the accuracy of building boundaries classified from aerial photos is constrained when building roofs and their surroundings share analogous spectral characteristics. This paper develops a statistical approach that integrates characteristic variables derived from sparse LiDAR points and air photos to detect buildings by estimating object heights and identifying clusters of similar heights. The approach adopts a local regression method, geographically weighted regression (GWR), to account for local variations in building surface height. In the GWR model, LiDAR data provide the height information of spatial objects, which is the dependent variable, while the brightness values from the visible bands of the aerial photo serve as the independent variables. The established GWR model estimates the height at each pixel from the height values of its surrounding pixels, considering both the distances between pixels and the similarities between their brightness values in the visible bands. Clusters of contiguous pixels with higher estimated height values distinguish themselves from surrounding roads or other surfaces. A case study is conducted to evaluate the performance of the proposed method. The accuracy of the proposed statistical method is found to be better than that achieved by image classification of aerial photos alone or by building extraction from LiDAR data alone. The results demonstrate that this simple and effective method can be very useful for automatic detection of buildings in urban areas. The approach can be most helpful for studies of urban areas where more suitable but expensive high-resolution data are not available.
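A compact numpy sketch of the core estimation step may help make the setup concrete: sparse LiDAR heights act as the dependent variable, per-pixel visible-band brightness as the independent variables, and a Gaussian distance kernel supplies the geographic weights. The kernel choice, the 30 m bandwidth, and the building-height cut-off mentioned at the end are assumptions, not values from the paper.

```python
# Minimal sketch of a geographically weighted regression (GWR) height estimate,
# assuming a Gaussian distance kernel and an arbitrary 30 m bandwidth.
import numpy as np


def gwr_height(px_xy, px_bands, lidar_xy, lidar_z, lidar_bands, bandwidth=30.0):
    """Estimate surface height at one pixel from nearby LiDAR samples.

    px_xy:       (2,) pixel coordinates          px_bands:    (k,) band brightness
    lidar_xy:    (n, 2) LiDAR sample coordinates lidar_z:     (n,) sample heights
    lidar_bands: (n, k) band brightness at the LiDAR sample locations
    """
    d = np.linalg.norm(lidar_xy - px_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)            # Gaussian spatial weights
    sw = np.sqrt(w)[:, None]                           # sqrt-weights for least squares
    X = np.column_stack([np.ones(len(lidar_z)), lidar_bands])  # intercept + bands
    beta, *_ = np.linalg.lstsq(X * sw, lidar_z * sw.ravel(), rcond=None)
    return float(np.r_[1.0, px_bands] @ beta)


# Pixels whose estimated height stands well above their surroundings (e.g. > 3 m,
# an assumed cut-off) would then be grouped into contiguous clusters as candidate
# building footprints.
```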


2019 ◽  
Vol 11 (18) ◽  
pp. 2105 ◽  
Author(s):  
Berninger ◽  
Lohberger ◽  
Zhang ◽  
Siegert

Globally available high-resolution information about canopy height and above-ground biomass (AGB) is important for carbon accounting. The present study showed that Pol-InSAR data from TS-X and RS-2 could be used together with field inventories and high-resolution data, such as drone or LiDAR data, to support carbon accounting in the context of REDD+ (Reducing Emissions from Deforestation and Forest Degradation) projects.


Author(s):  
G.D. Danilatos

Over recent years a new type of electron microscope, the environmental scanning electron microscope (ESEM), has been developed for the examination of specimen surfaces in the presence of gases. A detailed series of reports on the system has appeared elsewhere. A review summary of the current state and potential of the system is presented here. The gas composition, temperature, and pressure can be varied in the specimen chamber of the ESEM. With air, the pressure can be up to one atmosphere (about 1000 mbar). Environments containing only fully saturated water vapor at room temperature (20-30 mbar) can be easily maintained, whilst liquid water or other solutions, together with uncoated specimens, can be imaged routinely during various applications.


Author(s):  
C. Barry Carter

This paper will review the current state of understanding of interface structure and highlight some of the future needs and problems which must be overcome. The study of this subject can be separated into three different topics: 1) the fundamental electron microscopy aspects, 2) material-specific features of the study, and 3) the characteristics of the particular interfaces. The two topics which are relevant to most studies are the choice of imaging techniques and sample preparation. The techniques used to study interfaces in the TEM include high-resolution imaging, conventional diffraction-contrast imaging, and phase-contrast imaging (Fresnel fringe images, diffuse scattering). The material studied affects not only the characteristics of the interfaces (through changes in bonding, etc.) but also the method used for sample preparation, which may in turn have a significant effect on the resulting image. Finally, the actual nature and geometry of the interface must be considered. For example, it has become increasingly clear that the plane of the interface is particularly important whenever at least one of the adjoining grains is crystalline.

A particularly productive approach to the study of interfaces is to combine different imaging techniques, as illustrated in the study of grain boundaries in alumina. In this case, the conventional imaging approach showed that most grain boundaries in ion-thinned samples are grooved at the grain boundary, although the extent of this grooving clearly depends on the crystallography of the surface. The use of diffuse scattering (from amorphous regions) gives invaluable information here, since it can be used to confirm directly that surface grooving does occur and that the grooves can fill with amorphous material during sample preparation (see Fig. 1). Extensive use of image simulation has shown that, although information concerning the interface can be obtained from Fresnel-fringe images, the introduction of artifacts through sample preparation cannot be lightly ignored. The Fresnel-fringe simulation has been carried out using a commercial multislice program (TEMPAS), which was intended for simulation of high-resolution images.


Author(s):  
K. Siangchaew ◽  
J. Bentley ◽  
M. Libera

Energy-filtered electron-spectroscopic TEM imaging provides a new way to study the microstructure of polymers without heavy-element stains. Since spectroscopic imaging exploits the signal generated directly by the electron-specimen interaction, it can produce richer and higher-resolution data than is possible with most staining methods. There are basically two ways to collect filtered images (fig. 1). Spectrum imaging uses a focused probe that is digitally rastered across a specimen, with an entire energy-loss spectrum collected at each x-y pixel to produce a 3-D data set. Alternatively, filtering schemes such as the Zeiss Omega filter and the Gatan Imaging Filter (GIF) acquire individual 2-D images with electrons of a defined range of energy loss (δE), typically 5-20 eV.
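The relationship between the two acquisition modes can be illustrated with a small array manipulation: a spectrum image is a 3-D (y, x, energy-loss) cube, and an energy-filtered image corresponds to integrating that cube over a narrow energy window. The cube dimensions, the 0.5 eV channel calibration, and the window placement below are arbitrary assumptions used only for illustration.

```python
# Illustrative sketch only: random numbers stand in for a measured spectrum
# image, and the energy-loss axis is an assumed calibration.
import numpy as np

ny, nx, ne = 128, 128, 1024                    # image pixels x energy channels
cube = np.random.rand(ny, nx, ne)              # spectrum image: a spectrum per pixel
energy = np.arange(ne) * 0.5                   # energy-loss axis in eV (assumed)


def filtered_image(cube, energy, e_lo, e_hi):
    """Sum the energy-loss channels in [e_lo, e_hi) eV at every (y, x) pixel,
    mimicking the 5-20 eV energy window of an imaging filter."""
    sel = (energy >= e_lo) & (energy < e_hi)
    return cube[:, :, sel].sum(axis=2)


window_img = filtered_image(cube, energy, 100.0, 120.0)   # a 20 eV-wide window
```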


2005 ◽  
Vol 41 ◽  
pp. 205-218
Author(s):  
Constantine S. Mitsiades ◽  
Nicholas Mitsiades ◽  
Teru Hideshima ◽  
Paul G. Richardson ◽  
Kenneth C. Anderson

The ubiquitin–proteasome pathway is a principal intracellular mechanism for controlled protein degradation and has recently emerged as an attractive target for anticancer therapies, because of the pleiotropic cell-cycle regulators and modulators of apoptosis that are controlled by proteasome function. In this chapter, we review the current state of the field of proteasome inhibitors and their prototypic member, bortezomib, which was recently approved by the U.S. Food and Drug Administration for the treatment of advanced multiple myeloma. Particular emphasis is placed on the pre-clinical research data that became the basis for eventual clinical applications of proteasome inhibitors, an overview of the clinical development of this exciting drug class in multiple myeloma, and an appraisal of possible uses in other haematological malignancies, such as non-Hodgkin's lymphomas.


1995 ◽  
Vol 38 (5) ◽  
pp. 1126-1142 ◽  
Author(s):  
Jeffrey W. Gilger

This paper is an introduction to behavioral genetics for researchers and practitioners in language development and disorders. The specific aims are to illustrate some essential concepts and to show how behavioral genetic research can be applied to the language sciences. Past genetic research on language-related traits has tended to focus on simple etiology (i.e., the heritability or familiality of language skills). The current state of the art, however, suggests that great promise lies in addressing more complex questions through behavioral genetic paradigms. In terms of future goals it is suggested that: (a) more behavioral genetic work of all types should be done, including replications and expansions of preliminary studies already in print; (b) work should focus on fine-grained, theory-based phenotypes with research designs that can address complex questions in language development; and (c) work in this area should utilize a variety of samples and methods (e.g., twin and family samples, heritability and segregation analyses, linkage and association tests, etc.).


VASA ◽  
2019 ◽  
Vol 48 (1) ◽  
pp. 35-46
Author(s):  
Stephen Hofmeister ◽  
Matthew B. Thomas ◽  
Joseph Paulisin ◽  
Nicolas J. Mouawad

Abstract. The management of vascular emergencies is dependent on rapid identification and confirmation of the diagnosis with concurrent patient stabilization prior to immediate transfer to the operating suite. A variety of technological advances in diagnostic imaging as well as the advent of minimally invasive endovascular interventions have shifted the contemporary treatment algorithms of such pathologies. This review provides a comprehensive discussion on the current state and future trends in the management of ruptured abdominal aortic aneurysms as well as acute aortic dissections.

