Measuring distance through topographic models

2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Roberto de Figueiredo Ribeiro

Abstract. Accurate measurement of distances is of paramount importance to transportation infrastructure planning. Be it for estimating travel time, locating accidents and hazards through road markers, planning maintenance services, or setting prices for building contracts, distance is the primary metric upon which all aspects of the job are based, given that transportation infrastructure deals mostly with linear features. Yet countries with older infrastructure often do not know how long their networks run, especially in developing countries. Brazil currently has over 2,640,000 km of roads, with construction documentation lacking for most of the network. The most commonly used method for generating distance measurements, reading the car odometer while driving between two points, is adequate for macro-regional planning but, as this study shows, unfit for large-scale engineering work.

The industry standard for measuring distances uses a precision odometer connected to specialized tires, used either on their own or as a "fifth wheel" on a vehicle. This method, however, is laborious and slow, and yields only a scalar between two points: any new distance requires a new measurement, even if the two routes overlap or one distance is a subset of the other. This paper proposes using systematic mapping techniques to generate topographic linear features carrying measurement information, from which any distance can be calculated. To generate these features, a linear path is first constructed in GIS software over a route. The height of each node in the path is then extracted from an elevation source, the topographic distance is calculated from the vertical profile, and finally an M coordinate is generated for each node.

For comparison between sources, a base path was used as ground truth. This path was constructed from a GNSS survey along the road, collected in kinematic mode at 10 Hz (1.1 m between points) and post-processed with fixed-phase relative positioning tied to a base station. The mean positional quality achieved was 2.5 cm planimetric and 4.3 cm altimetric precision. Two other sources of height information were used for comparison: a flight DTM with 33 cm LE90 and 1 m cell size, and the NASA 1 arc-second SRTM with a nominal 9 m LE90 and 30 m cell size. Furthermore, a planimetric distance using a navigational GPS device (C/A code only) was also calculated. Two highways were selected for testing and divided into 341 segments of 200 meters each, to account for the influence of slope in the calculations.

As expected, the flight DTM came closest to the base model, deviating from it by an average of 31.95 ppm with a standard error of 2.8 ppm. It is, however, the most expensive and time-consuming method. The SRTM deviated by an average of 5131.53 ppm and showed very high variation, with a standard error of 8481.96 ppm. The navigation GPS deviated by an average of 685.18 ppm, with a standard error of 633.11 ppm. Both the SRTM and the GPS appear to deviate further from the base model as slope increases, but given that few segments with over 2.5° of slope were present in the sample, a correlation could not yet be established. For comparison, the car odometer method deviated by an average of 16654.51 ppm, with a standard error of 22661.69 ppm.

Given its high deviation, the SRTM is unfit for precision work, but it is a marked improvement over the car odometer for general indications. Further studies with mid-range DTMs should be done to provide a remote-sensing alternative. The handheld GPS performed better than expected given its nominal precision of 15 m: despite a probably larger absolute positioning error, its relative error distribution remained steady enough to yield good distance measurements.
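A minimal sketch of the core computation the abstract describes, assuming each path node carries projected planar coordinates plus a height sampled from the chosen elevation source; the function names and the `nodes` structure are illustrative, not from the paper:

```python
import math

def measure_path(nodes):
    """Cumulative topographic (3-D) distance along a path, stored on each
    node as an M coordinate. `nodes` is a list of (x, y, z) tuples in
    metres, with z sampled from the chosen height source (GNSS survey,
    flight DTM, SRTM, ...). Returns (x, y, z, m) tuples."""
    m = 0.0
    out = [(*nodes[0], 0.0)]
    for (x0, y0, z0), (x1, y1, z1) in zip(nodes, nodes[1:]):
        run = math.hypot(x1 - x0, y1 - y0)    # planimetric distance
        m += math.hypot(run, z1 - z0)         # slope (3-D) segment length
        out.append((x1, y1, z1, m))
    return out

def deviation_ppm(measured, reference):
    """Deviation of a measured length from the reference, in parts per million."""
    return (measured - reference) / reference * 1e6
```

With an M coordinate stored on every node, the distance between any two points on the route reduces to a difference of M values, which is what removes the need for a separate field measurement per distance pair; `deviation_ppm` mirrors how the comparisons above are expressed.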

Author(s):  
Timofei Istomin ◽  
Elia Leoni ◽  
Davide Molteni ◽  
Amy L. Murphy ◽  
Gian Pietro Picco ◽  
...  

Proximity detection is at the core of several mobile and ubiquitous computing applications. These include reactive use cases, e.g., alerting individuals of hazards or interaction opportunities, and others concerned only with logging proximity data, e.g., for offline analysis and modeling. Common approaches rely on Bluetooth Low Energy (BLE) or ultra-wideband (UWB) radios. Nevertheless, these strike opposite tradeoffs between the accuracy of distance estimates quantifying proximity and the energy efficiency affecting system lifetime, effectively forcing a choice between the two and ultimately constraining applicability. Janus reconciles these dimensions in a dual-radio protocol enabling accurate and energy-efficient proximity detection, where the energy-savvy BLE is exploited to discover devices and coordinate their distance measurements, acquired via the energy-hungry UWB. A model supports domain experts in configuring Janus for their use cases with predictable performance. The latency, reliability, and accuracy of Janus are evaluated experimentally, including realistic scenarios endowed with the mm-level ground truth provided by a motion capture system. Energy measurements show that Janus achieves weeks to months of autonomous operation, depending on the use case configuration. Finally, several large-scale campaigns exemplify its practical usefulness in real-world contexts.
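The division of labour described here (BLE for discovery and coordination, UWB for ranging) can be pictured as a simple control loop. The sketch below is only one reading of that architecture, not Janus code; the `ble` and `uwb` driver objects and their methods are hypothetical:

```python
import time

def proximity_loop(ble, uwb):
    """Caricature of a dual-radio proximity protocol: the low-power BLE
    radio scans continuously, and only when a neighbour is discovered is
    the power-hungry UWB radio woken for a scheduled ranging exchange.
    `ble` and `uwb` are hypothetical driver objects, not a real API."""
    while True:
        neighbours = ble.scan()                    # cheap, always on
        for peer in neighbours:
            slot = ble.negotiate_slot(peer)        # coordinate over BLE;
            time.sleep(max(0.0, slot - time.monotonic()))  # wait for the agreed slot
            yield peer, uwb.two_way_ranging(peer)  # accurate but costly
        uwb.sleep()                                # keep UWB off otherwise
```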


2017 ◽  
Vol 17 (12) ◽  
pp. 2093-2107 ◽  
Author(s):  
Jérémie Voumard ◽  
Antonio Abellán ◽  
Pierrick Nicolet ◽  
Ivanna Penna ◽  
Marie-Aurélie Chanut ◽  
...  

Abstract. We discuss here different challenges and limitations of surveying rock slope failures using 3-D reconstruction from image sets acquired from street view imagery (SVI). We show how rock slope surveying can be performed using two or more image sets from online imagery, with photographs of the same site acquired at different times. Three sites in the French Alps were selected as pilot study areas: (1) a cliff beside a road where a protective wall collapsed, with two image sets (60 and 50 images each) captured within a 6-year time frame; (2) a large-scale active landslide located on a slope 250 m from the road, using seven image sets (50 to 80 images per set) from five different time periods, with three image sets for one period; (3) a cliff over a tunnel which has collapsed, using two image sets captured within a 4-year time frame. The analysis includes the use of different structure-from-motion (SfM) programs and a comparison between the extracted photogrammetric point clouds and a lidar-derived mesh used as ground truth. Results show that both landslide deformation and estimates of fallen volumes were clearly identified in the different point clouds. Results are site- and software-dependent, as a function of the image set and number of images, with model accuracies ranging between 0.2 and 3.8 m in the best and worst scenarios, respectively. Although some limitations derived from generating 3-D models from SVI were observed, this approach allowed us to obtain preliminary 3-D models of an area without on-field images, allowing extraction of the pre-failure topography that would not be available otherwise.
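A simplified stand-in for the point-cloud comparison step, assuming both datasets are already co-registered and approximating the lidar mesh by its vertices; all names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference_distances(sfm_points, reference_points):
    """Unsigned nearest-neighbour distance from each SfM point to a
    reference point set (e.g. the vertices of the lidar-derived mesh).
    Both arrays are (N, 3) and must already be co-registered in the same
    coordinate system; a true cloud-to-mesh comparison would project
    onto the triangles instead of snapping to the nearest vertex."""
    distances, _ = cKDTree(reference_points).query(sfm_points, k=1)
    return distances

# Hypothetical usage, summarising a comparison in the spirit of the
# 0.2-3.8 m accuracy range reported above:
# d = cloud_to_reference_distances(sfm_xyz, lidar_xyz)
# print(f"mean {d.mean():.2f} m, 90th percentile {np.percentile(d, 90):.2f} m")
```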


2017 ◽  
Author(s):  
Jérémie Voumard ◽  
Antonio Abellan ◽  
Pierrick Nicolet ◽  
Marie-Aurélie Chanut ◽  
Marc-Henri Derron ◽  
...  

Abstract. We discuss here the challenges and limitations of surveying rock slope failures using 3D reconstruction from images acquired from Street View Imagery (SVI) and processed with modern photogrammetric workflows. We show how the "back in time" function can be used for 3D reconstruction of two or more image sets from the same site at different times, allowing rock slope surveying. Three sites in the French Alps were selected: (a) a cliff beside a road where a protective wall collapsed, consisting of two image sets (60 and 50 images each) captured over a six-year time frame; (b) a large-scale active landslide located on a slope 250 m from the road, using seven image sets (50 to 80 images per set) from five different time periods, with three image sets for one period; (c) a cliff over a tunnel which has collapsed, using three image sets over a six-year time frame. The analysis includes the use of different commercially available Structure from Motion (SfM) programs and a comparison between the extracted photogrammetric point clouds and a LiDAR-derived mesh used as ground truth. As a result, both landslide deformation and estimates of fallen volumes were clearly identified in the point clouds. Results are site- and software-dependent, as a function of the image set and number of images, with model accuracies ranging between 0.1 and 3.1 m in the best and worst scenarios, respectively. Despite some clear limitations and challenges, this manuscript demonstrates that this original approach can provide preliminary 3D models of an area without on-field images. Furthermore, the pre-failure topography can be obtained for sites where it would not be available otherwise.


2005 ◽  
Vol 33 (1) ◽  
pp. 38-62 ◽  
Author(s):  
S. Oida ◽  
E. Seta ◽  
H. Heguri ◽  
K. Kato

Abstract Vehicles such as agricultural tractors, construction vehicles, mobile machinery, and 4-wheel-drive vehicles are often operated on unpaved ground. In many cases the ground is deformable, so the deformation should be taken into consideration in order to assess the off-the-road performance of a tire. Recent progress in computational mechanics has enabled the simulation of large-scale coupling problems, in which the deformation of the tire structure and of the surrounding medium are considered interactively. Using this technology, hydroplaning phenomena and tire traction on snow have been predicted. In this paper, a simulation methodology for tire/soil coupling problems is developed for pneumatic tires with arbitrary tread patterns. The Finite Element Method (FEM) and the Finite Volume Method (FVM) are used for the structural and the soil-flow analysis, respectively. The soil is modeled as an elastoplastic material with a specified yield criterion and a nonlinear elasticity. The material constants are fitted to measurement data so that the cone penetration resistance and the shear resistance are reproduced. Finally, the traction force of the tire in a cultivated field is predicted, and a good correlation with experiments is obtained.
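The "interactive" coupling can be read as a staggered time-stepping scheme in which the FEM tire model and the FVM soil model exchange boundary data every step. The sketch below is a schematic reading only; the solver objects and their methods are hypothetical:

```python
def coupled_step(tire_fem, soil_fvm, dt):
    """One staggered step of a tire/soil co-simulation (schematic only;
    `tire_fem` and `soil_fvm` are hypothetical solver objects)."""
    contact = tire_fem.contact_tractions()   # loads the tire applies to the soil
    soil_fvm.apply_boundary_loads(contact)
    soil_fvm.advance(dt)                     # elastoplastic soil flow (FVM)
    tire_fem.update_ground(soil_fvm.deformed_surface())
    tire_fem.advance(dt)                     # tire structural response (FEM)
    return tire_fem.net_traction_force()     # quantity compared with experiment
```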


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones.

Purpose: To develop a method for assessing a contributor's expected quality in community tagging systems, using only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown).

Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method for assessing a contributor's expected quality. The method is based on comparing tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of a pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors, provided that the contributors' behavior fits the proposed model.

Practical relevance: The obtained results can be used in the development of systems based on coordinated community efforts, primarily community tagging systems.
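A minimal sketch of the eigenvector step, assuming the pairwise domination characteristic has already been aggregated into a nonnegative matrix D (D[i, j] summarising how strongly contributor i dominates j on their shared images); the construction of D and all names here are illustrative, not the paper's exact formulation:

```python
import numpy as np

def expected_quality(D, iters=200, tol=1e-10):
    """Expected contributor qualities as the positive (Perron)
    eigenvector of a nonnegative pairwise domination matrix D,
    computed by power iteration and normalised to sum to 1.
    Assumes D is irreducible so the Perron vector is unique."""
    n = D.shape[0]
    q = np.full(n, 1.0 / n)
    for _ in range(iters):
        nxt = D @ q
        nxt /= nxt.sum()                  # renormalise each iteration
        if np.abs(nxt - q).max() < tol:   # converged
            return nxt
        q = nxt
    return q

# Toy example: contributor 2 dominates the others on shared images,
# so it receives the highest expected-quality weight.
D = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.1],
              [3.0, 3.0, 0.0]])
print(expected_quality(D))
```

Power iteration converges to the Perron eigenvector for a nonnegative irreducible matrix, which matches the paper's use of a positive eigenvector as the quality estimate.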


Cells ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 1030
Author(s):  
Julie Lake ◽  
Catherine S. Storm ◽  
Mary B. Makarious ◽  
Sara Bandres-Ciga

Neurodegenerative diseases are etiologically and clinically heterogeneous conditions, often reflecting a spectrum of disease rather than well-defined disorders. The underlying molecular complexity of these diseases has made the discovery and validation of useful biomarkers challenging. The search for characteristic genetic and transcriptomic indicators for preclinical disease diagnosis, prognosis, or subtyping is an area of ongoing effort and interest. The next generation of biomarker studies holds promise by implementing meaningful longitudinal and multi-modal approaches in large-scale biobank and healthcare-system-scale datasets. This work will only be possible in an open-science framework. This review summarizes the current state of genetic and transcriptomic biomarkers in Parkinson's disease, Alzheimer's disease, and amyotrophic lateral sclerosis, providing a comprehensive landscape of recent literature and future directions.


2021 ◽  
Vol 13 (3) ◽  
pp. 68
Author(s):  
Steven Knowles Flanagan ◽  
Zuoyin Tang ◽  
Jianhua He ◽  
Irfan Yusoff

Dedicated Short-Range Communication (DSRC), or IEEE 802.11p/OCB (Outside the Context of a BSS), is widely considered a primary technology for Vehicle-to-Vehicle (V2V) communication, aimed at increasing road-user safety by sharing information between vehicles. The requirements of DSRC are to maintain real-time communication with low latency and high reliability. In this paper, we investigate how communication can be used to improve stopping-distance performance based on fieldwork results. In addition, we assess the impact of reduced reliability in terms of distance-independent, distance-dependent, and density-based consecutive packet losses. A model is developed from empirical measurement results as a function of distance, data rate, and traveling speed. With this model, we show that cooperative V2V communication can effectively reduce reaction time and improve safe stopping distance, and we highlight the importance of high reliability. The obtained results can be further used for the design of cooperative V2V-based driving and safety applications.
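A worked sketch of why earlier warnings matter: total stopping distance is reaction distance plus braking distance, s = v·t_r + v²/(2a). The parameter values below are illustrative assumptions, not the paper's fieldwork numbers:

```python
def stopping_distance(speed_kmh, reaction_s, decel=7.0):
    """Total stopping distance in metres: distance travelled during the
    reaction time plus the braking distance at a constant deceleration
    `decel` in m/s^2 (~7 is a common dry-asphalt assumption)."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    return v * reaction_s + v * v / (2.0 * decel)

# Illustrative only: at 100 km/h, cutting the reaction time from a human
# ~1.5 s to a ~0.1 s V2V-triggered response shortens the stop by ~39 m.
human = stopping_distance(100, 1.5)          # ~96.8 m
v2v = stopping_distance(100, 0.1)            # ~57.9 m
print(f"{human:.1f} m vs {v2v:.1f} m (saving {human - v2v:.1f} m)")
```

Consecutive packet losses enter such a model by delaying the warning, i.e. stretching the effective reaction time.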


Polymers ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1094
Author(s):  
Bastian Klose ◽  
Daniel Kremer ◽  
Merve Aksit ◽  
Kasper P. van der Zwan ◽  
Klaus Kreger ◽  
...  

Polystyrene foams have become more and more important owing to their lightweight potential and their insulation properties. Progress in this field is expected to come from foams featuring a microcellular morphology. However, large-scale processing of low-density, closed-cell foams with a volume expansion ratio larger than 10 and a homogeneous morphology with a mean cell size of approximately 10 µm remains challenging. Here, we report on a series of 4,4′-diphenylmethane-substituted bisamides, which we refer to as kinked bisamides, acting as efficient supramolecular foam-cell nucleating agents for polystyrene. Self-assembly experiments from solution showed that these bisamides form supramolecular fibrillar or ribbon-like nano-objects. These kinked bisamides can be dissolved at elevated temperatures over a large concentration range, forming dispersed nano-objects upon cooling. Batch foaming experiments using 1.0 wt.% of a selected kinked bisamide revealed that the mean cell size can be as low as 3.5 µm. To demonstrate the applicability of kinked bisamides in a high-throughput continuous foam process, we performed foam extrusion. Using 0.5 wt.% of a kinked bisamide yielded polymer foams with a foam density of 71 kg/m³ and a homogeneous microcellular morphology with cell sizes of ≈10 µm, two orders of magnitude lower than those of the neat polystyrene reference foam at a comparable foam density.
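For context, the reported extrusion foam density already implies a volume expansion ratio well above the >10 target; assuming solid polystyrene at roughly 1050 kg/m³ (an assumption, not a figure from the abstract):

```latex
\mathrm{VER} \;=\; \frac{\rho_{\text{solid PS}}}{\rho_{\text{foam}}}
             \;\approx\; \frac{1050~\mathrm{kg/m^3}}{71~\mathrm{kg/m^3}}
             \;\approx\; 14.8
```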


2019 ◽  
Vol 214 ◽  
pp. 04033
Author(s):  
Hervé Rousseau ◽  
Belinda Chan Kwok Cheong ◽  
Cristian Contescu ◽  
Xavier Espinal Curull ◽  
Jan Iven ◽  
...  

The CERN IT Storage group operates multiple distributed storage systems and is responsible for supporting the infrastructure that accommodates all CERN storage requirements, from the physics data generated by LHC and non-LHC experiments to users' personal files. EOS is now the key component of the CERN storage strategy. It can sustain high incoming throughput for experiment data-taking while running concurrent complex production workloads. This high-performance distributed storage now provides more than 250 PB of raw disk space and is the key component behind the success of CERNBox, the CERN cloud synchronisation service, which allows syncing and sharing files on all major mobile and desktop platforms and provides offline availability for any data stored in the EOS infrastructure. CERNBox has recorded exponential growth in files and data stored over the last couple of years, thanks to its increasing popularity within the CERN user community and its integration with a multitude of other CERN services (Batch, SWAN, Microsoft Office). In parallel, CASTOR is being simplified and is transitioning from an HSM into an archival system, focusing mainly on long-term recording of the primary data from the detectors and preparing the road to the next-generation tape archival system, CTA. The storage services at CERN also cover the needs of the rest of our community: Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services, and S3 functionality; AFS for legacy home-directory filesystem services (with its ongoing phase-out); and CVMFS for software distribution. In this paper we summarise our experience in supporting all our distributed storage systems and the ongoing work to evolve our infrastructure, testing very dense storage building blocks (nodes with more than 1 PB of raw space) for the challenges ahead.


2018 ◽  
Vol 7 (12) ◽  
pp. 472 ◽  
Author(s):  
Bo Wan ◽  
Lin Yang ◽  
Shunping Zhou ◽  
Run Wang ◽  
Dezhi Wang ◽  
...  

The road-network matching method is an effective tool for map integration, fusion, and updating. Due to the complexity of road networks in the real world, matching methods often involve a series of complicated processes to identify homonymous roads and deal with their intricate relationships. However, traditional road-network matching algorithms, which are mainly central processing unit (CPU)-based approaches, may hit performance bottlenecks when facing big data. We developed a particle-swarm optimization (PSO)-based parallel road-network matching method on the graphics processing unit (GPU). Based on the characteristics of the two main stages (similarity computation and matching-relationship identification), data-partition and task-partition strategies were utilized, respectively, to fully use GPU threads. Experiments were conducted on datasets at 14 different scales. Results indicate that the parallel PSO-based matching algorithm (PSOM) could correctly identify most matching relationships with an average accuracy of 84.44%, on par with the benchmark probability-relaxation-matching (PRM) method. The PSOM approach significantly reduced road-network matching time when dealing with large amounts of data in comparison with the PRM method. This paper provides a common parallel algorithm framework for road-network matching algorithms and contributes to the integration and updating of large-scale road networks.
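A minimal CPU-side skeleton of the PSO stage, with the GPU data-/task-partitioning omitted; the particle encoding and fitness function here are illustrative assumptions, not the paper's:

```python
import numpy as np

def pso(fitness, dim, n_particles=64, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle-swarm optimisation maximising `fitness` over [0, 1]^dim.
    In the paper's setting a particle would encode a candidate matching
    assignment and `fitness` its aggregate road-similarity score; the GPU
    version spreads the fitness evaluations across threads."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, dim))          # particle positions
    v = np.zeros_like(x)                        # particle velocities
    pbest = x.copy()                            # per-particle best positions
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmax()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmax()].copy()
    return g, pbest_f.max()
```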

