Vehicular Localization Enhancement via Consensus

Sensors, 2020, Vol 20 (22), pp. 6506
Author(s):  
Hong Ki Kim ◽  
Minji Kim ◽  
Sang Hyun Lee

This paper presents a strategy to cooperatively enhance vehicular localization in vehicle-to-everything (V2X) networks through consensus-based exchanges and updates of local data. Since each vehicle in the network can obtain its own location estimate, however inaccurate, the proposed strategy exploits the abundance of local estimates to improve the overall accuracy. During execution, vehicles exchange inter-vehicular relationships, namely measured distances and angles, in order to update their own estimates. Iterating the update rules averages out the measurement errors within the network, so that all vehicles' localization errors come to have similar magnitudes and orientations with respect to the ground-truth locations. Meanwhile, the estimate error of the anchor, the vehicle with the most reliable localization performance, is temporarily aggravated through the iterations. This circumstance is exploited to simultaneously counteract the estimate errors and effectively improve the localization performance. Simulated experiments are conducted to observe the nature and effects of these operations. The experimental outcomes and analysis of the protocol suggest that the presented technique successfully enhances localization performance, while offering additional insights into performance under environmental changes and different implementation techniques.
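As a rough illustration of the consensus idea described above, the sketch below iteratively averages neighboring vehicles' position estimates until the errors share a common magnitude and orientation across the network. The simple averaging rule, vehicle IDs, and fully connected topology are illustrative assumptions, not the paper's actual update equations (which also incorporate measured distances and angles).

```python
# Consensus averaging sketch: each vehicle repeatedly moves its position
# estimate toward the mean of its neighbors' estimates.

def consensus_step(estimates, neighbors, alpha=0.5):
    """One synchronous consensus update over all vehicles."""
    updated = {}
    for v, (x, y) in estimates.items():
        nx = sum(estimates[n][0] for n in neighbors[v]) / len(neighbors[v])
        ny = sum(estimates[n][1] for n in neighbors[v]) / len(neighbors[v])
        updated[v] = ((1 - alpha) * x + alpha * nx,
                      (1 - alpha) * y + alpha * ny)
    return updated

# Fully connected 3-vehicle example with differing noisy estimates
est = {0: (1.0, 0.5), 1: (0.6, 0.9), 2: (0.8, 0.7)}
nbr = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(50):
    est = consensus_step(est, nbr)
# All estimates converge to the network-wide average (0.8, 0.7)
```

Because each update preserves the network-wide average, the iteration drives every estimate toward that average, mirroring how the paper's scheme equalizes error magnitudes across vehicles.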

2021
Author(s):  
Masada Tzabari ◽  
Vadim Holodovsky ◽  
Omer Shubi ◽  
Eshkol Eitan ◽  
Orit Altaratz ◽  
...  

<p> Significant climate uncertainties are associated with an insufficient understanding of small warm clouds, owing to the nature of their 3D structure and radiative transfer. It is desirable to improve the understanding of such clouds and their sensitivity to environmental changes. This requires sensing platforms suitable for 3D sensing, and signal analysis tuned to 3D radiative transfer. We approach these challenges in the ERC-funded CloudCT project, a mission that develops and aims to demonstrate 3D volumetric scattering tomography of clouds. This will be facilitated by an unprecedentedly large formation of ten cooperating nanosatellites. The formation will simultaneously image cloud fields from multiple directions, at approximately 20 m nadir ground resolution. Based on these data, scattering tomography will seek the 3D volumetric distribution of droplet effective radius, liquid water content, and optical extinction. In addition to advancing the technology, CloudCT will yield a global database of 3D macro- and microphysical properties of warm cloud fields.</p><p>In this talk, we present advances made on several fronts of the project: modeling, payload, algorithms, and operation. Regarding cloud modeling, we performed LES simulations (using the SAM model with bin microphysics) of warm convective cloud fields in different environments, at high spatial resolution. Using the simulated cloud properties, several imager and waveband possibilities have been quantitatively considered for the mission. The major selection criteria are tomographic quality in the face of sensor and photon noise, calibration errors, and stray light. Additional criteria are technological availability, platform constraints, calibration requirements, and cost.</p><p>We specifically investigated visible-light imagers (VIS; 463 nm, 545 nm, 645 nm, and 705 nm), a short-wave infrared imager (SWIR; 1641 nm), and polarized imagers (POL; 463 nm, 545 nm, 645 nm, and 705 nm). These examinations relied on physical modeling of 3D radiative transfer and of the sensing processes. Owing to platform constraints in CloudCT, each platform will exclusively carry a single camera (either VIS/NIR or SWIR). Hence, we describe the tradeoffs of introducing SWIR cameras and various POL architectures.</p><p>While CloudCT is mainly designed for simultaneous imaging of each cloud field, it is possible to tolerate a lag of several seconds, as small warm clouds hardly evolve on this time scale (at the 20 m spatial scale). We exploit this to add more viewpoints using the same number of platforms (10). The added viewpoints correspond to single-scattering angles at which polarization yields enhanced sensitivity to the droplet microphysics. These angles require sampling finer than 1° in the fogbow region, which dictates requirements for the platform attitude control.</p><p>On the algorithmic front, we advanced the retrieval to yield results that, compared to the simulated ground truth, have smaller errors than the prior art. Elements of our advancement include initialization by a parametric, horizontally uniform microphysical model. The parameters of this initialization are determined by a fast optimization process. The optimized initialization is particularly strong when relying on the detected degree of linear polarization instead of radiance.</p>


Author(s):  
Jason N. Greenberg ◽  
Xiaobo Tan

Localization of mobile robots in GPS-denied environments (e.g., underwater) is of great importance to navigation and other missions for these robots. In our prior work, a concept of Simultaneous Localization And Communication (SLAC) was proposed, in which the line-of-sight (LOS) requirement of LED-based communication is exploited to extract the relative bearing of the two communicating parties for localization purposes. The concept further involves the use of Kalman filtering to predict the mobile robot's position, reducing the overhead of establishing LOS. In this work, the design of such a SLAC system is presented and experimentally evaluated in a two-dimensional setting, where a mobile robot localizes itself through wireless LED links with two stationary base nodes. Experimental results demonstrate the feasibility of the proposed approach and the important role the Kalman filter plays in reducing the localization error. The effect of the distance between the base nodes on the localization performance is further studied, which bears implications for future SLAC systems in which mobile base nodes can be reconfigured adaptively to maximize localization performance.
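To illustrate the prediction role that Kalman filtering plays in such a system, here is a minimal scalar constant-velocity Kalman filter. The motion model, noise values, and function names are illustrative assumptions for a 1D tracking demo, not the authors' actual implementation.

```python
# Scalar constant-velocity Kalman filter: predicts the next position so
# the LOS search can start near the expected target, then corrects the
# state with each new position measurement.

def kalman_1d(z_measurements, dt=1.0, q=0.01, r=0.25):
    """Track position/velocity from noisy position measurements."""
    x, v = 0.0, 0.0                 # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]    # state covariance
    out = []
    for z in z_measurements:
        # Predict with constant-velocity model F = [[1, dt], [0, 1]]
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with position measurement z (H = [1, 0])
        S = P[0][0] + r
        K = (P[0][0] / S, P[1][0] / S)
        innov = z - x
        x += K[0] * innov
        v += K[1] * innov
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x)
    return out
```

Fed with measurements of a steadily moving target, the filtered position locks onto the motion after a few steps, which is what lets the prediction narrow the LOS search window.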


2020, Vol 10 (18), pp. 6624
Author(s):  
Chenquan Hua ◽  
Chengjin Xie ◽  
Xuan Xu

An image recognition technique is proposed for determining the optimal neck levels of standard metal gauges in the process of validating pipe provers. A camera-level follow-up control system was designed so that the camera automatically tracks the fluid level, thereby preventing errors from inclined viewing angles. An orange background plate was placed behind the tube to reduce background interference and highlight the scale numbers, scale lines, and concave meniscus. A segmentation algorithm based on edge detection and K-means clustering was used to segment the indicator tubes and scales in the acquired images. A concave meniscus reconstruction algorithm and a curve-fitting algorithm were proposed to better identify the lowest point of the meniscus. A characteristic edge detection model was used to identify the centimeter-scale lines corresponding to the meniscus. A binary-tree multiclass support vector machine (MCSVM) classifier was then used to identify the scale numbers corresponding to the scale lines and determine the optimal neck level for standard metal gauges. Experimental results showed that measurement errors were within ±0.1 mm compared to a ground truth acquired manually with Vernier calipers. The recognition time, including follow-up control, was less than 10 s, which is much shorter than the switching time required between measuring individual tanks. This automated approach to measuring gauge neck levels can effectively reduce measurement times, decrease human errors in liquid-level readings, and improve the efficiency of pipe prover validation.
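The curve-fitting step for locating the lowest point of the meniscus can be sketched as a least-squares parabola fit to edge pixels near the meniscus bottom, taking the fitted vertex as the lowest point. This is an illustrative stdlib-only sketch, not the paper's actual reconstruction algorithm.

```python
# Fit y = a*x^2 + b*x + c to edge points by least squares, then take the
# parabola's vertex as the lowest point of the concave meniscus.

def fit_parabola(points):
    """Least-squares quadratic fit via the 3x3 normal equations."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sx2 = sum(x * x for x, _ in points)
    sx3 = sum(x ** 3 for x, _ in points)
    sx4 = sum(x ** 4 for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sx2y = sum(x * x * y for x, y in points)
    A = [[sx4, sx3, sx2, sx2y],
         [sx3, sx2, sx, sxy],
         [sx2, sx, n, sy]]
    for i in range(3):                      # Gaussian elimination
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
    c = A[2][3] / A[2][2]
    b = (A[1][3] - A[1][2] * c) / A[1][1]
    a = (A[0][3] - A[0][2] * c - A[0][1] * b) / A[0][0]
    return a, b, c

def meniscus_lowest_point(edge_points):
    a, b, c = fit_parabola(edge_points)
    x_v = -b / (2 * a)                      # vertex of the parabola
    return x_v, a * x_v * x_v + b * x_v + c
```

Feeding in edge pixels sampled around the meniscus bottom yields a sub-pixel estimate of the lowest point, which is what the ±0.1 mm accuracy figure depends on.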


Author(s):  
P. Glira ◽  
N. Pfeifer ◽  
C. Briese ◽  
C. Ressl

Airborne Laser Scanning (ALS) is an efficient method for the acquisition of dense and accurate point clouds over extended areas. To ensure gapless coverage of the area, point clouds are collected strip-wise with a considerable overlap. The redundant information contained in these overlap areas can be used, together with ground-truth data, to re-calibrate the ALS system and to compensate for systematic measurement errors. This process, usually denoted as <i>strip adjustment</i>, leads to an improved georeferencing of the ALS strips, or in other words, to a higher data quality of the acquired point clouds. We present a fully automatic strip adjustment method that (a) uses the original scanner and trajectory measurements, (b) performs an on-the-job calibration of the entire ALS multi-sensor system, and (c) corrects the trajectory errors individually for each strip. As in the Iterative Closest Point (ICP) algorithm, correspondences are established iteratively and directly between points of overlapping ALS strips (avoiding a time-consuming segmentation and/or interpolation of the point clouds). The suitability of the method for large amounts of data is demonstrated on an ALS block consisting of 103 strips.
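A much-simplified sketch of the ICP-style correspondence idea: iteratively match nearest points between two overlapping strips and estimate a systematic offset from the matched pairs. The real method adjusts full trajectory and calibration parameters per strip; here a single 2D translation stands in for those corrections, purely as an assumption for illustration.

```python
# ICP-style offset estimation between two overlapping point strips:
# match each point to its nearest neighbor, then shift by the mean
# residual; repeat until the correction stabilizes.

def nearest(p, cloud):
    """Nearest neighbor of p in cloud (brute force)."""
    return min(cloud, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def estimate_offset(strip_a, strip_b, iterations=10):
    """Estimate the 2D translation aligning strip_b onto strip_a."""
    dx = dy = 0.0
    for _ in range(iterations):
        shifted = [(x + dx, y + dy) for x, y in strip_b]
        pairs = [(p, nearest(p, strip_a)) for p in shifted]
        # Move by the mean residual between matched pairs
        dx += sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy += sum(q[1] - p[1] for p, q in pairs) / len(pairs)
    return dx, dy
```

As long as the systematic offset is smaller than the point spacing, the nearest-neighbor correspondences are correct and the translation is recovered; production systems use spatial indexing rather than this brute-force search.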


Sensors, 2020, Vol 20 (4), pp. 1035
Author(s):  
Xiaokang Ye ◽  
José Rodríguez-Piñeiro ◽  
Yuan Liu ◽  
Xuefeng Yin ◽  
Antonio Pérez Yuste

Time difference of arrival (TDoA) technology is widely used for source localization, which has stimulated many studies on performance-evaluation approaches for TDoA localization systems. Some approaches based on simulations are designed merely for a simple Line-of-Sight (LoS) scenario, while others based on experiments suffer from high cost and inefficiency. This paper proposes an integrated approach to evaluating a TDoA localization system in an area with a complicated environment. A radio propagation graph is applied in simulation to obtain the channel impulse responses (CIRs) between a source to be located and the TDoA sensors in the area. Realistic baseband signals received by the sensors are emulated by combining the source's transmitted signal with the CIRs. A hardware unit sends the radio-emulated received signals to the system under test, consistent with real experimental measurements. Statistical analysis of the system is then possible, based on localization errors obtained by comparing the system's estimates with the ground truth of the source location. Verified results for LoS and non-LoS scenarios with varying transmitted-signal bandwidths and signal-to-noise ratios, as well as for three variations of the sensor locations in an automobile circuit, show the usability of the proposed experiment-free performance-evaluation approach.
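For readers unfamiliar with TDoA localization itself, the sketch below locates a source by brute-force grid search: it picks the grid point whose predicted time differences best match the measured ones. The sensor layout, grid parameters, and function names are assumptions for the demo; practical systems solve the hyperbolic equations in closed form or by iterative least squares rather than exhaustive search.

```python
import math

C = 299792458.0  # propagation speed (m/s)

def tdoa_measurements(src, sensors):
    """Arrival-time differences relative to the first sensor."""
    t = [math.dist(src, s) / C for s in sensors]
    return [ti - t[0] for ti in t]

def locate(measured, sensors, span=100.0, step=1.0):
    """Grid search for the point best explaining the measured TDoAs."""
    best, best_err = None, float('inf')
    x = -span
    while x <= span:
        y = -span
        while y <= span:
            pred = tdoa_measurements((x, y), sensors)
            err = sum((p - m) ** 2 for p, m in zip(pred, measured))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best
```

Comparing such estimates against the known ground-truth source location, as the paper does for its system under test, yields the localization-error statistics.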


Author(s):  
G. Pavoni ◽  
M. Palma ◽  
M. Callieri ◽  
M. Dellepiane ◽  
C. Cerrano ◽  
...  

This study presents a practical method to estimate dimensions of <i>Paramuricea clavata</i> colonies using generic photographic datasets collected across wide areas. <i>Paramuricea clavata</i> is a non-rigid, tree-like octocoral; this morphology greatly affects the quality of the sea fans multi-view stereo matching reconstruction, resulting in hazy and incoherent clouds, full of “false” points with random orientation. Therefore, the standard procedure to take measurements over a reconstructed textured surface in 3D space is impractical. Our method overcomes this problem by using quasi-orthorectified images, produced by projecting registered photos on the plane that best fits the point cloud of the colony. The assessments of the measures collected have been performed comparing ground truth data set and time series images of the same set of colonies. The measurement errors fall below the requirements for this type of ecological observations.<br> Compared to previous works, the presented method does not require a detailed reconstruction of individual colonies, but relies on a global multi-view stereo reconstruction performed through a comprehensive photographic coverage of the area of interest, using a lowcost pre-calibrated camera. This approach drastically reduces the time spent working on the field, helping practitioners and scientists in improving efficiency and accuracy in their monitoring plans.
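The best-fit plane used for quasi-orthorectification can be sketched as a least-squares fit of z = a·x + b·y + c to the colony's point cloud. This simplified parameterization and the function names are assumptions for illustration; a robust implementation would use a total-least-squares normal (smallest eigenvector of the covariance matrix) so the plane can take any orientation.

```python
# Least-squares plane fit z = a*x + b*y + c to a 3D point cloud,
# minimizing the vertical residuals via the 3x3 normal equations.

def fit_plane(points):
    """Return (a, b, c) minimizing sum of (a*x + b*y + c - z)^2."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    A = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx, sy, n, sz]]
    for i in range(3):                      # Gaussian elimination
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [u - f * v for u, v in zip(A[j], A[i])]
    c = A[2][3] / A[2][2]
    b = (A[1][3] - A[1][2] * c) / A[1][1]
    a = (A[0][3] - A[0][2] * c - A[0][1] * b) / A[0][0]
    return a, b, c
```

Once the plane is known, each registered photo can be reprojected onto it to form the quasi-orthorectified image on which measurements are taken.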


Author(s):  
K. Clyne ◽  
B. Leblon ◽  
A. LaRocque ◽  
M. Costa ◽  
M. Leblanc ◽  
...  

Abstract. The eastern coastline of James Bay (Eeyou Istchee) is known to be home to beds of subarctic eelgrass (Zostera marina L.). These eelgrass beds provide a valuable habitat and food source for coastal and marine animals and contribute important ecosystem services, such as stabilization of the shoreline all along the coast. Despite reports from Cree communities that eelgrass bed health has declined, limited research has been performed to assess and map the spatial distribution of eelgrass within the bay. This study aims to address that issue by evaluating the capability of Landsat-8 Operational Land Imager (OLI) imagery to establish a baseline map of eelgrass distribution in 2019 in the relatively turbid waters of Eeyou Istchee. Three images acquired in September 2019 were merged and classified with Random Forests into the following classes: Eelgrass, Turbid Water, Highly Turbid Water, and Optically Deep Water. The resulting classified image was validated against 108 ground-truth points obtained from both the eelgrass health and Hydro-Québec research teams. The resulting overall accuracy was 78.7%, indicating the potential of the Random Forests classifier to estimate baseline eelgrass coverage in James Bay using Landsat-8 imagery. This work is part of a Cree-driven project, the Coastal Habitat Comprehensive Research Program (CHCRP), which aims to combine Cree traditional knowledge with Western science to better understand environmental changes in the coastal ecosystems and ecosystem services of eastern James Bay. The study is funded by a MITACS grant sponsored by Niskamoon Corporation, an Indigenous non-profit organization.


Author(s):  
D. Frommholz

<p><strong>Abstract.</strong> This paper describes the construction and composition of a synthetic test world for the validation of photogrammetric algorithms. Since its 3D objects are entirely generated by software, the geometric accuracy of the scene does not suffer from the measurement errors that inherently afflict existing real-world ground truth. The resulting data set covers an area of 13188 by 6144 length units and exhibits positional residuals as small as the machine epsilon of the double-precision floating-point numbers used exclusively for the coordinates. It is colored with high-resolution textures to accommodate the simulation of virtual flight campaigns with large optical sensors and laser scanners in both aerial and close-range scenarios. To specifically support the derivation of image samples and point clouds, the synthetic scene is stored in the human-readable Alias/Wavefront OBJ and POV-Ray data formats. While conventional rasterization remains possible, using the open-source ray tracer as a rendering tool facilitates the creation of ideal pinhole bitmaps, consistent digital surface models (DSMs), true ortho-mosaics (TOMs), and orientation metadata without programming knowledge. To demonstrate the application of the constructed 3D scene, example validation recipes are discussed in detail for a state-of-the-art implementation of semi-global matching and a perspective-correct multi-source texture mapper. For the latter, beyond the visual assessment, a statistical evaluation of the achieved texture quality is given.</p>
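Since the scene is stored in the human-readable Alias/Wavefront OBJ format, a minimal example of emitting such geometry looks like the sketch below. The single textured quad is an illustrative stand-in for the actual test world, not part of the described data set.

```python
# Emit a single textured quad (two triangles) as Wavefront OBJ text:
# vertex positions (v), texture coordinates (vt), and faces (f) that
# reference them with 1-based v/vt index pairs.

def obj_quad(size=10.0):
    """Return OBJ text for one textured quad in the z=0 plane."""
    v = [(0.0, 0.0, 0.0), (size, 0.0, 0.0),
         (size, size, 0.0), (0.0, size, 0.0)]
    vt = [(0, 0), (1, 0), (1, 1), (0, 1)]
    lines = ["# synthetic ground-truth quad"]
    lines += [f"v {x} {y} {z}" for x, y, z in v]
    lines += [f"vt {u} {w}" for u, w in vt]
    # OBJ indices are 1-based; each corner references a v/vt pair
    lines += ["f 1/1 2/2 3/3", "f 1/1 3/3 4/4"]
    return "\n".join(lines) + "\n"
```

Writing such text to a `.obj` file produces geometry that both conventional rasterizers and ray tracers can consume, which is what makes the format convenient for a software-defined ground-truth scene.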

