A Study of Visual Descriptors for Outdoor Navigation Using Google Street View Images

2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
L. Fernández ◽  
L. Payá ◽  
O. Reinoso ◽  
L. M. Jiménez ◽  
M. Ballesta

A comparative analysis of several methods to describe outdoor panoramic images is presented. The main objective is to study the performance of these methods in the localization process of a mobile robot (vehicle) in an outdoor environment, when a visual map containing images acquired from different positions in the environment is available. To this end, we make use of the database provided by Google Street View, which contains spherical panoramic images captured in urban environments along with their GPS positions. The main benefit of using these images is that they permit testing any novel localization algorithm in countless outdoor environments anywhere in the world and under realistic capture conditions. The main contribution of this work is a comparative evaluation of different image description methods for solving the localization problem in an outdoor dense map using only visual information. We have tested our algorithms using several sets of panoramic images captured in different outdoor environments. The results obtained in this work can be useful for selecting an appropriate description method for visual navigation tasks in outdoor environments using the Google Street View database, taking into consideration both the localization accuracy and the computational efficiency of the algorithm.
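As an illustration of the localization scheme this abstract describes, here is a minimal sketch (not the authors' actual descriptors) in which each GPS-tagged panorama in the map is summarised by a hand-crafted global descriptor and a query image is localised by nearest-neighbour search. The descriptor choice (per-cell intensity histograms) and all function names are illustrative assumptions.

```python
import numpy as np

def global_descriptor(pano_gray, cells=(8, 32), bins=16):
    """Toy holistic descriptor for an equirectangular panorama: per-cell
    intensity histograms, concatenated and L2-normalised."""
    h, w = pano_gray.shape
    ch, cw = h // cells[0], w // cells[1]
    feats = []
    for i in range(cells[0]):
        for j in range(cells[1]):
            cell = pano_gray[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 255))
            feats.append(hist)
    d = np.concatenate(feats).astype(float)
    return d / (np.linalg.norm(d) + 1e-9)

def localize(query_desc, map_descs, map_gps):
    """Return the GPS tag of the most similar stored panorama (nearest
    neighbour in descriptor space) together with the matching distance."""
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    best = int(np.argmin(dists))
    return map_gps[best], float(dists[best])
```

In such a comparison, the descriptor function is the interchangeable component: swapping in different description methods while keeping the nearest-neighbour lookup fixed allows both localization accuracy and descriptor computation time to be measured.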

Author(s):  
N. Bruno ◽  
R. Roncella

<p><strong>Abstract.</strong> Google Street View is a technology implemented in several Google services/applications (e.g. Google Maps, Google Earth) which provides the user, interested in viewing a particular location on the map, with panoramic images (represented in equi-rectangular projection) at street level. Generally, consecutive panoramas are acquired with an average distance of 5&amp;ndash;10<span class="thinspace"></span>m and can be compared to a traditional photogrammetric strip and, thus, processed to reconstruct portion of city at nearly zero cost. Most of the photogrammetric software packages available today implement spherical camera models and can directly process images in equi-rectangular projection. Although many authors provided in the past relevant works that involved the use of Google Street View imagery, mainly for 3D city model reconstruction, very few references can be found about the actual accuracy that can be obtained with such data. The goal of the present work is to present preliminary tests (at time of writing just three case studies has been analysed) about the accuracy and reliability of the 3D models obtained from Google Street View panoramas.</p>


Author(s):  
E. Boussias-Alexakis ◽  
V. Tsironisa ◽  
E. Petsa ◽  
G. Karras

This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.
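The rectification step described above reprojects parts of a panorama onto planar faces before point matching. The sketch below shows a generic equirectangular-to-planar-face reprojection (a single face of a prism-like projection); it does not reproduce the authors' exact triangular prism face layout or the vanishing-line estimation, and the parameter names are illustrative.

```python
import numpy as np
import cv2

def panorama_to_planar_face(pano, yaw_deg, hfov_deg=90.0, out_size=(800, 800)):
    """Reproject part of an equirectangular panorama onto one planar
    (pinhole-like) face centred at the given yaw angle."""
    H, W = pano.shape[:2]
    w, h = out_size
    f = 0.5 * w / np.tan(np.radians(hfov_deg) / 2.0)        # focal length in pixels
    xs, ys = np.meshgrid(np.arange(w) - w / 2.0,
                         np.arange(h) - h / 2.0)            # face pixel grid
    lon = np.arctan2(xs, f) + np.radians(yaw_deg)           # ray longitude
    lat = np.arctan2(-ys, np.sqrt(xs ** 2 + f ** 2))        # ray latitude
    map_x = (((lon + np.pi) / (2.0 * np.pi)) * W).astype(np.float32) % W
    map_y = (((np.pi / 2.0 - lat) / np.pi) * H).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Applying an affine-invariant operator such as ASIFT to faces rectified in this way, rather than to the raw equi-rectangular images, is what makes the wide-baseline matching tractable.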


2021 ◽  
Vol 5 (2) ◽  
pp. 69-78
Author(s):  
Nur Muhammad Amin Hashim Amir ◽  
Aznan Omar ◽  
Hilal Mazlan ◽  
...  

Covid-19 has brought the world as we knew it to a standstill, affecting many fields, including art and tourism. In Malaysia, adapting to the global pandemic is no longer optional; new opportunities have emerged to deal with it. This research therefore focuses on implementing virtual tours to cope with the new norms and evaluates their implications specifically for showcasing art exhibitions. Following the concept of Google Street View, the researcher captures virtual spaces and assembles them with the Pano2VR software so that audiences can interact with the exhibition, and discusses the usefulness of the result in terms of ease of access. Using this software, the researcher reconstructed the actual gallery as a series of interconnected images, hosted on a web server and accessible from various platforms. 360-degree panoramic images were deliberately used to preserve the authenticity and actuality of the exhibition surroundings, since most audiences find them more practical than full 3D digital replication. The advantages and disadvantages of this particular application of Virtual Tours (VTs) are then assessed from data on the devices used, the access locations, and total participation, to determine whether the concept can serve as an alternative tool for showcasing art exhibitions that helps avoid spreading the pandemic while still keeping art activity at a sensible pace.


Author(s):  
Erna Verawati ◽  
Surya Darma Nasution ◽  
Imam Saputra

Sharpening a street-view image requires controlling the degree of brightness when enhancing the original image into an improved one. One way to sharpen a street-view image is through image processing, a multimedia component that plays an important role as a form of visual information. Many image-processing methods are used to sharpen street-view images; among them are the Gram-Schmidt spectral sharpening method and high-pass filtering. Gram-Schmidt spectral sharpening is also known as intensity modulation based on a refinement filter, while high-pass filtering is a filtering process that keeps image regions with high intensity gradients and reduces or discards low intensity differences. The research results show that the Gram-Schmidt spectral sharpening method and high-pass filtering can be implemented properly, so that sharpening of the street-view image is achieved by transforming the original image with these two methods. Keywords: image processing, Gram-Schmidt spectral sharpening, high-pass filtering.
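As a rough illustration of the high-pass filtering idea mentioned in this abstract, the sketch below sharpens a grayscale image by adding back its high-frequency residual. It does not implement the full Gram-Schmidt spectral sharpening workflow (which fuses a high-resolution panchromatic band with multispectral bands), and the kernel size and gain are assumed parameters.

```python
import numpy as np
import cv2

def high_pass_sharpen(image_gray, blur_ksize=9, amount=1.0):
    """Sharpen a grayscale image by adding back its high-frequency residual
    (original minus a Gaussian-blurred low-pass copy)."""
    img = image_gray.astype(np.float32)
    low = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)  # low-pass component
    high = img - low                                          # high-pass residual
    return np.clip(img + amount * high, 0, 255).astype(np.uint8)
```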


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Ervin Yohannes ◽  
Chih-Yang Lin ◽  
Timothy K. Shih ◽  
Chen-Ya Hong ◽  
Avirmed Enkhbat ◽  
...  

2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Kang Liu ◽  
Ling Yin ◽  
Meng Zhang ◽  
Min Kang ◽  
Ai-Ping Deng ◽  
...  

Abstract
Background: Dengue fever (DF) is a mosquito-borne infectious disease that has threatened tropical and subtropical regions in recent decades. An early and targeted warning of a dengue epidemic is important for vector control. Current studies have primarily relied on weather conditions as the main factor for dengue forecasting, neglecting that environmental suitability for mosquito breeding is also an important factor, especially in fine-grained intra-urban settings. Considering that street-view images are promising for depicting physical environments, this study proposes a framework for facilitating fine-grained intra-urban dengue forecasting by integrating the urban environments measured from street-view images.
Methods: The dengue epidemic that occurred in 167 townships of Guangzhou City, China, between 2015 and 2019 was taken as a study case. First, feature vectors of street-view images acquired inside each township were extracted by a pre-trained convolutional neural network and then aggregated into an environmental feature vector for the township, so that townships with similar physical settings exhibit similar environmental features. Second, the environmental feature vector was combined with commonly used features (e.g., temperature, rainfall, and past case count) as input to machine-learning models for weekly dengue forecasting.
Results: The performance of the machine-learning forecasting models (i.e., MLP and SVM) with and without the environmental features was compared. The comparison indicates that models integrating environmental features can identify high-risk urban units across the city more precisely than those using common features alone. In addition, the top 30% of high-risk townships predicted by the proposed method capture approximately 50–60% of dengue cases across the city.
Conclusions: Incorporating local environments measured from street-view images is effective in facilitating fine-grained intra-urban dengue forecasting, which is beneficial for conducting spatially precise dengue prevention and control.
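A minimal sketch of the pipeline described in the Methods (CNN features averaged per township, concatenated with weather and past-case features, fed to an MLP) is given below with synthetic stand-in data; the dimensions, regressor configuration, and variable names are assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def township_environment_vector(image_features):
    """Average per-image CNN feature vectors (e.g. from a pre-trained network)
    into one environmental feature vector for a township."""
    return np.asarray(image_features).mean(axis=0)

# Synthetic stand-in data: 20 townships x 30 weeks, 32-d image features.
n_townships, n_weeks, feat_dim = 20, 30, 32
env = np.stack([township_environment_vector(rng.normal(size=(50, feat_dim)))
                for _ in range(n_townships)])             # one vector per township
weather = rng.normal(size=(n_townships * n_weeks, 3))      # temperature, rainfall, past case count
idx = np.repeat(np.arange(n_townships), n_weeks)

X = np.hstack([env[idx], weather])                         # environment + common features
y = rng.poisson(2.0, size=n_townships * n_weeks)           # synthetic weekly case counts

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X, y)
ranked = np.argsort(-model.predict(X))                     # township-weeks ranked by predicted risk
```

Ranking townships by the predicted weekly counts is what allows the top share of high-risk units (e.g. the top 30%) to be flagged for targeted vector control.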


2021 ◽  
Vol 22 ◽  
pp. 101226
Author(s):  
Claire L. Cleland ◽  
Sara Ferguson ◽  
Frank Kee ◽  
Paul Kelly ◽  
Andrew James Williams ◽  
...  
