AUTOMATIC ADJUSTMENT OF WIDE-BASE GOOGLE STREET VIEW PANORAMAS

Author(s):  
E. Boussias-Alexakis ◽  
V. Tsironis ◽  
E. Petsa ◽  
G. Karras

This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.
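The rectification step described in the abstract — using the estimated vanishing line of the façade plane to remove projectivity so that affine point operators such as ASIFT become applicable — can be sketched as follows. This is an illustrative sketch of standard projective (affine) rectification, not the authors' code; the function names are my own.

```python
import numpy as np

def projective_rectification(vanishing_line):
    """Build the homography that maps the estimated vanishing line
    l = (l1, l2, l3) of the facade plane to the line at infinity,
    removing projective distortion (affine rectification)."""
    l = np.asarray(vanishing_line, dtype=float)
    l = l / np.linalg.norm(l)          # scale is arbitrary; normalize
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [l[0], l[1], l[2]]])
    return H

def apply_h(H, pt):
    """Apply a homography to a 2D point given in inhomogeneous coordinates."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[:2] / x[2]
```

After this warp, points on the vanishing line are sent to infinity, so parallel façade lines become parallel in the image, which is the "simulated affinity" the abstract relies on.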


Author(s):  
N. Bruno ◽  
R. Roncella

Abstract. Google Street View is a technology implemented in several Google services/applications (e.g. Google Maps, Google Earth) which provides the user, interested in viewing a particular location on the map, with panoramic images (represented in equirectangular projection) at street level. Generally, consecutive panoramas are acquired at an average distance of 5–10 m and can be compared to a traditional photogrammetric strip and, thus, processed to reconstruct portions of a city at nearly zero cost. Most of the photogrammetric software packages available today implement spherical camera models and can directly process images in equirectangular projection. Although many authors have presented relevant works involving Google Street View imagery, mainly for 3D city model reconstruction, very few references can be found about the actual accuracy that can be obtained with such data. The goal of the present work is to present preliminary tests (at the time of writing, just three case studies have been analysed) of the accuracy and reliability of the 3D models obtained from Google Street View panoramas.
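The equirectangular projection mentioned in the abstract maps pixel columns linearly to longitude and rows to latitude. A minimal sketch of the pixel-to-ray mapping a spherical camera model uses (the function name and axis convention are my assumptions, not from the paper):

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing ray.
    Longitude spans [-pi, pi] across the width, latitude [pi/2, -pi/2]
    down the height, as in Street View-style panoramas."""
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    x = np.cos(lat) * np.sin(lon)   # right
    y = np.sin(lat)                 # up
    z = np.cos(lat) * np.cos(lon)   # forward
    return np.array([x, y, z])
```

Because every pixel corresponds directly to a viewing direction, photogrammetric packages can triangulate from such images without a conventional pinhole calibration.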


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
L. Fernández ◽  
L. Payá ◽  
O. Reinoso ◽  
L. M. Jiménez ◽  
M. Ballesta

A comparative analysis of several methods to describe outdoor panoramic images is presented. The main objective is to study the performance of these methods in the localization process of a mobile robot (vehicle) in an outdoor environment, when a visual map containing images acquired from different positions in the environment is available. With this aim, we make use of the database provided by Google Street View, which contains spherical panoramic images captured in urban environments together with their GPS positions. The main benefit of using these images is that they permit testing any novel localization algorithm in countless outdoor environments anywhere in the world and under realistic capture conditions. The main contribution of this work is a comparative evaluation of different image description methods for solving the localization problem in a dense outdoor map using only visual information. We have tested our algorithms using several sets of panoramic images captured in different outdoor environments. The results can be useful for selecting an appropriate description method for visual navigation tasks in outdoor environments using the Google Street View database, taking into consideration both localization accuracy and the computational efficiency of the algorithm.
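The localization scheme the abstract evaluates — matching a query image's descriptor against a visual map whose images carry GPS positions — reduces to a nearest-neighbour search. A generic sketch (the paper compares several concrete description methods; the descriptor itself and these names are placeholders):

```python
import numpy as np

def localize(query_desc, map_descs, map_gps):
    """Nearest-neighbour localization: compare the query image
    descriptor against the descriptors of the visual map and return
    the GPS position of the best-matching map image."""
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    best = int(np.argmin(dists))
    return map_gps[best], dists[best]
```

The trade-off the abstract highlights lives entirely in the descriptor choice: a richer descriptor lowers the localization error of this search, while a compact one keeps the distance computation cheap.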


Author(s):  
Chen Tianen ◽  
Kohei Yamamoto ◽  
Kikuo Tachibana

This paper explores a method of generating a panoramic street strip image map, called "Pano-Street" here, which covers both sides, the ground surface and the overhead part of a street, from a sequence of 360° panoramic images captured with Point Grey's Ladybug3 mounted on top of a Mitsubishi MMS-X 220 at 2 m intervals along streets in an urban environment. On-board GPS/IMU, a speedometer and post-sequence image analysis techniques such as bundle adjustment provided high-accuracy position and attitude data for these panoramic images and the laser data. The principle for generating the panoramic street strip image map is similar to that of traditional aerial ortho-images. A special 3D DEM (called a 3D-Mesh here) was first generated from the laser data, from the depth maps obtained by dense image matching of the sequence of 360° panoramic images, or from existing GIS spatial data along the MMS trajectory; then all 360° panoramic images were projected and stitched onto the 3D-Mesh using the position and attitude data. This makes it possible to create large-scale panoramic street strip image maps for most types of cities, and provides another way to view the 360° scene along a street, avoiding the switching between image bubbles used by Google Street View and Bing Maps Streetside.
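The core geometric operation in the stitching step above is projecting a 3D-Mesh point back into an equirectangular panorama using its adjusted pose. A hedged sketch (function name, axis convention and the world-to-camera rotation parameterisation are my assumptions, not the authors' implementation):

```python
import numpy as np

def project_to_pano(point, cam_pos, R, width, height):
    """Project a 3D mesh point into an equirectangular panorama whose
    pose (position cam_pos, world-to-camera rotation R) comes from the
    adjusted GPS/IMU trajectory; returns pixel coordinates (u, v)."""
    p = R @ (np.asarray(point, float) - np.asarray(cam_pos, float))
    p = p / np.linalg.norm(p)          # unit viewing ray in camera frame
    lon = np.arctan2(p[0], p[2])       # azimuth
    lat = np.arcsin(p[1])              # elevation
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return u, v
```

Sampling each mesh vertex's colour from the nearest panorama via this projection, and blending across panoramas, yields the continuous strip map without the bubble-switching the abstract describes.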


2021 ◽  
Vol 5 (2) ◽  
pp. 69-78
Author(s):  
Nur Muhammad Amin Hashim Amir ◽  
Aznan Omar ◽  
Hilal Mazlan ◽  
...  

Covid-19 has brought the world as we knew it to a standstill. It has affected various disciplinary fields, including art and tourism. In Malaysia, adapting to the global pandemic through newly emerged opportunities is no longer optional but a necessity. Therefore, this research focuses on implementing virtual tours to cope with the new norms, and evaluates their implications specifically for showcasing art exhibitions. The researcher uses the concept of Google Street View to capture virtual spaces, combined with the Pano2VR software as a construction tool, for audiences to interact with, and discusses its usefulness based on ease of accessibility. Using this software, the researcher was able to reconstruct the actual gallery as a series of interconnected images hosted on a web server and accessible across various platforms. The researcher purposely uses 360° panoramic images to maintain the genuineness and actuality of the exhibition surroundings, since most audiences find them more practical than a 3D digital replication. The advantages and disadvantages of this particular application of Virtual Tours (VTs) are then assessed through data collected on the devices used, the access locations and the total participation, to see whether this concept can serve as a new alternative tool for showcasing art exhibitions, helping to limit the spread of the pandemic while still keeping art activity at a sensible pace.



IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Ervin Yohannes ◽  
Chih-Yang Lin ◽  
Timothy K. Shih ◽  
Chen-Ya Hong ◽  
Avirmed Enkhbat ◽  
...  

2021 ◽  
Vol 22 ◽  
pp. 101226
Author(s):  
Claire L. Cleland ◽  
Sara Ferguson ◽  
Frank Kee ◽  
Paul Kelly ◽  
Andrew James Williams ◽  
...  
