Mobile Robot Navigation Utilizing Web-Based Aerial Images Without a Prior Teaching Run

2017 ◽  
Vol 29 (4) ◽  
pp. 697-705 ◽  
Author(s):  
Satoshi Muramatsu ◽  
Tetsuo Tomizawa ◽  
Shunsuke Kudoh ◽  
Takashi Suehiro ◽  
...

In order to realize tasks such as goods conveyance by robot, localization of the robot's position is a fundamental technology component. Map matching is one localization technique. In map matching, the map data for localization is usually created by operating the robot and measuring the environment in advance (a teaching run). This operation requires a great deal of time and work. In recent years, thanks to improved Internet services, aerial image data can easily be obtained from Google Maps and similar sources. We therefore utilize aerial images as map data for mobile robot localization and navigation without a teaching run. In this paper, we propose a robot localization and navigation technique using aerial images, and we verify the proposed technique through localization and autonomous running experiments.
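To illustrate the core map-matching idea, the following is a minimal sketch that localizes a robot's bird's-eye view against an aerial map tile using normalized cross-correlation; the function names, the use of OpenCV template matching, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: match a robot's local bird's-eye view against an
# aerial map tile with normalized cross-correlation (OpenCV). The aerial
# tile stands in for imagery downloaded from a service such as Google Maps.
import cv2
import numpy as np

def localize_on_aerial_map(aerial_map_gray, local_view_gray):
    """Return the (x, y) pixel offset of the best match and its score."""
    result = cv2.matchTemplate(aerial_map_gray, local_view_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val

if __name__ == "__main__":
    # Synthetic stand-ins: a random "aerial image" and a crop of it as the
    # robot's current observation.
    rng = np.random.default_rng(0)
    aerial = rng.integers(0, 255, (600, 600), dtype=np.uint8)
    view = aerial[200:260, 320:380].copy()
    (x, y), score = localize_on_aerial_map(aerial, view)
    print(f"estimated offset: ({x}, {y}), score {score:.2f}")  # ~ (320, 200)
```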

2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown in an open field under difficult circumstances: complex lighting conditions and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, together with the aerial image data set and a hand-made ground-truth segmentation with pixel precision, to facilitate the comparison of different algorithms.
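As an illustration of the pixel-wise encoder-decoder idea, here is a minimal PyTorch sketch; the layer sizes and the crop/non-crop class encoding are assumptions and do not reproduce the published architecture.

```python
# A minimal encoder-decoder sketch for per-pixel crop/non-crop
# classification from raw RGB input. Layer widths are illustrative.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, num_classes, 1),           # per-pixel logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
rgb = torch.randn(1, 3, 256, 256)        # one raw colour image
logits = model(rgb)                       # (1, 2, 256, 256)
crop_mask = logits.argmax(dim=1)          # 1 = crop, 0 = non-crop (assumed)
```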


2021 ◽  
Vol 13 (14) ◽  
pp. 2656
Author(s):  
Furong Shi ◽  
Tong Zhang

Deep-learning technologies, especially convolutional neural networks (CNNs), have achieved great success in building extraction from aerial images. However, shape details are often lost during the down-sampling process, which results in discontinuous segmentation or inaccurate segmentation boundaries. To compensate for the loss of shape information, two shape-related auxiliary tasks (boundary prediction and distance estimation) are jointly learned with the building segmentation task in our proposed network. Meanwhile, two consistency-constraint losses were designed on top of the multi-task network to exploit the duality between the mask prediction and the two shape-related predictions. Specifically, an atrous spatial pyramid pooling (ASPP) module was appended to the top of the encoder of a U-shaped network to obtain multi-scale features. Based on these multi-scale features, one regression loss and two classification losses were used for predicting the distance-transform map, the segmentation mask, and the boundary. Two inter-task consistency-loss functions were constructed to ensure consistency between distance maps and masks, and between masks and boundary maps. Experimental results on three public aerial image data sets showed that our method achieves superior performance over recent state-of-the-art models.
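A hedged sketch of the described loss structure follows: one regression loss, two classification losses, and an inter-task consistency term. The Sobel-based edge extraction and the loss weighting are assumptions standing in for the paper's exact formulation.

```python
# Sketch of a multi-task loss: distance-map regression, mask and boundary
# classification, plus a consistency term tying the predicted mask's edges
# to the predicted boundary map. Ground-truth tensors are floats in [0, 1].
import torch
import torch.nn.functional as F

def multitask_loss(mask_logits, boundary_logits, dist_pred,
                   mask_gt, boundary_gt, dist_gt, w_consistency=0.1):
    l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_gt)
    l_boundary = F.binary_cross_entropy_with_logits(boundary_logits, boundary_gt)
    l_dist = F.mse_loss(dist_pred, dist_gt)

    # Consistency: edges of the predicted mask should agree with the
    # predicted boundary (a Sobel gradient stands in for "edges of").
    mask_prob = torch.sigmoid(mask_logits)                  # (N, 1, H, W)
    sobel = torch.tensor([[[[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]]])
    gx = F.conv2d(mask_prob, sobel, padding=1)
    gy = F.conv2d(mask_prob, sobel.transpose(2, 3), padding=1)
    mask_edges = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    l_cons = F.l1_loss(mask_edges, torch.sigmoid(boundary_logits))

    return l_mask + l_boundary + l_dist + w_consistency * l_cons
```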


2019 ◽  
Vol 8 (1) ◽  
pp. 47 ◽  
Author(s):  
Franz Kurz ◽  
Seyed Azimi ◽  
Chun-Yu Sheu ◽  
Pablo d’Angelo

The 3D information of road infrastructure is growing in importance with the development of autonomous driving. In this context, the exact 2D position of road markings as well as height information play an important role in, e.g., lane-accurate self-localization of autonomous vehicles. In this paper, the overall task is divided into automatic segmentation followed by refined 3D reconstruction. For the segmentation task, we applied a wavelet-enhanced fully convolutional network to multiview high-resolution aerial imagery. Based on the resulting 2D segments in the original images, we propose a successive workflow for the 3D reconstruction of road markings based on least-squares line fitting in multiview imagery. The 3D reconstruction exploits the line character of road markings, optimizing the 3D line location by minimizing the distance from its back-projection to the detected 2D line in all covering images. Results showed an improved IoU for automatic road marking segmentation when the multiview character of the aerial images is exploited, and a more accurate 3D reconstruction of the road surface compared to the semiglobal matching (SGM) algorithm. Furthermore, the approach avoids the matching problem in non-textured image parts and is not limited to lines of finite length. The approach is presented and validated on several aerial image data sets covering different scenarios such as motorways and urban regions.
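The back-projection objective can be sketched as follows: a 3D line segment is parameterized by its two endpoints and refined by least squares so that the projected endpoints lie on the detected 2D line in every covering image. The camera matrices, the 2D line parameterization, and the SciPy solver are assumptions for illustration, not the authors' exact solver.

```python
# Hedged sketch: refine a 3D road-marking line by minimizing the distance
# between its projected endpoints and the detected 2D line in each image.
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project a 3D point X (3,) with a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def point_to_line_dist(pt, line):
    """Distance of pt to a 2D line (a, b, c) with ax + by + c = 0."""
    a, b, c = line
    return (a * pt[0] + b * pt[1] + c) / np.hypot(a, b)

def residuals(params, cameras, lines2d):
    X1, X2 = params[:3], params[3:]
    res = []
    for P, line in zip(cameras, lines2d):
        res.append(point_to_line_dist(project(P, X1), line))
        res.append(point_to_line_dist(project(P, X2), line))
    return res

def refine_3d_marking(X1_init, X2_init, cameras, lines2d):
    """cameras: list of 3x4 matrices; lines2d: one (a, b, c) per image."""
    x0 = np.hstack([X1_init, X2_init])
    sol = least_squares(residuals, x0, args=(cameras, lines2d))
    return sol.x[:3], sol.x[3:]
```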


Author(s):  
D. Hein ◽  
R. Berger

Abstract. Many remote sensing applications demand a fast and efficient way of generating orthophoto maps from raw aerial images. One prerequisite is direct georeferencing, which allows aerial images to be geolocated to their geographic position on the earth's surface. But this is only half the story. When dealing with a large quantity of highly overlapping images, a major challenge is to select the most suitable image parts in order to generate seamless aerial maps of the captured area. This paper proposes a method that quickly determines such an optimal (rectangular) section for each single aerial image, which in turn can be used for generating seamless aerial maps. Its key approach is to clip aerial images depending on their geometric intersections with a terrain elevation model of the captured area, which is why we call it terrain-aware image clipping (TAC). The method has a modest computational footprint and is therefore applicable even on rather limited embedded vision systems. It can be applied both to real-time aerial mapping applications using data links and to rapid map generation right after landing without any postprocessing step. For real-time applications, the method also minimizes the transmission of redundant image data. The proposed method has already been demonstrated in several search-and-rescue scenarios and real-time mapping applications using a broadband data link and different kinds of camera and carrier systems. Moreover, a patent for this technology is pending.
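A heavily simplified sketch of the terrain-aware clipping idea is given below: rays through the image corners are intersected with the terrain (flattened here to the mean elevation of a DEM patch) to obtain a ground footprint, from which a rectangular section can be derived. The actual TAC method and its embedded-systems optimizations are not reproduced.

```python
# Simplified footprint computation under a flat-terrain assumption; the
# real method intersects with the full elevation model.
import numpy as np

def ray_ground_intersection(cam_pos, ray_dir, ground_z):
    """Intersect a camera ray with the horizontal plane z = ground_z."""
    t = (ground_z - cam_pos[2]) / ray_dir[2]
    return cam_pos + t * ray_dir

def image_footprint(cam_pos, corner_rays, dem_patch):
    ground_z = float(np.mean(dem_patch))   # flat-terrain simplification
    return np.array([ray_ground_intersection(cam_pos, d, ground_z)[:2]
                     for d in corner_rays])

# Example: near-nadir camera 100 m above a gently varying DEM patch.
cam = np.array([0.0, 0.0, 100.0])
rays = [np.array([x, y, -1.0]) for x in (-0.3, 0.3) for y in (-0.3, 0.3)]
dem = 5.0 + np.random.default_rng(1).normal(0.0, 0.5, (64, 64))
footprint = image_footprint(cam, rays, dem)
# Axis-aligned rectangle enclosing the footprint -- a crude stand-in for
# the optimal rectangular section selected by TAC.
xmin, ymin = footprint.min(axis=0)
xmax, ymax = footprint.max(axis=0)
```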


Author(s):  
A. Moussa ◽  
N. El-Sheimy

The last few years have witnessed an increasing volume of aerial image data owing to the extensive improvement of Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have led to a wide variety of applications. A fast assessment of the coverage and overlap achieved by the images of a UAV flight mission is of great help in saving the time and cost of subsequent steps, and fast automatic stitching of the acquired images supports such a visual assessment during the flight mission. This paper proposes an automatic image stitching approach that creates a single overview stitched image from the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images typically involved: a short flight mission acquiring one image per second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge of the image positions provided by the navigation sensors. The approach also avoids solving for the transformation parameters of all the photos together, to save the long computation time expected if all the parameters were considered simultaneously. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation makes it possible to match only neighboring images and therefore reduces the time-consuming feature matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process. The pre-estimated transformation parameters of the images are then employed successively, in a growing fashion, to create the stitched image and the coverage image. The proposed approach is implemented and tested on images acquired through a UAV flight mission, and the achieved results are presented and discussed.
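The neighborhood-restricted matching can be sketched as follows; note that SciPy provides an unconstrained Delaunay triangulation, so the incremental constrained variant described above is only approximated, and the ratio-test threshold is an assumption.

```python
# Sketch: triangulate the navigation-derived image positions and match
# SIFT features only between images that share a triangulation edge.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def neighbor_pairs(image_positions):
    """Edges of the Delaunay triangulation over 2D image positions."""
    tri = Delaunay(np.asarray(image_positions))
    pairs = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            pairs.add((a, b))
    return pairs

def match_neighbors(images, positions):
    """images: list of grayscale uint8 arrays; positions: list of (x, y)."""
    sift = cv2.SIFT_create()
    feats = [sift.detectAndCompute(img, None) for img in images]
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = {}
    for a, b in neighbor_pairs(positions):
        if feats[a][1] is None or feats[b][1] is None:
            continue
        raw = matcher.knnMatch(feats[a][1], feats[b][1], k=2)
        good = []
        for pair in raw:
            # Lowe's ratio test keeps distinctive correspondences only.
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])
        matches[(a, b)] = good
    return matches
```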


Author(s):  
A. P. Dal Poz ◽  
V. J. M. Fernandes

In this paper a method for the automatic extraction of building roof boundaries is proposed, which combines LiDAR data and high-resolution aerial images. The proposed method is based on three steps. In the first step, aboveground objects are extracted from the LiDAR data. Initially, a filtering algorithm is used to separate the original LiDAR data into ground and non-ground points. Then, a region-growing procedure and the convex hull algorithm are sequentially used to extract polylines that represent aboveground objects from the non-ground point cloud. The second step consists in extracting the corresponding LiDAR-derived aboveground objects from a high-resolution aerial image. In order to avoid searching for the objects of interest over the whole image, the LiDAR-derived polylines are photogrammetrically projected onto the image space, and rectangular bounding boxes (sub-images) that enclose the projected polylines are generated. Each sub-image is processed to extract the polyline that represents the aboveground object of interest within it. The last step consists in identifying the polylines that represent building roof boundaries. We use a Markov Random Field (MRF) model to describe building roof characteristics and spatial configurations. Polylines that represent building roof boundaries are found by optimizing the resulting MRF energy function using a Genetic Algorithm. Experimental results are presented and discussed in this paper.
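The first step can be sketched under simplifying assumptions: DBSCAN clustering stands in for the region-growing procedure, and each cluster of non-ground points is outlined with a convex hull to obtain an aboveground-object polyline. The clustering parameters are illustrative.

```python
# Sketch: cluster non-ground LiDAR points in the XY plane and outline each
# cluster with a convex hull, yielding aboveground-object polylines.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN

def aboveground_polylines(non_ground_xyz, eps=1.5, min_samples=10):
    """non_ground_xyz: (N, 3) array of non-ground LiDAR points."""
    xy = non_ground_xyz[:, :2]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
    polylines = []
    for lab in set(labels) - {-1}:          # -1 marks DBSCAN noise
        cluster = xy[labels == lab]
        if len(cluster) >= 3:               # a hull needs 3+ points
            hull = ConvexHull(cluster)
            polylines.append(cluster[hull.vertices])
    return polylines
```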


2021 ◽  
Vol 13 (4) ◽  
pp. 1917
Author(s):  
Alma Elizabeth Thuestad ◽  
Ole Risbøl ◽  
Jan Ingolf Kleppe ◽  
Stine Barlindhaug ◽  
Elin Rose Myrvoll

What can remote sensing contribute to archaeological surveying in subarctic and arctic landscapes? The pros and cons of remote sensing data vary, as do areas of utilization and methodological approaches. We assessed the applicability of remote sensing for the archaeological surveying of northern landscapes, using airborne laser scanning (LiDAR) together with satellite and aerial images to map archaeological features as a basis for (a) assessing the pros and cons of the different approaches and (b) assessing the potential detection rate of remote sensing. Interpretation of the images and of a LiDAR-based bare-earth digital terrain model (DTM) was based on visual analyses aided by processing and visualization techniques. A total of 368 features were identified in the aerial images, 437 in the satellite images and 1186 in the DTM. LiDAR yielded the best results, especially for hunting pits. Image data proved suitable for dwellings and settlement sites. Feature characteristics proved a key factor for detectability, in both LiDAR and image data. This study has shown that LiDAR and remote sensing image data are highly applicable to archaeological surveying in northern landscapes, and that a multi-sensor approach contributes to high detection rates. Our results have improved the inventory of archaeological sites in a non-destructive and minimally invasive manner.
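Hillshading is one common technique for making subtle features visible in a bare-earth DTM; the sketch below computes a simple hillshade from an elevation array. It illustrates the kind of processing and visualization referred to above, not the authors' exact workflow, and the lighting parameters are assumptions.

```python
# Minimal hillshade of a DTM array: illuminate the surface from a given
# azimuth and altitude and shade by the local slope and aspect.
import numpy as np

def hillshade(dtm, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dtm, cellsize)   # axis 0 = rows (y), axis 1 = x
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)          # 0 = shadow, 1 = full light
```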


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lukman E. Mansuri ◽  
D.A. Patel

Purpose: Heritage is the latent part of a sustainable built environment, and the conservation and preservation of heritage is one of the United Nations' (UN) sustainable development goals. Many social and natural factors seriously threaten heritage structures by deteriorating and damaging their original fabric. Regular visual inspection of heritage structures is therefore necessary for their conservation and preservation. Conventional practice relies on manual inspection, which takes considerable time and human resources; an innovative approach is sought that is cheaper, faster, safer and less prone to human error. This study therefore aims to develop an automatic visual inspection system for built heritage.

Design/methodology/approach: An artificial intelligence-based automatic defect detection system is developed using the faster R-CNN (faster region-based convolutional neural network) object detection model. Images of heritage structures in the English and Dutch cemeteries of Surat (India) were captured with a digital camera to prepare the image data set used for training, validation and testing. During validation, the model's optimum detection accuracy is recorded as 91.58% for three types of defects: "spalling," "exposed bricks" and "cracks."

Findings: This study develops an automatic web-based visual inspection system for heritage structures using the faster R-CNN and demonstrates the detection of spalling, exposed bricks and cracks in heritage structures. A comparison of the conventional (manual) and the developed automatic inspection systems reveals that the automatic system requires less time and staff; routine inspection can therefore be faster, cheaper, safer and more accurate than the conventional method.

Practical implications: The study can improve the inspection of built heritage by reducing inspection time and cost, eliminating chances of human error and accident, and providing accurate and consistent information. It thereby attempts to ensure the sustainability of built heritage.

Originality/value: This study presents an artificial intelligence-based methodology for developing an automatic visual inspection system for built heritage. Such an automatic web-based visual inspection system has not been reported in previous studies.
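Setting up a Faster R-CNN detector for three defect classes can be sketched with torchvision as follows; the backbone choice, the score threshold, and the omission of the training loop are assumptions, and the model only becomes a defect detector after fine-tuning on the heritage image data set.

```python
# Sketch: adapt torchvision's Faster R-CNN to three defect classes
# ("spalling", "exposed bricks", "cracks") plus background.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # 3 defect types + background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
# ... fine-tune on the annotated heritage images before real use ...

model.eval()
with torch.no_grad():
    image = torch.rand(3, 480, 640)           # stand-in for a photo
    pred = model([image])[0]                  # dict: boxes, labels, scores
    keep = pred["scores"] > 0.5               # confidence threshold (assumed)
    print(pred["boxes"][keep], pred["labels"][keep])
```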

