Automatic Image Stitching: Recently Published Documents

Total documents: 15 (last five years: 4)
H-index: 3 (last five years: 0)

2021
Author(s): Yanlin Huang, Meilian Zheng, Ziwei Song, Songzhu Mei, Zebin Wang, ...

Abstract: In large-scale manufacturing, the continuity of production is increasingly important, and timely detection of equipment faults preserves that continuity and greatly reduces losses. The purpose of this study is to use multiple devices together with an image stitching algorithm to obtain a complete image of a large-scale production line. An improved stitching method based on image fusion is proposed that addresses the seam artifacts, unnatural appearance, and post-transformation distortion of existing stitching techniques. In the stitching algorithm, fusion is realized by combining optimal-seam selection with gradual fade-in/fade-out ("gradated in and out") blending: dynamic programming is used to find the optimal seam, the fusion range is limited to a region around that seam, and the fade-in/fade-out blend is then computed within this limited range to complete the stitch. Experiments comparing several image fusion quality indicators against existing fusion algorithms show that the proposed method removes the unnatural appearance of the stitched result and improves fusion quality. The panorama produced by the proposed stitching algorithm can therefore be processed efficiently by the industrial inspection module.
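The abstract gives no implementation details, so the following is only a minimal NumPy sketch of the two ingredients it names: a dynamic-programming search for a low-cost seam through the overlap region, and a fade-in/fade-out blend restricted to a band around that seam. The function names, the per-pixel difference map used as the seam cost, and the fixed band width are illustrative assumptions, not the authors' method.

```python
import numpy as np

def optimal_seam(diff):
    """Minimum-cost top-to-bottom seam through `diff`, a per-pixel
    difference map of the overlap region, found by dynamic programming."""
    h, w = diff.shape
    cost = diff.astype(np.float64)
    for y in range(1, h):
        up = cost[y - 1]
        up_left = np.concatenate(([np.inf], up[:-1]))
        up_right = np.concatenate((up[1:], [np.inf]))
        cost[y] += np.minimum(np.minimum(up_left, up), up_right)
    # Backtrack from the cheapest pixel in the bottom row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def blend_around_seam(left, right, seam, band=20):
    """Fade-in/fade-out blending limited to a strip of +-`band` pixels
    around the seam; outside the strip each image keeps its own pixels.
    `left` and `right` are aligned H x W x 3 views of the overlap region."""
    h, w, _ = left.shape
    xs = np.arange(w)
    out = np.empty_like(left, dtype=np.float64)
    for y in range(h):
        # Weight ramps from 1 (left image) to 0 (right image) across the strip.
        alpha = np.clip((seam[y] + band - xs) / (2.0 * band), 0.0, 1.0)
        out[y] = alpha[:, None] * left[y] + (1.0 - alpha[:, None]) * right[y]
    return out.astype(left.dtype)
```

Restricting the blend to a narrow band around the seam is what keeps most of each source image untouched while hiding the seam itself, which matches the "limited fusion range" idea in the abstract.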


Sensors, 2021, Vol. 21 (15), pp. 5054
Author(s): Maria Júlia R. Aguiar, Tiago da Rocha Alves, Leonardo M. Honório, Ivo C. S. Junior, Vinícius F. Vidal

The image stitching process is based on the alignment and composition of multiple images that represent parts of a 3D scene. The automatic construction of panoramas from multiple digital images is a technique of great importance, with applications in areas such as remote sensing and inspection and maintenance in many work environments. In traditional automatic image stitching, image alignment is generally performed by the numerical Levenberg–Marquardt method. Although these traditional approaches present only minor flaws in the final reconstruction, the result is not appropriate for industrial-grade applications. To improve the final stitching quality, this work uses an RGBD robot capable of precise image positioning. To optimize the final adjustment, this paper proposes the use of bio-inspired algorithms such as the Bat Algorithm, Grey Wolf Optimizer, Arithmetic Optimization Algorithm, Salp Swarm Algorithm, and Particle Swarm Optimization, in order to verify the efficiency and competitiveness of metaheuristics against the classical Levenberg–Marquardt method. The obtained results show that the metaheuristics found better solutions than the traditional approach.
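As a hedged illustration of how a metaheuristic can stand in for Levenberg–Marquardt in the alignment refinement step, the sketch below applies a plain Particle Swarm Optimization to the transfer error of a 2D projective transform. The residual definition, swarm settings, and parameter bounds are assumptions for demonstration only; the paper's robot setup, objective, and tuning are not reproduced here.

```python
import numpy as np

def alignment_error(params, src_pts, dst_pts):
    """Sum of squared transfer errors of a 3x3 projective transform whose
    eight free parameters are in `params` (the ninth entry is fixed to 1)."""
    H = np.append(params, 1.0).reshape(3, 3)
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    proj = src_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.sum((proj - dst_pts) ** 2))

def pso(objective, dim, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization over the box [lo, hi]^dim."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia plus pulls toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# Hypothetical usage with matched keypoints src_pts, dst_pts (N x 2 arrays):
# best, err = pso(lambda p: alignment_error(p, src_pts, dst_pts), dim=8, lo=-2.0, hi=2.0)
# A classical baseline would instead minimize the per-point residuals with
# scipy.optimize.least_squares(..., method="lm").
```

The other algorithms listed in the abstract differ only in their update rules; the objective (alignment error over the matched points) stays the same, which is what makes the comparison against Levenberg–Marquardt direct.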


Author(s): A. Moussa, N. El-Sheimy

The last few years have witnessed an increasing volume of aerial image data because of extensive improvements in Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have enabled a wide variety of applications. A fast assessment of the coverage and overlap achieved by the images acquired during a UAV flight mission helps save the time and cost of subsequent processing steps, and fast automatic stitching of the acquired images allows this coverage and overlap to be assessed visually during the mission itself. This paper proposes an automatic image stitching approach that creates a single overview stitched image from the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images typically involved: a short flight mission with an image acquisition rate of one image per second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the stitching procedure by exploiting the initial knowledge of the image positions provided by the navigation sensors. The approach also avoids solving for the transformation parameters of all the photos simultaneously, which would otherwise incur a long computation time. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation restricts matching to neighboring images and therefore reduces the time spent in the costly feature matching step, as sketched in the example below. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process, and the pre-estimated transformation parameters of the images are then employed successively, in a growing fashion, to create the stitched image and the coverage image. The proposed approach is implemented and tested using images acquired during a UAV flight mission, and the achieved results are presented and discussed.
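The neighborhood-limited matching described above can be illustrated with a short sketch. Two assumptions to note: SciPy's Delaunay triangulation is unconstrained and built in one pass, unlike the incremental constrained triangulation in the paper, and `match_sift`/`descriptors` are hypothetical placeholders for whatever SIFT matcher and descriptor storage are actually used.

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbor_pairs(image_positions):
    """Triangulate the navigation-derived (x, y) image positions and return
    the set of triangulation edges; only these image pairs are passed to the
    costly SIFT feature-matching step."""
    tri = Delaunay(np.asarray(image_positions, dtype=float))
    pairs = set()
    for a, b, c in tri.simplices:
        for i, j in ((a, b), (b, c), (a, c)):
            pairs.add((min(i, j), max(i, j)))
    return pairs

# Hypothetical usage:
# positions = [(e, n) for e, n in navigation_log]           # per-image easting/northing
# for i, j in neighbor_pairs(positions):
#     matches = match_sift(descriptors[i], descriptors[j])  # placeholder matcher
```

Because a Delaunay triangulation over n points has only O(n) edges, the number of matching calls grows linearly with the number of images instead of quadratically, which is where the claimed processing-time reduction comes from.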


2015, Vol. 24 (3), pp. 033007
Author(s): Jaehyun An, Beom Su Kim, Hyung Il Koo, Nam Ik Cho
