ILLUMINATION, SCALE AND ROTATION INVARIANT ALGORITHM FOR VISION-BASED UAV NAVIGATION

Author(s):  
AAKASH DAWADEE ◽  
JAVAAN CHAHL ◽  
D(NANDA) NANDAGOPAL ◽  
ZORICA NEDIC

Navigation has been a major challenge for the successful operation of autonomous aircraft. Although success has been achieved using active methods such as radar, sonar, lidar and the Global Positioning System (GPS), such methods are not always suitable due to their susceptibility to jamming and outages. Vision, as a passive navigation method, is considered an excellent alternative; however, the development of vision-based autonomous systems for outdoor environments has proven difficult. For flying systems, this is compounded by the additional challenges posed by environmental and atmospheric conditions. In this paper, we present a novel passive vision-based algorithm which is invariant to illumination, scale and rotation. We use a three-stage landmark recognition algorithm and an algorithm for waypoint matching. Our algorithms have been tested in both synthetic and real-world outdoor environments, demonstrating good overall performance. We further compare our feature matching method with the speeded-up robust features (SURF) method, with results demonstrating that our method outperforms SURF in both feature matching and computational cost.
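
The abstract does not spell out the three-stage landmark algorithm, but the SURF baseline it is compared against is standard. Below is a minimal sketch of such a SURF matching baseline in Python/OpenCV, assuming an opencv-contrib build with the non-free xfeatures2d module; function names and parameters here are illustrative, not the authors' code.

```python
# Minimal SURF feature-matching baseline of the kind the paper compares against.
# Assumes opencv-contrib-python compiled with the non-free xfeatures2d module.
import cv2

def surf_match(img1, img2, hessian_threshold=400, ratio=0.75):
    """Match SURF keypoints between two grayscale images using Lowe's ratio test."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp1, kp2, good
```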

Author(s):  
Jimmy Ming-Tai Wu ◽  
Qian Teng ◽  
Shahab Tayeb ◽  
Jerry Chun-Wei Lin

Abstract High average-utility itemset mining (HAUIM) was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing satisfied and interesting patterns. In practical applications, a database changes dynamically as insertion and deletion operations are performed on it. Several works have been designed to handle the insertion process, but fewer studies have focused on the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that utilizes the pre-large concept on HAUIM for handling transaction deletion in dynamic databases. The pre-large concept serves as a buffer on HAUIM that reduces the number of database scans when the database is updated, particularly under transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which speeds up the computation. Experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared with the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
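
The measure at the heart of HAUIM is the average utility of an itemset. The sketch below is a hedged illustration of that measure only (not the authors' PRE-HAUI-DEL algorithm or its upper bounds); the data layout and function names are assumptions made for the example.

```python
# Hedged sketch of the average-utility measure underlying HAUIM.
# A transaction is modeled as a dict mapping each item to its utility
# (e.g. quantity x unit profit) in that transaction.
def average_utility(itemset, database):
    """Sum, over transactions containing the itemset, of its utility divided by its size."""
    itemset = frozenset(itemset)
    total = 0.0
    for transaction in database:
        if itemset <= transaction.keys():
            total += sum(transaction[i] for i in itemset) / len(itemset)
    return total

# Toy database: {'a': 4, 'b': 6} means item a contributes utility 4 in this transaction.
db = [{'a': 4, 'b': 6, 'c': 2}, {'a': 5, 'c': 3}, {'b': 7, 'c': 1}]
print(average_utility({'a', 'c'}, db))  # (4+2)/2 + (5+3)/2 = 7.0
```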


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
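
Soft-threshold filtering is built on the standard soft-threshold (shrinkage) operator. The snippet below is a minimal sketch of that operator only; the paper's full TDM-STF/OSTR pipeline with the power factor and FISTA acceleration is considerably more involved, and the usage shown is an assumed illustration.

```python
# Minimal sketch of the soft-threshold operator at the core of soft-threshold filtering (STF).
import numpy as np

def soft_threshold(x, t):
    """Shrink each value toward zero by t: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Illustrative use: shrink the horizontal differences of an image, the kind of
# operation total difference minimization applies to favor piecewise-smooth images.
img = np.random.rand(4, 4)
dx = np.diff(img, axis=1)
print(soft_threshold(dx, 0.1))
```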


Author(s):  
Yang Hu ◽  
Yalin Wang ◽  
Feng Xu ◽  
Bitao Yao ◽  
Wenjun Xu ◽  
...  

Abstract Remanufacturing has received increasing attention for environmental protection and resource conservation reasons. Disassembly is a crucial step in remanufacturing; it is usually done manually, which is inefficient, whereas robotic disassembly can improve efficiency. Aiming at the problem of product connector recognition during the robotic disassembly process, we analyze the principles of template matching and feature matching based on two-dimensional images. To reduce the computational complexity of traditional template matching, a stepwise search strategy combining coarse and fine search is proposed. Based on this, a product connector recognition algorithm based on fast template matching and another based on feature matching are designed. Taking bolts and hexagon nuts as examples, the recognition performance of the two algorithms is compared and analyzed.
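
The coarse-to-fine idea can be sketched with plain OpenCV calls: match on downsampled copies first, then refine in a small full-resolution window around the coarse hit. This is a hedged illustration of the general strategy, not the authors' stepwise search; the scale factor, margin and function names are assumptions.

```python
# Hedged sketch of a coarse-to-fine template-matching search.
import cv2
import numpy as np

def coarse_to_fine_match(image, template, scale=0.25, margin=16):
    """Locate template in image: coarse search on downsampled copies, then a fine
    search in a small full-resolution window around the coarse location."""
    small_img = cv2.resize(image, None, fx=scale, fy=scale)
    small_tpl = cv2.resize(template, None, fx=scale, fy=scale)
    res = cv2.matchTemplate(small_img, small_tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, loc = cv2.minMaxLoc(res)
    cx, cy = int(loc[0] / scale), int(loc[1] / scale)   # coarse hit in full resolution

    h, w = template.shape[:2]
    x0, y0 = max(cx - margin, 0), max(cy - margin, 0)
    window = image[y0:y0 + h + 2 * margin, x0:x0 + w + 2 * margin]
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    return (x0 + loc[0], y0 + loc[1]), score
```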


2018 ◽  
Vol 2018 ◽  
pp. 1-12
Author(s):  
Yun-Hua Wu ◽  
Lin-Lin Ge ◽  
Feng Wang ◽  
Bing Hua ◽  
Zhi-Ming Chen ◽  
...  

In order to satisfy the real-time requirement of spacecraft autonomous navigation using natural landmarks, a novel algorithm called CSA-SURF (chessboard segmentation algorithm and speeded-up robust features) is proposed to improve the speed of the image registration process without loss of repeatability. It is a combination of the chessboard segmentation algorithm (CSA) and SURF. SURF is used to extract features from satellite images because of its scale- and rotation-invariant properties and low computational cost. CSA is based on image segmentation technology and aims to find representative blocks, which are allocated to different tasks to speed up the image registration process. To illustrate the advantages of the proposed algorithm, PCA-SURF, the combination of principal component analysis and SURF, is also analyzed for comparison. Furthermore, the random sample consensus (RANSAC) algorithm is applied to eliminate false matches for further accuracy improvement. Simulation results show that the proposed strategy performs well, especially under scaling and rotation variation. Moreover, compared with the SURF algorithm, CSA-SURF reduces extraction time by 50% and matching time by 90% without losing repeatability performance. The proposed method is thus demonstrated to be an alternative approach to image registration for spacecraft autonomous navigation using natural landmarks.
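
The RANSAC false-match elimination step mentioned above is commonly implemented by fitting a homography and keeping the inliers. The sketch below shows that step only, given a set of keypoint matches (for example from a SURF matcher); the chessboard segmentation (CSA) block selection is not reproduced, and the threshold value is an assumption.

```python
# Hedged sketch of RANSAC-based outlier rejection for keypoint matches.
import cv2
import numpy as np

def ransac_filter(kp1, kp2, matches, reproj_thresh=3.0):
    """Fit a homography with RANSAC and keep only the inlier matches."""
    if len(matches) < 4:
        return [], None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
    return inliers, H
```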


Author(s):  
Franz Pichler ◽  
Gundolf Haase

A finite element code is developed in which all of the computationally expensive steps are performed on a graphics processing unit via the THRUST and PARALUTION libraries. The code focuses on the simulation of transient problems, where the computations repeated at every time-step dominate the computational cost. It is used to solve the partial and ordinary differential equations that arise in thermal-runaway simulations of automotive batteries. The speed-up obtained by utilizing the graphics processing unit for every critical step is compared against the single-core and multi-threading solutions that are also supported by the chosen libraries. In this way, a high total speed-up on the graphics processing unit is achieved without the need to program a single classical Compute Unified Device Architecture (CUDA) kernel.
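
To make the cost structure concrete, here is a schematic transient loop in Python/SciPy (not the paper's C++/THRUST/PARALUTION code): assembly and factorization happen once, while the linear solve repeats every time-step and is therefore the part worth offloading. Matrix sizes, the toy problem and all names are assumptions for illustration.

```python
# Schematic backward-Euler loop for M du/dt = -K u (e.g. a thermal problem).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_euler(K, M, u0, dt, n_steps):
    A = (M / dt + K).tocsc()       # assembled once
    solve = spla.factorized(A)     # factorization reused across time-steps
    u = u0.copy()
    for _ in range(n_steps):
        u = solve(M @ u / dt)      # the per-step cost a GPU back end would absorb
    return u

# Toy 1D heat-conduction example on 5 nodes.
n = 5
K = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsc()
M = sp.identity(n, format="csc")
print(implicit_euler(K, M, np.ones(n), dt=0.1, n_steps=10))
```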


2021 ◽  
Vol 28 (2) ◽  
pp. 163-182
Author(s):  
José L. Simancas-García ◽  
Kemel George-González

Shannon’s sampling theorem is one of the most important results of modern signal theory. It describes the reconstruction of any band-limited signal from its samples. On the other hand, although less well known, there is the discrete sampling theorem, proved by Cooley while he was working on the development of an algorithm to speed up the calculation of the discrete Fourier transform. Cooley showed that a sampled signal can be resampled by selecting a smaller number of samples, which reduces computational cost. It is then possible to reconstruct the original sampled signal using a reverse process. In principle, the two theorems are not related. However, in this paper we show that, in the context of nonstandard analysis (NSA) and the hyperreal number system *R, the two theorems are equivalent. The difference between them becomes a matter of scale. With the scale changes that the hyperreal number system allows, discrete variables and functions become continuous, and Shannon’s sampling theorem emerges from the discrete sampling theorem.
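
For reference, the classical statement alluded to above can be written as follows (this is the standard formulation, not the paper's hyperreal version): a signal band-limited to B Hz is recovered exactly from uniform samples taken with spacing T ≤ 1/(2B).

```latex
% Classical Shannon reconstruction formula (standard form, not the hyperreal formulation).
\[
  x(t) \;=\; \sum_{n=-\infty}^{\infty} x(nT)\,
  \operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
  \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
\]
```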


2010 ◽  
Vol 9 (4) ◽  
pp. 29-34 ◽  
Author(s):  
Achim Weimert ◽  
Xueting Tan ◽  
Xubo Yang

In this paper, we present a novel feature detection approach designed for mobile devices, showing optimized solutions for both detection and description. It is based on FAST (Features from Accelerated Segment Test) and named 3D FAST. Being robust, scale-invariant and easy to compute, it is a candidate for augmented reality (AR) applications running on low-performance platforms. FAST, which relies on simple calculations and machine learning, is a feature detection algorithm known to be efficient but not very robust, and it lacks scale information. Our approach relies on gradient images calculated for different scale levels, on which a modified FAST algorithm operates to obtain the values of the corner response function. We combine the detection with an adapted version of SURF (Speeded-Up Robust Features) descriptors, providing a system with all the means to implement feature matching and object detection. Experimental evaluation on a Symbian OS device using a standard image set, together with a comparison against SURF with its Hessian-matrix-based detector, is included in this paper, showing improvements in speed (compared to SURF) and robustness (compared to FAST).
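
The detection pipeline described above (gradient images at several scales, a FAST-style detector on each level) can be sketched as follows. OpenCV's stock FAST detector stands in for the paper's modified corner-response computation, and the level count, scale factor and threshold are assumptions.

```python
# Hedged sketch of multi-scale FAST detection on gradient images.
import cv2
import numpy as np

def multiscale_fast(gray, n_levels=4, scale=0.5, threshold=20):
    fast = cv2.FastFeatureDetector_create(threshold=threshold)
    keypoints = []
    img = gray.copy()
    for level in range(n_levels):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))   # 8-bit gradient image
        for kp in fast.detect(grad, None):
            # map coordinates back to the original resolution and record the level
            kp.pt = (kp.pt[0] / scale**level, kp.pt[1] / scale**level)
            kp.octave = level
            keypoints.append(kp)
        img = cv2.resize(img, None, fx=scale, fy=scale)
    return keypoints
```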


2020 ◽  
Vol 20 (6) ◽  
pp. 116-125
Author(s):  
Nikolay Shegunov ◽  
Oleg Iliev

Abstract Multilevel Monte Carlo (MLMC) attracts great interest for numerical simulations of stochastic partial differential equations (SPDEs) due to its superiority over the standard Monte Carlo (MC) approach. MLMC properly combines many cheap, fast simulations with a few slow and expensive ones; the variance is reduced and a significant speed-up is achieved. Simulations with MC/MLMC consist of three main components: generating random fields, solving the deterministic problem, and reducing the variance. Each part is subject to a different degree of parallelism. Compared to classical MC, MLMC introduces “levels” on which the sampling is done. These levels have different computational costs; thus, efficiently utilizing the parallel resources becomes a non-trivial problem. The main focus of this paper is the parallelization of the MLMC algorithm.
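
The estimator structure that makes the cheap/expensive combination work is a telescoping sum over levels. The sketch below shows that structure with fixed, hand-picked sample counts; in practice the per-level counts are chosen from variance and cost estimates, and the toy sampler is purely illustrative.

```python
# Minimal sketch of the MLMC estimator: E[Q_L] = sum over levels of E[Q_l - Q_{l-1}].
import numpy as np

def mlmc_estimate(pair_sampler, n_samples):
    """pair_sampler(level) must return a coupled pair (Q_level, Q_{level-1}) computed
    from the same random input (with Q_{-1} taken as 0 on the coarsest level)."""
    estimate = 0.0
    for level, n in enumerate(n_samples):
        diffs = [fine - coarse for fine, coarse in (pair_sampler(level) for _ in range(n))]
        estimate += np.mean(diffs)
    return estimate

# Toy coupled sampler: level l evaluates a random integrand on 2**(l+2) points.
rng = np.random.default_rng(0)
def toy_pair(level):
    a = rng.standard_normal()                       # shared random input -> coupling
    q = lambda m: np.mean(np.sin(a * np.linspace(0, 1, m)))
    return q(2 ** (level + 2)), (q(2 ** (level + 1)) if level > 0 else 0.0)

print(mlmc_estimate(toy_pair, n_samples=[1000, 200, 50]))  # many coarse, few fine samples
```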


2011 ◽  
Vol 11 (04) ◽  
pp. 571-587 ◽  
Author(s):  
WILLIAM ROBSON SCHWARTZ ◽  
HELIO PEDRINI

Fractal image compression is one of the most promising techniques for image compression due to advantages such as resolution independence and fast decompression. It exploits the fact that natural scenes exhibit self-similarity to remove redundancy and obtain high compression rates with less quality degradation than traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, due to the need to search for regions with high similarity in the image. Several approaches have been developed to reduce the computational cost of locating similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding time. The use of robust features provides more discriminative and representative information for regions of the image. When the regions are better represented, the search for similar parts of the image can be reduced to focus only on the most likely matching candidates, which leads to a reduction in computational time. Our experimental results show that the use of robust feature descriptors reduces the encoding time while keeping high compression rates and reconstruction quality.
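
The candidate-restriction idea can be sketched as follows: describe each domain block with a small feature vector, index the descriptors once, and limit each range block's search to its nearest neighbours in feature space rather than scanning every domain block. The simple block statistics below stand in for the robust descriptors used in the paper; all names and parameters are assumptions.

```python
# Hedged sketch of descriptor-guided candidate selection for fractal encoding.
import numpy as np
from scipy.spatial import cKDTree

def block_descriptor(block):
    """Toy descriptor: mean, standard deviation and horizontal/vertical edge energy."""
    gy, gx = np.gradient(block.astype(float))
    return np.array([block.mean(), block.std(), np.abs(gx).mean(), np.abs(gy).mean()])

def build_domain_index(domain_blocks):
    """Index the domain-block descriptors once, before encoding starts."""
    return cKDTree(np.array([block_descriptor(b) for b in domain_blocks]))

def candidate_domains(range_block, tree, k=8):
    """Return indices of the k domain blocks most similar to the range block."""
    _, idx = tree.query(block_descriptor(range_block), k=k)
    return np.atleast_1d(idx)
```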


Geophysics ◽  
2021 ◽  
pp. 1-64
Author(s):  
Claudia Haindl ◽  
Kuangdai Leng ◽  
Tarje Nissen-Meyer

We present an adaptive approach to seismic modeling by which the computational cost of a 3D simulation can be reduced while retaining resolution and accuracy. This Azimuthal Complexity Adaptation (ACA) approach relies upon the inherent smoothness of wavefields around the azimuth of a source-centered cylindrical coordinate system. Azimuthal oversampling is thereby detected and eliminated. The ACA method has recently been introduced as part of AxiSEM3D, an open-source solver for global seismology. We employ a generalization of this solver which can handle local-scale Cartesian models, and which features a combination of an absorbing boundary condition and a sponge boundary with automated parameter tuning. The ACA method is benchmarked against an established 3D method using a model featuring bathymetry and a salt body. We obtain a close fit where the models are implemented equally in both solvers and an expectedly poor fit otherwise, with the ACA method running an order of magnitude faster than the classic 3D method. Further, we present maps of maximum azimuthal wavenumbers that are created to facilitate azimuthal complexity adaptation. We show how these maps can be interpreted in terms of the 3D complexity of the wavefield and in terms of seismic resolution. The expected performance limits of the ACA method for complex 3D structures are tested on the SEG/EAGE salt model. In this case, ACA still reduces the overall degrees of freedom by 92% compared to a complexity-blind AxiSEM3D simulation. In comparison with the reference 3D method, we again find a close fit and a speed-up by a factor of 7. We explore how the performance of ACA is affected by model smoothness by subjecting the SEG/EAGE salt model to Gaussian smoothing. This results in a doubling of the speed-up. ACA thus represents a convergent, versatile and efficient method for a variety of complex settings and scales.
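
Schematically, the expansion that ACA adapts can be written as a truncated azimuthal Fourier series whose cut-off varies with position, so that azimuthally smooth regions carry few terms. This is a generic illustration of the idea; the notation below is assumed and does not reproduce the paper's formulation.

```latex
% Wavefield around the source-centered azimuth \phi, truncated at a spatially
% varying maximum azimuthal wavenumber N_max(s, z).
\[
  u(s,\phi,z,t) \;\approx\; \sum_{m=-N_{\max}(s,z)}^{N_{\max}(s,z)}
  u_m(s,z,t)\, e^{\mathrm{i} m \phi}
\]
```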

