Performance Evaluation of Gradient Domain and Pyramid Blending Used in Image Stitching Process with ORB Binary Descriptor

Panorama development is the method of integrating multiple overlapping images of the same scene to obtain a single high-resolution image. Image stitching finds use in medical imaging, satellite data, computer vision, and automatic target recognition in military applications. The objective of this paper is to develop a high-resolution, high-quality panorama with high accuracy and minimum computation time. We first compared the feature detectors SIFT, SURF, and ORB to measure the rate of correctly detected key points and the processing time. We then tested common image blending (fusion) techniques for improving mosaicing quality. The experimental results show that the ORB feature detection and description algorithm is the most accurate and fastest, giving the highest performance, and that pyramid blending gives the best stitching quality. The final panorama is therefore built by combining the ORB binary descriptor for feature extraction with pyramid blending.
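The abstract does not include implementation details; as an illustration of the pyramid blending step, a minimal NumPy sketch (assuming single-channel float images, and using a simplified box-filter/nearest-neighbour pyramid in place of the usual Gaussian one) might look like:

```python
import numpy as np

def downsample(img):
    """Average 2x2 blocks: a box-filter stand-in for Gaussian pyramid reduction."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img, shape):
    """Nearest-neighbour expansion back to a given shape."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Band-pass layers plus a low-pass residual; reconstruction is exact."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))
        cur = small
    pyr.append(cur)  # low-pass residual
    return pyr

def pyramid_blend(a, b, mask, levels=3):
    """Blend a and b by mixing their Laplacian pyramids level by level
    under a per-level downsampled version of the (0..1) mask."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    masks, m = [mask.astype(float)], mask.astype(float)
    for _ in range(levels):
        m = downsample(m)
        masks.append(m)
    out = masks[-1] * la[-1] + (1.0 - masks[-1]) * lb[-1]
    for lev in range(levels - 1, -1, -1):
        layer = masks[lev] * la[lev] + (1.0 - masks[lev]) * lb[lev]
        out = upsample(out, layer.shape) + layer
    return out
```

Real implementations use Gaussian filtering in both the image pyramids and the mask pyramid, which is what smooths the seam; the box/nearest operators here only keep the sketch short.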

2020 ◽  
Vol 17 (9) ◽  
pp. 4419-4424
Author(s):  
Venkat P. Patil ◽  
C. Ram Singla

Image mosaicing is a method that combines several images of overlapping fields of view to create a panoramic high-resolution picture. Its importance can be seen in medical imagery, satellite data, computer vision, and military automatic target recognition. Image stitching and video stitching are active areas of study in computer vision, computer graphics, and photogrammetry. Image registration comprises five primary phases: feature detection and description, feature matching, outlier rejection, derivation of the transformation function, and image resampling. Stitching images of a scene is a difficult job when the images are captured under different noise conditions. In this paper, we examine an algorithm for seamless image stitching that addresses these problems by applying dehazing methods to the collected images before detecting image features and bound energy characteristics that match features based on the Scale-Invariant Feature Transform (SIFT). The proposed method is compared experimentally with conventional image stitching that uses squared distance for feature matching. The proposed seamless stitching technique is assessed using the HSGV and VSGV metrics. The analysis of this stitching algorithm aims to minimize both computation time and discrepancies in the final stitched results.
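As background for the squared-distance baseline mentioned above, nearest-neighbour matching of float descriptors (such as SIFT's 128-dimensional vectors) by squared Euclidean distance with Lowe's ratio test can be sketched as follows; the function name and the ratio value 0.8 are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def match_ssd(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching by squared Euclidean distance, with
    Lowe's ratio test: keep a match only when the best distance is
    clearly smaller than the second-best."""
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed for all pairs at once
    d2 = (np.sum(desc_a ** 2, axis=1)[:, None]
          + np.sum(desc_b ** 2, axis=1)[None, :]
          - 2.0 * desc_a @ desc_b.T)
    matches = []
    for i, row in enumerate(d2):
        order = np.argsort(row)
        best, second = order[0], order[1]
        if row[best] < (ratio ** 2) * row[second]:
            matches.append((i, int(best)))
    return matches
```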


2019 ◽  
Vol 8 (3) ◽  
pp. 7274-7279

Image mosaicing is a method in which two or more pictures of the same scene are combined into a larger, high-resolution panorama. It is helpful for constructing a bigger picture from numerous overlapping pictures of the same scene, and its significance can be seen in computer vision, medical imaging, satellite data, and military automatic target recognition. Stitching can also be performed on a wide-angle video swept from left to right to develop a large, high-resolution panorama. This research paper provides material that will be helpful for making design choices in vision-based applications and is intended primarily to establish a benchmark for researchers, regardless of their specific fields. We observe that distinct algorithms perform differently in terms of time complexity and image quality. We examined a variety of feature detector-descriptor pairs, namely SIFT-SIFT, SURF-SURF, STAR-BRIEF, and ORB-ORB, for building panoramic images from video files. SIFT gives excellent results, identifying the largest number of key points in an image at the cost of computation time, while SURF and ORB obtain fewer key points; ORB is the simplest of the above algorithms but does not produce good-quality image results. SURF offers a good compromise. The appropriate metric for image feature extraction changes depending on the application. In addition, the speed of each algorithm is recorded. This systematic analysis covers many characteristics of image stitching.
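The comparison above records keypoint counts and running time per detector; a minimal harness of that kind, shown here with a bare-bones Harris corner response standing in for the OpenCV detectors actually compared (the 3x3 box window, threshold, and k value are illustrative assumptions), might look like:

```python
import time
import numpy as np

def harris_response(img, k=0.04):
    """Bare-bones Harris corner response: structure tensor from image
    gradients, box-averaged over a 3x3 neighbourhood."""
    gy, gx = np.gradient(img.astype(float))
    def box3(a):  # 3x3 box filter via shifted sums
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

def count_and_time(img, thresh=1.0):
    """Return (number of above-threshold corner pixels, elapsed seconds),
    the two quantities such a benchmark tabulates per detector."""
    t0 = time.perf_counter()
    n = int(np.count_nonzero(harris_response(img) > thresh))
    return n, time.perf_counter() - t0
```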


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 391
Author(s):  
Dah-Jye Lee ◽  
Samuel G. Fuller ◽  
Alexander S. McCown

Feature detection, description, and matching are crucial steps for many computer vision algorithms. These steps rely on feature descriptors to match image features across sets of images. Previous work has shown that our SYnthetic BAsis (SYBA) feature descriptor can offer performance superior to other binary descriptors. This paper focuses on various optimizations and a hardware implementation of the newer, optimized version. The hardware implementation on a field-programmable gate array (FPGA) is a high-throughput, low-latency solution, which is critical for applications such as high-speed object detection and tracking, stereo vision, visual odometry, structure from motion, and optical flow. We compared our solution to other hardware designs of binary descriptors and demonstrated that our hardware implementation of SYBA offers superior image feature matching performance while using fewer resources than most binary feature descriptor implementations.
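The SYBA descriptor itself is not specified in this abstract; as background, the brute-force Hamming-distance matching that such binary-descriptor hardware accelerates can be sketched in NumPy (the packed-byte layout mirrors the common convention for binary descriptors, e.g. 32 bytes = 256 bits):

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Brute-force matching of binary descriptors (rows of packed uint8
    bytes) by Hamming distance: XOR every pair, then count set bits."""
    x = desc_a[:, None, :] ^ desc_b[None, :, :]      # XOR of all pairs
    dist = np.unpackbits(x, axis=2).sum(axis=2)      # per-pair popcount
    best = dist.argmin(axis=1)
    return [(i, int(j), int(dist[i, j])) for i, j in enumerate(best)]
```

In hardware, the XOR and popcount per descriptor pair map naturally onto parallel logic, which is why FPGA implementations reach high throughput at low latency.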


2019 ◽  
Vol 5 (1) ◽  
pp. 273-276
Author(s):  
Konstantin Sieler ◽  
Ady Naber ◽  
Werner Nahm

Optical image processing is part of many applications used in brain surgery. Movement of the microscope camera or of the patient, such as brain motion caused by the pulse or a change in the cerebrospinal fluid, can cause the image processing to fail. One option to compensate for movement is feature detection and spatial allocation, which is based on image features: features matched frame by frame are used to calculate the transformation matrix. The goal of this project was to evaluate different feature detectors with respect to spatial density and temporal robustness in order to identify the most appropriate one. The detectors included corner and blob detectors and were applied to nine videos. These videos were taken during brain surgery with surgical microscopes and include the RGB channels. The evaluation showed that each detector detected up to 10 features over nine frames. The KAZE feature detector proved the best in both density and robustness.
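The paper's exact metric definitions are not given in this abstract; one plausible reading of spatial density and temporal robustness, sketched with hypothetical helper names and a pixel tolerance of 2, is:

```python
import numpy as np

def spatial_density(points, frame_shape):
    """Detected features per pixel for a single frame."""
    return len(points) / float(frame_shape[0] * frame_shape[1])

def temporal_robustness(frames, tol=2.0):
    """Fraction of features in each frame that reappear within `tol`
    pixels in the following frame, averaged over the sequence."""
    rates = []
    for cur, nxt in zip(frames[:-1], frames[1:]):
        if len(cur) == 0 or len(nxt) == 0:
            continue
        c, n = np.asarray(cur, float), np.asarray(nxt, float)
        d = np.linalg.norm(c[:, None, :] - n[None, :, :], axis=2)
        rates.append(float(np.mean(d.min(axis=1) <= tol)))
    return float(np.mean(rates)) if rates else 0.0
```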


A strong and efficient feature extraction algorithm is essential for individual recognition in human authentication systems. This paper presents work on palm vein images to extract a person's vein features for recognition and classification using an improved Canny edge detector, describing a novel method to extract valuable features of the vein pattern and achieve a high recognition rate. The experiments use two algorithms: (1) the PCACE (principal component analysis with Canny edge) algorithm and (2) the LDACE (linear discriminant analysis with Canny edge) algorithm. Both were analyzed on palm vein images, and LDACE was found to be the better extractor compared to PCACE. The Equal Error Rate (EER) is applied to evaluate the two algorithms. A Hidden Markov Model (HMM) is utilized for image feature classification and matching on the contactless Palm Under Test (PUT) palm vein database. Recognition performance is measured by the False Acceptance Rate (FAR) and False Rejection Rate (FRR). The method shows a robust response with respect to computation time and an improved recognition rate in the palm vein identification process.
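The Equal Error Rate used above can be computed by sweeping a decision threshold over the match scores until FAR and FRR cross; a small sketch (assuming higher scores mean a better match, which is an assumption about the scoring convention) follows:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep the threshold over all observed scores and return the point
    where FAR (impostors accepted) and FRR (genuine users rejected) are
    closest, reporting their average as the EER."""
    genuine = np.asarray(genuine, float)
    impostor = np.asarray(impostor, float)
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float(np.mean(impostor >= t))  # impostors wrongly accepted
        frr = float(np.mean(genuine < t))    # genuine wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```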


Author(s):  
W. Krakow ◽  
D. A. Smith

The successful determination of the atomic structure of [110] tilt boundaries in Au stems from the investigation of microscope performance at intermediate accelerating voltages (200 and 400 kV), as well as a detailed understanding of how grain boundary image features depend on the variation of dynamical diffraction processes with specimen and beam orientations. This success is also facilitated by applying digital image processing techniques to improve image quality to the point where a structure image is obtained and each atom position is represented by a resolved image feature. Figure 1 shows an example of a low angle (∼10°) Σ = 129/[110] tilt boundary in a ∼250Å Au film, taken under tilted-beam bright-field imaging conditions, to illustrate the steps necessary to obtain the atomic structure configuration from the image. The original image of Fig. 1a shows the regular arrangement of strain-field images associated with the cores of ½[110] primary dislocations, which are separated by ∼15Å.


2016 ◽  
Vol 20 (2) ◽  
pp. 191-201 ◽  
Author(s):  
Wei Lu ◽  
Yan Cui ◽  
Jun Teng

To decrease the instrumentation cost of strain and displacement monitoring based on sensors, and to address the structural health monitoring challenges posed by sensor installation, it is necessary to develop a machine vision-based monitoring method. For this method, the most important step is the accurate extraction of image features. In this article, an edge detection operator based on multi-scale structuring elements and a compound mathematical morphological operator is proposed to provide improved image feature extraction. The proposed method not only achieves an improved filtering effect and anti-noise ability but also detects edges more accurately. Furthermore, the required image features (the vertices of a square calibration board and the centroid of a circular target) can be accurately extracted using the detected image edge information. For validation, monitoring tests for structural local mean strain and in-plane displacement were designed accordingly. Through analysis of the error between the measured and calculated values of structural strain and displacement, the feasibility and effectiveness of the proposed edge detection operator are verified.
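A morphological gradient (dilation minus erosion) evaluated at several structuring-element sizes is one simple reading of the multi-scale edge operator described above; the flat square elements and the averaging across scales in this sketch are assumptions, not the paper's exact compound operator:

```python
import numpy as np

def dilate(img, size):
    """Grayscale dilation with a flat square structuring element."""
    r = size // 2
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    return np.max([p[i:i + h, j:j + w]
                   for i in range(size) for j in range(size)], axis=0)

def erode(img, size):
    """Grayscale erosion with a flat square structuring element."""
    r = size // 2
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    return np.min([p[i:i + h, j:j + w]
                   for i in range(size) for j in range(size)], axis=0)

def morph_edge(img, sizes=(3, 5)):
    """Morphological gradient averaged over several element scales;
    larger elements suppress noise, smaller ones keep edges sharp."""
    img = img.astype(float)
    return np.mean([dilate(img, s) - erode(img, s) for s in sizes], axis=0)
```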


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have advanced considerably. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework usually requires a large number of parameters, which inevitably burdens the network with high computational cost. We propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining to solve these problems. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, the image features are reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then introduced at the beginning of the decoding stage, enabling the network to continuously distill and screen valuable channel information from the feature maps. In addition, an information fusion strategy between distillation modules and feature channels is implemented via an attention mechanism. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with a smaller number of parameters and outperforms existing methods in model complexity.


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 291 ◽  
Author(s):  
Hamdi Sahloul ◽  
Shouhei Shirafuji ◽  
Jun Ota

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, the current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair in order to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint invariant representation, enabling the detection and description of more viewpoint invariant features. Our embedding can be utilized with different combinations of descriptor/detector pairs, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, for both standalone and embedded approaches. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels, depending on the scene geometry. Objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average for all evaluated datasets was 45.4°. Similarly, out of a total of 140 combinations involving 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of 60° viewpoint difference in just two combinations, as compared with 19 different local image features succeeding in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, even that of low-cost commodity depth sensors, and well beyond.


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 828
Author(s):  
Wai Lun Lo ◽  
Henry Shu Hung Chung ◽  
Hong Fu

Estimating meteorological visibility from image characteristics is a challenging problem in the estimation of meteorological parameters. Meteorological visibility indicates atmospheric transparency, an indicator that is important for transport safety. This paper summarizes the outcomes of the experimental evaluation of a Particle Swarm Optimization (PSO)-based transfer learning method for meteorological visibility estimation, proposing a modified transfer learning approach that uses PSO feature selection. Image data were collected at a fixed location with a fixed viewing angle. The database images underwent a pre-processing step of gray-averaging to provide information about static landmark objects for automatic extraction of effective regions. Effective regions are extracted from the image database, and image features are then extracted by a neural network. A subset of the image features is selected by PSO to obtain the feature vector for each effective sub-region. These feature vectors are then used to estimate the visibility of the images with multiple Support Vector Regression (SVR) models. Experimental results show that the proposed method achieves an accuracy of more than 90% for visibility estimation and is effective and robust.
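The abstract does not detail the PSO variant used; a minimal binary PSO for feature-subset selection, with illustrative inertia (0.7) and acceleration (1.5) constants and a sigmoid bit-flip rule, might look like:

```python
import numpy as np

def binary_pso(fitness, n_features, n_particles=10, iters=30, seed=0):
    """Minimal binary PSO: each particle is a 0/1 mask over features;
    velocities pass through a sigmoid to give per-bit probabilities of
    setting the bit, and the best mask found overall is returned."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, n_features))
    vel = rng.normal(scale=0.1, size=(n_particles, n_features))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))              # sigmoid transfer
        pos = (rng.random(vel.shape) < prob).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest
```

In the visibility pipeline, `fitness` would score a candidate feature subset by the SVR estimation accuracy it yields; here it is left as a caller-supplied function.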

