Region based stereo matching oriented image processing

Author(s):  
S. Randriamasy ◽  
A. Gagalowicz
2014 ◽  
Vol 644-650 ◽  
pp. 207-210
Author(s):  
Shuang Liu ◽  
Xiang Jie Kong ◽  
Ming Cai Shan

A binocular parallax vision system is a computer vision technology in which two cameras at different locations capture two different views of the same object. The spatial position of the object can then be computed from the parallax between the two views. Binocular parallax vision involves camera calibration, image processing, and stereo matching. This paper introduces methods for calibrating the intrinsic and extrinsic camera parameters and, with traffic applications in mind, designs a calibration scheme. The parameters obtained with this scheme meet the accuracy requirements for measuring the distance to the vehicle ahead, so the approach is an effective way to monitor safe inter-vehicle spacing in intelligent transportation systems.
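A minimal sketch of the distance computation described above, assuming a calibrated and rectified camera pair; the focal length, baseline, and block-matching parameters are placeholders, not values from the paper, and OpenCV's StereoBM stands in for the paper's own matching step.

```python
import cv2
import numpy as np

# Assumed calibration values (placeholders, not from the paper)
FOCAL_PX = 1200.0   # focal length in pixels after rectification
BASELINE_M = 0.30   # distance between the two cameras in metres

def front_vehicle_distance(left_gray, right_gray):
    """Estimate per-pixel depth (metres) from a rectified stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    # Triangulation for rectified cameras: Z = f * B / d
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
depth_map = front_vehicle_distance(left, right)
# Illustrative: median distance inside a window assumed to cover the front car
print("front-car distance (m):", np.median(depth_map[200:280, 300:340]))
```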


2020 ◽  
Vol 17 (2) ◽  
pp. 172988142091000
Author(s):  
Jiaofei Huo ◽  
Xiaomo Yu

With the development of computer technology and three-dimensional reconstruction technology, three-dimensional reconstruction from visual images has become one of the research hotspots in computer graphics. Image-based reconstruction can be divided into reconstruction from single photographs and reconstruction from video. As an indirect three-dimensional modeling technology, it is widely used in film and television production, cultural relic restoration, mechanical manufacturing, and medical health. This article studies and designs a stereo vision system based on two-dimensional image modeling technology. The system is divided into image processing, camera calibration, stereo matching, three-dimensional point reconstruction, and model reconstruction. In the image processing part, common image processing methods, feature point extraction algorithms, and edge extraction algorithms are studied; on this basis, an interactive local corner extraction algorithm and an interactive local edge detection algorithm are proposed. It is found that the Harris algorithm effectively discards feature points that carry little information or tend to cluster together. The feature points extracted from the two images are then matched under the epipolar constraint; this method achieves high matching accuracy in a short time, and the experiments show good matching results. On the binocular stereo vision platform, each step of the reconstruction pipeline achieves high accuracy, so three-dimensional reconstruction of the target object is achieved. Finally, based on the three-dimensional reconstruction of mechanical parts on the designed binocular stereo vision platform, experimental results for edge detection, camera calibration, stereo matching, and three-dimensional model reconstruction are reported, and the article closes with a summary, analysis, and outlook.
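A compact sketch of the matching step described above, assuming OpenCV: Harris corners are detected in each image, descriptors are computed at those corners, and candidate matches are filtered with the epipolar constraint via a RANSAC-estimated fundamental matrix. The use of ORB descriptors and the thresholds are illustrative choices, not the paper's exact algorithm.

```python
import cv2
import numpy as np

def harris_keypoints(gray, max_pts=500):
    """Harris corners wrapped as cv2.KeyPoint objects so descriptors can be computed."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_pts, qualityLevel=0.01,
                                  minDistance=8, useHarrisDetector=True, k=0.04)
    return [cv2.KeyPoint(float(x), float(y), 7) for x, y in pts.reshape(-1, 2)]

def epipolar_matches(gray_l, gray_r):
    """Match corner descriptors, then keep only pairs consistent with the
    epipolar constraint (fundamental matrix estimated with RANSAC)."""
    orb = cv2.ORB_create()
    kps_l, des_l = orb.compute(gray_l, harris_keypoints(gray_l))
    kps_r, des_r = orb.compute(gray_r, harris_keypoints(gray_r))
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)
    pts_l = np.float32([kps_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kps_r[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel() == 1
    # The surviving correspondences can be passed to cv2.triangulatePoints
    # for the three-dimensional point reconstruction stage.
    return pts_l[inliers], pts_r[inliers]
```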


2014 ◽  
Vol 670-671 ◽  
pp. 1194-1199 ◽  
Author(s):  
Jian Cheng Liu ◽  
Guang Xi Xiong

This paper presents a method of measuring volumetric tool wear using image processing techniques. A stereo vision system based on a single CCD camera is built to acquire the image pair. The boundary of the crater wear on the cutting tool is detected, the 3D volumetric shape of the worn region on the rake face is reconstructed with the developed image matching algorithms, and the crater's volume and depth are estimated. A Matlab software system is developed to perform image acquisition, calibration, image rectification, image adjustment, stereo matching, crater depth estimation, and the representation of the volumetric tool wear. The feasibility of the proposed method is verified through experiments.
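A minimal numeric sketch of the final estimation step, assuming the matching stage has already produced a dense height map of the rake face in millimetres and a mask of the detected crater boundary; the plane fit and integration shown here are illustrative, not the paper's exact algorithm (and are written in Python rather than Matlab).

```python
import numpy as np

def crater_volume_and_depth(height_mm, crater_mask, pixel_area_mm2):
    """Estimate crater volume (mm^3) and maximum depth (mm) from a rake-face height map.

    height_mm      : 2-D array, reconstructed height of the rake face (mm)
    crater_mask    : boolean array, True inside the detected crater boundary
    pixel_area_mm2 : rake-face area covered by one pixel (mm^2)
    """
    # Reference plane: fit the unworn rake face from pixels outside the crater.
    ys, xs = np.nonzero(~crater_mask)
    A = np.c_[xs, ys, np.ones(xs.size)]
    coeff, *_ = np.linalg.lstsq(A, height_mm[~crater_mask], rcond=None)

    # Depth of the worn region below that reference plane.
    cy, cx = np.nonzero(crater_mask)
    plane = coeff[0] * cx + coeff[1] * cy + coeff[2]
    wear = np.clip(plane - height_mm[crater_mask], 0.0, None)

    volume_mm3 = wear.sum() * pixel_area_mm2   # integrate depth over the crater area
    max_depth_mm = wear.max() if wear.size else 0.0
    return volume_mm3, max_depth_mm
```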


Author(s):  
S. Mary Praveena ◽  
R. Kanmani ◽  
A. K. Kavitha

Image fusion is a subfield of image processing in which two or more images are fused to create an image in which all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene: multi-sensor images are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects closer to the camera are in focus while the farther objects are blurred; conversely, when the farther objects are in focus, the closer objects are blurred. To obtain an image in which all the objects are in focus, image fusion is performed either in the spatial domain or in a transform domain. The applications of image processing have grown immensely in recent times, yet owing to the limited depth of field of optical lenses, especially those with longer focal lengths, it is usually impossible to capture a single image in which everything is in focus. An all-in-focus fused image therefore supports other image processing tasks such as image segmentation, edge detection, stereo matching, and image enhancement. Hence, a novel feature-level multi-focus image fusion technique is proposed. Results of extensive experiments are presented to highlight the efficiency and utility of the proposed technique. The work further compares fuzzy-based image fusion with a neuro-fuzzy fusion technique, along with quality evaluation indices.
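To make the spatial-domain idea concrete, here is a small sketch (not the authors' fuzzy or neuro-fuzzy method) that fuses two registered multi-focus images block by block, using the variance of the Laplacian as a focus measure and copying the sharper block into the result; the block size is an arbitrary illustrative choice.

```python
import cv2
import numpy as np

def fuse_multifocus(img_a, img_b, block=16):
    """Block-wise multi-focus fusion: keep the block with the higher focus measure."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    fused = img_a.copy()
    h, w = gray_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            sl = (slice(y, min(y + block, h)), slice(x, min(x + block, w)))
            # Variance of the Laplacian as a simple sharpness measure
            focus_a = cv2.Laplacian(gray_a[sl], cv2.CV_64F).var()
            focus_b = cv2.Laplacian(gray_b[sl], cv2.CV_64F).var()
            if focus_b > focus_a:
                fused[sl] = img_b[sl]
    return fused

a = cv2.imread("near_focused.png")   # closer objects sharp
b = cv2.imread("far_focused.png")    # farther objects sharp
cv2.imwrite("fused.png", fuse_multifocus(a, b))
```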


Author(s):  
Nellutla Sasikala ◽  
V. Swathipriya ◽  
M. Ashwini ◽  
V. Preethi ◽  
A. Pranavi ◽  
...  

This paper deals with image processing and feature extraction. Feature extraction plays a vital role in the field of image processing. Different image pre-processing approaches such as binarization, thresholding, resizing, and normalisation are applied first to obtain high-clarity images. Object recognition and stereo matching, both of which depend on feature extraction, are at the base of many computer vision problems. The SIFT algorithm consists of two modules, a keypoint detection module and a descriptor generation module; here the descriptor generation module is modified to increase the performance of the algorithm. Compared with a recent solution, the descriptor generation module is fifteen times faster and the overall feature extraction time is also reduced.
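For reference, a minimal OpenCV sketch of the two SIFT stages mentioned above, keypoint detection and descriptor generation, followed by a ratio-test match; this uses the stock cv2.SIFT implementation, not the accelerated descriptor generator described in the paper, and the file names are placeholders.

```python
import cv2

def sift_match(path_a, path_b, ratio=0.75):
    """Detect SIFT keypoints, generate descriptors, and match them with Lowe's ratio test."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kps_a, des_a = sift.detectAndCompute(img_a, None)   # keypoint detection + descriptor generation
    kps_b, des_b = sift.detectAndCompute(img_b, None)

    # Nearest-neighbour matching with the ratio test to reject ambiguous matches
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kps_a, kps_b, good

_, _, matches = sift_match("scene_a.png", "scene_b.png")
print(len(matches), "reliable correspondences")
```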

