calibration pattern
Recently Published Documents


TOTAL DOCUMENTS: 52 (five years: 12)

H-INDEX: 8 (five years: 1)

Author(s): Yousfi Jezia ◽ Lahouar Samir ◽ Ben Amara Abdelmajid

Abstract In this paper, we study 3D object reconstruction from a set of 2D images, focusing on the choice of camera path that yields the best accuracy. Euclidean image-based 3D reconstruction proceeds in three steps: primitive extraction, correspondence of these primitives, and triangulation. The extraction and triangulation steps are purely geometrical, whereas the matching step can suffer from precision issues, especially with noisy images. An experimental study is carried out in which a camera attached to a robot arm is moved precisely relative to a scene containing a checkerboard calibration pattern, and the reconstruction results are compared with the motion commanded to the robot. A geometric and analytical study of how the camera's motion relative to the scene affects the error of a reconstructed 3D point is also carried out. It is demonstrated that the impact of a correspondence error on the accuracy of a reconstructed point varies drastically depending on the image capture strategy.
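A minimal triangulation sketch (not the authors' code) illustrating the setup described above: with assumed intrinsics and a known relative motion between two views, a matched point is triangulated, and the match is then perturbed by one pixel to show how a correspondence error propagates into the reconstructed point. All numeric values are illustrative.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                        # assumed camera intrinsics
R, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))  # assumed rotation between the views
t = np.array([[0.1], [0.0], [0.0]])                    # assumed 10 cm translation (robot motion)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # first view at the origin
P2 = K @ np.hstack([R, t])                             # second view after the commanded motion

x1 = np.array([[330.0], [250.0]])                      # matched pixel in view 1 (made up)
x2 = np.array([[280.0], [248.0]])                      # matched pixel in view 2 (made up)

X = cv2.triangulatePoints(P1, P2, x1, x2)
X = (X[:3] / X[3]).ravel()                             # homogeneous -> Euclidean

# Shift the match in view 2 by one pixel to mimic a correspondence error.
X_err = cv2.triangulatePoints(P1, P2, x1, x2 + np.array([[1.0], [0.0]]))
X_err = (X_err[:3] / X_err[3]).ravel()

print("reconstructed point:", X)
print("displacement caused by a 1 px matching error:", np.linalg.norm(X_err - X))
```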


Author(s): Qi Zhang ◽ Qing Wang

Due to the trade-off between the spatial and angular resolution of the light field, it is difficult to extract high-precision corner points and line features from light fields for calibration. In this paper, a novel calibration pattern of separate circles is designed, and a light field camera calibration method based on the common self-polar triangle with respect to separate circles is proposed. First, we explore the uniqueness and reconstruction of the common self-polar triangle with respect to separate circles. Then, based on projections of the multi-projection-center model onto the plane and the conic, the common self-polar triangle on the sub-aperture image is reconstructed and used to estimate the planar homography. Finally, a light field camera calibration algorithm is proposed, including linear initialization and non-linear optimization. Experimental results on both synthetic and real data verify the effectiveness and robustness of the proposed method.
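A hedged numerical sketch of the underlying geometry, not the paper's implementation: for two separate circles, the vertices of the common self-polar triangle can be obtained as the generalized eigenvectors of the pair of conic matrices. All circle parameters below are illustrative.

```python
import numpy as np
from scipy.linalg import eig

def circle_conic(a, b, r):
    """3x3 conic matrix of the circle (x - a)^2 + (y - b)^2 = r^2."""
    return np.array([[1.0, 0.0, -a],
                     [0.0, 1.0, -b],
                     [-a, -b, a * a + b * b - r * r]])

C1 = circle_conic(0.0, 0.0, 1.0)   # first circle of the pattern (illustrative)
C2 = circle_conic(4.0, 0.0, 1.5)   # second, separate circle (illustrative)

# The vertices of the common self-polar triangle are the generalized
# eigenvectors of the pencil: C1 v = lambda * C2 v (homogeneous coordinates).
_, vecs = eig(C1, C2)
vertices = [np.real(vecs[:, i]) for i in range(3)]

# Sanity check of the self-polar property: every pair of vertices is
# conjugate with respect to both circles, i.e. v_i^T C v_j = 0.
for i in range(3):
    for j in range(i + 1, 3):
        print(vertices[i] @ C1 @ vertices[j], vertices[i] @ C2 @ vertices[j])
```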


Energies ◽ 2021 ◽ Vol 14 (9) ◽ pp. 2508
Author(s): Pascal Kölblin ◽ Alexander Bartler ◽ Marvin Füller

Electroluminescence (EL) measurements make it possible to detect damage and defective parts in photovoltaic systems. In principle, the complete current/voltage curve can even be predicted automatically from such images. However, such a precise analysis requires image corrections and calibrations, because vignetting and lens distortion cause signal and spatial distortions. Earlier work on crystalline silicon modules used the cell gap joints (CGJ) as the calibration pattern. Unfortunately, this procedure fails if the gaps are not detected accurately or if the image contrast is low. Here, we enhance the automated camera calibration algorithm with a reliable pattern detection and quantitatively analyze the quality of the process. Our method uses an iterative Hough transform to detect line structures and three key figures (KF) to separate detected busbars from cell gaps. This allows a reliable identification of all cell gaps, even in noisy images, when PV cells contain disconnected edges, or when potential-induced degradation leads to low contrast between the active cell area and the background. In our dataset, a subset of 30 EL images (72 cells each, yielding a 5×11 grid of control points) leads to consistent calibration results. We apply the calibration process to 997 single-module EL images and evaluate the results on a random subset of 40 images. After lens distortion and perspective correction, we analyze the residual deviation between the ideal target grid points and the previously detected CGJ. For all 2200 control points in the 40 evaluation images, the deviation is at most 3 pixels; for 50% of the control points it is at most 1 pixel.
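An illustrative sketch, not the published algorithm: line structures in an EL image are detected with a Hough transform and split by orientation as a first step toward separating cell gaps from busbars. The file name and thresholds are placeholders, and the three key figures (KF) used in the paper are not reproduced here.

```python
import numpy as np
import cv2

img = cv2.imread("el_module.png", cv2.IMREAD_GRAYSCALE)  # hypothetical EL module image
if img is None:
    raise SystemExit("example image not found")

edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=200)

horizontal, vertical = [], []
if lines is not None:
    for rho, theta in lines[:, 0]:
        # theta close to pi/2 -> horizontal line; close to 0 or pi -> vertical line
        if abs(theta - np.pi / 2) < np.deg2rad(2):
            horizontal.append((rho, theta))
        elif theta < np.deg2rad(2) or theta > np.pi - np.deg2rad(2):
            vertical.append((rho, theta))

print(f"{len(horizontal)} horizontal and {len(vertical)} vertical line candidates")
# Busbars and cell gaps would then be separated by further criteria (spacing,
# intensity profile across the line), which the paper encodes as three KFs.
```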


2021 ◽ Vol 7 ◽ pp. e485
Author(s): Oleksandr Semeniuta

Calibration of vision systems is essential for performing measurements in real-world coordinates. For stereo vision, one performs stereo calibration, the results of which are used for 3D reconstruction of points imaged by the two cameras. A common and flexible technique for such calibration is based on collecting and processing pairs of images of a planar chessboard calibration pattern. The inherent weakness of this approach lies in its reliance on the random nature of data collection, which can lead to better or worse calibration results depending on the collected set of image pairs. In this paper, a subset-based approach to camera and stereo calibration, along with its OpenCV-based implementation, is presented. It performs a series of calibration runs on randomly chosen subsets of the global set of image pairs, with subsequent evaluation of metrics based on triangulating the features in each image pair. The proposed method is evaluated on a set of chessboard image pairs collected with two identical industrial cameras. To highlight the method's ability to select the best-performing calibration parameters, principal component analysis and clustering of the transformed data were performed on the set of metric measurements from each calibration run.
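A condensed sketch of the subset-based idea with OpenCV (the paper's implementation is OpenCV-based, but this is not the author's code): random subsets of image pairs are drawn, a stereo calibration is run on each, and a per-run quality figure is collected for the later PCA/clustering step. The inputs (obj_points, img_points_l/r, image_size) are assumed to be prepared elsewhere, e.g. with cv2.findChessboardCorners.

```python
import random
import cv2

def calibrate_subset(indices, obj_points, img_points_l, img_points_r, image_size):
    """Stereo-calibrate on one randomly chosen subset of image pairs."""
    op = [obj_points[i] for i in indices]
    ipl = [img_points_l[i] for i in indices]
    ipr = [img_points_r[i] for i in indices]
    # Per-camera intrinsics first, then the stereo extrinsics with fixed intrinsics.
    _, K1, d1, _, _ = cv2.calibrateCamera(op, ipl, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(op, ipr, image_size, None, None)
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        op, ipl, ipr, K1, d1, K2, d2, image_size, flags=cv2.CALIB_FIX_INTRINSIC)
    return rms, (K1, d1, K2, d2, R, T)

def run_calibration_runs(n_runs, subset_size, obj_points, img_points_l,
                         img_points_r, image_size):
    """Repeat the calibration on random subsets; per-run metrics feed PCA/clustering."""
    results = []
    for _ in range(n_runs):
        subset = random.sample(range(len(obj_points)), subset_size)
        results.append(calibrate_subset(subset, obj_points, img_points_l,
                                        img_points_r, image_size))
    return results
```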


2021 ◽ pp. 1-21
Author(s): Arshiya Mahmoudi ◽ Mehdi Sabzehparvar ◽ Mahdi Mortazavi

Abstract This paper describes a camera simulation framework for validating machine vision algorithms under general airborne camera imperfections. Lens distortion, image delay, rolling shutter, motion blur, interlacing, vignetting, image noise, and light level are modelled. This is the first simulation that considers all temporal distortions jointly, in an online manner, along with static lens distortions. Several innovations are proposed, including a motion tracking system that allows the camera to follow the flight log with eligible derivatives. A reverse pipeline, relating each pixel in the output image to pixels in the ideal input image, is developed; it is shown that the inverse lens distortion model and the inverse temporal distortion models are decoupled in this way. A short-time pixel displacement model is proposed to account for the temporal distortions (i.e. delay, rolling shutter, motion blur, and interlacing). Evaluation is carried out in several ways, including regenerating an airborne dataset, regenerating the camera path over a calibration pattern, and assessing the ability of the time displacement model to predict other frames. Qualitative evaluations are also presented.
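A minimal sketch of the reverse (output-to-input) pipeline idea, restricted to a simple radial lens distortion; in the full framework the temporal terms (delay, rolling shutter, motion blur, interlacing) would add a time-dependent displacement to the same backward lookup. Parameter values and the input image are illustrative.

```python
import numpy as np
import cv2

def render_distorted(ideal_img, k1=-0.2):
    """Backward-render a distorted frame: every output pixel looks up its source in the ideal image."""
    h, w = ideal_img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    x = (xs - cx) / cx
    y = (ys - cy) / cy
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2                     # simple radial model (illustrative)
    map_x = x * factor * cx + cx               # source coordinates in the ideal image
    map_y = y * factor * cy + cy
    return cv2.remap(ideal_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

ideal = cv2.imread("ideal_frame.png")          # hypothetical ideal (pinhole) rendering
if ideal is not None:
    distorted = render_distorted(ideal)
```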


2020
Author(s): Michiro Yamamoto ◽ Shintaro Oyama ◽ Syuto Otsuka ◽ Yukimi Murakami ◽ Hideo Yokota ◽ ...

Abstract Background: The purpose of this study was to develop and evaluate a novel elbow arthroscopy system with superimposed bone and nerve visualization based on preoperative computed tomography (CT) and magnetic resonance imaging (MRI) data. Methods: We obtained bone and nerve segmentation data by CT and MRI, respectively, of the elbows of a healthy human volunteer and a cadaveric Japanese monkey. A life-size three-dimensional (3D) model of the human organs and frame was constructed using a stereolithographic 3D printer. Elbow arthroscopy was performed on the elbow of the cadaveric Japanese monkey. The augmented reality (AR) range of error was examined at 1 cm and 2 cm scope–object distances. Results: We successfully performed AR arthroscopy on the life-size 3D elbow model and on the elbow of the cadaveric Japanese monkey through anteromedial and posterior portals. The computer graphics (CG) position and shape initially differed from the arthroscopic view because of lens distortion; they were corrected to match the view using lens distortion parameters estimated from the calibration pattern. AR position and shape errors were 2.3 mm at a 1 cm scope–object distance and 3.6 mm at a 2 cm scope–object distance. Conclusion: We attained reasonable accuracy and demonstrated that the designed system works. Given the multiple applications of AR-enhanced arthroscopic visualization, it has the potential to be the next-generation technology for arthroscopy. This technique will contribute to the reduction of serious complications associated with elbow arthroscopy.
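A hedged sketch of the overlay step described above, not the authors' system: the arthroscope's intrinsics and lens distortion are estimated from images of a calibration pattern, and the segmented model points are then projected into the arthroscopic view with those parameters. All names are placeholders, and the registration (rvec, tvec) is assumed to come from the tracking setup.

```python
import cv2

def calibrate_scope(obj_pts, img_pts, image_size):
    """Estimate arthroscope intrinsics K and lens distortion from calibration pattern views."""
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
    return K, dist

def draw_model_overlay(frame, model_points, rvec, tvec, K, dist):
    """Project the segmented bone/nerve model points into the arthroscopic frame."""
    projected, _ = cv2.projectPoints(model_points, rvec, tvec, K, dist)
    overlay = frame.copy()
    for p in projected.reshape(-1, 2):
        cv2.circle(overlay, (int(p[0]), int(p[1])), 1, (0, 255, 0), -1)  # CG overlay in green
    return overlay
```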


2020 ◽ Vol 8 (1) ◽ pp. 57
Author(s): Simone Cosoli

The International Telecommunication Union (ITU) Resolution 612, in combination with Report ITU-R M2.234 (11/2011) and Recommendation ITU-R M.1874-1 (02/2013), regulates the use of the radiolocation services between 3 and 50 MHz to support high frequency oceanographic radar (HFR) operations. The operational framework for HFR systems includes: band-sharing capabilities, such as synchronization of the signal modulation; pulse shaping and multiple levels of filtering to reduce out-of-band interference; low radiated power; and directional transmit antennas to reduce emission over land. Resolution 612 also aims at reducing the use of spectral bands, either through the application of existing band-sharing capabilities, the reduction of spectral leakage into neighboring frequency bands, or the development and implementation of listen-before-talk (LBT) capabilities. While the LBT mode is operational and commonly used at several phased-array HFR installations, an implementation for commercial direction-finding systems does not appear to be available yet. In this paper, a proof-of-concept is provided for the implementation of the LBT mode on commercial SeaSonde HFRs deployed in Australia, with potential for application in other networks and installations elsewhere. Potential critical aspects for systems operated under this configuration are also pointed out. Both the receiver and transmitter antennas may lose efficiency if the frequency offset from the resonant frequency or from the calibration pattern is too large. Radial resolution clearly degrades when the bandwidth is adapted dynamically, which results in non-homogeneous spatial resolution and reduced data quality. A recommendation would be to perform the LBT-adapt scans after a full measurement cycle (1 h or 3 h, depending on the system configuration) is concluded. Mutual cross-interference from clock offsets between two HFR systems may bias the frequency scans when the site computers controlling data acquisition are not properly time-synchronized.
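An illustrative listen-before-talk sketch, not SeaSonde's implementation: received power is estimated in a set of candidate sub-bands from a snapshot of receiver samples, and the quietest band is selected before transmitting. Sample rate, band list, and signal values are made-up placeholders.

```python
import numpy as np

def quietest_band(samples, fs_hz, candidate_bands_hz):
    """Return the (f_lo, f_hi) candidate band with the lowest measured in-band power."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs_hz)
    powers = []
    for f_lo, f_hi in candidate_bands_hz:
        mask = (freqs >= f_lo) & (freqs < f_hi)
        powers.append(spectrum[mask].mean())
    return candidate_bands_hz[int(np.argmin(powers))]

# Synthetic example: white noise plus one interferer at 120 kHz.
fs = 1.0e6
t = np.arange(0, 0.01, 1.0 / fs)
rx = np.random.randn(t.size) + 5.0 * np.sin(2 * np.pi * 120e3 * t)
bands = [(100e3, 150e3), (150e3, 200e3), (200e3, 250e3)]
print("clearest band:", quietest_band(rx, fs, bands))
```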

