Projective Transformation
Recently Published Documents

Total documents: 140 (last five years: 32)
H-index: 8 (last five years: 2)

2022, Vol. 183, pp. 129-146
Author(s): Xianwei Zheng, Zhuang Yuan, Zhen Dong, Mingyue Dong, Jianya Gong, ...

2021, Vol. 13 (16), pp. 3269
Author(s): Reza Maalek, Derek D. Lichti

The projective transformation of spheres onto images produces ellipses whose centers do not coincide with the projected center of the sphere. This results in an eccentricity error, which must be treated in high-precision metrology. This article provides closed-form formulations for modeling this error in images to enable 3-dimensional (3D) reconstruction of the centers of spherical objects. The article also provides a new direct robust method for detecting spherical patterns in point clouds. It was shown that the eccentricity error in an image has only one component, in the direction of the major axis of the ellipse. It was also revealed that the eccentricity is zero if and only if the center of the sphere lies on the optical axis through the camera's perspective center. The effectiveness of the robust sphere detection and the eccentricity error modeling methods was evaluated on simulated point clouds of spheres and real-world images, respectively. The proposed robust sphere fitting method outperformed the popular M-estimator sample consensus in radius and center estimation accuracy by average factors of 13 and 14, respectively. Using the proposed eccentricity adjustment, the estimated 3D center of the sphere was superior to the unmodeled case. The accuracy of the estimated 3D center with modeled eccentricity also improved continuously as the number of images increased, whereas the unmodeled case showed no improvement beyond eight image views. The results of the investigation show that: (i) the proposed method effectively modeled the eccentricity error, and (ii) the benefit of eliminating the eccentricity error in 3D reconstruction becomes even more pronounced as the number of image views grows.
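The paper's closed-form eccentricity model is not reproduced here, but the robust sphere detection it evaluates can be illustrated with a minimal RANSAC-style sketch. This is a generic consensus loop under assumed parameter values, not the authors' method:

```python
import numpy as np

def fit_sphere_lsq(pts):
    """Algebraic least-squares sphere fit via
    x^2 + y^2 + z^2 + D*x + E*y + F*z + G = 0 on an (N, 3) point array."""
    A = np.column_stack([pts, np.ones(len(pts))])
    b = -(pts ** 2).sum(axis=1)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = -coeffs[:3] / 2.0            # (a, b, c) = -(D, E, F)/2
    radius_sq = center @ center - coeffs[3]  # r^2 = |center|^2 - G
    return center, (np.sqrt(radius_sq) if radius_sq > 0 else np.nan)

def ransac_sphere(pts, n_iter=500, tol=0.01, seed=0):
    """Toy RANSAC loop: sample minimal 4-point sets, keep the sphere that
    explains the most points, then refit on the consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 4, replace=False)]
        center, radius = fit_sphere_lsq(sample)
        if not np.isfinite(radius):
            continue  # degenerate (e.g., near-coplanar) sample
        inliers = np.abs(np.linalg.norm(pts - center, axis=1) - radius) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_sphere_lsq(pts[best_inliers])
```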


2021, Vol. 7 (6), pp. 94
Author(s): Georg Wölflein, Ognjen Arandjelović

Identifying the configuration of chess pieces from an image of a chessboard is a problem in computer vision that has not yet been solved accurately. Solving it matters, however, because it would help amateur chess players improve their games by facilitating automatic computer analysis without the overhead of manually entering the pieces. Current approaches are limited by the lack of large datasets and are not designed to adapt to unseen chess sets. This paper puts forth a new dataset synthesised from a 3D model that is an order of magnitude larger than existing ones. Trained on this dataset, a novel end-to-end chess recognition system is presented that combines traditional computer vision techniques with deep learning. It localises the chessboard using a RANSAC-based algorithm that computes a projective transformation of the board onto a regular grid. Using two convolutional neural networks, it then predicts an occupancy mask for the squares in the warped image and finally classifies the pieces. The described system achieves an error rate of 0.23% per square on the test set, 28 times better than the current state of the art. Further, a few-shot transfer learning approach is developed that is able to adapt the inference system to a previously unseen chess set using just two photos of the starting position, obtaining a per-square accuracy of 99.83% on images of that new chess set. The code, dataset, and trained models are made available online.
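As a minimal sketch of the projective-warp step described above, the following uses OpenCV to map four detected board corners onto a regular grid with a RANSAC-fit homography; the corner coordinates and file name are hypothetical stand-ins for the paper's detector output:

```python
import cv2
import numpy as np

# Hypothetical corner points of the chessboard detected in the input image.
board_corners = np.array([[112, 48], [905, 77], [967, 880], [60, 842]],
                         dtype=np.float32)

# Target: an 800x800 regular grid, i.e. 100 px per square on an 8x8 board.
grid_size = 800
grid_corners = np.array([[0, 0], [grid_size, 0],
                         [grid_size, grid_size], [0, grid_size]],
                        dtype=np.float32)

# Estimate the projective transformation. With exactly four correspondences
# this reduces to a direct solve; RANSAC matters when many noisy line
# intersections serve as correspondences, as in the paper's pipeline.
H, _ = cv2.findHomography(board_corners, grid_corners, cv2.RANSAC)

img = cv2.imread("chessboard.jpg")
warped = cv2.warpPerspective(img, H, (grid_size, grid_size))

# Each 100x100 tile of `warped` now corresponds to one square, ready for
# the occupancy and piece-classification CNNs.
```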


2021, Vol. 4
Author(s): Md Nazmuzzaman Khan, Mohammad Al Hasan, Sohel Anwar

A single camera creates a bounding box (BB) for a detected object with a certain accuracy through a convolutional neural network (CNN). However, a single RGB camera may not capture the actual object within the BB even if the CNN detector's accuracy is high for that object. In this research, we present a solution to this limitation through the use of multiple cameras, projective transformation, and fuzzy logic–based fusion. The proposed algorithm generates a "confidence score" for each frame to check the trustworthiness of the BBs generated by the CNN detector. As a first step toward this solution, we created a two-camera setup for object detection, with agricultural weeds as the objects to be detected. A CNN detector generates a BB for each camera when a weed is present. A projective transformation then maps one camera's image plane onto the other camera's image plane, and the intersection-over-union (IOU) overlap of the BBs is computed when objects are detected correctly. Four different scenarios are generated based on how far the object is from the multi-camera setup, and the IOU overlap is calculated for each scenario (ground truth). When objects are detected correctly and the BBs are at the correct positions, the IOU overlap should be close to the ground-truth value; when the BBs are at incorrect positions, it should differ. Mamdani fuzzy rules are generated from this reasoning, and one of three confidence scores ("high," "ok," and "low") is assigned to each frame based on the accuracy and position of the BBs. The proposed algorithm was then tested under different conditions to check its validity. The confidence scores of the proposed fuzzy system for three different scenarios support the hypothesis that the multi-camera fusion algorithm improves the overall robustness of the detection system.
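The fuzzy rules themselves are scenario-specific, but the geometric core of the method (projecting one camera's BB into the other camera's image plane and scoring the overlap) can be sketched as follows; the homography H is assumed to come from a prior calibration, and the box format is hypothetical:

```python
import numpy as np

def project_box(H, box):
    """Warp an axis-aligned box (x1, y1, x2, y2) from camera A's image plane
    into camera B's plane with homography H, then re-fit an axis-aligned box."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1], [x2, y1, 1],
                        [x2, y2, 1], [x1, y2, 1]], dtype=float).T
    warped = H @ corners
    warped = warped[:2] / warped[2]          # back from homogeneous coordinates
    return (*warped.min(axis=1), *warped.max(axis=1))

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

In this sketch, a per-frame value such as `iou(project_box(H, box_cam1), box_cam2)` would be the evidence fed to the Mamdani rules that assign the "high"/"ok"/"low" score.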


2021, Vol. 13 (3), pp. 490
Author(s): Yongfei Li, Shicheng Wang, Hao He, Deyu Meng, Dongfang Yang

We address the problem of aerial image geolocalization over an area as large as a whole city through road network matching, which is modeled as a 2D point set registration problem under the 2D projective transformation and solved in a two-stage manner. In the first stage, all potential transformations aligning the query road point set to the reference road point set are found by local point feature matching. A local geometric feature, called the Projective-Invariant Contour Feature (PICF), which consists of a road intersection and the points closest to it in each direction, is specifically designed. We prove that the proposed PICF is equivariant under the 2D projective transformation group. We then encode the PICF with a projective-invariant descriptor to enable a fast search for potential correspondences. Bad correspondences are then removed effectively by a geometric consistency check based on the graph-cut algorithm. In the second stage, a flexible strategy is developed: the homography is recovered from all PICF correspondences with the Random Sample Consensus (RANSAC) method, or, when only a few correspondences exist, it is recovered from a single correspondence and then refined with a local-to-global Iterative Closest Point (ICP) algorithm. This strategy makes our method efficient in both scenes where roads are sparse and scenes where roads are dense. The refined transformations are then verified by alignment accuracy to determine whether they are accepted as correct. Experimental results show that our method runs faster and greatly improves recall compared with state-of-the-art methods.
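The PICF descriptor and graph-cut check are beyond a short sketch, but the second-stage refinement described above, alternating nearest-neighbour matching with homography re-estimation, might look roughly like this (plain DLT re-estimation and an illustrative iteration count; not the authors' local-to-global variant):

```python
import numpy as np
from scipy.spatial import cKDTree

def apply_h(H, pts):
    """Apply a 3x3 homography to an (N, 2) point array."""
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def dlt_homography(src, dst):
    """Direct linear transform: least-squares homography from point pairs.
    (Hartley normalisation omitted for brevity.)"""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)  # null vector of the stacked constraints

def icp_refine(H, query, reference, n_iter=20):
    """ICP-style refinement: alternate nearest-neighbour matching in the
    reference road point set with DLT re-estimation of the homography."""
    tree = cKDTree(reference)
    for _ in range(n_iter):
        warped = apply_h(H, query)
        _, idx = tree.query(warped)          # closest reference point per query
        H = dlt_homography(query, reference[idx])
    return H
```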


2021, Vol. 13 (3), pp. 396
Author(s): Claudio Ignacio Fernández, Ata Haddadi, Brigitte Leblon, Jinfei Wang, Keri Wang

Cucumber powdery mildew, which is caused by Podosphaera xanthii, is a major disease with a significant economic impact on greenhouse cucumber production, and a fast, non-invasive detection system for it is needed. Such a system will use multispectral imagery acquired at close range with a camera attached to a mobile cart's mechanical extension. This study evaluated three image registration methods applied to non-georeferenced multispectral images acquired at close range over greenhouse cucumber plants with a MicaSense® RedEdge camera. Matching points were detected using Speeded-Up Robust Features (SURF), and outlier matching points were removed using the M-estimator Sample Consensus (MSAC) algorithm. Three geometric transformations (affine, similarity, and projective) were considered in the registration process. For each transformation, we mapped the matching points of the blue, green, red, and NIR band images into the red-edge band space and computed the root mean square error (RMSE, in pixels) to estimate the accuracy of each registration. Our results achieved an RMSE of less than 1 pixel with the similarity and affine transformations and of less than 2 pixels with the projective transformation, regardless of the band. We determined that the affine transformation was the best registration method because its RMSE is less than 1 pixel and its RMSEs follow a Gaussian distribution for all of the bands except the blue band.
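SURF is patent-encumbered and absent from many OpenCV builds, so the sketch below substitutes synthetic matched keypoints and scikit-image's RANSAC (a close relative of MSAC) to show how the three candidate transformations can be fitted and compared by inlier RMSE; all data and thresholds are illustrative:

```python
import numpy as np
from skimage.measure import ransac
from skimage.transform import (AffineTransform, ProjectiveTransform,
                               SimilarityTransform)

# Synthetic stand-in for matched keypoints (in the paper these come from
# SURF detection between a band image and the red-edge reference band).
rng = np.random.default_rng(0)
src = rng.uniform(0, 960, size=(200, 2))
true = AffineTransform(scale=1.01, rotation=0.02, translation=(3, -2))
dst = true(src) + rng.normal(0, 0.5, size=src.shape)  # add pixel noise

def registration_rmse(src, dst, model_class, threshold=2.0):
    """Robustly fit one geometric transform and report inlier RMSE in pixels."""
    model, inliers = ransac((src, dst), model_class,
                            min_samples=4, residual_threshold=threshold)
    residuals = model.residuals(src[inliers], dst[inliers])
    return np.sqrt(np.mean(residuals ** 2))

for cls in (SimilarityTransform, AffineTransform, ProjectiveTransform):
    print(cls.__name__, registration_rmse(src, dst, cls))
```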


Author(s): Grigoris I. Kalogeropoulos, Athanasios D. Karageorgos, Athanasios A. Pantelous

Abstract The study of linear time-invariant descriptor systems has intimately been related to the study of matrix pencils. It is true that a large number of systems can be reduced to the study of differential (or difference) systems, $S\left({F,G}\right)$, $$\begin{align*} & S\left({F,G}\right): F\dot{x}(t) = G{x}(t) \quad \left(\text{or the dual, } F{x}(t) = G\dot{x}(t)\right), \end{align*}$$and $$\begin{align*} & S\left({F,G}\right): Fx_{k+1} = Gx_k \quad \left(\text{or the dual, } Fx_k=Gx_{k+1}\right), \quad F,G \in \mathbb{C}^{m \times n}, \end{align*}$$and their properties can be characterized by homogeneous matrix pencils, $sF - \hat{s}G$. Based on the fact that the study of the invariants of the projective equivalence class can be reduced to the study of the invariants of matrices in the set $\mathbb{C}^{k \times 2}$ (for $k \geqslant 3$, with all $2\times 2$ minors non-zero) under the extended Hermite equivalence, a novel projective transformation is analytically derived in the context of the bilinear strict equivalence relation.
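As a small illustrative example of a homogeneous pencil (not from the paper): take $$F = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad G = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}, \quad sF - \hat{s}G = \begin{pmatrix} s - 2\hat{s} & 0 \\ 0 & -\hat{s} \end{pmatrix},$$ so that $\det\left(sF - \hat{s}G\right) = -\hat{s}\left(s - 2\hat{s}\right)$ vanishes at the projective points $(s : \hat{s}) = (2 : 1)$, a finite generalized eigenvalue, and $(1 : 0)$, an eigenvalue at infinity reflecting the singularity of $F$. A bilinear change of the parameters $(s, \hat{s})$ acts projectively on such points, which is why the invariants of the projective equivalence class carry the structure of the descriptor system.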

