Image Completion Using Randomized Correspondence

2011 ◽  
Vol 271-273 ◽  
pp. 229-234
Author(s):  
Yun Ling ◽  
Hai Tao Sun ◽  
Jian Wei Han ◽  
Xun Wang

Image completion techniques can be used to repair unknown image regions. However, existing techniques are too slow for real-time applications. In this paper, an image completion technique based on randomized correspondence is presented to accelerate the completion process. Good patch matches are found via random sampling and then propagated to surrounding areas, so that approximate nearest-neighbor matches between image patches can be found in real time. For images with strong structure, straight lines or curves crossing the unknown regions can be specified manually to preserve the important structures; in that case, the search is restricted to the specified lines or curves. Finally, the remaining unknown regions are filled using randomized correspondence with the structural constraint. Experiments show that both the quality and the speed of the presented technique are much better than those of existing methods.
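
The random-sampling-plus-propagation search described above follows the same idea popularized by PatchMatch-style correspondence algorithms. The sketch below is a much-simplified illustration of that general idea, not the authors' implementation; the patch size, iteration count, and helper names are assumptions.

```python
import numpy as np

def patch_dist(A, B, ay, ax, by, bx, p):
    """Sum of squared differences between the p x p patches at (ay, ax) in A and (by, bx) in B."""
    pa = A[ay:ay + p, ax:ax + p].astype(np.float64)
    pb = B[by:by + p, bx:bx + p].astype(np.float64)
    return np.sum((pa - pb) ** 2)

def random_correspondence(A, B, p=7, iters=4):
    """Approximate nearest-neighbor field from patches of A to patches of B,
    built by random initialization, propagation, and random search."""
    h, w = A.shape[0] - p + 1, A.shape[1] - p + 1
    hb, wb = B.shape[0] - p + 1, B.shape[1] - p + 1
    rng = np.random.default_rng(0)
    # Random initialization of the correspondence field.
    nnf = np.stack([rng.integers(0, hb, (h, w)), rng.integers(0, wb, (h, w))], axis=-1)
    cost = np.array([[patch_dist(A, B, y, x, *nnf[y, x], p) for x in range(w)] for y in range(h)])
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1            # alternate scan order each iteration
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: try the shifted match of the already-visited neighbor.
                for dy, dx in ((-step, 0), (0, -step)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cy = min(max(nnf[ny, nx, 0] - dy, 0), hb - 1)
                        cx = min(max(nnf[ny, nx, 1] - dx, 0), wb - 1)
                        c = patch_dist(A, B, y, x, cy, cx, p)
                        if c < cost[y, x]:
                            nnf[y, x], cost[y, x] = (cy, cx), c
                # Random search: sample around the current best with a shrinking radius.
                r = max(hb, wb)
                while r >= 1:
                    cy = int(np.clip(nnf[y, x, 0] + rng.integers(-r, r + 1), 0, hb - 1))
                    cx = int(np.clip(nnf[y, x, 1] + rng.integers(-r, r + 1), 0, wb - 1))
                    c = patch_dist(A, B, y, x, cy, cx, p)
                    if c < cost[y, x]:
                        nnf[y, x], cost[y, x] = (cy, cx), c
                    r //= 2
    return nnf
```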

Author(s):  
Michael Sherer ◽  
Ebin Scaria

Many programs have a fixed directed-graph structure in the way they are processed; in particular, computer vision systems often employ this kind of pipe-and-filter structure. It is desirable to take advantage of the inherent parallelism in such a system. Additionally, such systems need to run in real time for robotics applications, where platforms must make time-critical decisions, so any additional performance gain is beneficial. Going further, the platform may need to make the best decision it can by a given time so that newer data can be processed; thus, a timeout that returns a good-enough result may be better than operating on outdated information.
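
As a hedged illustration of the timeout idea only (the function names and deadline are assumptions, not part of the described system), the sketch below runs one pipe-and-filter stage in a worker thread and, if it misses its deadline, falls back to the most recent completed result rather than blocking the pipeline.

```python
import concurrent.futures

# One worker per stage, created once so per-frame submissions are cheap.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def run_stage_with_timeout(filter_fn, frame, last_result, timeout_s=0.05):
    """Run one pipeline stage; if it misses the deadline, return the last
    completed result so downstream stages keep working on fresh-enough data."""
    future = _pool.submit(filter_fn, frame)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # The worker keeps running in the background; we simply fall back.
        return last_result
```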


2020 ◽  
Author(s):  
Frederico Limberger ◽  
Manuel Oliveira

Automatic detection of planar regions in point clouds is an important step for many graphics, image processing, and computer vision applications. While laser scanners and digital photography have allowed us to capture increasingly larger datasets, previous approaches for planar region detection are computationally expensive, precluding their use in real-time applications. We present an O(n log n) technique for plane detection in unorganized point clouds based on an efficient Hough-transform voting scheme. It works by clustering sets of approximately co-planar points and by casting votes for these clusters on a spherical accumulator using a trivariate Gaussian kernel. A comparison with competing techniques shows that our approach is considerably faster and scales significantly better than previous ones, being the first practical solution for deterministic plane detection in large unorganized point clouds.
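
For reference, the sketch below implements the textbook brute-force Hough voting scheme for planes, where each point votes, for every candidate normal direction, in the distance bin of the plane passing through it. It is only a baseline to make the voting idea concrete; the paper's O(n log n) clustered, Gaussian-weighted accumulator is not reproduced here, and the bin counts are assumptions.

```python
import numpy as np

def hough_planes(points, n_theta=30, n_phi=30, n_rho=100, top_k=3):
    """Brute-force Hough voting for planes in an (n, 3) point cloud.
    A plane is parameterized by a unit normal n(theta, phi) and offset rho,
    with n . p = rho for every point p on the plane."""
    thetas = np.linspace(0.0, np.pi, n_theta)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    t, ph = np.meshgrid(thetas, phis, indexing="ij")
    normals = np.stack([np.sin(t) * np.cos(ph),
                        np.sin(t) * np.sin(ph),
                        np.cos(t)], axis=-1).reshape(-1, 3)

    rho_max = np.linalg.norm(points, axis=1).max()
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)

    # Each point votes once per candidate normal, in the rho bin that n . p falls into.
    acc = np.zeros((normals.shape[0], n_rho), dtype=np.int64)
    rhos = points @ normals.T                     # (n_points, n_normals)
    for j in range(normals.shape[0]):
        hist, _ = np.histogram(rhos[:, j], bins=rho_edges)
        acc[j] += hist

    # Return the top_k accumulator peaks as (normal, rho) plane candidates.
    planes = []
    for idx in np.argsort(acc, axis=None)[::-1][:top_k]:
        j, r = np.unravel_index(idx, acc.shape)
        planes.append((normals[j], 0.5 * (rho_edges[r] + rho_edges[r + 1])))
    return planes
```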


1989 ◽  
Author(s):  
Insup Lee ◽  
Susan Davidson ◽  
Victor Wolfe

2020 ◽  
Vol 15 (2) ◽  
pp. 144-196 ◽  
Author(s):  
Mohammad R. Khosravi ◽  
Sadegh Samadi ◽  
Reza Mohseni

Background: Real-time video coding is a very active area of research with extensive applications in remote sensing and medical imaging, and many research works and multimedia standards have been developed for this purpose. Some processing ideas in the area focus on a second-step (additional) compression of videos already coded by existing standards such as MPEG 4.14. Materials and Methods: In this article, several techniques with different complexity orders are evaluated for the video compression problem. All compared techniques are based on interpolation algorithms in the spatial domain. Specifically, the data are produced by four interpolators of differing computational complexity: the Fixed Weights Quartered Interpolation (FWQI) technique, Nearest Neighbor (NN), Bi-Linear (BL), and Cubic Convolution (CC). They are applied to the compression of HD color videos for real-time applications, real frames of video synthetic aperture radar (video SAR, or ViSAR), and a high-resolution medical sample. Results: Comparative results are reported for three metrics, namely two reference-based Quality Assessment (QA) measures and an edge preservation factor, to give a general picture of the different dimensions of the problem. Conclusion: The comparisons show a clear trade-off among the video codecs in terms of similarity to a reference, preservation of high-frequency edge information, and computational complexity.
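
The evaluation setup described above can be pictured as downsampling each frame, reconstructing it with interpolators of increasing cost, and scoring the reconstruction against the original. The sketch below does this with OpenCV's nearest-neighbor, bilinear, and bicubic resamplers and a PSNR score; it is only an illustration under those assumptions, and neither the FWQI interpolator nor the article's actual quality metrics are reproduced.

```python
import cv2
import numpy as np

def psnr(ref, test):
    """Peak signal-to-noise ratio between a reference frame and its reconstruction (8-bit)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def compare_interpolators(frame, factor=2):
    """Downsample a frame by `factor`, reconstruct it with three interpolators
    of increasing computational cost, and report reconstruction quality."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (w // factor, h // factor), interpolation=cv2.INTER_AREA)
    methods = {
        "nearest":  cv2.INTER_NEAREST,   # cheapest
        "bilinear": cv2.INTER_LINEAR,
        "bicubic":  cv2.INTER_CUBIC,     # most expensive of the three
    }
    return {name: psnr(frame, cv2.resize(small, (w, h), interpolation=flag))
            for name, flag in methods.items()}
```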


Author(s):  
Mohsen Ansari ◽  
Amir Yeganeh-Khaksar ◽  
Sepideh Safari ◽  
Alireza Ejlali

Author(s):  
R.K. Clark ◽  
I.B. Greenberg ◽  
P.K. Boucher ◽  
T.F. Lunt ◽  
P.G. Neumann ◽  
...  

Data ◽  
2020 ◽  
Vol 6 (1) ◽  
pp. 1
Author(s):  
Ahmed Elmogy ◽  
Hamada Rizk ◽  
Amany M. Sarhan

In data mining, outlier detection is a major challenge, as it plays an important role in many applications such as medical data analysis, image processing, fraud detection, and intrusion detection. An extensive variety of clustering-based approaches have been developed to detect outliers; however, they are by nature time-consuming, which restricts their use in real-time applications. Furthermore, outlier detection requests are handled one at a time, meaning each request is initiated individually with a particular set of parameters. In this paper, the first on-the-fly clustering-based outlier detection framework, On the Fly Clustering Based Outlier Detection (OFCOD), is presented. OFCOD enables analysts to effectively detect outliers on request, even within huge datasets. The proposed framework has been tested and evaluated on two real-world datasets with different features and applications, one with 699 records and another with five million records. The experimental results show that the proposed framework outperforms existing approaches across several evaluation metrics.
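
OFCOD itself is not reproduced here, but the general clustering-based idea it builds on can be sketched as follows: cluster the data, then flag points that lie unusually far from their assigned cluster center. The number of clusters and the percentile threshold below are arbitrary assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_based_outliers(X, n_clusters=5, pct=97.5, random_state=0):
    """Flag points whose distance to their assigned cluster center exceeds the
    `pct` percentile of all such distances (a generic clustering-based detector,
    not the OFCOD framework itself)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    return dists > np.percentile(dists, pct)

# Example: a tight blob plus a few far-away points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.uniform(8, 10, (5, 2))])
print(np.where(cluster_based_outliers(X))[0])  # indices of the flagged points
```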

