Duplicate Image Detection
Recently Published Documents


TOTAL DOCUMENTS: 47 (five years: 8)
H-INDEX: 8 (five years: 2)

Author(s):  
Anusha B

With the rapid development of Internet technology and the increasing use of mobile devices, it is very easy for users to capture, communicate, and share images over networks. The spectacular success of convolutional neural networks in the area of computer vision helps us match very similar features between images to detect duplicate versions of an image. In this project we use an ImageNet model, which provides a large database containing many images of different categories. The Flask framework is used in this project; it includes many libraries and modules that help the web developer write web applications. The user is allowed to upload an image, whose features are then extracted and fed to the CNN model. The CNN model calculates the similarity distance between the uploaded image and the images already present in the database and detects the top four images that are duplicate versions of the uploaded image.
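
As a concrete illustration of the pipeline this abstract describes, the sketch below extracts a feature vector for an uploaded image with a pretrained ImageNet CNN and ranks database images by similarity to return the four closest matches. The ResNet-50 backbone and cosine similarity are assumptions for illustration; the abstract does not name a specific architecture or distance measure.

```python
# Minimal sketch of the retrieval step: extract a feature vector for the
# uploaded image with a pretrained ImageNet CNN, then rank database images
# by cosine similarity and return the four closest matches.
# ResNet-50 and cosine similarity are assumptions, not the project's
# confirmed choices.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ImageNet backbone with the classifier head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(path: str) -> torch.Tensor:
    """Return an L2-normalized feature vector for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feat = backbone(img).squeeze(0)
    return feat / feat.norm()

def top_duplicates(query_path: str, db_paths: list[str], k: int = 4):
    """Rank database images by cosine similarity to the query image."""
    query = extract_features(query_path)
    db = torch.stack([extract_features(p) for p in db_paths])
    sims = db @ query  # dot product of unit vectors = cosine similarity
    idx = sims.argsort(descending=True)[:k]
    return [(db_paths[i], float(sims[i])) for i in idx]
```

In the described system, a function like top_duplicates would sit behind a Flask upload route that receives the user's image and renders the returned matches.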


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 255
Author(s):  
Yi Zhang ◽  
Shizhou Zhang ◽  
Ying Li ◽  
Yanning Zhang

Recently, both single-modality and cross-modality near-duplicate image detection tasks have received wide attention in the pattern recognition and computer vision community. Existing deep neural network-based methods have achieved remarkable performance on this task. However, most methods focus mainly on learning from each image of the pair individually, and thus make less use of the information shared between near-duplicate image pairs. In this paper, to make more use of the correlations between image pairs, we propose a spatial transformer comparing convolutional neural network (CNN) model to compare near-duplicate image pairs. Specifically, we first propose a comparing CNN framework, which is equipped with a cross-stream to fully learn the correlation information between image pairs while still considering the features of each image. Furthermore, to deal with the local deformations caused by cropping, translation, scaling, and non-rigid transformations, we introduce a spatial transformer comparing CNN model by incorporating a spatial transformer module into the comparing CNN architecture. To demonstrate the effectiveness of the proposed method on both the single-modality and cross-modality (Optical-InfraRed) near-duplicate image pair detection tasks, we conduct extensive experiments on three popular benchmark datasets, namely CaliforniaND (ND means near duplicate), Mir-Flickr Near Duplicate, and the TNO Multi-band Image Data Collection. The experimental results show that the proposed method achieves superior performance compared with many state-of-the-art methods on both tasks.
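
The sketch below illustrates, under stated assumptions, the two ideas this abstract combines: a spatial transformer module that predicts an affine warp to compensate for cropping, translation, and scaling before comparison, and a comparing network that scores an image pair jointly rather than embedding each image independently. Layer sizes, the 224x224 input assumption, and the vector-level fusion are illustrative simplifications, not the authors' exact architecture (their cross-stream learns richer correlation information between the two streams).

```python
# Hedged sketch: a spatial transformer (STN) that undoes affine
# deformations, feeding a comparing network that scores an image pair
# jointly. All layer sizes are illustrative assumptions; inputs are
# assumed to be 3x224x224.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Predicts a 2x3 affine matrix and resamples the input accordingly."""
    def __init__(self):
        super().__init__()
        self.loc_net = nn.Sequential(
            nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc = nn.Linear(10 * 52 * 52, 6)  # sized for 224x224 inputs
        # Initialize to the identity transform so training starts stably.
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.fc(self.loc_net(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class ComparingCNN(nn.Module):
    """Scores whether two (spatially aligned) images are near duplicates."""
    def __init__(self):
        super().__init__()
        self.stn = SpatialTransformer()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Simplified cross-stream: the pair is judged jointly on
        # concatenated features rather than per-image scores.
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, a, b):
        fa, fb = self.encoder(self.stn(a)), self.encoder(self.stn(b))
        return torch.sigmoid(self.head(torch.cat([fa, fb], dim=1)))
```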


Author(s):  
Zhili Zhou ◽  
Q. M. Jonathan Wu ◽  
Shaohua Wan ◽  
Wendi Sun ◽  
Xingming Sun

Mathematics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 644 ◽  
Author(s):  
Zhili Zhou ◽  
Kunde Lin ◽  
Yi Cao ◽  
Ching-Nung Yang ◽  
Yuling Liu

Due to the great success of convolutional neural networks (CNNs) in the area of computer vision, existing methods tend to match global or local CNN features between images for near-duplicate image detection. However, global CNN features are not robust enough against background clutter and partial occlusion, while local CNN features lead to high computational complexity in the feature matching step. To achieve high efficiency while maintaining good accuracy, we propose a coarse-to-fine feature matching scheme using both global and local CNN features for real-time near-duplicate image detection. In the coarse matching stage, we apply a sum-pooling operation to the convolutional feature maps (CFMs) to generate global CNN features, and match these global CNN features between a given query image and the database images to efficiently filter out most of the images irrelevant to the query. In the fine matching stage, local CNN features are extracted using the maximum values of the CFMs and the saliency map generated by the graph-based visual saliency detection (GBVS) algorithm. These local CNN features are then matched between images to detect the near-duplicate versions of the query. Experimental results demonstrate that our proposed method not only achieves real-time detection, but also provides higher accuracy than state-of-the-art methods.
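
A minimal sketch of the coarse matching stage follows: sum-pool the convolutional feature maps (CFMs) into one global descriptor per image, then keep only the database images whose similarity to the query passes a threshold. The VGG-16 backbone and the threshold value are assumptions for illustration, and the fine stage (local features from CFM maxima and the GBVS saliency map) is omitted.

```python
# Sketch of the coarse stage: sum-pool the convolutional feature maps
# into a global descriptor per image, then filter the database by
# similarity to the query. VGG-16 and the 0.6 threshold are assumptions.
import torch
import torchvision.models as models

# Truncate a pretrained CNN to its convolutional layers only.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
conv_backbone = vgg.features.eval()

def global_descriptor(img: torch.Tensor) -> torch.Tensor:
    """Sum-pool each channel of the CFMs into a scalar, giving one
    compact L2-normalized global vector per image (img: 3xHxW)."""
    with torch.no_grad():
        cfms = conv_backbone(img.unsqueeze(0))  # (1, C, H', W')
    desc = cfms.sum(dim=(2, 3)).squeeze(0)      # sum-pooling -> (C,)
    return desc / desc.norm()

def coarse_filter(query: torch.Tensor, db: torch.Tensor, threshold: float = 0.6):
    """Return indices of database descriptors similar enough to the query;
    only these survivors proceed to the fine, local-feature stage."""
    sims = db @ query  # cosine similarities for unit vectors
    return (sims >= threshold).nonzero(as_tuple=True)[0]
```

Because the global descriptor is a single C-dimensional vector per image, the coarse stage reduces to one matrix-vector product over the whole database, which is what makes real-time filtering feasible before the costlier local matching.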


2018 ◽  
Vol 27 (9) ◽  
pp. 4452-4464 ◽  
Author(s):  
Weiming Hu ◽  
Yabo Fan ◽  
Junliang Xing ◽  
Liang Sun ◽  
Zhaoquan Cai ◽  
...  

2018 ◽  
Vol 91 (6) ◽  
pp. 575-586 ◽  
Author(s):  
Hyunwoo Kim ◽  
SungRyull Sohn ◽  
Junmo Kim
