interest point detector
Recently Published Documents


TOTAL DOCUMENTS: 48 (last five years: 3)
H-INDEX: 9 (last five years: 0)

2021
Author(s): Jawad Khan

Due to the number of image editing tools available online, image tampering has become easy to execute, and the quality of these tools often lets tampered regions escape the naked eye. One such tampering method is copy-move tampering, in which a region of the image is copied and pasted elsewhere in the same image. We propose a method to detect it. First, the image is divided into blocks and each block is transformed with the discrete cosine transform. Next, the dimensionality of the block features is reduced using Gaussian RBF kernel PCA. Finally, a new iterative interest point detector is applied, and the result is fed to a CNN that predicts whether the image has been forged. The experimental results showed that the algorithm achieved an excellent accuracy, outperforming state-of-the-art methods.
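A minimal sketch of the block-preprocessing stages mentioned in the abstract, assuming 8x8 non-overlapping blocks, OpenCV for the DCT, scikit-learn's KernelPCA with an RBF kernel, and a hypothetical input file; the block size, gamma, and number of components are illustrative choices, not values from the paper, and the iterative interest point detector and CNN stages are omitted.

```python
import numpy as np
import cv2
from sklearn.decomposition import KernelPCA

def block_dct_features(gray, block=8):
    """Split a grayscale image into non-overlapping blocks and return the
    flattened 2-D DCT coefficients of each block."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block            # crop so the image tiles evenly
    feats = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = np.float32(gray[y:y + block, x:x + block])
            feats.append(cv2.dct(patch).ravel())    # DCT coefficients of the block
    return np.array(feats)

# "suspect.png" is a placeholder path for the image under analysis.
gray = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)
features = block_dct_features(gray)

# Reduce each block's DCT vector with Gaussian (RBF) kernel PCA before the
# later interest-point and CNN stages described in the abstract.
kpca = KernelPCA(n_components=16, kernel="rbf", gamma=1e-3)
reduced = kpca.fit_transform(features)
print(reduced.shape)   # (number_of_blocks, 16)
```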


2019, Vol 28 (03), pp. 1
Author(s): Yanshan Li, Qingteng Li, Qinghua Huang, Rongjie Xia, Xuelong Li

Author(s): Abel Méndez-Porras, Jorge Alfaro-Velasco, Marcelo Jenkins, Alexandra Martínez Porras

Context: Mobile applications support a set of user-interaction features that are independent of the application logic. Rotating the device, scrolling, and zooming are examples of such features. Some bugs in mobile applications can be attributed to user-interaction features. Objective: This paper proposes and evaluates a bug analyzer, based on user-interaction features, that uses digital image processing to find bugs. Method: Our bug analyzer detects bugs by comparing the similarity between images taken before and after a user interaction. SURF, an interest point detector and descriptor, is used to compare the images. To evaluate the bug analyzer, we conducted a case study with 15 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed, to finally decide whether there was a bug. Results: We performed a total of 49 user-interaction feature tests. Manual testing of the applications found 17 bugs, whereas image processing detected 15 bugs. Conclusions: 8 of the 15 mobile applications tested had bugs associated with user-interaction features. Our bug analyzer based on image processing was able to detect 88% (15 out of 17) of the user-interaction bugs found with manual testing.
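A minimal sketch of the before/after comparison step, assuming OpenCV's SURF implementation (which requires an opencv-contrib build with the non-free modules enabled) and a simple good-match ratio as the similarity percentage; the file names, ratio-test constant, and bug threshold are illustrative assumptions, not values from the paper.

```python
import cv2

def similarity(before_path, after_path, ratio=0.75):
    """Return the percentage of SURF interest points in the 'before' image
    that find a good match in the 'after' image."""
    img1 = cv2.imread(before_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(after_path, cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)   # interest points + descriptors
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Match descriptors and keep only matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    return 100.0 * len(good) / max(len(kp1), 1)

# Flag a possible user-interaction bug when too few interest points survive the
# interaction; the 50% threshold here is chosen purely for illustration.
if similarity("before_rotate.png", "after_rotate.png") < 50.0:
    print("possible user-interaction bug")
```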

