Source Camera Identification
Recently Published Documents

TOTAL DOCUMENTS: 132 (five years: 35)
H-INDEX: 18 (five years: 3)

2021 ◽ Author(s): Eitan Flor, Ramazan Aygun, Suat Mercan, Kemal Akkaya

2021 ◽ Vol 11 (15) ◽ pp. 6752 ◽ Author(s): Changhui You, Hong Zheng, Zhongyuan Guo, Tianyu Wang, Xiongbin Wu

In recent years, source camera identification has become a research hotspot in image forensics and has received increasing attention. It has high application value in combating the spread of pornographic photos, in copyright authentication of art photos, in image tampering forensics, and in related tasks. Although existing algorithms have greatly advanced research on source camera identification, they still cannot effectively reduce the interference of image content with forensic analysis. To suppress the influence of image content on source camera identification, a multiscale content-independent feature fusion network (MCIFFN) is proposed. MCIFFN is composed of three parallel branch networks. Before an image is sent to the first two branches, an adaptive filtering module filters out the image content and extracts noise features, which are then fed to the corresponding convolutional neural networks (CNNs). To retain information related to image color, the third branch applies no preprocessing and sends the image data directly to its CNN. Finally, the content-independent features of different scales extracted by the three branches are fused, and the fused features are used for image source identification. The CNN feature extraction network in MCIFFN is a shallow network embedded with a squeeze-and-excitation (SE) structure, called SE-SCINet. Experimental results show that the proposed MCIFFN is effective and robust, and that classification accuracy is improved by approximately 2% compared with SE-SCINet alone.
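The abstract above describes a three-branch architecture: two branches operate on content-suppressed noise residuals produced by an adaptive filtering module, a third branch sees the raw RGB data to preserve color information, and the branch features are fused for classification. The sketch below illustrates that overall structure in PyTorch; it is not the authors' SE-SCINet, and the fixed high-pass initialization, layer widths, and depths are illustrative assumptions.

```python
# Sketch of a three-branch, SE-augmented fusion network (illustrative, not the paper's SE-SCINet).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        w = F.adaptive_avg_pool2d(x, 1).flatten(1)        # squeeze: global average pool
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))  # excitation: per-channel weights
        return x * w.view(x.size(0), -1, 1, 1)

class NoiseFilter(nn.Module):
    """Learnable high-pass front end (assumption) that suppresses scene content
    and passes a noise residual to the branch CNN, standing in for the paper's
    adaptive filtering module."""
    def __init__(self):
        super().__init__()
        kernel = torch.tensor([[-1., 2., -1.],
                               [ 2., -4., 2.],
                               [-1., 2., -1.]]) / 4.0
        self.conv = nn.Conv2d(3, 3, 3, padding=1, groups=3, bias=False)
        self.conv.weight.data.copy_(kernel.repeat(3, 1, 1, 1))

    def forward(self, x):
        return self.conv(x)

def branch_cnn(width=32):
    """Shallow SE-embedded feature extractor used by each branch (widths are assumptions)."""
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.BatchNorm2d(width), nn.ReLU(),
        SEBlock(width),
        nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.BatchNorm2d(width * 2), nn.ReLU(),
        SEBlock(width * 2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MCIFFNSketch(nn.Module):
    def __init__(self, num_cameras, width=32):
        super().__init__()
        self.filter1, self.filter2 = NoiseFilter(), NoiseFilter()
        self.branch1, self.branch2, self.branch3 = branch_cnn(width), branch_cnn(width), branch_cnn(width)
        self.classifier = nn.Linear(3 * width * 2, num_cameras)

    def forward(self, x):
        f1 = self.branch1(self.filter1(x))   # noise-residual branch 1
        f2 = self.branch2(self.filter2(x))   # noise-residual branch 2
        f3 = self.branch3(x)                 # raw-RGB branch keeps color information
        return self.classifier(torch.cat([f1, f2, f3], dim=1))  # fuse, then classify

# Example forward pass on a batch of two 64x64 patches for a 10-camera problem.
logits = MCIFFNSketch(num_cameras=10)(torch.randn(2, 3, 64, 64))
```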


Sensors ◽ 2021 ◽ Vol 21 (14) ◽ pp. 4701 ◽ Author(s): Yunxia Liu, Zeyu Zou, Yang Yang, Ngai-Fong Bonnie Law, Anil Anthony Bharath

Source camera identification has long been a hot topic in image forensics. Besides conventional feature-engineering algorithms built by studying the traces left during image capture, several deep-learning-based methods have also emerged recently. However, identification performance is susceptible to image content and is far from satisfactory for small image patches in demanding real-world applications. In this paper, an efficient patch-level source camera identification method based on a convolutional neural network is proposed. First, to obtain improved robustness at reduced training cost, representative patches are selected according to multiple criteria to enhance the diversity of the training data. Second, a fine-grained multiscale deep residual prediction module is proposed to reduce the impact of scene content. Finally, a modified VGG network is proposed for source camera identification at the brand, model, and instance levels. A more critical patch-level evaluation protocol is also proposed for fair performance comparison. Extensive experimental results show that the proposed method achieves better results than state-of-the-art algorithms.
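As a concrete illustration of the patch-selection step described above, the sketch below scores fixed-size patches so that saturated and textureless regions are avoided and only the top-scoring patches are kept for training. The scoring function and thresholds are assumptions for illustration, not the paper's actual selection criteria, and the multiscale residual prediction and modified VGG stages are omitted.

```python
# Illustrative patch selection by local statistics (assumed criteria, not the paper's).
import numpy as np

def select_patches(image, patch_size=64, stride=64, num_patches=32):
    """Return the top-scoring patches of a single-channel image with values in [0, 1]."""
    scored = []
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size]
            mean, std = patch.mean(), patch.std()
            # Penalise saturation (mean near 0 or 1) and reward moderate texture (std).
            score = (1.0 - abs(2.0 * mean - 1.0)) * min(std / 0.1, 1.0)
            scored.append((score, y, x, patch))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [patch for _, _, _, patch in scored[:num_patches]]

# Example: pick 32 representative 64x64 patches from a random grayscale "image".
patches = select_patches(np.random.rand(1024, 1024))
```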


Author(s): Vittoria Bruni, Michela Tartaglione, Domenico Vitulano

This paper presents a method for Photo Response Non-Uniformity (PRNU) pattern-noise-based camera identification. It takes advantage of the coherence between different PRNU estimations restricted to specific image regions. The main idea rests on the following observations: different methods can be used to estimate the PRNU contribution in a given image, and the estimation does not have the same accuracy across the whole image, since a more faithful estimate is expected in flat regions. Hence, two different estimations of the reference PRNU are considered in the classification procedure, and the coherence of the similarity metric between them, evaluated in three different image regions, is used as the classification feature. Greater coherence is expected in the matching case, i.e. when the image has been acquired by the analysed device, than in the non-matching case, where the similarity metric is essentially noise and therefore unpredictable. The presented results show that the proposed approach provides classification results comparable to, and often better than, some state-of-the-art methods, while remaining robust to the unavailability of flat-field (FF) images, to devices of the same brand or model, and to uploading/downloading from social networks.
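A minimal sketch of PRNU matching with a region-coherence check is given below, under simplifying assumptions: a Gaussian filter stands in for the wavelet denoiser typically used to extract the noise residual, plain normalised cross-correlation is the similarity metric, and the image is split into horizontal strips rather than the specific regions analysed in the paper.

```python
# Simplified PRNU matching with a region-coherence feature (assumptions noted above).
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=1.0):
    """Noise residual W = I - denoise(I); the PRNU term is embedded in W."""
    return image - gaussian_filter(image, sigma)

def ncc(a, b):
    """Normalised cross-correlation between two equally sized arrays."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def region_coherence(image, reference_prnu, n_regions=3):
    """Similarity of the residual to the camera's reference PRNU in several
    horizontal strips; similar scores across strips suggest a genuine match."""
    w = noise_residual(image)
    strips = np.array_split(np.arange(image.shape[0]), n_regions)
    scores = [ncc(w[rows], (reference_prnu * image)[rows]) for rows in strips]
    return scores, np.std(scores)   # low spread across regions -> more coherent

# Example with synthetic data: a camera "fingerprint" K and an image carrying it.
rng = np.random.default_rng(0)
K = 0.02 * rng.standard_normal((256, 256))
img = np.clip(rng.random((256, 256)) * (1 + K), 0, 1)
scores, spread = region_coherence(img, K)
```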


2021 ◽ Vol 100 ◽ pp. 102079 ◽ Author(s): Hui Lin, Yan Wo, Yuanlu Wu, Ke Meng, Guoqiang Han

2021 ◽ Vol 100 ◽ pp. 102076 ◽ Author(s): Guowen Zhang, Bo Wang, Fei Wei, Kaize Shi, Yue Wang, ...
