An image enhancement algorithm of video surveillance scene based on deep learning

2021 ◽  
Author(s):  
Wei‐wei Shen ◽  
Lin Chen ◽  
Shuai Liu ◽  
Yu‐Dong Zhang

Author(s):  
Yuma Kinoshita ◽  
Hitoshi Kiya

In this paper, we propose a novel hue-correction scheme for color-image-enhancement algorithms, including deep-learning-based ones. Although hue-correction schemes for color-image enhancement have already been proposed, no existing scheme can both perfectly remove perceptual hue distortion on the basis of CIEDE2000 and be applied to any image-enhancement algorithm. In contrast, the proposed scheme can perfectly remove, on the basis of CIEDE2000, the hue distortion caused by any image-enhancement algorithm, including deep-learning-based ones. Furthermore, the use of a gamut-mapping method in the proposed scheme enables us to compress colors into an output RGB color gamut without hue changes. Experimental results show that the proposed scheme completely corrects hue distortion caused by image-enhancement algorithms while maintaining the performance of those algorithms and ensuring that output images stay within the target color gamut.
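To make the idea of hue correction concrete, here is a deliberately simplified sketch that is NOT the paper's CIEDE2000-based scheme: it preserves hue naively in HSV space by copying the hue channel of the input pixel onto the enhanced pixel, keeping the enhanced saturation and value. The function name and interface are hypothetical, chosen only for illustration.

```python
# Naive hue-preservation sketch (illustration only; the paper's scheme
# instead removes hue distortion measured with CIEDE2000 and includes
# gamut mapping, which this toy example does not attempt).
import colorsys

def correct_hue(original_rgb, enhanced_rgb):
    """Return enhanced_rgb with its hue replaced by original_rgb's hue.

    Both arguments are (r, g, b) tuples with components in [0, 1].
    Saturation and value of the enhanced pixel are kept, so the
    enhancement's brightness/contrast effect survives while the
    perceived hue reverts to that of the input.
    """
    h_orig, _, _ = colorsys.rgb_to_hsv(*original_rgb)
    _, s_enh, v_enh = colorsys.rgb_to_hsv(*enhanced_rgb)
    return colorsys.hsv_to_rgb(h_orig, s_enh, v_enh)
```

For example, if an enhancement shifts a pure-red pixel `(1.0, 0.0, 0.0)` toward orange, `correct_hue` maps the result back onto the red hue axis while keeping the enhanced saturation and brightness. A perceptually faithful version would operate in a uniform color space such as CIELAB rather than HSV.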


2021 ◽  
Vol 11 (13) ◽  
pp. 6085
Author(s):  
Jesus Salido ◽  
Vanesa Lomas ◽  
Jesus Ruiz-Santaquiteria ◽  
Oscar Deniz

There is a great need to implement preventive mechanisms against shootings and terrorist acts in public spaces with a large influx of people. While surveillance cameras have become common, the need for 24/7 monitoring and real-time response calls for automatic detection methods. This paper presents a study of three convolutional neural network (CNN) models applied to the automatic detection of handguns in video surveillance images. It investigates whether false positives can be reduced by including pose information, describing how the handguns are held, in the images of the training dataset. The results highlight the best average precision (96.36%) and recall (97.23%), obtained by RetinaNet fine-tuned with an unfrozen ResNet-50 backbone, and the best precision (96.23%) and F1 score (93.36%), obtained by YOLOv3 when trained on the dataset including pose information. This last architecture was the only one that showed a consistent improvement (around 2%) when pose information was expressly considered during training.
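The precision, recall, and F1 figures quoted above are related by the standard harmonic-mean formula, F1 = 2PR / (P + R). A minimal helper, purely to illustrate the metric (not code from the paper):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall.

    Both inputs are fractions in (0, 1]; the result is also in (0, 1].
    The harmonic mean penalizes imbalance: a detector with perfect
    precision but poor recall still gets a low F1.
    """
    return 2 * precision * recall / (precision + recall)
```

For instance, `f1_score(1.0, 0.5)` is about 0.667, well below the arithmetic mean of 0.75, which is why F1 is preferred when both false positives and misses matter, as in weapon detection.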


2021 ◽  
Author(s):  
Dang Bich Thuy Le ◽  
Meredith Sadinski ◽  
Aleksandar Nacev ◽  
Ram Narayanan ◽  
Dinesh Kumar
