Fully End-to-End Composite Recurrent Convolution Network for Deformable Facial Tracking In The Wild

Author(s):  
Decky Aspandi ◽  
Oriol Martinez ◽  
Federico Sukno ◽  
Xavier Binefa

Author(s):  
Ojasvi Yadav ◽  
Koustav Ghosal ◽  
Sebastian Lutz ◽  
Aljosa Smolic

Abstract
We address the problem of exposure correction of dark, blurry and noisy images captured in low-light conditions in the wild. Classical image-denoising filters work well in the frequency space but are constrained by several factors, such as the correct choice of thresholds and frequency estimates. On the other hand, traditional deep networks are trained end to end in the RGB space by formulating this task as an image translation problem. However, this is done without any explicit constraints on the inherent noise of the dark images and thus produces noisy and blurry outputs. To this end, we propose a DCT/FFT-based multi-scale loss function which, when combined with traditional losses, trains a network to translate the important features for visually pleasing output. Our loss function is end-to-end differentiable, scale-agnostic and generic; i.e., it can be applied to both RAW and JPEG images in most existing frameworks without additional overhead. Using this loss function, we report significant improvements over the state of the art using quantitative metrics and subjective tests.
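
The paper's exact loss is not reproduced here, but a minimal sketch of a frequency-domain multi-scale loss of this kind could look as follows in PyTorch (the function name, scale count, and log-magnitude comparison are illustrative assumptions, not the authors' formulation):

```python
import torch
import torch.nn.functional as F

def fft_multiscale_loss(pred, target, num_scales=3):
    """L1 distance between log-magnitude spectra at several spatial scales.

    pred, target: (B, C, H, W) image tensors in [0, 1].
    """
    loss = 0.0
    for s in range(num_scales):
        # Compare log-magnitude spectra; the log compresses dynamic range
        # so low- and high-frequency errors both contribute.
        pred_mag = torch.log1p(torch.abs(torch.fft.fft2(pred)))
        target_mag = torch.log1p(torch.abs(torch.fft.fft2(target)))
        loss = loss + F.l1_loss(pred_mag, target_mag)
        if s < num_scales - 1:
            # Halve the resolution for the next, coarser scale.
            pred = F.avg_pool2d(pred, kernel_size=2)
            target = F.avg_pool2d(target, kernel_size=2)
    return loss / num_scales
```

Since every operation here (FFT, pooling, L1) is differentiable, such a term can simply be added to standard RGB-space losses without changing the training loop, consistent with the abstract's claim of no additional overhead.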


2021 ◽  
Vol 25 (1) ◽  
pp. 22-34 ◽  
Author(s):  
Konstantinos Kyritsis ◽  
Christos Diou ◽  
Anastasios Delopoulos

2017 ◽  
Vol 37 (4-5) ◽  
pp. 492-512 ◽  
Author(s):  
Julie Dequaire ◽  
Peter Ondrúška ◽  
Dushyant Rao ◽  
Dominic Wang ◽  
Ingmar Posner

This paper presents a novel approach for tracking static and dynamic objects for an autonomous vehicle operating in complex urban environments. Whereas traditional approaches to tracking often feature numerous hand-engineered stages, this method is learned end-to-end and can directly predict a fully unoccluded occupancy grid from raw laser input. We employ a recurrent neural network to capture the state and evolution of the environment, and train the model in an entirely unsupervised manner. In doing so, our approach is comparable to model-free multi-object tracking, although we do not explicitly perform the underlying data association. Further, we demonstrate that the representation learned for the tracking task can be leveraged via inductive transfer to train an object detector in a data-efficient manner. We motivate a number of architectural features and show the positive contribution of dilated convolutions and dynamic and static memory units to the task of tracking and classifying complex dynamic scenes through full occlusion. Our experimental results illustrate the ability of the model to track cars, buses, pedestrians, and cyclists from both moving and stationary platforms. Further, we compare and contrast the approach with a more traditional model-free multi-object tracking pipeline, demonstrating that it can more accurately predict future states of objects from current inputs.
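
As a rough illustration of this kind of architecture, the sketch below pairs a dilated-convolution GRU cell with a per-step occupancy head. It is a simplification under assumed input shapes, not the authors' model (which, for instance, separates static and dynamic memory):

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """A GRU cell whose gates are dilated 3x3 convolutions over a 2D grid."""
    def __init__(self, in_ch, hid_ch, dilation=2):
        super().__init__()
        pad = dilation  # keeps the spatial size unchanged for 3x3 kernels
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3,
                               padding=pad, dilation=dilation)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3,
                              padding=pad, dilation=dilation)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_new

class OccupancyTracker(nn.Module):
    """Maps a sequence of raw occupancy observations to unoccluded grids."""
    def __init__(self, hid_ch=16):
        super().__init__()
        self.hid_ch = hid_ch
        self.cell = ConvGRUCell(in_ch=2, hid_ch=hid_ch)  # (occupied, visible)
        self.head = nn.Conv2d(hid_ch, 1, kernel_size=1)

    def forward(self, frames):
        # frames: (T, B, 2, H, W) sequence of partially observed grids.
        t, b, _, h_dim, w_dim = frames.shape
        h = frames.new_zeros(b, self.hid_ch, h_dim, w_dim)
        outputs = []
        for x in frames:
            h = self.cell(x, h)  # hidden state can carry occluded objects
            outputs.append(torch.sigmoid(self.head(h)))
        return torch.stack(outputs)  # per-step occupancy probabilities
```

The recurrent hidden state is what lets the model keep predicting an object's occupancy while it is fully occluded in the current input, and the dilated convolutions enlarge the receptive field without extra downsampling.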


Author(s):  
Shuaitao Zhang ◽  
Yuliang Liu ◽  
Lianwen Jin ◽  
Yaoxiong Huang ◽  
Songxuan Lai

A new method is proposed for removing text from natural images. The challenge is to first accurately localize text at the stroke level and then replace it with a visually plausible background. Unlike previous methods that require image patches to erase scene text, our method, namely ensconce network (EnsNet), can operate end-to-end on a single image without any prior knowledge. The overall structure is an end-to-end trainable FCN-ResNet-18 network with a conditional generative adversarial network (cGAN). The features of the former are first enhanced by a novel lateral connection structure and then refined by four carefully designed losses: a multiscale regression loss and a content loss, which capture the global discrepancy between features at different levels; and a texture loss and a total variation loss, which primarily target filling the text region and preserving the realism of the background. The latter is a novel local-sensitive GAN, which attentively assesses the local consistency of the text-erased regions. Both qualitative and quantitative sensitivity experiments on synthetic images and the ICDAR 2013 dataset demonstrate that each component of EnsNet is essential to achieving good performance. Moreover, EnsNet significantly outperforms previous state-of-the-art methods in terms of all metrics. In addition, a qualitative experiment conducted on the SBMNet dataset further demonstrates that the proposed method also performs well on general object removal tasks (such as pedestrians). EnsNet is extremely fast: it runs at 333 fps on an i5-8600 CPU.
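
Two of the named loss terms are standard enough to sketch. The snippet below shows plausible forms of the total variation loss and a text-mask-weighted regression loss (the weighting scheme and function names are assumptions, not EnsNet's exact formulation):

```python
import torch

def total_variation_loss(img):
    """Penalizes abrupt spatial changes, smoothing the inpainted region."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

def masked_regression_loss(pred, target, text_mask, w_text=10.0):
    """L1 loss that weights text-stroke pixels more heavily than background.

    text_mask: (B, 1, H, W), 1 inside text strokes, 0 elsewhere.
    """
    err = (pred - target).abs()
    return (w_text * text_mask * err + (1.0 - text_mask) * err).mean()
```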


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1074
Author(s):  
Song-Lu Chen ◽  
Qi Liu ◽  
Jia-Wei Ma ◽  
Chun Yang

Because license plates in natural scene images vary in scale and orientation, license plate detection is challenging in many applications. In this work, a novel network that combines indirect and direct branches is proposed for license plate detection in the wild. The indirect detection branch detects small license plates with high precision in a coarse-to-fine scheme that exploits vehicle–plate relationships. The direct detection branch detects the license plate directly in the input image, reducing the false negatives that the indirect branch incurs when a vehicle is missed. We also propose a universal multidirectional license plate refinement method that localizes the four corners of the license plate. Finally, we construct an end-to-end trainable network for license plate detection by combining these two branches via post-processing operations. The network can effectively detect small license plates and localize multidirectional license plates in real applications. To our knowledge, the proposed method is the first to combine indirect and direct methods into an end-to-end network for license plate detection. Extensive experiments verify that our method significantly outperforms both indirect and direct methods.
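
The combination step can be pictured as a de-duplication of the two branches' outputs. The sketch below is a generic IoU-based merge under assumed box formats, not the paper's actual post-processing:

```python
# Boxes are (x1, y1, x2, y2) tuples in pixel coordinates.
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def merge_branches(indirect_boxes, direct_boxes, iou_thresh=0.5):
    """Keep all indirect (vehicle->plate) detections; add direct detections
    that do not overlap any of them, i.e. plates whose vehicle was missed."""
    merged = list(indirect_boxes)
    for d in direct_boxes:
        if all(iou(d, m) < iou_thresh for m in merged):
            merged.append(d)
    return merged
```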


Author(s):  
Yizhang Xia ◽  
Bailing Zhang ◽  
Frans Coenen

With the rise of crimes associated with Automated Teller Machines (ATMs), security reinforcement by surveillance techniques has become a hot topic on the security agenda. As a result, cameras are frequently installed with ATMs to capture the facial images of users, the main objective being to support follow-up criminal investigations in the event of an incident. In the case of misuse, however, the user's face is often occluded; face occlusion detection has therefore become very important for preventing crimes connected with ATM usage. Traditional approaches to the problem typically comprise a succession of steps: localization, segmentation, feature extraction and recognition. This paper proposes an end-to-end facial occlusion detection framework that is robust and effective, combining a region proposal algorithm with Convolutional Neural Networks (CNNs). The framework follows a coarse-to-fine strategy consisting of two CNNs: the first detects the head within an upper-body image, while the second determines which facial part is occluded in the head image. In comparison with previous approaches, the use of CNNs is appealing from a system point of view, as the design follows the end-to-end principle and the model operates directly on image pixels. For evaluation, a face occlusion database of over 50,000 images with annotated facial parts was used. Experimental results revealed that the proposed framework is very effective. Using the bespoke face occlusion dataset, the Aleix and Robert (AR) face dataset, and the Labeled Faces in the Wild (LFW) database, we achieved accuracies of 85.61%, 97.58% and 100% for head detection when the Intersection over Union (IoU) is larger than 0.5, and 94.55%, 98.58% and 95.41% for occlusion discrimination, respectively.
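
The two-stage pipeline can be summarized in a few lines. The sketch below assumes hypothetical head_detector and occlusion_classifier modules and a fixed crop size; it is not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def detect_occlusion(upper_body_img, head_detector, occlusion_classifier):
    """Stage 1 localizes the head; stage 2 classifies the occluded part.

    upper_body_img: (1, 3, H, W) tensor.
    head_detector: CNN returning integer pixel coordinates (x1, y1, x2, y2).
    occlusion_classifier: CNN over the head crop returning scores for
        classes such as {eyes occluded, mouth occluded, ..., unoccluded}.
    """
    x1, y1, x2, y2 = head_detector(upper_body_img)
    head_crop = upper_body_img[:, :, y1:y2, x1:x2]
    # Resize the crop to the classifier's fixed input size (assumed 64x64).
    head_crop = F.interpolate(head_crop, size=(64, 64),
                              mode='bilinear', align_corners=False)
    scores = occlusion_classifier(head_crop)
    return scores.argmax(dim=1)  # predicted occlusion class
```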


2020 ◽  
Vol 29 ◽  
pp. 8760-8775
Author(s):  
Chongyu Liu ◽  
Yuliang Liu ◽  
Lianwen Jin ◽  
Shuaitao Zhang ◽  
Canjie Luo ◽  
...  