Spatio-Temporal Inference Transformer Network for Video Inpainting

Author(s):  
Gajanan Tudavekar ◽  
Santosh S. Saraf ◽  
Sanjay R. Patil

Video inpainting aims to complete the missing regions of video frames in a visually pleasing way. It is a challenging task due to the variety of motions across different frames. Existing methods usually use attention models to inpaint videos by retrieving the damaged content from other frames. Nevertheless, these methods suffer from irregular attention weights across the spatio-temporal dimensions, giving rise to artifacts in the inpainted video. To overcome this problem, the Spatio-Temporal Inference Transformer Network (STITN) is proposed. The STITN aligns the frames to be inpainted and inpaints all the frames concurrently, and a spatio-temporal adversarial loss function further improves the STITN. Our method performs considerably better than existing deep learning approaches in quantitative and qualitative evaluation.
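
A minimal sketch of the general idea, not the authors' STITN: when frames are split into spatio-temporal patch tokens, a transformer-style attention layer lets every masked patch borrow content from valid patches in all frames at once. The patch size, embedding width, and single attention layer below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class SpatioTemporalAttentionFill(nn.Module):
        def __init__(self, patch=8, dim=256, heads=4):
            super().__init__()
            self.patch = patch
            self.embed = nn.Linear(3 * patch * patch, dim)    # patch pixels -> token
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.unembed = nn.Linear(dim, 3 * patch * patch)  # token -> patch pixels

        def forward(self, frames):                  # frames: (B, T, 3, H, W)
            B, T, C, H, W = frames.shape
            p = self.patch
            # split every frame into non-overlapping patches and flatten them to tokens
            patches = frames.unfold(3, p, p).unfold(4, p, p)  # (B,T,C,H/p,W/p,p,p)
            tokens = patches.permute(0, 1, 3, 4, 2, 5, 6).reshape(B, -1, C * p * p)
            x = self.embed(tokens)
            x, _ = self.attn(x, x, x)               # every patch attends to all frames
            out = self.unembed(x).reshape(B, T, H // p, W // p, C, p, p)
            return out.permute(0, 1, 4, 2, 5, 3, 6).reshape(B, T, C, H, W)

    video = torch.rand(1, 5, 3, 64, 64)             # 5 frames; masked regions set to zero
    completed = SpatioTemporalAttentionFill()(video)

A full model would stack such blocks, condition on the inpainting mask, and train with a reconstruction term plus a spatio-temporal adversarial loss of the kind the abstract describes.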

Cryptography ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 30
Author(s):  
Bang Yuan Chong ◽  
Iftekhar Salam

This paper studies the use of deep learning (DL) models under a known-plaintext scenario. The goal of the models is to predict the secret key of a cipher using DL techniques. We investigate the DL techniques against different ciphers, namely the Simplified Data Encryption Standard (S-DES), Speck, Simeck, and Katan. For S-DES, we examine the classification of the full key set, and the results are better than a random guess. However, we found it difficult to apply the same classification model beyond 2-round Speck. We also demonstrate that DL models trained under a known-plaintext scenario can successfully recover the random key of S-DES, whereas the same method is less successful when applied to the modern ciphers Speck, Simeck, and Katan. The ciphers Simeck and Katan are further investigated using the DL models, but with a text-based key; this setting revealed linear approximations between the plaintext–ciphertext pairs and the text-based key.
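
A minimal sketch of the known-plaintext setup, not the paper's models: a small network is trained on random (plaintext, ciphertext) pairs to predict the 10 key bits of S-DES, one sigmoid output per bit. The sdes_encrypt stand-in and the layer sizes below are illustrative assumptions; a real S-DES implementation would be substituted for the stub.

    import torch
    import torch.nn as nn

    def sdes_encrypt(pt, key):
        # placeholder only: substitute a real S-DES implementation (8-bit block,
        # 10-bit key); this XOR stand-in merely keeps the script self-contained.
        return pt ^ (key & 0xFF)

    def bits(x, n):                                  # integer -> float bit vector
        return torch.tensor([(x >> i) & 1 for i in range(n)], dtype=torch.float32)

    def make_batch(batch_size=256):                  # fresh random keys and plaintexts
        pts = torch.randint(0, 256, (batch_size,))
        keys = torch.randint(0, 1024, (batch_size,))
        cts = [sdes_encrypt(int(p), int(k)) for p, k in zip(pts, keys)]
        x = torch.stack([torch.cat([bits(int(p), 8), bits(c, 8)]) for p, c in zip(pts, cts)])
        y = torch.stack([bits(int(k), 10) for k in keys])
        return x, y                                  # predict each key bit independently

    model = nn.Sequential(nn.Linear(16, 128), nn.ReLU(),
                          nn.Linear(128, 128), nn.ReLU(),
                          nn.Linear(128, 10))        # 10 logits, one per key bit
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(2000):
        x, y = make_batch()
        loss = loss_fn(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()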


2021 ◽  
pp. 343-354
Author(s):  
M. Suresha ◽  
S. Kuppa ◽  
D. S. Raghukumar

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1669
Author(s):  
Philip Boyer ◽  
David Burns ◽  
Cari Whyne

Out-of-distribution (OOD) data, in the context of Human Activity Recognition (HAR), refers to data from activity classes that are not represented in the training data of a Machine Learning (ML) algorithm. OOD data are a challenge to classify accurately for most ML algorithms, especially deep learning models that are prone to overconfident predictions on in-distribution (IND) classes. To simulate the OOD problem in physiotherapy, our team collected a new dataset (SPARS9x) consisting of inertial data captured by smartwatches worn by 20 healthy subjects as they performed supervised physiotherapy exercises (IND), followed by a minimum of 3 h of data captured for each subject as they engaged in unrelated and unstructured activities (OOD). In this paper, we experiment with three traditional algorithms for OOD detection using engineered statistical features, deep-learning-generated features, and several popular deep learning approaches on SPARS9x and two other publicly available human activity datasets (MHEALTH and SPARS). We demonstrate that, while deep learning algorithms perform better than simple traditional algorithms such as KNN with engineered features for in-distribution classification, traditional algorithms outperform deep learning approaches for OOD detection on these HAR time series datasets.
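
A minimal sketch of one distance-based traditional approach, under stated assumptions: the engineered features (per-window mean, standard deviation, and energy) and the choice of k are illustrative, not the paper's exact configuration. A test window whose mean k-nearest-neighbour distance to the in-distribution training features exceeds a threshold set on the training data is flagged as OOD.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def engineered_features(windows):
        # windows: (n_windows, window_len, n_channels) of inertial data
        return np.concatenate([windows.mean(axis=1),
                               windows.std(axis=1),
                               (windows ** 2).mean(axis=1)], axis=1)

    rng = np.random.default_rng(0)
    train_windows = rng.normal(size=(500, 128, 6))          # stand-in for IND exercise windows
    test_windows = rng.normal(loc=2.0, size=(100, 128, 6))  # stand-in for unstructured OOD data

    train_feats = engineered_features(train_windows)
    knn = NearestNeighbors(n_neighbors=5).fit(train_feats)

    dists, _ = knn.kneighbors(engineered_features(test_windows))
    ood_score = dists.mean(axis=1)                  # larger = farther from the IND data
    # threshold from the training data itself (the zero-distance self-match makes
    # this estimate slightly optimistic, which is acceptable for a sketch)
    threshold = np.percentile(knn.kneighbors(train_feats)[0].mean(axis=1), 95)
    is_ood = ood_score > threshold

The same distance-based scoring can also be applied to deep-learning-generated features in place of the engineered ones, which is one of the comparisons the abstract refers to.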


2021 ◽  
Vol 15 ◽  
Author(s):  
Niklas Zdarsky ◽  
Stefan Treue ◽  
Moein Esghaei

Real-time gaze tracking provides crucial input to psychophysics studies and neuromarketing applications. Many modern eye-tracking solutions are expensive, mainly due to the high-end hardware specialized for processing infrared camera images. Here, we introduce a deep-learning-based approach that uses the video frames of low-cost web cameras. Using DeepLabCut (DLC), an open-source toolbox for extracting points of interest from videos, we obtained facial landmarks critical to gaze location and estimated the point of gaze on a computer screen via a shallow neural network. Tested for three extreme poses, this architecture reached a median error of about one degree of visual angle. Our results contribute to the growing field of deep-learning approaches to eye tracking, laying the foundation for further investigation by researchers in psychophysics or neuromarketing.
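
A minimal sketch of the regression stage, assuming the facial landmarks have already been extracted (for example with DeepLabCut): a shallow network maps landmark coordinates to screen gaze coordinates. The number of landmarks, the hidden size, and the synthetic calibration data below are illustrative assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    n_landmarks = 16                                   # (x, y) per landmark -> 32 inputs
    model = nn.Sequential(nn.Linear(2 * n_landmarks, 64), nn.ReLU(),
                          nn.Linear(64, 2))            # (gaze_x, gaze_y) on the screen
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # landmarks: (n_samples, 2 * n_landmarks); gaze: (n_samples, 2) calibration targets
    landmarks = torch.rand(1000, 2 * n_landmarks)      # stand-in for extracted landmarks
    gaze = torch.rand(1000, 2)                         # stand-in for known screen points

    for epoch in range(200):
        pred = model(landmarks)
        loss = loss_fn(pred, gaze)
        opt.zero_grad(); loss.backward(); opt.step()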


Author(s):  
Asim Abbas ◽  
Muhammad Afzal ◽  
Jamil Hussain ◽  
Taqdir Ali ◽  
Hafiz Syed Muhammad Bilal ◽  
...  

Extracting clinical concepts, such as problems, diagnosis, and treatment, from unstructured clinical narrative documents enables data-driven approaches such as machine and deep learning to support advanced applications such as clinical decision-support systems, the assessment of disease progression, and the intelligent analysis of treatment efficacy. Various tools such as cTAKES, Sophia, MetaMap, and other rule-based approaches and algorithms have been used for automatic concept extraction. Recently, machine- and deep-learning approaches have been used to extract, classify, and accurately annotate terms and phrases. However, the requirement of an annotated dataset, which is labor-intensive to produce, impedes the success of data-driven approaches. A rule-based mechanism could support the process of annotation, but existing rule-based approaches fail to adequately capture contextual, syntactic, and semantic patterns. This study introduces a comprehensive rule-based system that automatically extracts clinical concepts from unstructured narratives with higher accuracy and transparency. The proposed system is a pipelined approach, capable of recognizing clinical concepts of three types, problem, treatment, and test, in the dataset collected from a published repository as part of the i2b2 2010 challenge. The system's performance is compared with that of three existing systems: QuickUMLS, BIO-CRF, and the Rules (i2b2) model. Compared to the baseline systems, the average F1-score of 72.94% was found to be 13% better than QuickUMLS, 3% better than BIO-CRF, and 30.1% better than the Rules (i2b2) model. Individually, the system performance was noticeably higher for problem-related concepts, with an F1-score of 80.45%, followed by treatment-related and test-related concepts, with F1-scores of 76.06% and 55.3%, respectively. The proposed methodology significantly improves the performance of concept extraction from unstructured clinical narratives by exploiting linguistic and lexical semantic features. The approach can ease the automatic annotation process of clinical data, which ultimately improves the performance of supervised data-driven applications trained with these data.
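
A minimal sketch of the rule-based idea under stated assumptions: a small lexicon plus a few contextual cue patterns assign spans to the problem, treatment, or test types. The lexicon entries and cue expressions below are toy examples for illustration, not the paper's rule set or its linguistic features.

    import re

    LEXICON = {
        "problem":   ["hypertension", "chest pain", "diabetes"],
        "treatment": ["aspirin", "metformin", "physical therapy"],
        "test":      ["chest x-ray", "blood glucose", "ecg"],
    }
    CUES = {                                     # contextual patterns also assign a type
        "problem":   r"\b(diagnosed with|complains of)\s+([a-z ]+)",
        "treatment": r"\b(started on|treated with)\s+([a-z ]+)",
        "test":      r"\b(underwent|ordered)\s+(an?\s+)?([a-z -]+)",
    }

    def extract_concepts(text):
        text_l = text.lower()
        found = []
        for ctype, terms in LEXICON.items():     # dictionary (lexical) matches
            for term in terms:
                for m in re.finditer(re.escape(term), text_l):
                    found.append((ctype, m.group(0), m.start()))
        for ctype, pattern in CUES.items():      # contextual cue matches
            for m in re.finditer(pattern, text_l):
                found.append((ctype, m.groups()[-1].strip(), m.start()))
        return found

    note = "Patient diagnosed with hypertension, started on aspirin, underwent an ECG."
    print(extract_concepts(note))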


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3979 ◽  
Author(s):  
Shahela Saif ◽  
Samabia Tehseen ◽  
Sumaira Kausar

Recognition of human actions from videos has been an active area of research because it has applications in various domains. The results of work in this field are used in video surveillance, automatic video labeling, and human-computer interaction, among others. Any advancement in this field is tied to advances in the interrelated fields of object recognition, spatio-temporal video analysis, and semantic segmentation. Activity recognition is a challenging task since it faces many problems such as occlusion, viewpoint variation, background differences, clutter, and illumination variations. Scientific achievements in the field have been numerous and rapid as the applications are far reaching. In this survey, we cover the growth of the field from the earliest solutions, where handcrafted features were used, to later deep learning approaches that use millions of images and videos to learn features automatically. Through this discussion, we intend to highlight the major breakthroughs and the directions future research might take while benefiting from the state-of-the-art methods.


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 233
Author(s):  
Haoran Xu ◽  
Yanbai He ◽  
Xinya Li ◽  
Xiaoying Hu ◽  
Chuanyan Hao ◽  
...  

Subtitles are crucial for video content understanding. However, a large number of videos have only burned-in, hardcoded subtitles that prevent video re-editing, translation, etc. In this paper, we construct a deep-learning-based system for the inverse conversion of a burned-in subtitle video into a subtitle file and an inpainted video, by coupling three deep neural networks (CTPN, CRNN, and EdgeConnect). We evaluated the performance of the proposed method and found that the deep learning method achieved high-precision separation of the subtitles and video frames and significantly improved the video inpainting results compared to existing methods. This research fills a gap in the application of deep learning to burned-in subtitle video reconstruction and is expected to be widely applied in the reconstruction and re-editing of videos with subtitles, advertisements, logos, and other occlusions.
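
A minimal sketch of how such a three-stage pipeline fits together, under stated assumptions: the detect_text_boxes, recognise_text, and inpaint functions below are placeholders standing in for CTPN-style detection, CRNN-style recognition, and EdgeConnect-style inpainting; their stub bodies exist only to keep the sketch self-contained and are not those models.

    import numpy as np

    def detect_text_boxes(frame):          # stand-in for a CTPN-style text detector
        h, w, _ = frame.shape
        return [(0, int(0.9 * h), w, h)]   # assume one subtitle band near the bottom

    def recognise_text(frame, box):        # stand-in for a CRNN-style recogniser
        return "placeholder subtitle"

    def inpaint(frame, mask):              # stand-in for an EdgeConnect-style inpainter
        out = frame.copy()
        out[mask] = frame[~mask].mean(axis=0)   # naive fill, for illustration only
        return out

    def process_video(frames, fps=25):
        subtitles, clean_frames = [], []
        for i, frame in enumerate(frames):
            mask = np.zeros(frame.shape[:2], dtype=bool)
            for box in detect_text_boxes(frame):
                x0, y0, x1, y1 = box
                mask[y0:y1, x0:x1] = True
                subtitles.append((i / fps, recognise_text(frame, box)))
            clean_frames.append(inpaint(frame, mask))
        return subtitles, clean_frames     # subtitle-file entries + inpainted frames

    frames = [np.random.randint(0, 255, (72, 128, 3), dtype=np.uint8) for _ in range(10)]
    subs, video = process_video(frames)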


Author(s):  
Ruixin Liu ◽  
Zhenyu Weng ◽  
Yuesheng Zhu ◽  
Bairong Li

Video inpainting aims to synthesize visually pleasing and temporally consistent content in the missing regions of a video. Due to the variety of motions across different frames, it is highly challenging to exploit temporal information effectively to recover videos. Existing deep-learning-based methods usually estimate optical flow to align frames and thereby exploit useful information between frames. However, these methods tend to generate artifacts once the estimated optical flow is inaccurate. To alleviate this problem, we propose a novel end-to-end Temporal Adaptive Alignment Network (TAAN) for video inpainting. The TAAN aligns reference frames with the target frame via implicit motion estimation at the feature level and then reconstructs the target frame by taking the aggregated aligned reference frame features as input. In the proposed network, a Temporal Adaptive Alignment (TAA) module based on deformable convolutions is designed to perform temporal alignment in a local, dense, and adaptive manner. Both quantitative and qualitative evaluation results show that our method significantly outperforms existing deep-learning-based methods.
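
A minimal sketch of feature-level alignment with deformable convolutions, under stated assumptions and not the authors' TAA module: offsets predicted from the concatenated target and reference features drive a torchvision DeformConv2d that warps the reference features towards the target. The channel counts and kernel size are illustrative.

    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class FeatureAlign(nn.Module):
        def __init__(self, channels=64, kernel=3):
            super().__init__()
            self.offset_pred = nn.Conv2d(2 * channels, 2 * kernel * kernel,
                                         kernel_size=3, padding=1)
            self.deform = DeformConv2d(channels, channels,
                                       kernel_size=kernel, padding=kernel // 2)

        def forward(self, ref_feat, tgt_feat):
            # offsets describe, per output location, where to sample in the reference
            offsets = self.offset_pred(torch.cat([ref_feat, tgt_feat], dim=1))
            return self.deform(ref_feat, offsets)   # reference features aligned to target

    ref = torch.rand(1, 64, 32, 32)    # features of a neighbouring (reference) frame
    tgt = torch.rand(1, 64, 32, 32)    # features of the frame to be inpainted
    aligned = FeatureAlign()(ref, tgt)

Because the offsets are predicted per location, the alignment is local, dense, and adaptive rather than tied to a single global flow estimate, which is the property the abstract highlights.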


2017 ◽  
Vol 14 (2) ◽  
pp. 229-244 ◽  
Author(s):  
Viacheslav Voronin ◽  
Vladimir Marchuk ◽  
Sergey Makov ◽  
Vladimir Mladenovic ◽  
Yigang Cen

Video inpainting or completion is a vital video improvement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method allows removing dynamic objects or restoring missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm is used to detect scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace the parts of the frame occupied by the objects marked for removal using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects, and moving backgrounds. Experimental comparisons to state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method allows restoring missing blocks and removing text from scenes in videos.

Note: This article has been retracted; the retraction is available at http://dx.doi.org/10.2298/SJEE1803373E.
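
A minimal sketch of the iterative loop just described, under stated assumptions: OpenCV's MOG2 background subtractor stands in for the paper's scene model, the input path is assumed, and, for illustration only, the foreground mask of moving objects is reused as the removal mask (a real system would take the mask from the scratch/object detection step).

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input.mp4")          # assumed input video path
    bg_model = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
    completed = []

    while True:
        ok, frame = cap.read()                   # 1. acquire a frame
        if not ok:
            break
        fg_mask = bg_model.apply(frame)          # 2. update the scene model
        # 3. the removal mask would come from the scratch/object detection step;
        #    here the foreground mask of moving objects is reused for illustration
        remove_mask = cv2.dilate(fg_mask, np.ones((5, 5), np.uint8))
        background = bg_model.getBackgroundImage()
        if background is not None:
            frame = frame.copy()
            frame[remove_mask > 0] = background[remove_mask > 0]   # 4. fill from background
        completed.append(frame)
    cap.release()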


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, different researchers have addressed the problem through different image enhancement techniques. Most of the state-of-the-art approaches applied common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. In order to surmount these problems, the authors have proposed a deep-learning-based solution. They have contributed a new whiteboard image dataset and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement applications. Their evaluations of the trained models demonstrated superior performance over the conventional methods.

