Leveraging spatio-temporal redundancy for RFID data cleansing

Author(s): Haiquan Chen ◽ Wei-Shinn Ku ◽ Haixun Wang ◽ Min-Te Sun

2013 ◽ Vol 25 (10) ◽ pp. 2177-2191 ◽ Author(s): Wei-Shinn Ku ◽ Haiquan Chen ◽ Haixun Wang ◽ Min-Te Sun

2022 ◽ Vol 116 ◽ pp. 151-162 ◽ Author(s): Yonghong Liu ◽ Wenfeng Huang ◽ Xiaofang Lin ◽ Rui Xu ◽ Li Li ◽ ...

Author(s): Prasanga Dhungel ◽ Prashant Tandan ◽ Sandesh Bhusal ◽ Sobit Neupane ◽ Subarna Shakya

We present a new approach to video compression for video surveillance that addresses the shortcomings of the conventional pipeline by substituting each traditional component with a neural network counterpart. Our proposed pipeline consists of motion estimation, motion compression and compensation, and residue compression, learned end-to-end to minimize the rate-distortion trade-off. The whole model is jointly optimized with a single loss function. Our work follows the standard strategy of exploiting the spatio-temporal redundancy in video frames to reduce the bit rate while minimizing distortion in the decoded frames. We implement a neural network version of the conventional video compression approach and encode redundant frames with fewer bits. Although our approach is geared toward surveillance, it extends easily to general-purpose videos. Experiments show that our technique is efficient and outperforms standard MPEG encoding at comparable bitrates while preserving visual quality.
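The jointly optimized pipeline this abstract describes can be pictured in a few lines. The PyTorch fragment below is a minimal sketch, not the authors' implementation: the network sizes, the warping-based motion compensation, the latent-magnitude stand-in for the rate term, and the weight `lam` are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEstimator(nn.Module):
    """Predicts a dense (dx, dy) flow field from the previous and current frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, prev, cur):
        return self.net(torch.cat([prev, cur], dim=1))

def warp(frame, flow):
    """Motion compensation: bilinearly warps `frame` by `flow`."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().to(frame.device)
    coords = grid + flow.permute(0, 2, 3, 1)
    norm_x = 2 * coords[..., 0] / (w - 1) - 1   # normalize to [-1, 1]
    norm_y = 2 * coords[..., 1] / (h - 1) - 1
    return F.grid_sample(frame, torch.stack((norm_x, norm_y), dim=-1),
                         align_corners=True)

class ResidueCodec(nn.Module):
    """Tiny autoencoder standing in for the learned residue compressor."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 16, 4, stride=2, padding=1)
        self.dec = nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1)

    def forward(self, residue):
        latent = self.enc(residue)
        return self.dec(latent), latent

motion_net, residue_codec = MotionEstimator(), ResidueCodec()
prev, cur = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)

flow = motion_net(prev, cur)
pred = warp(prev, flow)                        # motion compensation
recon_residue, latent = residue_codec(cur - pred)
recon = pred + recon_residue

distortion = F.mse_loss(recon, cur)
rate = latent.abs().mean()        # crude proxy for an entropy/rate model
lam = 0.01                        # assumed rate-distortion weight
loss = rate + lam * distortion    # the single joint loss
loss.backward()
```

Because every stage, including the warp, is differentiable, one backward pass trains the motion estimator and residue codec against the combined rate-distortion objective, which is what "jointly optimized with a single loss function" amounts to.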


2020 ◽ Vol 34 (07) ◽ pp. 13098-13105 ◽ Author(s): Linchao Zhu ◽ Du Tran ◽ Laura Sevilla-Lara ◽ Yi Yang ◽ Matt Feiszli ◽ ...

Typical video classification methods divide a video into short clips, run inference on each clip independently, and then aggregate the clip-level predictions to produce the video-level result. However, processing visually similar clips independently ignores the temporal structure of the video sequence and increases the computational cost at inference time. In this paper, we propose a novel framework named FASTER, i.e., Feature Aggregation for Spatio-TEmporal Redundancy. FASTER aims to leverage the redundancy between neighboring clips and reduce the computational cost by learning to aggregate the predictions from models of different complexities. The FASTER framework can integrate high-quality representations from expensive models, which capture subtle motion information, with lightweight representations from cheap models, which cover scene changes in the video. A new recurrent network (i.e., FAST-GRU) is designed to aggregate this mixture of representations. Compared with existing approaches, FASTER can reduce the FLOPs by over 10× while maintaining state-of-the-art accuracy on popular datasets such as Kinetics, UCF-101 and HMDB-51.
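As a concrete picture of the aggregation step, the sketch below pairs an expensive and a cheap backbone and fuses their clip features with a recurrent unit. It is an illustration only: the stand-in backbones, the feature size, the every-4th-clip schedule, and the use of a stock `nn.GRU` (rather than the paper's FAST-GRU gating) are all assumptions.

```python
import torch
import torch.nn as nn

class ClipAggregator(nn.Module):
    """Recurrently fuses a sequence of clip features into one video prediction."""
    def __init__(self, feat_dim=256, num_classes=400):
        super().__init__()
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, clip_feats):
        # clip_feats: (batch, num_clips, feat_dim), mixing features from
        # expensive and cheap models along the clip axis
        hidden, _ = self.gru(clip_feats)
        return self.classifier(hidden[:, -1])  # video-level logits

# Stand-in backbones: real ones would be 3D CNNs of different costs.
expensive = lambda clip: torch.randn(clip.size(0), 256)
cheap = lambda clip: torch.randn(clip.size(0), 256)

# Assumed schedule: the expensive model on every 4th clip, the cheap one elsewhere.
clips = torch.rand(2, 8, 3, 16, 112, 112)  # (batch, clips, C, T, H, W)
feats = torch.stack(
    [(expensive if i % 4 == 0 else cheap)(clips[:, i]) for i in range(8)],
    dim=1,
)
logits = ClipAggregator()(feats)  # (2, 400)
```

The cost saving comes from the schedule: most clips pass through the cheap backbone, and the recurrent aggregator is trained to recover video-level accuracy from the mixed-quality sequence.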


2021 ◽ Author(s): Ranier A. A. Moura ◽ Domingos B. S. Santos ◽ Daniel G. M. Lira ◽ José E. B. Maia

Computing applications based on sensor data are now a reality, but the data collected and transmitted to those applications rarely arrive ready for use, owing to losses and noise of various kinds. In this work we develop an approach based on spatio-temporal correlation for cleaning multiple sensor time series with respect to noise, missing data, and outliers. The method was tested on six publicly available real-world datasets, and its performance was compared against a baseline method, a denoising autoencoder, and another published method. The results show that the proposed approach is competitive and requires less training data than its competitors.
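A minimal sketch of the general idea (not the authors' algorithm): repair each sensor's series from its most correlated neighbor, flagging outliers by their residual against the spatial prediction and imputing missing points from it. The overlap and threshold values, the single-neighbor linear model, and the NumPy formulation are all assumptions made here.

```python
import numpy as np

def clean(series, min_overlap=10, z_thresh=3.0):
    """series: (n_sensors, n_steps) float array; NaN marks missing readings."""
    cleaned = series.copy()
    n = cleaned.shape[0]
    for i in range(n):
        x = cleaned[i].copy()
        # choose the neighbor whose jointly observed readings correlate best with x
        best_j, best_r = None, 0.0
        for j in range(n):
            if j == i:
                continue
            mask = ~np.isnan(x) & ~np.isnan(cleaned[j])
            if mask.sum() < min_overlap:
                continue
            r = abs(np.corrcoef(x[mask], cleaned[j][mask])[0, 1])
            if r > best_r:
                best_j, best_r = j, r
        if best_j is None:
            continue
        y = cleaned[best_j]
        mask = ~np.isnan(x) & ~np.isnan(y)
        a, b = np.polyfit(y[mask], x[mask], 1)   # linear spatial predictor
        pred = a * y + b
        resid = x - pred
        sigma = np.nanstd(resid)
        x[np.abs(resid) > z_thresh * sigma] = np.nan  # drop outliers
        miss = np.isnan(x) & ~np.isnan(pred)
        x[miss] = pred[miss]                          # impute from the neighbor
        cleaned[i] = x
    return cleaned
```

This single-pass, neighbor-based formulation is deliberately simple and only approximates the spatial side; the method in the abstract also exploits temporal correlation across each series.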

