A Survey on Original Video-based Watermarking

Author(s):  
Zhongze Lv ◽  
Hu Guan ◽  
Ying Huang ◽  
Shuwu Zhang

A video surveillance system uses video cameras to capture images and video that can be compressed, stored, and sent to a location with a limited set of monitors. Nowadays, public places such as banks, educational institutions, offices, and hospitals are equipped with multiple surveillance cameras with overlapping fields of view for security and environment-monitoring purposes. Video summarization is a technique for generating a summary of entire video content, either as still images or as a video skim. The summarized video should be shorter than the original and should cover the maximum information from it. Summarization studies that concentrate on monocular videos cannot be applied directly to multi-view videos because of the redundancy across views. Generating summaries for surveillance videos is even more challenging because the videos are long, contain uninteresting events, and record the same scene from different views, leading to inter-view dependencies and variations in illumination. In this paper, we present a survey of the research on video summarization techniques for videos captured from multiple views. The generated summaries can be used to analyze post-accident scenarios and to identify suspicious events and theft in public places, supporting crime departments in their investigations.
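As a minimal illustration of the still-image style of summarization defined above (not taken from any surveyed paper), the following sketch selects keyframes from a single monocular video by thresholding mean inter-frame differences; the threshold value and the use of OpenCV are assumptions.

```python
import cv2

def extract_keyframes(video_path, diff_threshold=30.0):
    """Keep a frame whenever it differs enough from the previous one,
    a crude proxy for 'covering maximum information' with few images."""
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_gray, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # First frame, or large mean absolute difference -> keyframe
        if prev_gray is None or cv2.absdiff(gray, prev_gray).mean() > diff_threshold:
            keyframes.append((idx, frame))
        prev_gray = gray
        idx += 1
    cap.release()
    return keyframes
```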


2013 ◽  
Vol 433-435 ◽  
pp. 297-300
Author(s):  
Zong Yue Wang

Video summaries provide a compact video representation that preserves the essential activities of the original video, but a summary may be confusing when it mixes different activities together. A clustered-summaries methodology, which shows similar activities simultaneously, makes viewing much easier and more efficient. However, generating such summaries is very time-consuming, especially when computing the motion distance and the collision cost. To improve the efficiency of summary generation, a parallel video synopsis generation algorithm based on GPGPU is proposed. Experimental results show that generation efficiency is greatly improved through GPU parallel computing: the acceleration ratio reaches 5.75 when the data size exceeds 1600×960×30000.
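The collision cost singled out above as a bottleneck is, at its core, a large element-wise reduction, which is why it maps well onto a GPU. Below is a hypothetical NumPy sketch of a pixel-overlap collision cost between two activity "tubes"; swapping the import for CuPy runs the same expression on a GPU. The boolean-mask representation and the cost definition are assumptions, not the paper's exact formulation.

```python
import numpy as np  # swap for `import cupy as np` to run on a GPU

def collision_cost(tube_a, tube_b, shift):
    """Pixel-overlap cost between two activity tubes when tube_b is
    delayed by `shift` frames in the synopsis timeline. Tubes are
    (T, H, W) boolean occupancy masks (a hypothetical stand-in)."""
    start = max(0, shift)
    end = min(tube_a.shape[0], tube_b.shape[0] + shift)
    if start >= end:
        return 0  # the tubes never coexist in time
    overlap = tube_a[start:end] & tube_b[start - shift:end - shift]
    return int(overlap.sum())
```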


Author(s):  
Shefali Gandhi ◽  
Tushar V. Ratanpara

Video synopsis provides a compact representation of a long surveillance video while preserving the essential activities of the original. The activity in the original video is condensed into a shorter period by simultaneously displaying multiple activities that originally occurred in different time segments. Because activities are displayed in time segments different from those of the original video, the process begins with extracting moving objects. A temporal median algorithm is used to model the background, and foreground objects are detected with a background subtraction method; each moving object is then represented as a space-time activity tube in the video. A genetic algorithm is used to optimize the temporal shifting of the activity tubes: the temporal arrangement that yields minimum collision while maintaining the chronological order of events is taken as the best solution. A time-lapse background video is generated next and used as the background for the synopsis video. Finally, the activity tubes are stitched onto the time-lapse background using Poisson image editing.
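To make the first two steps of this pipeline concrete, here is a minimal OpenCV/NumPy sketch of temporal-median background modeling and threshold-based background subtraction; the sampling stride, the threshold, and the function names are illustrative assumptions, not the authors' code.

```python
import cv2
import numpy as np

def median_background(video_path, sample_stride=25):
    """Model the background as the per-pixel temporal median of
    frames sampled every `sample_stride` frames."""
    cap = cv2.VideoCapture(video_path)
    samples, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_stride == 0:
            samples.append(frame)
        idx += 1
    cap.release()
    return np.median(np.stack(samples), axis=0).astype(np.uint8)

def foreground_mask(frame, background, threshold=25):
    """Background subtraction: pixels far from the background model
    (in grayscale) are marked as moving-object foreground."""
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(background, cv2.COLOR_BGR2GRAY))
    return (diff > threshold).astype(np.uint8) * 255
```

For the final stitching step, OpenCV's cv2.seamlessClone offers an off-the-shelf Poisson image editing implementation that could serve in a prototype.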


2010 ◽  
Vol 2 (4) ◽  
pp. 16-36
Author(s):  
Yaqing Niu ◽  
Sridhar Krishnan ◽  
Qin Zhang

Perceptual watermarking should take full advantage of results from human visual system (HVS) studies. Just-noticeable distortion (JND), the maximum distortion that the HVS does not perceive, offers a way to model the HVS accurately. This paper proposes an effective spatio-temporal JND-model-guided video watermarking scheme in the DCT domain. The scheme is built on an additional, accurate JND visual model that incorporates the spatial contrast sensitivity function (CSF), a temporal modulation factor, retinal velocity, luminance adaptation, and contrast masking. Because the JND model is fully used to determine scene-adaptive upper bounds on watermark insertion, the scheme can embed a maximum-strength watermark that remains transparent. Experimental results confirm the improved performance of the spatio-temporal JND model: it yields higher injected-watermark energy without introducing noticeable distortion to the original video sequences and outperforms the relevant existing visual models. Simulation results show that the proposed JND-guided watermarking scheme is more robust than algorithms based on the relevant existing perceptual models while retaining watermark transparency.
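A simplified sketch of the general idea, JND-bounded additive embedding in the block DCT domain, is shown below for a single frame. The 8×8 block size, the per-pixel `jnd` map standing in for the model's spatio-temporal bound, and the spread-spectrum-style ±1 bits are all assumptions; the paper's actual model combines CSF, temporal modulation, retinal velocity, luminance adaptation, and contrast masking.

```python
import cv2
import numpy as np

def embed_frame(frame_gray, watermark_bits, jnd, alpha=0.9, block=8):
    """Additively embed +/-1 bits into one mid-band DCT coefficient
    per block, capping the perturbation at alpha * JND so the change
    stays below the just-noticeable-distortion bound."""
    out = frame_gray.astype(np.float32).copy()
    h, w = out.shape
    k = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = cv2.dct(out[y:y + block, x:x + block])
            bit = watermark_bits[k % len(watermark_bits)]
            # jnd is a per-pixel bound map, a stand-in for the model
            blk[3, 3] += alpha * jnd[y + 3, x + 3] * bit
            out[y:y + block, x:x + block] = cv2.idct(blk)
            k += 1
    return np.clip(out, 0, 255).astype(np.uint8)
```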


2011 ◽  
Vol 179-180 ◽  
pp. 103-108
Author(s):  
Sheng Bing Che ◽  
Jin Kai Luo ◽  
Bin Ma

A quasi-blind adaptive video watermarking algorithm in the wavelet domain of uncompressed video, based on the human visual system, is proposed in this paper. The algorithm establishes a visual model according to human visual masking and the characteristics of the wavelet coefficients. It does not require scene segmentation of the video: the frames used for embedding are chosen by a key, and a different watermark, preprocessed by Arnold scrambling, is embedded into each selected frame, which makes the scheme robust to statistical analysis, frame cropping, and similar attacks. To ensure spatial synchronization when extracting the watermark information, a zero-watermarking method and a chaotic system are used to generate the synchronization information. The quantized central limit theorem is applied to adjust the low-frequency coefficients, so that the extracted watermark information remains invariant while the coefficient values change within a robust region. A new correlation-based detection method for the watermark information is also put forward. Watermark detection does not require the original video, realizing quasi-blind detection.
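Arnold scrambling, used above to preprocess the watermark, is the discrete Arnold cat map on an N×N image: pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N). A minimal sketch follows; treating the iteration count as part of the key is an assumption.

```python
import numpy as np

def arnold_scramble(img, iterations):
    """Iterate the Arnold cat map on a square image. The map is
    area-preserving and periodic, so scrambling is reversible
    (apply the inverse map, or complete the period, to restore)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    x, y = np.meshgrid(np.arange(n), np.arange(n))  # x = column, y = row
    for _ in range(iterations):
        out = np.empty_like(img)
        # (x, y) -> (x', y') with x' = (x + y) mod n, y' = (x + 2y) mod n
        out[(x + 2 * y) % n, (x + y) % n] = img[y, x]
        img = out
    return img
```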


Author(s):  
Maria Torres Vega ◽  
Vittorio Sguazzo ◽  
Decebal Constantin Mocanu ◽  
Antonio Liotta

Purpose – The Video Quality Metric (VQM) is one of the most widely used objective methods for assessing video quality because of its high correlation with the human visual system (HVS). VQM is, however, not viable in real-time deployments such as mobile streaming, not only because of its high computational demands but also because, as a Full Reference (FR) metric, it requires both the original video and its impaired counterpart. In contrast, No Reference (NR) objective algorithms operate directly on the impaired video and are considerably faster but lose out in accuracy. The purpose of this paper is to study how differently NR metrics perform in the presence of network impairments.

Design/methodology/approach – The authors assess eight NR metrics, alongside a lightweight FR metric, using VQM as the benchmark on a self-developed network-impaired video data set. The paper covers a range of methods, a diverse set of video types and encoding conditions, and a variety of network-impairment test cases.

Findings – The authors show the extent to which packet loss affects different video types, correlating the accuracy of the NR metrics to the FR benchmark. The paper helps identify the conditions under which simple metrics may be used effectively and indicates an avenue for controlling the quality of streaming systems.

Originality/value – Most studies in the literature have focused on assessing streams that are either unaffected by the network (e.g. looking at the effects of video compression algorithms) or affected by synthetic network impairments (i.e. simulated network conditions). The authors show that when streams are affected by real network conditions, assessing Quality of Experience becomes even harder, as the existing metrics perform poorly.
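To illustrate the FR/NR distinction in code: a full-reference metric such as PSNR needs the pristine frame, whereas a no-reference metric, here the common variance-of-Laplacian sharpness measure, is computed from the impaired frame alone. Neither is one of the eight metrics assessed in the paper; this is only a sketch of the two modes of operation.

```python
import cv2

def fr_psnr(original_frame, impaired_frame):
    """Full-reference: requires the original; higher = closer match."""
    return cv2.PSNR(original_frame, impaired_frame)

def nr_sharpness(impaired_frame):
    """No-reference: variance of the Laplacian, a common blur proxy
    (low values suggest a blurry, degraded frame)."""
    gray = cv2.cvtColor(impaired_frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```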

