Removing Raindrops in Continuous Video Images for Fixed-Object Surveillance Systems

Author(s):  
Nam-Bong Ha ◽  
Namgi Kim


Author(s):  
A A Morozov ◽  
O S Sushkova ◽  
I A Kershner ◽  
A F Polupanov

Terahertz video surveillance opens up unique new opportunities in the field of public security, as it allows hidden weapons and other dangerous items to be detected, and their use thereby prevented. Although the first generation of terahertz video surveillance systems has already been created and is available on the security systems market, it has not yet found wide application. The main reason is that existing methods for analyzing terahertz images are not capable of providing covert, fully automatic recognition of weapons and other dangerous objects and can only be used under the control of a specially trained operator. As a result, terahertz video surveillance turns out to be more expensive and less efficient than the standard approach based on organizing security perimeters and manually inspecting visitors. In this paper, the problem of developing a method for automatic analysis of terahertz video images is considered. As a basis for this method, it is proposed to use semantic fusion of video images obtained using different physical principles; the idea is that the semantic content of one video image is used to control the processing and analysis of another video image. For example, information about the 3D coordinates of a person's body, arms, and legs can be used to analyze and properly interpret the color areas observed in a terahertz video image. Special means of object-oriented logic programming are developed to implement this semantic fusion of video data, including special built-in classes of the Actor Prolog logic language for the acquisition, processing, and analysis of video data in the visible, infrared, and terahertz ranges, as well as 3D video data.
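The abstract implements semantic fusion with built-in classes of Actor Prolog; the gating idea itself, however, can be illustrated in a few lines of Python. The sketch below is not the authors' method — the rectangle format, threshold value, and function name are illustrative assumptions. It shows the core step: a terahertz hotspot raises an alarm only if it falls inside a body region reported by another (e.g. 3D) sensor.

```python
import numpy as np

def fuse_semantics(thz_frame, body_boxes, intensity_thresh=200):
    """Toy semantic-fusion step: keep only terahertz hotspots that fall
    inside a detected body region supplied by another sensor modality.
    body_boxes is a list of (x0, y0, x1, y1) rectangles in THz coords.
    The threshold and box format are illustrative assumptions."""
    hot = thz_frame >= intensity_thresh        # candidate concealed objects
    mask = np.zeros_like(hot)
    for x0, y0, x1, y1 in body_boxes:
        mask[y0:y1, x0:x1] = True              # on-body pixels only
    return hot & mask

frame = np.zeros((8, 8), dtype=np.uint8)
frame[2, 2] = 255   # hotspot on the torso
frame[6, 6] = 255   # hotspot in the background (clutter)
alarms = fuse_semantics(frame, body_boxes=[(0, 0, 4, 4)])
print(alarms.sum())  # 1 -- only the on-body hotspot survives
```

The point of the fusion is visible in the example: the background hotspot, which would be a false alarm for a purely intensity-based detector, is suppressed by the semantic mask.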


Author(s):  
D. Nethaji ◽  
Mary Joans ◽  
Mrs. S. J. Grace Shoba

Video surveillance has been a popular security tool for years. Video surveillance systems produce huge amounts of data for storage and display, and long-term human monitoring of the acquired video is impractical and ineffective. This paper presents a novel solution for real-time cases that identifies and records only “interesting” video frames, i.e., those containing motion. In addition to traditional methods for compressing individual video images, we can identify and record only “interesting” images, such as those with significant amounts of motion in the field of view. The model is built in Simulink, a tool in MATLAB, and incorporated with a DaVinci video processor. This can significantly reduce data rates for surveillance-specific applications.
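The abstract's "interesting frame" criterion (significant motion in the field of view) is commonly realized by frame differencing. The Simulink/DaVinci implementation is not published, so the following is a minimal Python sketch under assumed thresholds; the function name and both threshold values are illustrative, not from the paper.

```python
import numpy as np

def is_interesting(prev, curr, pix_thresh=25, motion_frac=0.01):
    """Flag a frame as 'interesting' when the fraction of pixels whose
    absolute difference from the previous frame exceeds pix_thresh is
    larger than motion_frac. Both thresholds are illustrative assumptions."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > pix_thresh).mean() > motion_frac

still = np.full((120, 160), 128, dtype=np.uint8)
moving = still.copy()
moving[40:80, 60:100] = 200          # a bright object enters the scene
print(is_interesting(still, still))  # False -- no motion, frame dropped
print(is_interesting(still, moving)) # True  -- motion detected, frame recorded
```

In a recording loop, frames for which the predicate returns False are simply discarded, which is where the data-rate reduction claimed in the abstract comes from.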


2021 ◽  
Vol 18 (4) ◽  
pp. 446-462
Author(s):  
Ben Li ◽  
Shanjun Mao ◽  
Mei Li

Abstract Video surveillance systems can be applied in coal mines for remote monitoring and production control. Stitching video images into a panorama enhances the usability of video systems, since a panorama offers a wider view than single images do. But there are big challenges when conventional image stitching methods are applied in the coal mine domain, especially at mining faces. These challenges include non-uniform illumination, missed scenes, and oblique panoramas. In this paper, a robust method is proposed to solve these three problems: (i) to overcome the non-uniform illumination on a mining face, wide dynamic range technology and the histogram matching algorithm were used to enhance single images and to reduce differences among images, respectively; (ii) to eliminate the missed scenes, overlapped images were taken quickly, and then the feature matching method and the template recognition method were adaptively used to achieve robust stitching; and (iii) to mitigate the obliqueness of panoramas, vertical correction technology, which exploits the posture information of the camera, was used. Next, the adjacent panoramas were concatenated, and experiments were conducted on a fully mechanized mining face. The results show that the proposed method solves these three problems well, and a dynamic panorama of the partial long-wall mining face is output. The research provides a new approach for displaying extended scenes of stope faces in intelligent collieries.
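Of the three steps, histogram matching (step i) is the most self-contained, and the standard algorithm can be sketched compactly. The paper's exact variant is not specified, so the code below implements classical CDF-to-CDF histogram matching in numpy as an illustrative sketch; the function name and test images are assumptions.

```python
import numpy as np

def match_histogram(src, ref):
    """Classical histogram matching: map src's grey-level CDF onto ref's,
    reducing exposure differences between overlapping images before
    stitching. (The paper's exact variant is not specified.)"""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # CDF-to-CDF lookup table
    return np.interp(src.ravel(), s_vals, mapped).reshape(src.shape)

# A dark image matched against a bright reference shifts toward the
# reference's brightness, which is the desired equalization effect.
dark = np.clip(np.arange(100).reshape(10, 10), 0, 80).astype(np.uint8)
bright = (np.arange(100).reshape(10, 10) + 100).astype(np.uint8)
out = match_histogram(dark, bright)
print(out.mean() > dark.mean())  # True
```

In a stitching pipeline this is applied to each new image against an already-stitched neighbor, so adjacent panorama tiles share a consistent tone before the feature-matching step.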


Author(s):  
Tim Oliver ◽  
Akira Ishihara ◽  
Ken Jacobsen ◽  
Micah Dembo

In order to better understand the distribution of cell traction forces generated by rapidly locomoting cells, we have applied a mathematical analysis to our modified silicone rubber traction assay, based on the plane-stress Green’s function of linear elasticity. To achieve this, we made crosslinked silicone rubber films into which we incorporated many more latex beads than previously possible (Figs. 1 and 6), using a modified airbrush. These films could be deformed by fish keratocytes, were virtually drift-free, and showed better than 90% elastic recovery in response to micromanipulation (data not shown). Video images of cells locomoting on these films were recorded. From a pair of images representing the undisturbed and stressed states of the film, we recorded the cell’s outline and the associated displacements of bead centroids using Image-1 (Fig. 1). Next, using our own software, a mesh of quadrilaterals was plotted (Fig. 2) to represent the cell outline and to superimpose a traction density distribution on it. The net displacement of each bead in the film was calculated from centroid data and displayed with the mesh outline (Fig. 3).
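The bead-displacement step described above (net displacement of each bead from a pair of centroid measurements) reduces to simple vector arithmetic. The sketch below is only this centroid step, not the Green's-function inversion; the function name and the example coordinates are illustrative assumptions.

```python
import numpy as np

def bead_displacements(rest, stressed):
    """Net displacement vectors and magnitudes for each bead, given
    matched centroid arrays (N x 2, in pixels) from the undisturbed and
    stressed film images. Illustrative sketch of the centroid step only."""
    d = stressed - rest                    # per-bead displacement vectors
    return d, np.hypot(d[:, 0], d[:, 1])   # and their magnitudes

rest = np.array([[10.0, 10.0], [50.0, 20.0]])
stressed = np.array([[13.0, 14.0], [50.0, 20.0]])
vec, mag = bead_displacements(rest, stressed)
print(mag)  # [5. 0.] -- the first bead moved 5 pixels, the second did not
```

In the assay, these displacement vectors are the input data from which the traction density distribution on the quadrilateral mesh is subsequently inferred.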

