Real Time System for Human Identification and Tracking from Surveillance Videos

2018 ◽  
Vol 7 (3.12) ◽  
pp. 244 ◽  
Author(s):  
D Vishaka Gayathri ◽  
Shrutee Shree ◽  
Taru Jain ◽  
K Sornalakshmi

Growing security concerns have raised the need for intelligent surveillance systems: a viable system with automated methods to detect, track and recognize persons in real time is required. Traditional detection techniques cannot analyze the huge volume of live video generated in real time, so live streaming video analytics is needed, i.e., processing and analyzing large-scale visual data such as images or videos to find content that is useful for interpretation. In this work, an automated surveillance system for real-time detection, recognition and tracking of persons in video streams from multiple video inputs is presented. In addition, the current location of an individual can be searched via the toolbar provided. A model is proposed that uses a messaging queue to receive and transfer video feeds; the frames in the video are analyzed by image processing modules to identify and recognize each person against the training data sets. The main aim of this project is to overcome the challenges faced in integrating the open-source tools that make up the system for tagging and searching people.
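The messaging-queue-plus-matching architecture described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gallery, embeddings, threshold and camera names are all hypothetical, and a stdlib queue stands in for a real message broker.

```python
import queue

# Hypothetical gallery of known persons: name -> feature embedding.
# In a real system these would come from a face-recognition model
# trained on the enrollment data set.
GALLERY = {
    "alice": (0.9, 0.1, 0.0),
    "bob":   (0.1, 0.8, 0.3),
}

def identify(embedding, threshold=0.5):
    """Return the closest known identity, or None if no match is near enough."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    name, d = min(((n, dist(embedding, e)) for n, e in GALLERY.items()),
                  key=lambda t: t[1])
    return name if d <= threshold else None

# The messaging queue decouples camera ingestion from frame analysis;
# here a stdlib queue stands in for a broker such as Kafka.
frames = queue.Queue()
frames.put(("cam1", (0.88, 0.12, 0.05)))  # frame already reduced to an embedding
frames.put(("cam2", (0.5, 0.5, 0.5)))     # unknown person

while not frames.empty():
    cam, emb = frames.get()
    print(cam, identify(emb))
```

The queue is what allows multiple video inputs to feed one analysis stage; tagging and search would then index the `(camera, identity)` results.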

2014 ◽  
Vol 513-517 ◽  
pp. 927-930
Author(s):  
Zhi Cheng Wen ◽  
Zhi Gang Chen

Object-Z, an extension of the formal specification language Z, is well suited to describing large-scale object-oriented software specifications. While Object-Z has found application in a number of areas, its utility is limited by its inability to specify continuous variables and real-time constraints. Linear temporal logic can describe real-time systems, but it cannot handle time variables well, nor can it describe formal specifications modularly. This paper extends linear temporal logic with clocks (LTLC) and presents an approach to adding LTLC to Object-Z. The resulting language, Object-Z extended with LTLC, is a modular formal specification language that minimally extends the syntax and semantics of Object-Z. The main advantage of this extension is that it makes complex real-time software specifications convenient to describe and verify.
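As an illustration of the kind of property such a clocked temporal logic targets (the formula below is our own illustrative example, not taken from the paper), a clock-bounded response requirement might be written:

```latex
% Whenever an alarm is raised, clock x is reset and the system
% must respond within 5 time units:
\Box\bigl(\mathit{alarm} \rightarrow (x := 0) \wedge
          \Diamond(\mathit{respond} \wedge x \le 5)\bigr)
```

Plain linear temporal logic can express "eventually respond" but not the quantitative bound $x \le 5$; this is the gap the clock extension fills.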


Author(s):  
Jun Huang ◽  
Linchuan Xu ◽  
Jing Wang ◽  
Lei Feng ◽  
Kenji Yamanishi

Existing multi-label learning (MLL) approaches mainly assume that all labels are observed and construct classification models with a fixed set of target labels (known labels). However, in some real applications, multiple latent labels may exist outside this set and hide in the data, especially in large-scale data sets. Discovering and exploring the latent labels hidden in the data may not only uncover interesting knowledge but also help build a more robust learning model. In this paper, a novel approach named DLCL (Discovering Latent Class Labels for MLL) is proposed, which can not only discover the latent labels in the training data but also predict new instances with both the latent and known labels simultaneously. Extensive experiments show the competitive performance of DLCL against other state-of-the-art MLL approaches.
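To make the latent-label idea concrete, here is a deliberately minimal sketch, and emphatically not the DLCL algorithm itself: instances that no known-label classifier explains well become candidates for a latent label. The instance names, labels and scores are invented.

```python
# Hypothetical classifier confidence scores per instance for two known labels.
scores = {
    "x1": {"cat": 0.9, "dog": 0.1},
    "x2": {"cat": 0.2, "dog": 0.8},
    "x3": {"cat": 0.1, "dog": 0.2},   # poorly explained by known labels
    "x4": {"cat": 0.15, "dog": 0.1},  # poorly explained by known labels
}

def candidate_latent(scores, threshold=0.5):
    """Instances whose best known-label score falls below the threshold."""
    return sorted(x for x, s in scores.items() if max(s.values()) < threshold)

print(candidate_latent(scores))  # ['x3', 'x4']
```

A real method would go further and model the grouped instances as one or more new classes; the sketch only shows why low confidence under every known label is the signal worth mining.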


2014 ◽  
Vol 571-572 ◽  
pp. 497-501 ◽  
Author(s):  
Qi Lv ◽  
Wei Xie

Real-time log analysis on large-scale data is important for many applications; here, real-time means a UI latency within 100 ms. Techniques that efficiently support real-time analysis over large log data sets are therefore desired. MongoDB provides good query performance, an aggregation framework, and a distributed architecture, which are suitable for real-time data query and massive log analysis. In this paper, a novel implementation approach for an event-driven file log analyzer is presented, and the performance of query, scan and aggregation operations over MongoDB, HBase and MySQL is compared. Our experimental results show that HBase delivers the most balanced performance across all operations, while MongoDB achieves sub-10 ms query latency for some operations, which makes it the most suitable for real-time applications.
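The kind of aggregation being benchmarked can be illustrated with a MongoDB pipeline that counts log entries per severity level. The collection fields (`service`, `level`) are our own assumptions; the pure-Python loop below mirrors the pipeline's semantics so the example runs without a database.

```python
from collections import Counter

# MongoDB aggregation pipeline of the kind the paper benchmarks
# (with pymongo: collection.aggregate(pipeline)).
pipeline = [
    {"$match": {"service": "web"}},
    {"$group": {"_id": "$level", "count": {"$sum": 1}}},
]

# Pure-Python equivalent of the pipeline's semantics over sample records:
logs = [
    {"service": "web", "level": "ERROR"},
    {"service": "web", "level": "INFO"},
    {"service": "db",  "level": "INFO"},
    {"service": "web", "level": "ERROR"},
]
matched = [r for r in logs if r["service"] == "web"]   # $match stage
counts = Counter(r["level"] for r in matched)          # $group / $sum stage
print(dict(counts))  # {'ERROR': 2, 'INFO': 1}
```

The `$group` stage is server-side, which is what makes sub-10 ms responses plausible for pre-indexed queries, whereas scanning the raw log files client-side would not meet the 100 ms UI budget.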


2021 ◽  
Author(s):  
Aurore Lafond ◽  
Maurice Ringer ◽  
Florian Le Blay ◽  
Jiaxu Liu ◽  
Ekaterina Millan ◽  
...  

Abnormal surface pressure is typically the first indicator of a number of problematic events, including kicks, losses, washouts and stuck pipe. These events account for 60–70% of all drilling-related nonproductive time, so their early and accurate detection has the potential to save the industry billions of dollars. Detecting these events today requires an expert user watching multiple curves, which can be costly and subject to human error. The solution presented in this paper aims to augment traditional models with new machine learning techniques that detect these events automatically and support monitoring of the drilling well. Today's real-time monitoring systems employ complex physical models to estimate surface standpipe pressure while drilling; these require many inputs and are difficult to calibrate. Machine learning is an alternative method for predicting pump pressure, but on its own it needs significant labelled training data, which is often lacking in the drilling world. The new system combines the two approaches: a machine learning framework enables automated learning, while the physical models compensate for any gaps in the training data. The system uses only standard surface measurements, is fully automated, and is continuously retrained while drilling to ensure the most accurate pressure prediction. In addition, a stochastic (Bayesian) machine learning technique is used, which yields not only a prediction of the pressure but also the uncertainty and confidence of that prediction. Finally, the new system includes a data quality control workflow that discards periods of low data quality from the pressure anomaly detection, enabling smarter real-time event analysis. The new system has been tested on historical wells using a new test and validation framework.
The framework runs the system automatically on large volumes of both historical and simulated data, enabling the results to be cross-referenced with observations. In this paper, we show the results of the automated test framework as well as the capabilities of the new system in two specific case studies, one on land and one offshore. Moreover, large-scale statistics demonstrate the reliability and efficiency of the new detection workflow. The new system builds on the trend in our industry to better capture and utilize digital data for optimizing drilling.
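How a Bayesian model yields both a prediction and its uncertainty can be sketched with a toy one-dimensional conjugate Gaussian regression. This is not the paper's framework: the linear pressure-vs-flow-rate model, the data, the noise variance and the prior are all illustrative assumptions.

```python
import math

# Toy Bayesian linear model p = w * q (pressure vs. flow rate, made-up
# variables): Gaussian prior on w, known noise variance. The posterior
# over w is Gaussian in closed form, so every prediction comes with an
# uncertainty, as in the stochastic approach described above.
q = [1.0, 2.0, 3.0, 4.0]          # flow-rate observations (arbitrary units)
p = [2.0, 4.0, 6.0, 8.0]          # pressure observations (true w = 2)
noise_var, prior_prec = 1.0, 1.0  # assumed noise variance and prior precision

post_prec = prior_prec + sum(x * x for x in q) / noise_var
post_mean = sum(x * y for x, y in zip(q, p)) / noise_var / post_prec
post_var = 1.0 / post_prec

def predict(x):
    """Predictive mean and standard deviation of pressure at flow rate x."""
    mean = post_mean * x
    std = math.sqrt(x * x * post_var + noise_var)   # parameter + noise uncertainty
    return mean, std

mean, std = predict(5.0)
print(round(mean, 2), round(std, 2))
```

The predictive standard deviation is what an anomaly detector can exploit: a measured pressure several `std` away from `mean` is flagged with a calibrated, rather than ad hoc, confidence.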


Processes ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 649
Author(s):  
Yifeng Liu ◽  
Wei Zhang ◽  
Wenhao Du

Deep learning based on large amounts of high-quality data plays an important role in many industries. However, deep learning is hard to embed directly in a real-time system: such a system accumulates data only through real-time acquisition, yet its analysis tasks must be carried out immediately, making it impossible to accumulate data over a long period before analysis. To address the problems of high-quality data accumulation, the high timeliness required of data analysis, and the difficulty of embedding deep-learning algorithms directly in real-time systems, this paper proposes a new progressive deep-learning framework and conducts experiments on image recognition. The experimental results show that the proposed framework is effective, performs well, and reaches conclusions similar to those of a deep-learning framework based on large-scale data.
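The progressive idea, updating a model as each small batch of real-time data arrives instead of training once on a large accumulated set, can be sketched with a simple online learner. This is our own illustration, not the paper's framework; a perceptron stands in for the deep model.

```python
# Incremental learning loop: a linear classifier is updated with each
# arriving batch, so it is usable at every point in time rather than
# only after a long accumulation phase.
w, b = [0.0, 0.0], 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

def update(batch, lr=0.1):
    """One perceptron-style pass over a newly arrived batch."""
    global b
    for x, y in batch:
        if predict(x) != y:            # update only on mistakes
            w[0] += lr * y * x[0]
            w[1] += lr * y * x[1]
            b += lr * y

# Batches arrive over time; the model improves progressively.
batches = [
    [((2.0, 1.0), 1), ((-1.0, -2.0), -1)],
    [((1.5, 2.0), 1), ((-2.0, -1.0), -1)],
]
for batch in batches:
    update(batch)

print(predict((3.0, 3.0)), predict((-3.0, -3.0)))
```

A deep variant would replace the perceptron update with a few gradient steps per batch, which is the usual way to trade final accuracy for timeliness.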


2019 ◽  
Vol 7 (3) ◽  
pp. SE113-SE122 ◽  
Author(s):  
Yunzhi Shi ◽  
Xinming Wu ◽  
Sergey Fomel

Salt boundary interpretation is important for the understanding of salt tectonics and for velocity model building for seismic migration. Conventional methods consist of computing salt attributes and extracting salt boundaries. We have formulated the problem as 3D image segmentation and evaluated an efficient approach based on deep convolutional neural networks (CNNs) with an encoder-decoder architecture. To train the model, we design a data generator that extracts randomly positioned subvolumes from a large-scale 3D training data set, followed by data augmentation, and then feeds a large number of subvolumes into the network, using salt/nonsalt binary labels generated by thresholding the velocity model as ground truth. We test the model on validation data sets and compare the blind-test predictions with the ground truth. Our results indicate that our method is capable of automatically capturing subtle salt features from the 3D seismic image with little or no need for manual input. We further test the model on a field example to demonstrate the generalization of this deep CNN method across different data sets.
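The data-generator step can be sketched as follows: extract a randomly positioned subvolume from a larger 3D volume and apply a simple flip augmentation. Sizes and the synthetic volume are illustrative; a real implementation would operate on numpy arrays of seismic amplitudes with matching label volumes and richer augmentation.

```python
import random

random.seed(0)

def make_volume(n):
    """n*n*n volume of synthetic 'seismic' values as nested lists."""
    return [[[random.random() for _ in range(n)] for _ in range(n)]
            for _ in range(n)]

def random_subvolume(vol, size):
    """Extract a size^3 subvolume at a random position, randomly flipped."""
    n = len(vol)
    z, y, x = (random.randint(0, n - size) for _ in range(3))
    sub = [[row[x:x + size] for row in plane[y:y + size]]
           for plane in vol[z:z + size]]
    if random.random() < 0.5:          # augmentation: flip along one axis
        sub = sub[::-1]
    return sub

vol = make_volume(8)
sub = random_subvolume(vol, 4)
print(len(sub), len(sub[0]), len(sub[0][0]))  # 4 4 4
```

Random positioning turns one large annotated volume into an effectively unlimited stream of training samples, which is why such generators pair naturally with encoder-decoder CNN training.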


2012 ◽  
pp. 235-257
Author(s):  
Christopher Oehmen ◽  
Scott Dowson ◽  
Wes Hatley ◽  
Justin Almquist ◽  
Bobbie-Jo Webb-Robertson ◽  
...  

Author(s):  
G Sai Kiranmayi ◽  
B Bhanu ◽  
B Manikanta ◽  
N Ashok ◽  
G Govinda Raju

The main objective of this paper is to develop a virtual environment for inspecting suspicious and targeted places without risking human life. The project controls a robot through the Raspberry Pi interface board, sensors and software to meet real-time requirements. Various surveillance systems, such as cameras and CCTV, are available in the market, but in these systems only a person located in that particular area can view what is happening there. We propose a real-time live streaming and monitoring system built on a Raspberry Pi with built-in Wi-Fi connectivity. It can continuously monitor objects, and the robot can move in every direction (left, right, forward and backward). The webcam mounted on the robotic unit captures video and transmits it live to the remote end. The movement of the robot is controlled through an HTML web page.
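The HTML-page-to-robot control path can be sketched with a stdlib HTTP handler. The endpoint name, query parameter and motor patterns are our own assumptions, not taken from the paper; on the Pi the returned command would be written to the GPIO motor-driver pins.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical mapping from a movement command to motor-driver states
# (1 = forward, -1 = reverse for each wheel).
PIN_PATTERNS = {
    "forward":  {"left_motor": 1,  "right_motor": 1},
    "backward": {"left_motor": -1, "right_motor": -1},
    "left":     {"left_motor": -1, "right_motor": 1},
    "right":    {"left_motor": 1,  "right_motor": -1},
}

def command_for(path):
    """Parse a request path like /move?dir=left into a motor command."""
    parsed = urlparse(path)
    if parsed.path != "/move":
        return None
    direction = parse_qs(parsed.query).get("dir", [None])[0]
    return PIN_PATTERNS.get(direction)

class RobotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd = command_for(self.path)
        # On the Raspberry Pi, cmd would be applied to the GPIO pins here.
        self.send_response(200 if cmd else 404)
        self.end_headers()

print(command_for("/move?dir=left"))
# HTTPServer(("", 8080), RobotHandler).serve_forever()  # run on the Pi
```

The HTML page then only needs buttons issuing GET requests to `/move?dir=...`, while the webcam stream is served separately.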


2021 ◽  
Author(s):  
Joshua A Salomon ◽  
Alex Reinhart ◽  
Alyssa Bilinski ◽  
Eu Jing Chua ◽  
Wichida La Motte-Kerr ◽  
...  

The U.S. COVID-19 Trends and Impact Survey (CTIS) is a large, cross-sectional, Internet-based survey that has operated continuously since April 6, 2020. By inviting a random sample of active Facebook users each day, CTIS collects information about COVID-19 symptoms, risks, mitigating behaviors, mental health, testing, vaccination, and other key priorities. The large scale of the survey (over 20 million responses in its first year of operation) allows tracking of trends over short timescales and comparisons at fine demographic and geographic detail. The survey has been repeatedly revised to respond to emerging public health priorities. In this paper, we describe the survey methods and content and give examples of CTIS results that illuminate key patterns and trends and help answer high-priority policy questions relevant to the COVID-19 epidemic and response. These results demonstrate how large online surveys can provide continuous, real-time indicators of important outcomes that are not subject to public health reporting delays and backlogs. CTIS offers high value as a supplement to official reporting data by supplying essential information about behaviors, attitudes toward policy and preventive measures, economic impacts, and other topics not reported in public health surveillance systems.

