CCTV Surveillance for Unprecedented Violence and Traffic Monitoring

2020 ◽  
Vol 2 (1) ◽  
pp. 25-34 ◽  
Author(s):  
Dr. Dhaya R.

Monitoring traffic and unprecedented violence has become essential in urban as well as rural areas, so this paper develops a CCTV surveillance system for unprecedented violence and traffic monitoring. The proposed method synchronizes the videos and performs proper alignment using motion detection and contour filtering algorithms. The motion detection step identifies the movement of objects such as vehicles, along with unprecedented activities, whereas the filtering step identifies the object itself by its color. Together, the synchronization and alignment processes provide the details of each object in the scene. The proposed algorithm is developed in Java, which supports the model through its open-source libraries. The model was validated on a data set acquired in real time, and the results were compared with algorithms created in earlier work. The comparison showed that the proposed model obtained consecutive outcomes faster by a factor of 12.3912 than the existing methods, for a test video resolution of 240.01 x 320.01 at 40 frames per second captured with high-definition cameras. Further, the results were computed by running the application on embedded CPU and GPU processors.
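The motion detection and color-based filtering steps described above can be sketched as follows. This is a minimal pure-Python illustration, not the paper's Java implementation; the function names, thresholds, and intensity ranges are assumptions.

```python
# Hypothetical sketch: frame-differencing motion detection followed by a
# simple intensity-based filter, assuming frames are 2-D lists of grayscale
# pixel values (the paper filters objects by color; grayscale keeps the
# example short).
def detect_motion(prev_frame, curr_frame, threshold=25):
    """Return the set of (row, col) pixels whose intensity changed."""
    moving = set()
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if abs(p - q) > threshold:
                moving.add((r, c))
    return moving

def filter_by_intensity(frame, pixels, lo, hi):
    """Keep only moving pixels whose current intensity falls in [lo, hi]."""
    return {(r, c) for (r, c) in pixels if lo <= frame[r][c] <= hi}

prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
curr = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
moving = detect_motion(prev, curr)
bright = filter_by_intensity(curr, moving, 150, 255)
print(moving)  # {(1, 1)}
print(bright)  # {(1, 1)}
```

A real system would run this per frame pair and pass the surviving pixels to a contour-extraction step to recover object outlines.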

2021 ◽  
pp. 1-11
Author(s):  
Tingting Zhao ◽  
Xiaoli Yi ◽  
Zhiyong Zeng ◽  
Tao Feng

YTNR (Yunnan Tongbiguan Nature Reserve) is located in the westernmost part of China’s tropical regions and is the only area in China with the tropical biota of the Irrawaddy River system. The reserve has abundant tropical flora and fauna resources. In order to realize real-time detection of wild animals in this area, this paper proposes an improved YOLO (You Only Look Once) network. The original YOLO model can achieve high detection accuracy, but due to its complex structure it cannot achieve a fast detection speed on a CPU platform. Therefore, the lightweight network MobileNet is introduced to replace the backbone feature extraction network in YOLO, which realizes real-time detection on the CPU platform. In response to the difficulty of collecting wild animal image data, the research team deployed 50 high-definition cameras in the study area and conducted continuous observations for more than 1,000 hours. In the end, this research uses 1,410 images of wildlife collected in the field and 1,577 wildlife images from the internet to construct a research data set, combined with manual annotation by domain experts. At the same time, transfer learning is introduced to address the problems of insufficient training data and difficulty in fitting the network. The experimental results show that our model, trained on a training set containing 2,419 animal images, achieves a mean average precision of 93.6% and an FPS (Frames Per Second) of 3.8 on the CPU. Compared with YOLO, the mean average precision is increased by 7.7%, and the FPS value is increased by 3.
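The speed-up from swapping the backbone for MobileNet comes largely from depthwise separable convolutions. The arithmetic below illustrates the parameter saving; the layer sizes are illustrative, not the paper's actual architecture.

```python
# Illustrative arithmetic (not the paper's code): a k x k standard convolution
# with C_in input and C_out output channels costs k*k*C_in*C_out parameters,
# while a depthwise separable convolution costs k*k*C_in (depthwise) plus
# C_in*C_out (pointwise).
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 256, 256
std = standard_conv_params(k, c_in, c_out)        # 589824
dws = depthwise_separable_params(k, c_in, c_out)  # 2304 + 65536 = 67840
print(std / dws)  # roughly 8.7x fewer parameters per layer
```

Repeated over every backbone layer, this reduction is what makes CPU-only real-time inference plausible.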


2019 ◽  
Vol 8 (3) ◽  
pp. 6069-6076

Many computer vision applications need to detect moving objects in input video sequences. The main applications include traffic monitoring, visual surveillance, people tracking, and security. Among these, traffic monitoring is one of the most difficult tasks in real-time video processing. Many algorithms have been introduced to monitor traffic accurately, but in most cases the detection accuracy is low and the detection time is high, making the algorithms unsuitable for real-time applications. In this paper, a new technique to detect moving vehicles efficiently using a Modified Gaussian Mixture Model and Modified Blob Detection is proposed. The Modified Gaussian Mixture Model generates the background from the overall probability of the complete data set and by calculating the required step size from the frame differences. Modified Blob Analysis is then used to classify proper moving objects. The simulation results show that the method accurately detects the target.
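The per-pixel background model behind such methods can be sketched with a single Gaussian per pixel (the paper uses a modified mixture of Gaussians; this simplification illustrates only the update rule, and all constants are assumptions).

```python
# Simplified single-Gaussian sketch of a per-pixel background model.  Each
# pixel keeps a running mean and variance; a pixel is flagged as foreground
# when it deviates from the mean by more than k standard deviations, then the
# observation is blended into the background statistics.
class PixelModel:
    def __init__(self, mean=0.0, var=100.0, alpha=0.05):
        self.mean, self.var, self.alpha = mean, var, alpha

    def update(self, x, k=2.5):
        d = x - self.mean
        foreground = d * d > k * k * self.var
        # Exponential moving update of mean and variance (learning rate alpha).
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * self.var + self.alpha * d * d
        return foreground

model = PixelModel(mean=50.0)
print(model.update(52))   # False: small deviation, still background
print(model.update(200))  # True: large jump, flagged as a moving object
```

A mixture model generalizes this by keeping several (mean, variance, weight) triples per pixel so that multi-modal backgrounds (e.g. swaying trees) are handled.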


Author(s):  
Pranav Kale ◽  
Mayuresh Panchpor ◽  
Saloni Dingore ◽  
Saloni Gaikwad ◽  
Prof. Dr. Laxmi Bewoor

In today's world, the field of deep learning is advancing at increasing speed, with many innovations and new algorithms being developed. In computer vision for the autonomous driving sector, traffic signs play an important role in providing real-time data about the environment. Different algorithms have been developed to classify these signs, but performance still needs to improve for real-time environments, and the computational power required to train such models is high. In this paper, a Convolutional Neural Network model is used to classify traffic signs. The experiments are conducted on a real-world data set with images and videos captured from ordinary car driving, as well as on the GTSRB dataset [15] available on Kaggle. The proposed model outperforms previous models, achieving an accuracy of 99.6% on the validation set. This idea has been granted an Innovation Patent by Australian IP to the authors of this research paper. [24]
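At the core of any CNN classifier is the convolution operation. The sketch below implements it in pure Python for illustration; the paper's actual model is a multi-layer network trained on GTSRB, not this toy.

```python
# Minimal 2-D cross-correlation ("valid" padding) on nested lists -- the basic
# building block a CNN stacks and learns for traffic-sign classification.
def conv2d_valid(image, kernel):
    """Slide the kernel over the image and sum elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

edge = [[1, -1]]          # a hand-made horizontal edge detector
img = [[0, 0, 5, 5]]      # flat region, then a step
print(conv2d_valid(img, edge))  # [[0, -5, 0]]: responds only at the edge
```

In a trained network, the kernel values are learned rather than hand-set, and many such filters are stacked with nonlinearities and pooling.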


Author(s):  
Ping Kuang ◽  
Tingsong Ma ◽  
Fan Li ◽  
Ziwei Chen

Pedestrian detection gives the managers of a smart city a great opportunity to manage their city effectively and automatically. Specifically, pedestrian detection technology can improve the security environment and make traffic more efficient. In this paper, all of our modifications and improvements are made on top of YOLO, a real-time Convolutional Neural Network detector. In our work, we extend YOLO's original network structure and give a new definition of the loss function to boost performance for pedestrian detection, especially when the targets are small, which is exactly what YOLO is not good at. In our experiments, the proposed model is tested on INRIA, the UCF YouTube Action Data Set, and the Caltech Pedestrian Detection Benchmark. Experimental results indicate that, after our modifications and improvements, the revised YOLO network outperforms the original version and is also better than other solutions.
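One standard way a detection loss is biased toward small targets, which the original YOLO already uses, is to regress the square roots of box width and height so a fixed absolute error costs more on a small box. The sketch below shows that effect; it is an illustration of the general idea, not the paper's redefined loss.

```python
# Sum of squared errors on sqrt(width) and sqrt(height): the same absolute
# error is penalised more heavily for small boxes than for large ones.
import math

def box_size_loss(pred_wh, true_wh):
    return sum((math.sqrt(p) - math.sqrt(t)) ** 2
               for p, t in zip(pred_wh, true_wh))

# A 2-pixel error per side on a 4-px box vs. the same error on a 100-px box.
small = box_size_loss((6, 6), (4, 4))
large = box_size_loss((102, 102), (100, 100))
print(small > large)  # True: the small box dominates the loss
```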


2019 ◽  
Vol 16 (12) ◽  
pp. 5089-5098 ◽  
Author(s):  
Sangeeta ◽  
Kapil Sharma ◽  
Manju Bala

Growing software demand in the present virtual world introduces new competitive dynamics for software developers. Recently, Open Source Software (OSS) systems have provided a faster way of software production. To survive in the competitive market, a developed OSS system needs enhancements over previous versions, yet each enhanced version is found to be more liable to the risk of failure. In the recent software development process, the primary concern of researchers is to find new ways of assessing the reliability of developed OSS versions. To incorporate modern software development environments and technologies, a new failure rate model for reliability estimation of multiple versions of OSS systems is developed in this paper. The proposed model incorporates a new testing effort factor to integrate the varying needs of each software release, and it comprises imperfect debugging with the possibility of fault introduction. The model has been validated on failure data sets from various releases of the Firefox and GNOME projects. Parameter estimation for the proposed model was done using a flower pollination algorithm. Experimental results show the enhanced capability of the proposed model, in comparison to the Goel-Okumoto, Inflection S-shaped, and PTZ models, in simulating a real OSS development environment.
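For context, the Goel-Okumoto baseline mentioned above models the expected number of failures observed by time t as m(t) = a(1 - exp(-bt)), where a is the total expected fault content and b the per-fault detection rate. The values below are illustrative, not fitted to the paper's Firefox or GNOME data.

```python
# Goel-Okumoto mean value function: expected cumulative failures by time t.
import math

def goel_okumoto(t, a=100.0, b=0.1):
    return a * (1.0 - math.exp(-b * t))

print(goel_okumoto(0))    # 0.0: no failures before testing starts
print(goel_okumoto(10))   # ~63.2: most faults are found early
print(goel_okumoto(1e6))  # approaches a = 100 as t grows
```

Models with imperfect debugging, like the one proposed, let the fault content a itself grow over time, since fixing a fault may introduce new ones.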


2020 ◽  
Vol 10 (10) ◽  
pp. 3651
Author(s):  
Nohpill Park ◽  
Abhilash Kancharla ◽  
Hye-Young Kim

This paper proposes a real-time chain and a novel embedded Markovian queueing model with variable bulk arrival (VBA) and variable bulk service (VBS) in order to establish a theoretical foundation for designing a blockchain-based real-time system, with particular interest in Ethereum. Based on the proposed model, various performance measures are simulated numerically to validate the efficacy of the model by checking that the results agree well with intuitive and typical expectations as a baseline. A demo of the proposed real-time chain is developed in this work by modifying the open source of Ethereum Geth 1.9.11. The work in this paper provides a theoretical foundation for designing and optimizing the performance of the proposed real-time chain and, ultimately, for addressing the performance bottleneck of conventional block synchrony by employing, to some extent, asynchrony bounded by the real-time deadline.
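The VBA/VBS setting can be illustrated with a toy discrete-time simulation: at each tick a random batch of transactions arrives and a random batch is served (e.g. sealed into a block). This is a loose illustration with made-up rates, not the paper's Markovian model.

```python
# Toy bulk-arrival / bulk-service queue: track the transaction backlog over
# time and report its average as a crude performance measure.
import random

def simulate(ticks=1000, max_arrival=5, max_service=4, seed=7):
    random.seed(seed)  # fixed seed keeps the run reproducible
    backlog, samples = 0, []
    for _ in range(ticks):
        backlog += random.randint(0, max_arrival)                   # bulk arrival
        backlog = max(0, backlog - random.randint(0, max_service))  # bulk service
        samples.append(backlog)
    return sum(samples) / len(samples)

avg_backlog = simulate()
print(avg_backlog)  # mean queue length over the run
```

An analytical queueing model replaces this simulation with closed-form (or numerically solved) steady-state distributions over the same arrival and service batches.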


2021 ◽  
Vol 17 (3) ◽  
pp. 249-271
Author(s):  
Tanmay Singha ◽  
Duc-Son Pham ◽  
Aneesh Krishna

Urban street scene analysis is an important problem in computer vision, with many off-line models achieving outstanding semantic segmentation results. However, it is an ongoing challenge for the research community to develop and optimize deep neural architectures with low real-time computing requirements whilst maintaining good performance. Balancing model complexity against performance has been a major hurdle, with many models dropping too much accuracy for a slight reduction in model size and unable to handle high-resolution input images. This study aims to address the issue with a novel model, named M2FANet, that provides a much better balance between efficiency and accuracy for scene segmentation than other alternatives. The proposed optimised backbone increases the model's efficiency, whereas the suggested Multi-level Multi-path (M2) feature aggregation approach enhances its performance in a real-time environment. By exploiting a multi-feature scaling technique, M2FANet produces state-of-the-art results in resource-constrained situations while handling full input resolution. On the Cityscapes benchmark data set, the proposed model produces 68.5% and 68.3% class accuracy on the validation and test sets respectively, whilst having only 1.3 million parameters. Compared with all real-time models of fewer than 5 million parameters, the proposed model is the most competitive in both performance and real-time capability.
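The class-accuracy figures quoted above are typically computed from a confusion matrix as per-class recall averaged over classes. The sketch below shows that computation with made-up numbers, not Cityscapes results.

```python
# Mean class accuracy from a confusion matrix, where confusion[i][j] counts
# pixels of true class i predicted as class j.
def mean_class_accuracy(confusion):
    accs = []
    for i, row in enumerate(confusion):
        total = sum(row)
        if total:  # skip classes absent from the ground truth
            accs.append(row[i] / total)
    return sum(accs) / len(accs)

cm = [[90, 10],
      [20, 80]]
print(mean_class_accuracy(cm))  # ~0.85: (0.9 + 0.8) / 2
```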


2014 ◽  
Vol 664 ◽  
pp. 355-359
Author(s):  
Muralindran Mariappan ◽  
Manimehala Nadarajan ◽  
Rosalyn R. Porle ◽  
Brendan Khoo ◽  
Wong Wei Kitt ◽  
...  

The use of medical robots in the healthcare industry, especially in rural areas, is hitting the limelight these days. Development of the Medical Tele-diagnosis Robot (MTR) has gained importance in meeting the needs of medical emergencies. Nevertheless, challenges for better visual communication still arise. Thus, a face identification and tracking system for the MTR is designed to provide an automated view, easing the medical specialist's task of identifying the patient and keeping them in the best view for visual communication. This paper focuses on the motion detection module, the first module of the system. An improved motion detection technique is proposed that suits a real-time application with a dynamic background. A frame differencing method was used to detect the motion of the target. The developed motion detection module achieved an accuracy of 96%, contributing to an average of 97% for the whole MTR.
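A frame-differencing tracker of this kind can be reduced to diffing two frames, thresholding, and taking the bounding box of the changed region so the camera can re-centre on the subject. The sketch below is a minimal pure-Python illustration; thresholds and frame contents are assumptions.

```python
# Diff two grayscale frames (2-D lists), threshold the change, and return the
# bounding box (top, left, bottom, right) of the moving region, or None if
# nothing moved.
def motion_bbox(prev, curr, threshold=20):
    rows, cols = [], []
    for r, (pr, cr) in enumerate(zip(prev, curr)):
        for c, (p, q) in enumerate(zip(pr, cr)):
            if abs(p - q) > threshold:
                rows.append(r)
                cols.append(c)
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))

prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[1][2] = curr[2][1] = 255  # the subject moved into these pixels
print(motion_bbox(prev, curr))  # (1, 1, 2, 2)
```

With a dynamic background, a practical system would add background modelling or temporal smoothing on top of the raw difference.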


2016 ◽  
Vol 14 (1) ◽  
pp. 172988141668270 ◽  
Author(s):  
Di Guo ◽  
Fuchun Sun ◽  
Tao Kong ◽  
Huaping Liu

Grasping has always been a great challenge for robots due to their limited ability to understand the perceived sensing data. In this work, we propose an end-to-end deep vision network model to predict possible good grasps from real-world images in real time. In order to accelerate grasp detection, reference rectangles are designed to suggest potential grasp locations, which are then refined to indicate robotic grasps in the image. With the proposed model, the graspable score for each location in the image and the corresponding predicted grasp rectangles can be obtained in real time at a rate of 80 frames per second on a graphics processing unit. The model is evaluated on a data set collected with a real robot, and different reference rectangle settings are compared to yield the best detection performance. The experimental results demonstrate that the proposed approach can help the robot quickly learn the graspable part of an object from the image.
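The reference-rectangle idea can be sketched as tiling candidate rectangles over the image, scoring each, and keeping the best. The grid and the scorer below are placeholders, not the paper's trained network.

```python
# Anchor-style grasp proposal sketch: generate reference rectangles on a grid,
# score each candidate with a (here, dummy) scoring function, keep the best.
def make_references(width, height, step, size):
    return [(x, y, size, size)
            for y in range(0, height - size + 1, step)
            for x in range(0, width - size + 1, step)]

def best_grasp(rects, score_fn):
    return max(rects, key=score_fn)

rects = make_references(width=8, height=8, step=2, size=4)

# Dummy score standing in for the network: prefer rectangles centred at (4, 4).
def score(rect):
    x, y, w, h = rect
    cx, cy = x + w / 2, y + h / 2
    return -((cx - 4) ** 2 + (cy - 4) ** 2)

print(best_grasp(rects, score))  # (2, 2, 4, 4): the candidate centred at (4, 4)
```

In the full model, the scoring and a refinement step are learned end to end, and many rectangle sizes and orientations are tried per location.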


2019 ◽  
Vol 4 (2) ◽  
pp. 356-362
Author(s):  
Jennifer W. Means ◽  
Casey McCaffrey

Purpose The use of real-time recording technology for clinical instruction allows student clinicians to more easily collect data, self-reflect, and move toward independence while supervisors continue to provide support. This article discusses how the use of high-definition real-time recording, Bluetooth technology, and embedded annotation may enhance the supervisory process. It also reports graduate students' perceptions of the benefits of, and satisfaction with, the types of technology used. Method Survey data were collected from graduate students about their use and perceived benefits of advanced technology to support supervision during their first clinical experience. Results Survey results indicate that students found their video recordings useful for self-evaluation, data collection, and therapy preparation. The students also perceived an increase in self-confidence through the use of the Bluetooth headsets, as their supervisors could provide guidance and encouragement without interrupting the flow of therapy sessions by entering the room to redirect them. Conclusions The use of video recording technology can provide opportunities for students to review videos of prospective clients they will be treating, to review their own treatment videos for self-assessment, and to collect additional data. Bluetooth technology provides immediate communication between the clinical educator and the student. Students reported that this communication can improve their self-confidence, perceived performance, and subsequent shift toward independence.

