Real-Time Human Detection in Thermal Infrared Images

Author(s):  
Samah A. F. Manssor ◽  
Shaoyuan Sun ◽  
Mohammed Abdalmajed ◽  
Shima Ali

Abstract: Human detection is a technology that detects predetermined human shapes in an image and ignores everything else, and it plays an irreplaceable role in video surveillance. However, modern person detectors are inefficient at detecting pedestrians at night, and their accuracy is still insufficient. This paper presents a novel, practical model for automatic real-time human detection at night. For this purpose, a new network architecture is proposed by improving the tiny-YOLOv3 network to detect pedestrians in TIR images based on the YOLO algorithm's tasks. The K-means clustering method clusters the image data, which helps to obtain good prior bounding boxes. The proposed network was pre-trained on the original COCO dataset to obtain the initial weights. A comparison with three other methods on the FLIR and DHU Night datasets showed that the proposed method outperformed them and achieved a high accuracy score (mAP%) on TIR images. The method has a detection delay of 4.88 ms. By improving human detection rates in TIR images, we expect this research to help detect intruders in night-time surveillance systems.
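As an illustration of the anchor-box step mentioned in this abstract, the sketch below clusters ground-truth box widths and heights with K-means under a 1 − IoU distance, which is the usual way YOLO-style bounding-box priors are obtained; the synthetic box sizes and the choice of six clusters are assumptions, not values from the paper.

```python
import numpy as np

def iou_wh(boxes, clusters):
    """IoU between (N,2) width/height boxes and (K,2) cluster centroids,
    assuming all boxes share the same top-left corner."""
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
            + (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    """Cluster ground-truth box sizes into k anchor priors using 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, clusters), axis=1)   # nearest = highest IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else clusters[i] for i in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]

# Example with synthetic pedestrian-like width/height pairs (pixels);
# in practice these would come from the TIR training annotations.
boxes = np.abs(np.random.default_rng(1).normal([40, 100], [15, 30], size=(500, 2)))
print(kmeans_anchors(boxes, k=6))
```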

Author(s):  
Hui Huang ◽  
Zhe Li

In this paper, a real-time image transmission algorithm for wireless sensor networks (WSNs) with limited bandwidth is studied. First, a simple and effective monitoring network architecture is established that allows multiple video monitoring nodes to access the network, with data transmission controlled by a synchronization mechanism to avoid collisions. Then, the image data is compressed locally at the monitoring nodes (by over 85%), so that the images from each node meet the requirements of real-time transmission and the overall power consumption of the system is greatly reduced. Finally, four test nodes based on the NVIDIA TX1 are constructed to test the algorithm, verifying the effectiveness of the system framework and the compression algorithm.
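A minimal sketch of the node-side compression idea, assuming a JPEG codec (OpenCV) and a quality sweep that stops once the payload is at most 15% of the raw frame size (i.e. over 85% reduction); the paper's actual compression algorithm is not specified here, so this is only an illustration of the local-compression step.

```python
import cv2
import numpy as np

def compress_for_uplink(frame, target_ratio=0.15, min_quality=20):
    """Encode a frame as JPEG, lowering quality until the payload is at most
    target_ratio of the raw image size. Returns the bytes and the quality used."""
    raw_bytes = frame.nbytes
    for quality in range(90, min_quality - 1, -10):
        ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
        if ok and buf.nbytes <= target_ratio * raw_bytes:
            return buf.tobytes(), quality
    return buf.tobytes(), min_quality  # best effort if the target is not reached

# Example: compress one synthetic 640x480 BGR frame before sending it
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
payload, q = compress_for_uplink(frame)
print(f"raw={frame.nbytes}B  sent={len(payload)}B  quality={q}")
```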


Author(s):  
Apirak Worrakantapon ◽  
Wattana Pongsena ◽  
Kittisak Kerdprasop ◽  
Nittaya Kerdprasop

A process for receiving raw materials from suppliers in the animal feed industry utilizes both automatic and semi-automatic machine control systems. The process, called the "truck dumper system," is the procedure in which suppliers deliver raw materials by truck; the tailgate opens, and the raw materials are discharged by raising the front end of the truck so that they gather in a collection area. In general, the truck dumper system has been controlled manually by staff in a control room, not by the truck driver. However, serious accidents may occur during the process because, when the dumper lifts up, the staff's vision is blocked by the raised part of the truck. Therefore, if the staff lowers the dumper without sufficient safety awareness, people in the restricted area can be endangered. In this study, we propose a framework for automatic human detection to prevent accidents that the truck dumper may cause in the restricted area. The human detection model was developed to detect humans in the various blind corners that are difficult for staff in the control room to monitor. The main technology of the proposed framework is real-time human detection with a fully convolutional neural network architecture called You Only Look Once (YOLO). The framework is designed to send a signal that terminates the truck dumper system immediately after the model detects people in the restricted area. In our experiments, we found that the model could detect a human in all blind corners, including corners where the staff's sight was completely blocked by barriers. The overall speed of the framework was high: the average processing time per image was 397 milliseconds on CPUs and only 52 milliseconds on GPUs. The results also show that the model is effectively applicable to detecting humans in real time due to its high processing speed.
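A hedged sketch of the interlock logic described above: run a YOLO-style detector on each frame and stop the dumper as soon as a person is detected in the restricted area. The detector, camera, and stop functions are injected placeholders, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def should_stop(detections, conf_threshold=0.5):
    """Return True if any sufficiently confident 'person' detection is present."""
    return any(d.label == "person" and d.confidence >= conf_threshold
               for d in detections)

def control_loop(run_detector, stop_dumper, get_frame):
    """Per-frame interlock: run a YOLO-style detector, halt the dumper on a hit.
    run_detector(frame) -> list[Detection]; stop_dumper() raises the interlock."""
    while True:
        frame = get_frame()
        if frame is None:          # camera stream ended
            break
        if should_stop(run_detector(frame)):
            stop_dumper()          # terminate the truck dumper immediately
            break
```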


2021 ◽  
Vol 13 (9) ◽  
pp. 5108
Author(s):  
Navin Ranjan ◽  
Sovit Bhandari ◽  
Pervez Khan ◽  
Youn-Sik Hong ◽  
Hoon Kim

The transportation system, especially the road network, is the backbone of any modern economy. However, with rapid urbanization, congestion levels have surged drastically, with a direct effect on the quality of urban life, the environment, and the economy. In this paper, we propose (i) an inexpensive and efficient Traffic Congestion Pattern Analysis algorithm, based on image processing, which identifies the group of roads in a network that suffer from recurring congestion; and (ii) a deep neural network architecture, formed from a Convolutional Autoencoder, which learns both spatial and temporal relationships from a sequence of image data to predict the city-wide grid congestion index. Our experiments show that both algorithms are efficient: the pattern analysis relies only on basic arithmetic operations, while the prediction algorithm outperforms two other deep neural networks (Convolutional Recurrent Autoencoder and ConvLSTM) in terms of large-scale traffic network prediction performance. A case study was conducted on a dataset from the city of Seoul.
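A minimal sketch of a convolutional autoencoder that maps a short history of congestion-index grids to the next grid; the framework (PyTorch), grid size, and channel counts are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Encode a stack of past congestion-index grids and decode the next grid."""
    def __init__(self, in_frames=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_frames, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, in_frames, H, W)
        return self.decoder(self.encoder(x))

# Example: predict the next 64x64 congestion grid from the previous 4 grids
x = torch.rand(8, 4, 64, 64)
model = ConvAutoencoder(in_frames=4)
print(model(x).shape)                # torch.Size([8, 1, 64, 64])
```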


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Svenja Ipsen ◽  
Sven Böttger ◽  
Holger Schwegmann ◽  
Floris Ernst

Abstract: Ultrasound (US) imaging, in contrast to other image guidance techniques, offers the distinct advantage of providing volumetric image data in real time (4D) without using ionizing radiation. The goal of this study was to perform the first quantitative comparison of three different 4D US systems with fast matrix array probes and real-time data streaming regarding their target tracking accuracy and system latency. Sinusoidal motion of varying amplitudes and frequencies was used to simulate breathing motion with a robotic arm and a static US phantom. US volumes and robot positions were acquired online and stored for retrospective analysis. A template matching approach was used for target localization in the US data. Target motion measured in US was compared to the reference trajectory performed by the robot to determine localization accuracy and system latency. Using the robotic setup, all investigated 4D US systems could detect a moving target with sub-millimeter accuracy. However, high system latency in particular increased tracking errors substantially and should be compensated with prediction algorithms for respiratory motion compensation.
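The latency estimation described here can be illustrated with a simple cross-correlation between the robot's reference trajectory and the target positions measured in the ultrasound volumes. The sketch below uses synthetic 1-D sinusoidal "breathing" motion and an assumed 120 ms delay, not the study's data or its 3-D template matching.

```python
import numpy as np

def estimate_latency(reference, measured, dt):
    """Estimate system latency as the lag (in seconds) that maximizes the
    cross-correlation between the reference trajectory and the measured
    target positions."""
    ref = reference - reference.mean()
    mea = measured - measured.mean()
    corr = np.correlate(mea, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)     # positive lag: measurement is delayed
    return lag * dt

# Example: 0.3 Hz sinusoidal 'breathing' motion sampled at 50 Hz,
# with a simulated 120 ms acquisition/streaming delay and measurement noise.
dt, delay = 1 / 50, 0.120
t = np.arange(0, 20, dt)
reference = 5.0 * np.sin(2 * np.pi * 0.3 * t)                       # mm
measured = 5.0 * np.sin(2 * np.pi * 0.3 * (t - delay)) \
           + np.random.normal(0, 0.1, t.size)
print(f"estimated latency: {estimate_latency(reference, measured, dt) * 1000:.0f} ms")
```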


Author(s):  
Yefan Xie ◽  
Jiangbin Zheng ◽  
Xuan Hou ◽  
Yue Xi ◽  
Fengming Tian

Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 275
Author(s):  
Ruben Panero Martinez ◽  
Ionut Schiopu ◽  
Bruno Cornelis ◽  
Adrian Munteanu

The paper proposes a novel instance segmentation method for traffic videos devised for deployment on real-time embedded devices. A novel neural network architecture is proposed using a multi-resolution feature extraction backbone and improved network designs for the object detection and instance segmentation branches. A novel post-processing method is introduced to reduce the rate of false detections by evaluating the quality of the output masks. An improved network training procedure is proposed based on a novel label assignment algorithm. An ablation study on the speed-vs.-performance trade-off further modifies the two branches and replaces the conventional ResNet-based, performance-oriented backbone with a lightweight, speed-oriented design. The proposed architectural variations achieve real-time performance when deployed on embedded devices. The experimental results demonstrate that the proposed instance segmentation method for traffic videos outperforms the You Only Look At CoefficienTs (YOLACT) algorithm, a state-of-the-art real-time instance segmentation method. The proposed architecture achieves 31.57 average precision on the COCO dataset, while its speed-oriented variations run at up to 66.25 frames per second on the Jetson AGX Xavier module.
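A hedged sketch of a mask-quality post-processing filter of the kind described above: each predicted soft mask is scored and degenerate detections are dropped. The scoring heuristic (minimum foreground area and mean foreground confidence) is an assumption for illustration, not the paper's criterion.

```python
import numpy as np

def filter_by_mask_quality(masks, prob_thresh=0.5, min_area=64, min_mean_conf=0.7):
    """Keep only detections whose soft masks are confident and non-degenerate.

    masks : (N, H, W) array of per-instance mask probabilities in [0, 1].
    Returns the indices of detections that pass the quality check."""
    keep = []
    for i, mask in enumerate(masks):
        fg = mask > prob_thresh
        area = int(fg.sum())
        if area < min_area:
            continue                     # tiny or empty mask -> likely false positive
        if mask[fg].mean() < min_mean_conf:
            continue                     # low confidence inside the mask
        keep.append(i)
    return np.array(keep, dtype=int)

# Example with two soft masks: near-noise vs. a sparse but confident mask
rng = np.random.default_rng(0)
masks = np.stack([rng.random((128, 128)) * 0.3,                        # weak everywhere
                  np.where(rng.random((128, 128)) > 0.9, 0.95, 0.0)])  # confident pixels
print(filter_by_mask_quality(masks))                                   # -> [1]
```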


Author(s):  
Ashish Singh ◽  
Kakali Chatterjee ◽  
Suresh Chandra Satapathy

Abstract: The Mobile Edge Computing (MEC) model attracts more users to its services due to its characteristics and rapid delivery approach. This network architecture enables users to access information from the edge of the network. However, the security of this edge network architecture is a big challenge. All MEC services are available in a shared manner and are accessed by users via the Internet. Attacks such as user-to-root, remote login, Denial of Service (DoS), snooping, and port scanning are possible in this computing environment due to Internet-based remote service. Intrusion detection is an approach to protect the network by detecting attacks. Existing detection models can detect only known attacks, and their efficiency in monitoring real-time network traffic is low. Existing intrusion detection solutions cannot identify new, unknown attacks. Hence, there is a need for an Edge-based Hybrid Intrusion Detection Framework (EHIDF) that not only detects known attacks but is also capable of detecting unknown attacks in real time with a low False Alarm Rate (FAR). This paper proposes an EHIDF that mainly uses Machine Learning (ML) approaches to detect intrusive traffic in the MEC environment. The proposed framework consists of three intrusion detection modules with three different classifiers: the Signature Detection Module (SDM) uses a C4.5 classifier, the Anomaly Detection Module (ADM) uses a Naive Bayes-based classifier, and the Hybrid Detection Module (HDM) uses the Meta-AdaBoostM1 algorithm. The developed EHIDF can address current detection problems by detecting new unknown attacks with a low FAR. The implementation results show that the EHIDF achieves an accuracy of 90.25% with a FAR of 1.1%. Compared with previous works, these results represent improved performance: accuracy is improved by up to 10.78% and FAR is reduced by up to 93%. A game-theoretical approach is also discussed to analyze the security strength of the proposed framework.
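A toy sketch of the three-module layout using scikit-learn stand-ins: a DecisionTreeClassifier with entropy splitting in place of C4.5, GaussianNB for the anomaly module, and AdaBoostClassifier for the hybrid module. Feature extraction, the real traffic data, and the actual module interaction in the EHIDF are out of scope; this only illustrates the module structure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier

class HybridIDS:
    """Toy three-module layout: signature, anomaly, and hybrid detection.
    A flow flagged by any module is reported as an attack."""
    def __init__(self):
        self.sdm = DecisionTreeClassifier(criterion="entropy")  # C4.5-like stand-in
        self.adm = GaussianNB()                                  # anomaly module
        self.hdm = AdaBoostClassifier(n_estimators=50)           # AdaBoostM1-like

    def fit(self, X, y):
        for clf in (self.sdm, self.adm, self.hdm):
            clf.fit(X, y)
        return self

    def predict(self, X):
        votes = [clf.predict(X) for clf in (self.sdm, self.adm, self.hdm)]
        return [int(any(v)) for v in zip(*votes)]   # 1 = attack if any module fires

# Example on synthetic "traffic features" (label 1 = attack)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 1).astype(int)
print(HybridIDS().fit(X, y).predict(X[:5]))
```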


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to acquire an artificial-intelligence-level understanding of the environments they are navigating. This perception can be realized by training a computing machine to classify objects in the environment. One well-known machine training approach is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes with a huge sacrifice in terms of time and computational resources. Collecting large amounts of input data, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes of 10,000 different images per class were used as input data, of which 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
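A hedged sketch of a lightweight CNN for ten-class onboard classification, with simple illustrative augmentation and plain stochastic gradient descent used as a stand-in for the paper's sequential gradient descent; the layer sizes and augmentation choices are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

def augment(batch):
    """Illustrative augmentation: random horizontal flip plus small brightness jitter."""
    if torch.rand(1) < 0.5:
        batch = torch.flip(batch, dims=[-1])
    return (batch + 0.05 * torch.randn_like(batch)).clamp(0, 1)

class LightNet(nn.Module):
    """Small CNN intended for real-time, 10-class onboard classification."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LightNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch of augmented 64x64 RGB images
images = augment(torch.rand(16, 3, 64, 64))
loss = criterion(model(images), torch.randint(0, 10, (16,)))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```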

