Real-Time Analysis of Human Body Parts and Gesture-Activity Recognition in 3D

2011 ◽  
pp. 130-174
Author(s):  
Burak Ozer ◽  
Tiehan Lv ◽  
Wayne Wolf

This chapter focuses on real-time processing techniques for the reconstruction of visual information from multiple views and its analysis for human detection and gesture and activity recognition. It presents a review of the main components of three-dimensional visual processing techniques and the visual analysis of multiple cameras, i.e., projection of three-dimensional models onto two-dimensional images and three-dimensional visual reconstruction from multiple images. It discusses real-time aspects of these techniques and shows how these aspects affect the software and hardware architectures. Furthermore, the authors present their multiple-camera system to investigate the relationship between activity recognition algorithms and the architectures required to perform these tasks in real time. The chapter describes the proposed activity recognition method, which consists of a distributed algorithm and a data fusion scheme for two- and three-dimensional visual analysis, respectively. The authors analyze the data independencies available to this algorithm and discuss potential architectures for exploiting the parallelism resulting from these independencies.
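As a hedged illustration of the projection step mentioned above (mapping a three-dimensional model onto a two-dimensional image), the following sketch assumes a simple pinhole camera with known calibration; it is not the chapter's implementation, and all parameter names are ours.

```python
# Minimal sketch (not the authors' code): projecting 3D model points onto a
# 2D image plane with an assumed pinhole camera.
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates.

    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: 3-vector translation
    (all assumed known from camera calibration).
    """
    cam = (R @ points_3d.T).T + t          # world -> camera coordinates
    uvw = (K @ cam.T).T                    # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

# Example: a point one metre in front of a 640x480 camera with focal length 500 px
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
print(project_points(np.array([[0.1, 0.0, 1.0]]), K, np.eye(3), np.zeros(3)))
```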

Biometrics ◽  
2017 ◽  
pp. 761-777
Author(s):  
Di Zhao

Mobile GPU computing, or System on Chip with embedded GPU (SoC GPU), has recently come into great demand. Since these SoCs are designed for mobile devices with real-time applications such as image and video processing, highly efficient implementations of the wavelet transform are essential for these chips. In this paper, the author develops two SoC GPU based DWTs: signal-based parallelization for the discrete wavelet transform (sDWT) and coefficient-based parallelization for the discrete wavelet transform (cDWT), and evaluates the performance of the three-dimensional wavelet transform on the SoC GPU Tegra K1. Computational results show that SoC GPU based DWT is significantly faster than SoC CPU based DWT. They also show that sDWT can generally satisfy the requirement of real-time processing (30 frames per second) at image sizes of 352×288, 480×320, 720×480 and 1280×720, while cDWT achieves real-time processing only at the smaller image sizes of 352×288 and 480×320.
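As a rough sketch of the per-signal parallelism that sDWT exploits (our simplification in NumPy, not the author's GPU kernels), a single-level Haar DWT can be applied to each row of a frame independently:

```python
# Illustrative sketch only: one-level Haar DWT applied row by row, the kind of
# independent per-signal work that could be distributed across GPU threads.
import numpy as np

def haar_dwt_rows(frame):
    """One-level Haar DWT along the last axis of a 2D frame."""
    even, odd = frame[:, 0::2], frame[:, 1::2]
    approx = (even + odd) / np.sqrt(2.0)   # low-pass (approximation) coefficients
    detail = (even - odd) / np.sqrt(2.0)   # high-pass (detail) coefficients
    return approx, detail

frame = np.random.rand(288, 352)           # stand-in for one 352x288 frame
cA, cD = haar_dwt_rows(frame)
print(cA.shape, cD.shape)                  # (288, 176) (288, 176)
```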


2017 ◽  
Vol 12 (5) ◽  
pp. 956-966
Author(s):  
Ken-ichi Shimose ◽  
Shingo Shimizu ◽  
Ryohei Kato ◽  
Koyuru Iwanami

This study reports preliminary results from the three-dimensional variational method (3DVAR) with incremental analysis updates (IAU) of the surface wind field, which is suitable for real-time processing. In this study, 3DVAR with IAU was calculated for the case of a tornadic storm using 500-m horizontal grid spacing with updates every 10 min, for 6 h. Radial velocity observations by eight X-band multi-parameter Doppler radars and three Doppler lidars around the Tokyo Metropolitan area, Japan, were used for the analysis. Three types of analyses were performed between 1800 and 2400 LST (local standard time: UTC + 9 h) on 6 September 2015: the first used 3DVAR only (3DVAR), the second used 3DVAR with IAU (3DVAR+IAU), and the third did not use data assimilation (CNTL). 3DVAR+IAU showed the best accuracy of the three analyses, and 3DVAR alone showed the worst, even though its background was updated every 10 min. Sharp spike signals appeared in the time series of wind speed at 10 m AGL analyzed by 3DVAR, strongly suggesting a "shock" caused by dynamic imbalance due to the instantaneous addition of analysis increments to the background wind components. The spike signal did not appear in the 3DVAR+IAU analysis; we therefore suggest that the IAU method reduces the shock caused by the addition of analysis increments. This study provides useful information on the most suitable DA method for the real-time analysis of surface wind fields.
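The IAU idea can be summarized in a few lines: rather than adding the full 3DVAR increment at one analysis time, the increment is fed to the model in small fractions over the assimilation window. The following is a purely schematic sketch with a toy model and our own parameter names, not the study's assimilation system:

```python
# Schematic sketch of incremental analysis updates (IAU): the analysis increment
# is applied as a constant forcing over n_steps model steps instead of all at once,
# which is what avoids the "shock" described above.
import numpy as np

def integrate_with_iau(state, increment, model_step, n_steps):
    """Advance the model n_steps while adding increment/n_steps at each step."""
    forcing = increment / n_steps
    for _ in range(n_steps):
        state = model_step(state) + forcing
    return state

# Toy model: simple damping of the wind field at each step
model_step = lambda u: 0.99 * u
u0 = np.array([5.0, -2.0])                  # background wind components (u, v)
du = np.array([1.0, 0.5])                   # 3DVAR analysis increment
print(integrate_with_iau(u0, du, model_step, n_steps=60))
```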


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Naived George Eapen ◽  
Debabrata Samanta ◽  
Manjit Kaur ◽  
Jehad F. Al-Amri ◽  
Mehedi Masud

The increase in computational power in recent years has opened a new door for image processing techniques. Three-dimensional object recognition, identification, pose estimation, and mapping are becoming popular. The need to map real-world objects into a three-dimensional spatial representation is growing rapidly, especially considering the huge leap made in the past decade in virtual reality and augmented reality. This paper discusses an algorithm to convert an array of captured images into estimated 3D coordinates of their external mappings. Elementary methods for generating three-dimensional models are also discussed. This framework will help the community estimate the three-dimensional coordinates of a convex-shaped object from a series of two-dimensional images. The built model could be further processed to increase its resemblance to the input object in terms of shape, contour, and texture.
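A standard building block for recovering 3D coordinates from 2D images is linear (DLT) triangulation from two calibrated views; the following minimal sketch illustrates that step only and is not the paper's full pipeline (camera matrices and pixel coordinates below are made up for the example):

```python
# Hedged sketch: linear (DLT) triangulation of one 3D point from two views.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Estimate a 3D point from pixel coords x1, x2 and 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)              # solve A X = 0 in the least-squares sense
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # first camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # second camera shifted by 1 unit
print(triangulate(P1, P2, (0.1, 0.0), (-0.9, 0.0)))          # recovers [0.1, 0.0, 1.0]
```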


Safety ◽  
2019 ◽  
Vol 5 (3) ◽  
pp. 55 ◽  
Author(s):  
Subharthi Banerjee ◽  
Jose Santos ◽  
Michael Hempel ◽  
Pejman Ghasemzadeh ◽  
Hamid Sharif

Railyards are among the most challenging and complex workplace environments in any industry. Railyard workers are constantly surrounded by dangerous moving objects, in a noisy environment where distractions can easily result in accidents or casualties. Over the years, yards have contributed 20–30% of all railroad accidents. Monitoring the railyard workspace to keep personnel safe from falls, slips, being struck by large objects, etc., and preventing fatal accidents can be particularly challenging due to the sheer number of factors involved, such as the need to protect a large geographical space, the inherent dynamicity of the situations workers find themselves in, the presence of heavy rolling stock, blind spots, uneven surfaces, and a plethora of trip hazards, to name a few. Since workers spend the majority of their time outdoors, weather conditions such as snow, fog, and rain also play an important role. Conventional sensor deployments in yards thus fail to consistently monitor this workspace. In this paper, the authors identify these challenges and address them with a novel detection method using a multi-sensor approach. They also propose novel algorithms to detect, classify, and remotely monitor Employees-on-Duty (EoDs) without hindering the real-time decision-making of the EoD. In the proposed solution, the authors use a fast spherical-to-rectilinear transform algorithm on fish-eye images to monitor a wide area and address blind spots in visual monitoring, and employ Software-Defined RADAR (SDRADAR) to address the low-visibility problem. The sensors can monitor the workspace over a range of 100 m with blind detection and classification. These algorithms maintain a real-time processing delay of ≤0.1 s between consecutive frames for both SDRADAR and visual processing.
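The spherical-to-rectilinear idea can be illustrated with a remap-table sketch for an equidistant fish-eye lens model (our assumptions and parameter names, not the authors' fast algorithm):

```python
# Hedged illustration: build per-pixel lookup maps that convert an equidistant
# fish-eye image into a rectilinear (perspective) view; the maps can then be
# applied with an image-warping routine such as OpenCV's cv2.remap.
import numpy as np

def fisheye_to_rectilinear_maps(out_w, out_h, out_focal, fish_cx, fish_cy, fish_focal):
    """Lookup maps giving, for each rectilinear output pixel, the fish-eye source pixel."""
    xs = (np.arange(out_w) - out_w / 2.0) / out_focal        # normalized ray x = X/Z
    ys = (np.arange(out_h) - out_h / 2.0) / out_focal        # normalized ray y = Y/Z
    x, y = np.meshgrid(xs, ys)
    r = np.sqrt(x ** 2 + y ** 2)                             # tan(theta)
    theta = np.arctan(r)                                     # angle from the optical axis
    rho = fish_focal * theta                                 # equidistant model: r = f * theta
    scale = np.where(r > 0, rho / np.maximum(r, 1e-9), 0.0)
    map_x = (fish_cx + x * scale).astype(np.float32)
    map_y = (fish_cy + y * scale).astype(np.float32)
    return map_x, map_y

map_x, map_y = fisheye_to_rectilinear_maps(640, 480, 300.0, 960.0, 540.0, 350.0)
print(map_x.shape, map_y.shape)   # (480, 640) (480, 640)
# Usage (assuming OpenCV): rect = cv2.remap(fisheye_img, map_x, map_y, cv2.INTER_LINEAR)
```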


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Wenji Zhang ◽  
Ahmad Hoorfar ◽  
Christopher Thajudeen

A fast and efficient microwave tomographic algorithm is proposed for 2-D and 3-D real-time intrawall imaging. The exploding reflection model is utilized to simplify the imaging formulation, and the half-space Green’s function is expanded in the spectral domain to facilitate the easy implementation of the imaging algorithm with the fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT). The linearization of the inversion scheme and employment of FFT/IFFT in the imaging formula make the algorithm suitable for various applications pertaining to the inspection of a large probed region and allow real-time processing. Representative numerical and experimental results are presented to show the effectiveness and efficiency of the proposed algorithm for real-time intrawall characterization.
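To give a flavor of FFT-based backpropagation under an exploding-reflector assumption (a heavily simplified free-space analogue, not the paper's half-space spectral-domain formulation), consider the following sketch; all names and the toy parameters are ours:

```python
# Schematic sketch only: frequency-domain backpropagation of line-scan data,
# in the spirit of phase-shift migration, using FFT/IFFT throughout.
import numpy as np

def fft_backpropagate(data_xt, dx, dt, v, depths):
    """data_xt: (nx, nt) reflections recorded along x at z=0; returns an (nz, nx) image."""
    nx, nt = data_xt.shape
    D = np.fft.fft2(data_xt)                                  # -> (kx, omega) spectrum
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)[:, None]
    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)[None, :]
    kz_sq = (omega / v) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))                      # keep propagating components only
    prop = np.where(kz_sq > 0, 1.0, 0.0)
    image = np.zeros((len(depths), nx))
    for i, z in enumerate(depths):
        Dz = D * prop * np.exp(1j * kz * z)                   # backpropagate spectrum to depth z
        image[i] = np.real(np.fft.ifft(Dz, axis=0).sum(axis=1))  # imaging condition at t = 0
    return image

# Toy usage with random data; v = c/2 reflects the exploding-reflector convention.
data = np.random.rand(64, 128)
print(fft_backpropagate(data, dx=0.01, dt=1e-10, v=1.5e8, depths=np.linspace(0, 0.5, 8)).shape)
```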


2019 ◽  
Vol 9 (19) ◽  
pp. 4128
Author(s):  
Tae Wuk Bae ◽  
Kee Koo Kwon

Recently, with the active development of wearable electrocardiogram (ECG) devices such as smart bands or portable ECG devices, efficient ECG signal processing technology that can be applied in real time has been actively studied. However, a wearable ECG device is exposed to various noise conditions, reducing the reliability of the detected R point or QRS interval. In addition, as early-warning techniques in healthcare systems have been studied, real-time ECG signal processing has become very important in wearable ECG devices. In this paper, we propose an efficient real-time R and QRS detection method using two kinds of first-order derivative filters and a max filter to analyze ECG signals measured from wearable ECG devices in real time. The proposed method detects the R point and QRS interval in units of a sliding window for real-time processing and combines the R points detected in each sliding window. The reliability of the detected R points and RR intervals is also examined through noise-region analysis using the histogram characteristics of the sample points. The performance of the proposed method was verified on the MIT-BIH database (DB), the CYBHi DB, and real ECG data measured from the developed wearable ECG patch. The proposed method achieves Se = 99.80%, +P = 99.80%, and DER = 0.36% on the MIT-BIH DB. In addition, it enables accurate R-point detection and heart rate variability (HRV) analysis even with noisy ECG signals.
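A minimal sketch of derivative-plus-max-filter R-peak detection on a signal segment is given below; it is illustrative only, and the filters, window lengths, and thresholds do not reproduce the paper's exact method:

```python
# Hedged sketch: detect R-peak candidates with a first-order derivative and a
# moving-maximum (max) filter; assumes SciPy is available for the max filter.
import numpy as np
from scipy.ndimage import maximum_filter1d

def detect_r_peaks(ecg, fs, win_sec=0.15, thresh_ratio=0.6):
    """Return sample indices of candidate R peaks in a single-lead ECG segment."""
    deriv = np.diff(ecg, prepend=ecg[0])              # first-order derivative filter
    energy = deriv ** 2                               # emphasize the steep QRS slopes
    local_max = maximum_filter1d(energy, size=int(win_sec * fs))
    threshold = thresh_ratio * np.max(energy)
    # a sample is a candidate if it is the local maximum in its window and above threshold
    return np.where((energy == local_max) & (energy > threshold))[0]

# Usage on a synthetic signal: 5 s of noise with a spike once per second
fs = 360
sig = 0.05 * np.random.randn(5 * fs)
sig[fs // 2::fs] += 1.0
print(detect_r_peaks(sig, fs))
```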


Author(s):  
R. A. Peshkov ◽  
D. R. Ismagilov

The paper introduces a mathematical model for calculating the gas-dynamic parameters in a launch container. The model takes into account chemical interactions between the main components of the combustion products, i.e., carbon monoxide and hydrogen, and oxygen. The resulting energy can be used to increase the initiating pulse of the rocket. Within the research, we described the basic requirements for the grid model and analyzed the accuracy of the results obtained. Furthermore, we compared the calculated pressure in the launch container with the results of a known method. The findings show that the use of two-dimensional and three-dimensional models makes it possible to obtain not only volume-averaged gas-dynamic parameters, such as pressure, temperature, and density, but also the distribution of these parameters over the computational domain. The developed numerical simulation method will allow us to estimate the effect of changes in the configuration of the sub-rocket volume and other parameters on the dynamics of the rocket's movement without conducting an expensive experiment.


Smart Cities ◽  
2021 ◽  
Vol 4 (4) ◽  
pp. 1496-1518
Author(s):  
Nyayu Latifah Husni ◽  
Putri Adelia Rahmah Sari ◽  
Ade Silvia Handayani ◽  
Tresna Dewi ◽  
Seyed Amin Hosseini Seno ◽  
...  

This paper describes the implementation of a real-time human activity recognition system in public areas. The objective of the study is to develop an alarm system to identify people who do not care for their surrounding environment. In this research, the actions recognized are limited to littering activity, using two methods, i.e., CNN and CNN-LSTM. The proposed system captures, classifies, and recognizes the activity using two main components, namely a camera and a mini-PC. The system was implemented in two locations, i.e., the Sekanak River and the mini garden near the Sekanak market, and was able to recognize littering activity successfully. Based on the proposed models, validation on the testing data in simulation gives a loss value of 70% and an accuracy of 56% for CNN model 8 trained for 500 epochs, and a loss value of 10.61% and an accuracy of 97% for CNN-LSTM trained for 100 epochs. In the real-world experiments, CNN model 8 detected littering activity with 66.7% and 75% success at the mini garden and the Sekanak River, respectively, while CNN-LSTM achieved 94.4% and 100% success at the same two locations.
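A CNN-LSTM video classifier of the general kind described, with frame-wise CNN features fed to an LSTM, can be sketched as follows; the layer sizes are illustrative and do not reproduce the authors' "model 8":

```python
# Hedged sketch of a CNN-LSTM clip classifier (e.g., littering vs. not littering).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_classes=2, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):                          # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)   # CNN on every frame
        out, _ = self.lstm(feats)                              # temporal modeling
        return self.fc(out[:, -1])                             # classify from the last step

print(CNNLSTM()(torch.randn(2, 8, 3, 64, 64)).shape)   # torch.Size([2, 2])
```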


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5170 ◽  
Author(s):  
Dat Ngo ◽  
Seungmin Lee ◽  
Quoc-Hieu Nguyen ◽  
Tri Minh Ngo ◽  
Gi-Dong Lee ◽  
...  

Vision-based systems operating outdoors are significantly affected by weather conditions, notably those related to atmospheric turbidity. Accordingly, haze removal algorithms, actively researched over the last decade, have come into use as a pre-processing step. Although numerous approaches exist, an efficient method with a fast implementation is still in great demand. This paper proposes a single-image haze removal algorithm with a corresponding hardware implementation for real-time processing. Contrary to methods that invert the physical model describing the formation of hazy images, the proposed approach mainly exploits computationally efficient image processing techniques such as detail enhancement, multiple-exposure image fusion, and adaptive tone remapping. It therefore has low computational complexity while achieving good performance compared with other state-of-the-art methods. Moreover, the low computational cost also yields a compact hardware implementation capable of handling high-quality videos at an acceptable rate, that is, greater than 25 frames per second, as verified on a Field Programmable Gate Array chip. The software source code and datasets are available online for public use.
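The multiple-exposure fusion step can be sketched as follows; this is a simplified stand-in for the paper's pipeline (it omits detail enhancement and adaptive tone remapping, and the gammas and weighting are our assumptions):

```python
# Hedged sketch: fuse several artificially under-exposed versions of a hazy image
# using per-pixel well-exposedness weights, the core idea of exposure-fusion dehazing.
import numpy as np

def fuse_exposures(img, gammas=(1.0, 2.0, 3.0), sigma=0.2):
    """img: float RGB array in [0, 1]; returns the fused image."""
    stack, weights = [], []
    for g in gammas:
        exp_img = img ** g                                   # darker (under-exposed) variant
        lum = exp_img.mean(axis=2, keepdims=True)
        w = np.exp(-((lum - 0.5) ** 2) / (2 * sigma ** 2))   # well-exposedness weight
        stack.append(exp_img)
        weights.append(w)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0) + 1e-8                    # normalize weights per pixel
    return (weights * np.stack(stack)).sum(axis=0)

hazy = np.random.rand(4, 4, 3)                               # stand-in for a hazy frame
print(fuse_exposures(hazy).shape)                            # (4, 4, 3)
```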


Due to advances in technology, the availability of resources, and the increased use of on-node sensors, enormous amounts of data are being generated. This physiological information must be analyzed and classified by efficient and effective approaches such as deep learning and artificial intelligence. Human Activity Recognition (HAR) plays a dominant role in sports, security, anti-crime, and healthcare, as well as in environmental applications such as wildlife observation. Most techniques work well for offline processing rather than real-time processing. Few approaches provide high accuracy for real-time processing of large-scale data; one promising approach is deep learning. Limited resources are one reason the use of deep learning is restricted on low-power, body-worn devices, even though deep learning implementations are known to produce precise results across different computing systems. In this paper we propose a deep learning approach that integrates features learned from inertial sensor data with complementary knowledge obtained from a set of shallow features, making accurate real-time activity classification possible. The aim of this integrated design is to eliminate the obstacles to using deep learning methods for real-time analysis. Before passing the data into the deep learning framework, we perform spectral analysis to optimize the proposed methodology for on-node computation. The accuracy of the combined approach is tested on datasets collected in the laboratory and in controlled and uncontrolled real-world environments. Our results demonstrate the validity of the methodology on various human activity datasets, outperforming other techniques, including the two strategies used within our combined pipeline. We also show that the classification times of our integrated design are consistent with on-node real-time analysis requirements on smartphones and wearable devices.
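The hybrid idea of combining learned features with shallow spectral features before classification can be sketched as follows; the naming, layer sizes, and window length are ours and do not reproduce the paper's architecture:

```python
# Hedged sketch: shallow spectral features (|rFFT| of each inertial channel) are
# concatenated with features learned by a small 1D CNN before the final classifier.
import torch
import torch.nn as nn

class HybridHAR(nn.Module):
    def __init__(self, channels=3, win=128, n_classes=6):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(channels, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),            # learned features (16-dim)
        )
        spec_dim = channels * (win // 2 + 1)                  # rFFT bins per channel
        self.classifier = nn.Linear(16 + spec_dim, n_classes)

    def forward(self, x):                                     # x: (B, channels, win)
        deep = self.cnn(x)                                    # learned representation
        spec = torch.fft.rfft(x, dim=-1).abs().flatten(1)     # shallow spectral features
        return self.classifier(torch.cat([deep, spec], dim=1))

print(HybridHAR()(torch.randn(4, 3, 128)).shape)              # torch.Size([4, 6])
```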

