Detection of sitting posture using hierarchical image composition and deep learning

2021 ◽  
Vol 7 ◽  
pp. e442
Author(s):  
Audrius Kulikajevas ◽  
Rytis Maskeliunas ◽  
Robertas Damaševičius

Human posture detection allows the capture of the kinematic parameters of the human body, which is important for many applications such as assisted living, healthcare, physical exercise and rehabilitation. This task can greatly benefit from recent developments in deep learning and computer vision. In this paper, we propose a novel deep recurrent hierarchical network (DRHN) model based on MobileNetV2 that allows for greater flexibility by reducing or eliminating posture detection problems related to limited visibility of the human torso in the frame, i.e., the occlusion problem. The DRHN network accepts RGB-Depth frame sequences and produces a representation of semantically related posture states. We achieved 91.47% accuracy at a 10 fps rate for sitting posture recognition.
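The abstract describes a per-frame feature extractor (MobileNetV2) feeding a recurrent stage that aggregates an RGB-Depth frame sequence into a posture representation. A minimal numpy sketch of that shape of computation is given below; it is not the authors' DRHN implementation, and all layer sizes, weight names and the five-class output are hypothetical stand-ins:

```python
import numpy as np

def frame_features(rgbd_frame, W_feat):
    # Stand-in for the MobileNetV2 backbone: a single linear projection
    # of a flattened RGB-D frame. The real model uses deep convolutions.
    return np.tanh(rgbd_frame.reshape(-1) @ W_feat)

def recurrent_posture_logits(frames, W_feat, W_h, W_x, W_out):
    # Recurrent aggregation over the frame sequence: each hidden state
    # combines the previous state with the current frame's features.
    h = np.zeros(W_h.shape[0])
    for frame in frames:
        x = frame_features(frame, W_feat)
        h = np.tanh(W_h @ h + W_x @ x)
    return W_out @ h  # one logit per posture state

rng = np.random.default_rng(0)
frames = rng.standard_normal((10, 8, 8, 4))       # 10 tiny RGB-D frames
W_feat = 0.1 * rng.standard_normal((8 * 8 * 4, 16))
W_h = 0.1 * rng.standard_normal((32, 32))
W_x = 0.1 * rng.standard_normal((32, 16))
W_out = 0.1 * rng.standard_normal((5, 32))        # 5 hypothetical posture states

logits = recurrent_posture_logits(frames, W_feat, W_h, W_x, W_out)
print(logits.shape)
```

The sketch only illustrates the data flow (sequence in, one posture representation out); real training, occlusion handling and the hierarchical structure are beyond its scope.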

2020 ◽  
Author(s):  
Jahnvi Gupta ◽  
Nitin Gupta ◽  
Mukesh Kumar ◽  
Ritwik Duggal

Analysis of human posture has many applications in the fields of sports and medical science, including patient monitoring, lifestyle analysis, elderly care, etc. Many of the works in this area have been based on computer vision techniques, which are limited in providing real-time solutions. Thus, Internet of Things (IoT) based solutions are being planned and used for human posture recognition and detection. The data collected from sensors are then passed to machine learning or deep learning algorithms to find different patterns. In this chapter, an introduction to IoT-based posture detection is provided, along with an introduction to the underlying sensor technology, which can help in selecting appropriate sensors for posture detection.
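The pipeline described here (sensor readings passed to a machine learning algorithm) can be sketched minimally with a windowed accelerometer feed and a nearest-centroid classifier. This is an illustrative assumption, not the chapter's method; the sensor values, class names and feature choice are all made up:

```python
import numpy as np

def extract_features(window):
    # Simple statistical features per axis (x, y, z): mean and std.
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fit_centroids(windows, labels):
    # One centroid per posture class in feature space.
    feats = np.array([extract_features(w) for w in windows])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in sorted(set(labels))}

def predict(window, centroids):
    f = extract_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Synthetic accelerometer windows: gravity on z while sitting, y while standing.
rng = np.random.default_rng(1)
sitting = [rng.normal([0, 0, 9.8], 0.1, size=(50, 3)) for _ in range(5)]
standing = [rng.normal([0, 9.8, 0], 0.1, size=(50, 3)) for _ in range(5)]
centroids = fit_centroids(sitting + standing, ["sit"] * 5 + ["stand"] * 5)

print(predict(rng.normal([0, 0, 9.8], 0.1, size=(50, 3)), centroids))  # prints "sit"
```

On a real IoT node the feature extraction would run on-device per window, with either on-device inference or transmission of the compact feature vector instead of raw samples.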


2020 ◽  
Author(s):  
Jahnvi Gupta ◽  
Nitin Gupta ◽  
Mukesh Kumar ◽  
Ritwik Duggal

Analysis of human posture has many applications in the field of sports and medical science including patient monitoring, lifestyle analysis, elderly care etc. Many of the works in this area have been based on computer vision techniques. These are limited in providing real-time solution. Thus, Internet of Things (IoT) based solution are being planned and used for the human posture recognition and detection. The data collected from sensors is then passed to machine learning or deep learning algorithms to find different patterns. In this chapter an introduction to IoT based posture detection is provided with an introduction to underlying sensor technology, which can help in selection for appropriate sensors for the posture detection.<br>


2021 ◽  
Vol 11 (11) ◽  
pp. 4752
Author(s):  
Gian Domenico Licciardo ◽  
Alessandro Russo ◽  
Alessandro Naddeo ◽  
Nicola Cappetti ◽  
Luigi Di Benedetto ◽  
...  

A custom HW design of a Fully Convolutional Neural Network (FCN) is presented in this paper to implement an embeddable Human Posture Recognition (HPR) system capable of very high accuracy for both lying and sitting posture recognition. The FCN exploits a new base-2 quantization scheme for weights and binarized activations to meet the optimal trade-off between low power dissipation, a very small set of instantiated physical resources and state-of-the-art classification accuracy. By using only a limited number of pressure sensors, the optimized HW implementation keeps the computation close to the data sources according to the edge computing paradigm and enables the design of embedded HPR systems. The FCN can be simply reconfigured for either lying or sitting posture recognition. Tested on a public dataset for in-bed posture classification, the proposed FCN obtains a mean accuracy of 96.77% in recognizing 17 different postures, while a small custom dataset was used for training and testing on sitting posture recognition, where the FCN achieves 98.88% accuracy in recognizing eight positions. The FCN has been prototyped on a Xilinx Artix 7 FPGA, where it exhibits dynamic power dissipation lower than 11 mW and 7 mW for lying and sitting posture recognition, respectively, and maximum operating frequencies of 47.64 MHz and 26.6 MHz, corresponding to sensor Output Data Rates (ODR) of 16.50 kHz and 9.13 kHz, respectively. Furthermore, synthesis results with a 130 nm CMOS technology are reported, to estimate the feasibility of an in-sensor circuit implementation.
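The base-2 quantization idea mentioned above (power-of-two weights plus binarized activations) is attractive in hardware because multiplications reduce to bit shifts and sign flips. A small numpy sketch of one plausible reading of such a scheme follows; the rounding rule and epsilon are assumptions, not the paper's exact quantizer:

```python
import numpy as np

def quantize_pow2(w, eps=1e-12):
    # Snap each weight's magnitude to the nearest power of two (in log2
    # space) and keep its sign, so a multiply becomes a shift in HW.
    sign = np.sign(w)
    exponent = np.round(np.log2(np.abs(w) + eps))
    return sign * 2.0 ** exponent

def binarize(a):
    # Binarized activations: map every value to +1 or -1 by sign.
    return np.where(a >= 0, 1.0, -1.0)

w = np.array([0.30, -0.12, 0.97, -0.05])
print(quantize_pow2(w))                      # magnitudes become 0.25, 0.125, 1.0, 0.0625
print(binarize(np.array([0.4, -0.2, 0.0])))  # +1 / -1 codes
```

With both operands constrained this way, a dot product needs only sign logic and shift-accumulate, which is consistent with the very low dynamic power the paper reports.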


2019 ◽  
Author(s):  
Wenjun Liu ◽  
Yunfei Guo ◽  
Jun Yang ◽  
Yun Hu ◽  
Dapeng Wei

2005 ◽  
Vol 05 (04) ◽  
pp. 825-837
Author(s):  
M. MASUDUR RAHMAN ◽  
SEIJI ISHIKAWA

This paper investigates an appearance-change issue caused by varying human body shapes in eigenspace analysis, which forces a conventional eigenspace method to generate person-specific eigenspaces. We call this phenomenon a figure effect. As a consequence, an appearance-based eigenspace method in its present formulation is not effective for recognizing human postures. We propose employing a generalized eigenspace, constructed by calculating the mean of selected eigenspaces, to avoid this problem. We also investigate a dress effect caused by the clothes people wear, and propose image pre-processing with a Laplacian of Gaussian (LoG) filter to reduce it. Since the proposed method tunes a conventional eigenspace into a method appropriate for human posture recognition, the scheme is called eigenspace tuning. The eigenspace obtained from this tuning, called a tuned eigenspace, is then used to recognize unfamiliar postures. We tested the proposed approach on a number of human models with different body shapes wearing various clothes, and the significance of the method for human posture recognition has been demonstrated.
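An eigenspace here is the set of principal axes of a collection of flattened posture images, and the generalized eigenspace is described as a mean over per-person eigenspaces. A rough numpy sketch of that construction is below; it is an illustration under stated assumptions (element-wise averaging of the basis matrices, random data standing in for images), and in practice eigenvector sign/order ambiguity would need handling before averaging:

```python
import numpy as np

def eigenspace(images, k):
    # Top-k principal axes of a set of flattened posture images,
    # obtained via SVD of the mean-centered data matrix.
    X = images - images.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]

def generalized_eigenspace(person_image_sets, k):
    # Generalized ("tuned") eigenspace sketched as the element-wise
    # mean of the per-person eigenspaces, following the averaging idea.
    return np.mean([eigenspace(imgs, k) for imgs in person_image_sets], axis=0)

def project(image, basis, mean):
    # Coordinates of one image in the shared eigenspace.
    return basis @ (image - mean)

rng = np.random.default_rng(2)
persons = [rng.standard_normal((20, 64)) for _ in range(3)]  # 3 human models
basis = generalized_eigenspace(persons, k=4)
all_imgs = np.vstack(persons)
coords = project(all_imgs[0], basis, all_imgs.mean(axis=0))
print(coords.shape)
```

Recognition would then compare the projected coordinates of an unfamiliar posture against stored reference coordinates, e.g. by nearest neighbor; the LoG pre-filter would be applied to each image before flattening.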


2022 ◽  
Vol 73 ◽  
pp. 103432
Author(s):  
Zhe Fan ◽  
Xing Hu ◽  
Wen-Ming Chen ◽  
Da-Wei Zhang ◽  
Xin Ma

2021 ◽  
Vol 109 (5) ◽  
pp. 863-890
Author(s):  
Yannis Panagakis ◽  
Jean Kossaifi ◽  
Grigorios G. Chrysos ◽  
James Oldfield ◽  
Mihalis A. Nicolaou ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract

Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing on image datasets.

Results: On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes, embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively.

Conclusion: Maize-IAS is easy to use and demands neither professional knowledge of computer vision nor deep learning. All functions support batch processing, enabling automated, labor-reduced recording, measurement and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility of AI technology in agriculture and plant science.
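The leaf-counting error metric reported above (mean and standard deviation of the difference between predicted and ground-truth counts) can be computed in a couple of lines. The counts below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical per-plant leaf counts: model prediction vs. manual count.
predicted = np.array([11, 9, 13, 10, 12])
ground_truth = np.array([10, 10, 12, 10, 11])

diff = predicted - ground_truth          # signed error per plant
print(diff.mean(), diff.std())           # mean and std of the difference
```

A signed difference (rather than an absolute one) also reveals systematic over- or under-counting, since its mean is a bias estimate.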

