MARS: mmWave-based Assistive Rehabilitation System for Smart Healthcare

2021 ◽  
Vol 20 (5s) ◽  
pp. 1-22
Author(s):  
Sizhe An ◽  
Umit Y. Ogras

Rehabilitation is a crucial process for patients suffering from motor disorders. The current practice is to perform rehabilitation exercises under the supervision of a clinical expert. New approaches are needed that allow patients to perform prescribed exercises at home, alleviating commuting requirements, expert shortages, and healthcare costs. Human joint estimation is a substantial component of these programs, since it offers valuable visualization and feedback based on body movements. Camera-based systems have been popular for capturing joint motion. However, they have a high cost, raise serious privacy concerns, and require strict lighting and placement settings. To address these challenges, we propose a millimeter-wave (mmWave)-based assistive rehabilitation system (MARS) for motor disorders. MARS provides a low-cost solution with competitive object localization and detection accuracy. It first maps the 5D time-series point cloud from the mmWave radar to a lower dimension. Then, it uses a convolutional neural network (CNN) to estimate the accurate locations of human joints. MARS can reconstruct 19 human joints and the corresponding skeleton from the point cloud generated by the mmWave radar. We evaluate MARS using ten rehabilitation movements, involving all body parts, performed by four human subjects, and obtain an average mean absolute error of 5.87 cm over all joint positions. To the best of our knowledge, this is the first rehabilitation movement dataset using mmWave point clouds. MARS is evaluated on the Nvidia Jetson Xavier-NX board. Model inference takes only 64 μs and consumes 442 μJ of energy. These results demonstrate the practicality of MARS on low-power edge devices.
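The reported 5.87 cm figure is a mean absolute error over all joint positions. A minimal sketch of that metric, using synthetic data shaped like the 19 reconstructed joints (the values themselves are illustrative, not from the MARS dataset):

```python
import numpy as np

def joint_mae(pred, gt):
    """Mean absolute error over all joint coordinates,
    in the same unit as the inputs (here: centimetres)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.mean(np.abs(pred - gt)))

# Synthetic data: 19 joints x 3 coordinates, matching the 19 joints
# MARS reconstructs; the coordinate values are made up.
rng = np.random.default_rng(0)
gt = rng.uniform(-100.0, 100.0, size=(19, 3))
pred = gt + 5.0  # every coordinate off by exactly 5 cm
print(round(joint_mae(pred, gt), 6))  # 5.0
```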

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1684
Author(s):  
Tianxu Xu ◽  
Dong An ◽  
Yuetong Jia ◽  
Yang Yue

Joint estimation of the human body is useful in many fields, such as human–computer interaction, autonomous driving, video analysis, and virtual reality. Although many depth-based approaches have been classified and generalized in previous review or survey papers, point cloud-based pose estimation of the human body remains difficult due to the disorder and rotation invariance of the point cloud. In this review, we summarize recent developments in point cloud-based pose estimation of the human body. The existing works are divided into three categories based on their working principles: template-based methods, feature-based methods, and machine learning-based methods. In particular, the most significant works are highlighted with a detailed introduction that analyzes their characteristics and limitations. The datasets widely used in the field are summarized, and quantitative comparisons are provided for the representative methods. Moreover, this review helps further understanding of the pertinent applications in many frontier research directions. Finally, we conclude with the challenges involved and the problems to be solved in future research.


Author(s):  
Raj Desai ◽  
Anirban Guha ◽  
Pasumarthy Seshu

Long-duration automobile-induced vibration is the cause of many ailments in humans. Predicting and mitigating these vibrations through the seat requires a good model of the seated human body. A good model is one that strikes the right balance between modelling difficulty and simulation accuracy. Increasing the number of separately modelled body parts, and the number of ways these parts are connected to each other, increases the number of degrees of freedom of the entire model. A number of such models have been reported in the literature, ranging from simple lumped-parameter models with limited accuracy to advanced models with high computational cost. However, a systematic comparison of these models has not been reported to date. This work creates eight such models, ranging from 8 to 26 degrees of freedom, and tries to identify the model that strikes the right balance between modelling complexity and accuracy. A comparison of the models' predictions with experimental data published in the literature identifies a 12-degree-of-freedom backrest-supported model as optimal in terms of modelling complexity and prediction accuracy.
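As a toy illustration of the simplest building block of such lumped-parameter models, the sketch below computes the undamped natural frequency of a single mass-spring element; the parameter values are illustrative and not taken from the compared models:

```python
import math

def natural_frequency_hz(k, m):
    """Undamped natural frequency f = sqrt(k/m) / (2*pi) of a single
    lumped mass m [kg] on a spring of stiffness k [N/m]."""
    return math.sqrt(k / m) / (2 * math.pi)

# Illustrative values only: a 55 kg lumped body mass on a 50 kN/m
# seat-cushion spring.
f = natural_frequency_hz(50_000, 55)
print(round(f, 2))  # ~4.8 Hz
```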


Procedia CIRP ◽  
2020 ◽  
Vol 93 ◽  
pp. 508-513
Author(s):  
Gergely Horváth ◽  
Gábor Erdős

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods are good at detecting obstacles but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. Firstly, obstacles are detected by a grid projection method using the lidar point cloud. Then, the obstacles are mapped onto the image to obtain several separate regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with detection based only on YOLO, the mean average precision (mAP) is increased by 17%.
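The ROI expansion-and-merge step can be sketched as follows. The abstract does not detail the dynamic threshold, so a fixed pixel margin stands in for it, and the box format `(x1, y1, x2, y2)` is an assumption:

```python
def expand_roi(roi, margin):
    """Grow an (x1, y1, x2, y2) box by `margin` pixels on each side."""
    x1, y1, x2, y2 = roi
    return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)

def overlaps(a, b):
    """True when two (x1, y1, x2, y2) boxes intersect or touch."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def merge_rois(rois, margin):
    """Expand every ROI, then repeatedly fuse overlapping boxes into
    their common bounding box until no overlaps remain."""
    boxes = [expand_roi(r, margin) for r in rois]
    changed = True
    while changed:
        changed = False
        result = []
        for box in boxes:
            for i, other in enumerate(result):
                if overlaps(box, other):
                    result[i] = (min(box[0], other[0]), min(box[1], other[1]),
                                 max(box[2], other[2]), max(box[3], other[3]))
                    changed = True
                    break
            else:
                result.append(box)
        boxes = result
    return boxes

# Two nearby detections merge into one final ROI after expansion.
print(merge_rois([(0, 0, 10, 10), (12, 0, 20, 10)], 2))
# [(-2, -2, 22, 12)]
```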


2019 ◽  
Vol 10 (1) ◽  
pp. 235 ◽  
Author(s):  
Hongyao Shen ◽  
Wangzhe Du ◽  
Weijun Sun ◽  
Yuetong Xu ◽  
Jianzhong Fu

Fused Deposition Modeling (FDM) additive manufacturing technology has been widely applied in recent years. However, many defects may affect the surface quality or accuracy of printed parts, or even cause them to collapse during printing. In existing defect detection technology, features of the parts themselves may be misjudged as defects. This paper presents a solution to the problem of distinguishing defects from the parts' own features in robotic 3D printing. A self-feature extraction method for shape defect detection of 3D-printed products is introduced. The discrete point cloud obtained after model slicing is used both for path planning and for self-feature extraction at the same time: it generates the G-code and controls the shooting direction of the camera. Once the current coordinates have been received, self-feature extraction begins; its key steps are maintaining a visible point cloud of the printed part and projecting the feature points onto the picture under an equal-mapping condition. Image processing then detects the contours of both the projected picture and the captured picture. Finally, defects are identified by evaluating contour similarity with an empirical formula. This work helps to detect defects online, improve detection accuracy, and reduce the false detection rate, without the parts' own features being mistaken for defects.
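The abstract does not give the empirical contour-similarity formula, so the sketch below substitutes a Hausdorff-distance check between the expected (projected) and observed (captured) contours as an illustrative stand-in; the contours and the tolerance are made up:

```python
def hausdorff(a, b):
    """Hausdorff distance between two contours given as point lists:
    the largest nearest-neighbour gap in either direction."""
    def directed(p, q):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in q) for px, py in p)
    return max(directed(a, b), directed(b, a))

def is_defect(projected, captured, tol=2.0):
    """Flag a defect when the captured contour strays from the
    projected contour by more than `tol` anywhere."""
    return hausdorff(projected, captured) > tol

# Expected contour: the border of a 5x5 square.
square = [(x, y) for x in range(5) for y in range(5)
          if x in (0, 4) or y in (0, 4)]
shifted = [(x + 1, y) for x, y in square]  # small global misalignment: OK
dented = [(x, y + 5) if x == 2 else (x, y) for x, y in square]  # local defect
print(is_defect(square, shifted))  # False
print(is_defect(square, dented))   # True
```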


Author(s):  
Joanna Stanisz ◽  
Konrad Lis ◽  
Marek Gorgon

In this paper, we present a hardware-software implementation of a deep neural network for object detection based on a point cloud obtained by a LiDAR sensor. The PointPillars network was used in the research, as it offers a reasonable compromise between detection accuracy and computational complexity. The Brevitas/PyTorch tools were used for network quantisation (described in our previous paper) and the FINN tool for hardware implementation on the reprogrammable Zynq UltraScale+ MPSoC device. The obtained results show that a quite significant limitation of computation precision, along with a few simplifications of the network architecture, allows the solution to be implemented on a heterogeneous embedded platform with at most a 19% AP loss in 3D, at most an 8% AP loss in BEV, and an execution time of 375 ms (of which the FPGA part takes 262 ms). We have also compared our solution in terms of inference speed with a Vitis AI implementation proposed by Xilinx (19 Hz frame rate). In particular, we have thoroughly investigated the fundamental causes of the difference in frame rate between the two solutions. The code is available at https://github.com/vision-agh/pp-finn.


2020 ◽  
Author(s):  
Joanna Stanisz ◽  
Konrad Lis ◽  
Tomasz Kryjak ◽  
Marek Gorgon

In this paper, we present our research on the optimisation of a deep neural network for 3D object detection in a point cloud. Techniques such as quantisation and pruning, available in the Brevitas and PyTorch tools, were used. We performed the experiments on the PointPillars network, which offers a reasonable compromise between detection accuracy and computational complexity. The aim of this work was to propose a variant of the network that we will ultimately implement in an FPGA device, allowing real-time LiDAR data processing with low energy consumption. The obtained results indicate that even a significant quantisation, from 32-bit floating point to 2-bit integer in the main part of the algorithm, results in only a 5–9% decrease in detection accuracy, while allowing an almost 16-fold reduction in model size.
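The quoted 16-fold reduction follows directly from the bit widths (32/2 = 16). A toy uniform quantiser makes the bookkeeping concrete; Brevitas itself learns the quantisation ranges during training, so this is only a sketch:

```python
import numpy as np

def quantise_2bit(weights):
    """Map float32 weights to 2-bit codes {0, 1, 2, 3} by uniform
    quantisation over the observed range, returning the (scale, offset)
    needed to dequantise. Illustrative only: real quantisation-aware
    training (e.g. in Brevitas) learns these ranges instead."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 3  # 2 bits -> 4 levels
    codes = np.clip(np.round((weights - lo) / scale), 0, 3).astype(np.uint8)
    return codes, scale, lo

w = np.linspace(-1.0, 1.0, 1024).astype(np.float32)
codes, scale, lo = quantise_2bit(w)

# 1024 float32 weights occupy 4096 bytes; packed 2-bit codes need 256.
packed_bytes = len(codes) * 2 // 8
print(w.nbytes // packed_bytes)  # 16
```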


2000 ◽  
Vol 25 (1) ◽  
pp. 29-39
Author(s):  
Liora Malka

Woyzeck 91 was staged by the Itim Ensemble and the Cameri Theatre, Tel Aviv, in 1991. The production was adapted from Büchner's Woyzeck and directed by Rina Yerushalmi. The adaptation expands Büchner's play text mainly through the addition of scientific lectures, mostly about human physiology, which present the human being as a biological organism: heart, sex organs, reproducing cells, the nervous system as the source of feelings. These additional scenes focus attention on Woyzeck's body as an experimental model, along with other performative devices (slides of body parts, and a skeleton). The juxtaposition of the human body (the human subject) with its scientific and technological fragmentation reflects the performance's central theme: the objectification of human subjects in our modern world of genetic experiments, technological innovations, and socio-political reactions that threaten the destruction of humanity.

