Kinect Camera
Recently Published Documents

TOTAL DOCUMENTS: 130 (five years: 47)
H-INDEX: 11 (five years: 3)

Author(s): Souhila Kahlouche, Mahmoud Belhocine, Abdallah Menouar

In this work, an efficient human activity recognition (HAR) algorithm based on a deep learning architecture is proposed to classify activities into seven classes. To learn spatial and temporal features from only the 3D skeleton data captured by a Microsoft Kinect camera, the proposed algorithm combines convolutional neural network (CNN) and long short-term memory (LSTM) architectures. This combination exploits the strength of the LSTM in modeling temporal data and of the CNN in modeling spatial data. The captured skeleton sequences are used to create a specific dataset of interactive activities; these data are then transformed according to a view-invariance and a symmetry criterion. To demonstrate its effectiveness, the developed algorithm was tested on several public datasets, where it matched and sometimes surpassed state-of-the-art performance. To assess the uncertainty of the proposed algorithm, tools are provided and discussed that ensure its efficiency for continuous human action recognition in real time.
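A minimal sketch of how such a CNN-LSTM hybrid could be wired, in PyTorch. The 25-joint Kinect skeleton, layer widths, and pooling choices below are illustrative assumptions, not the authors' exact architecture; only the seven-class output comes from the abstract.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """CNN extracts per-frame spatial features from the 3D joints;
    LSTM models their temporal evolution (a sketch, not the paper's net)."""
    def __init__(self, n_joints=25, n_classes=7, hidden=128):
        super().__init__()
        # 1D convolutions over the joint dimension of each frame
        self.cnn = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # -> (batch*frames, 64, 1)
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, frames, joints, 3) skeleton sequences
        b, t, j, c = x.shape
        x = x.view(b * t, j, c).transpose(1, 2)   # (b*t, 3, joints)
        feats = self.cnn(x).squeeze(-1).view(b, t, 64)
        out, _ = self.lstm(feats)                 # temporal modeling
        return self.fc(out[:, -1])                # classify from last step

model = CNNLSTM()
dummy = torch.randn(4, 30, 25, 3)   # 4 clips, 30 frames, 25 Kinect joints
print(model(dummy).shape)           # torch.Size([4, 7])
```

Applying the CNN per frame and the LSTM across frames mirrors the division of labour described in the abstract: spatial structure first, temporal dynamics second.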


Smart Science, 2021, pp. 1-8
Author(s): Ariij Naufal, Choirul Anam, Catur Edi Widodo, Geoff Dougherty

Sensors, 2021, Vol. 21 (13), pp. 4588
Author(s): Vinicio Alejandro Rosas-Cervantes, Quoc-Dong Hoang, Soon-Geul Lee, Jae-Hwan Choi

Most indoor environments have wheelchair adaptations or ramps, giving mobile robots an opportunity to navigate sloped areas while avoiding steps. Indoor environments with integrated sloped areas are divided into different levels. Multi-level areas challenge mobile robot navigation because of sudden changes in reference sensors such as visual, inertial, or laser scan instruments. Using multiple cooperative robots is advantageous for mapping and localization, since they permit rapid exploration of the environment and provide higher redundancy than a single robot. This study proposes a multi-robot localization method that uses two robots (leader and follower) to perform fast and robust environment exploration in multi-level areas. The leader robot is equipped with a 3D LIDAR for 2.5D mapping and a Kinect camera for RGB image acquisition. Using the 3D LIDAR, the leader robot obtains information for particle localization, with particles sampled from walls and obstacle tangents. A convolutional neural network applied to the RGB images detects multi-level areas. Once the leader robot detects a multi-level area, it generates a path and notifies the follower robot to move to the detected location. The follower robot uses a 2D LIDAR to explore the boundaries of the even areas and generates a 2D map using an extension of the iterative closest point algorithm. The 2D map serves as a re-localization resource in case the leader robot fails.
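The abstract does not give the particle filter's equations, so the following is a generic 2D Monte Carlo localization measurement update sketched in Python under a Gaussian beam model; `expected_range` is a hypothetical ray-casting helper, and the paper's own sampling scheme (particles drawn from wall and obstacle tangents) is not reproduced here.

```python
import numpy as np

def particle_update(particles, weights, scan_ranges, scan_angles,
                    expected_range, sigma=0.2):
    """One measurement update of a 2D particle filter (illustrative only).

    particles : (N, 3) array of [x, y, theta] poses
    expected_range(pose, angle) -> range the map predicts for that beam
                                   (hypothetical ray-casting helper)
    sigma     : beam noise std in metres (a guess, not from the paper)
    """
    for i, p in enumerate(particles):
        # Likelihood of the LIDAR scan under a Gaussian beam model
        predicted = np.array([expected_range(p, a) for a in scan_angles])
        err = scan_ranges - predicted
        weights[i] *= np.exp(-0.5 * np.sum(err**2) / sigma**2)
    weights /= weights.sum()

    # Low-variance resampling keeps particles near high-likelihood poses
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```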


Author(s): Jyotindra Narayan, Santosha Kumar Dwivedy

Abstract: This work aims to estimate the lower-limb joint angles in the sagittal plane using a Microsoft Kinect-based experimental setup, and to apply an efficient machine learning technique to predict them from kinematic, spatiotemporal, and biological parameters. Ten healthy participants aged 19-50 years (33 ± 11.24 years) were asked to walk in front of the Kinect camera. From the skeleton image, the biomechanical hip, knee, and ankle joint angles of the lower limb were measured using NI LabVIEW. Thereafter, two Bayesian regularisation-based backpropagation multilayer perceptron neural network models were designed to predict the joint angles in the stance and swing phases. The joint angles of two individuals, held out as a testing dataset, were predicted and compared with the experimental results. The test correlation coefficients for the predicted joint angles show the promise of the proposed neural network models. Finally, a qualitative comparison is presented between the joint angles of healthy and unhealthy people of similar age groups.
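Bayesian-regularised backpropagation (as in MATLAB's trainbr) has no direct scikit-learn equivalent, so the sketch below stands in with an L2-penalised multilayer perceptron, which captures a similar weight-decay effect. The feature and output dimensions and the synthetic data are assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy stand-in data: inputs would be kinematic/spatiotemporal/biological
# parameters; outputs the hip, knee, and ankle angles (all assumed shapes).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.normal(size=(200, 3))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20,), alpha=1e-2,   # alpha = L2 penalty
                 max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))  # R^2 on training data (toy check only)
```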


2021
Author(s): Niclas Zeller

This thesis presents the development of image processing algorithms based on a Microsoft Kinect camera system. The algorithms are applied to the depth image received from the Kinect and build a three-dimensional, object-based representation of the recorded scene. The motivation behind this thesis is to develop a system that assists visually impaired people in navigating unknown environments. The developed system detects obstacles in the recorded scene and warns about them. Since the goal of this thesis was not to develop a complete real-time system but to devise reliable algorithms for the task, the algorithms were developed in MATLAB. Additionally, control software was developed through which depth as well as color images can be received from the Kinect. The developed algorithms combine known plane-fitting algorithms with novel approaches: they perform a plane segmentation of the 3D point cloud and model objects out of the resulting segments. Each obstacle is described by a cuboid box and thus can be communicated easily to the blind person. For plane segmentation, different approaches were compared to find the most suitable one. The first algorithm analyzed in this thesis is a normal-vector-based plane-fitting algorithm, which delivers very accurate results but at a high computational cost. The second approach, which was finally implemented, is a gradient-based 2D image segmentation combined with a RANSAC plane segmentation (6) of the 3D point cloud. This approach has the advantage of finding very small edges within the scene while still building planes under global constraints. The image processing results presented alongside the algorithm are promising, so the algorithm merits further development. The developed algorithm detects very small but significant obstacles, yet does not represent the scene in too much detail, so that the result can be conveyed accurately to a blind person.
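As a concrete illustration of the RANSAC step, here is a minimal single-plane RANSAC fit over an (N, 3) point cloud in Python/NumPy; the inlier tolerance and iteration count are guesses, and the thesis' gradient-based 2D pre-segmentation and cuboid modelling are not shown.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, rng=None):
    """Fit one dominant plane to an (N, 3) point cloud with RANSAC.
    tol is the inlier distance in metres (a guess; Kinect depth noise
    grows with range, so the thesis may use a different threshold)."""
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # Hypothesize a plane from 3 random points
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - a) @ normal)   # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers          # mask of points on the best plane

# Toy usage: a noisy floor plane plus scattered obstacle points
pts = np.vstack([
    np.column_stack([np.random.rand(500, 2) * 2,
                     np.random.randn(500) * 0.005]),   # z ~ 0 plane
    np.random.rand(100, 3),                            # clutter
])
mask = ransac_plane(pts)
print(mask.sum(), "inliers of", len(pts))
```

Segmenting the dominant plane (typically the floor) and repeating on the remaining points is the usual way such a segmentation yields the object candidates the thesis then boxes as cuboids.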


2021
Author(s): Danqi Xu, Lintao Chen, Xiangwei Mou, Qian Wu, Guoqi Sun
