Microsoft Kinect
Recently Published Documents


TOTAL DOCUMENTS

692
(FIVE YEARS 186)

H-INDEX

38
(FIVE YEARS 6)

2022 ◽  
Author(s):  
Madhav Rao

This study examines the system integration of a game engine with robotics middleware to drive an 8 degree-of-freedom (DoF) robotic upper limb to generate human-like motion for telerobotic applications. The developed architecture encompasses a pipeline execution design using the Blender Game Engine (BGE), including the acquisition of real human movements via the Microsoft Kinect V2, interfaced with a modeled virtual arm, and the replication of similar arm movements on the physical robotic arm. In particular, this study emphasizes the integration of a human “pilot” with ways to drive such a robotic arm through simulation and, later, into a finished system. Additionally, using motion capture technology, a human upper limb action was recorded and applied to the robot arm using the proposed architecture flow. We also showcase the robotic arm’s actions, which include reaching, picking, holding, and dropping an object. This paper presents a simple and intuitive kinematic modeling and 3D simulation process, which is validated using an 8-DoF articulated robot to demonstrate methods for animation and simulation using the designed interface.
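A pipeline like the one described maps captured Kinect V2 skeleton joints onto arm poses. As an illustrative sketch (not the paper's actual implementation), one common step is recovering a joint angle, such as elbow flexion, from three skeleton joint positions; the positions below are invented:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (radians) between segments b->a and b->c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical shoulder, elbow, wrist positions in Kinect camera space (metres)
shoulder, elbow, wrist = [0.0, 0.4, 2.0], [0.0, 0.1, 2.0], [0.3, 0.1, 2.0]
print(np.degrees(joint_angle(shoulder, elbow, wrist)))  # 90.0
```

Angles recovered this way per frame can then be streamed to the virtual and physical arm as joint targets.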


Animals ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 3595
Author(s):  
Severiano R. Silva ◽  
Mariana Almeida ◽  
Isabella Condotta ◽  
André Arantes ◽  
Cristina Guedes ◽  
...  

This study aimed to evaluate the accuracy of leg volume obtained with the Microsoft Kinect sensor for predicting the composition of light lamb carcasses. The trial was performed on carcasses of twenty-two male lambs (17.6 ± 1.8 kg body weight). The carcasses were split into eight cuts, divided into three groups according to their commercial value: high-value, medium-value, and low-value. Linear, area, and volume measurements of the leg were obtained to predict carcass and cut composition. The leg volume was acquired by two different methodologies: 3D image reconstruction using a Microsoft Kinect sensor and the Archimedes principle. The correlation between these two leg measurements was significant (r = 0.815, p < 0.01). The models that include Kinect 3D sensor leg volume predict the weights of the medium-value and leg cuts well (R2 of 0.763 and 0.829, respectively). Furthermore, the model including Kinect leg volume explained 85% of the variation in carcass muscle. The results of this study confirm the good ability of leg volume obtained with the Kinect 3D sensor to estimate cut and carcass traits of light lamb carcasses.
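The prediction models reported here are regressions on leg volume. As a hedged illustration with invented numbers (not the study's data), an ordinary-least-squares fit and its R2 can be computed like this:

```python
import numpy as np

# Invented example: Kinect-derived leg volumes (dm^3) vs. leg-cut weights (kg)
volume = np.array([2.1, 2.4, 2.6, 2.9, 3.1, 3.4])
weight = np.array([1.10, 1.22, 1.31, 1.45, 1.52, 1.66])

b1, b0 = np.polyfit(volume, weight, 1)      # weight ~ b0 + b1 * volume
pred = b0 + b1 * volume
r2 = 1 - np.sum((weight - pred) ** 2) / np.sum((weight - weight.mean()) ** 2)
print(f"slope={b1:.3f}, intercept={b0:.3f}, R2={r2:.3f}")
```

The study's reported R2 values (0.763, 0.829, 0.85) come from its own measurements; the fit above only shows the mechanics.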


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Khalid Twarish Alhamazani ◽  
Jalawi Alshudukhi ◽  
Talal Saad Alharbi ◽  
Saud Aljaloud ◽  
Zelalem Meraf

In recent years, alongside technological advances, new paradigms of interaction with the user have emerged. This has motivated the industry to create increasingly powerful and accessible natural user interface devices. In particular, depth cameras have achieved high levels of user adoption. These devices include the Microsoft Kinect, the Intel RealSense, and the Leap Motion Controller. This type of device facilitates the acquisition of data for human activity recognition. Hand gestures can be static or dynamic, depending on whether they present movement across the image sequence. Hand gesture recognition enables human-computer interaction (HCI) system developers to create more immersive, natural, and intuitive experiences and interactions. However, this task is not easy, which is why it has been addressed in academia using machine learning techniques. The experiments carried out show very encouraging results, indicating that this choice of architecture yields excellent parameter efficiency and prediction times. The tests were carried out on a relevant dataset from the area. On this basis, the performance of the proposal is analysed across different scenarios, such as lighting variation, camera movement, different types of gestures, and sensitivity or bias across people, among others. In this article, we look at how infrared camera images can be used to segment, classify, and recognise one-handed gestures under a variety of lighting conditions. The infrared camera was created by modifying a standard webcam and adding an infrared filter to the lens. The scene was illuminated by additional infrared LED structures, allowing the system to be used in various lighting conditions.
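The segmentation step described (a bright, IR-lit hand against a darker background) can be approximated by simple intensity thresholding. A minimal sketch with a synthetic frame; the threshold value and frame contents are assumptions, not the article's parameters:

```python
import numpy as np

def segment_hand(ir_frame, threshold=200):
    """Binary-mask the brightest (IR-lit) region and return its bounding box."""
    mask = ir_frame >= threshold
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    bbox = (rows.min(), cols.min(), rows.max(), cols.max())
    return mask, bbox

# Synthetic 8-bit IR frame: dark background with one bright "hand" patch
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:80, 60:100] = 230
mask, bbox = segment_hand(frame)
print(bbox)  # (40, 60, 79, 99)
```

The cropped region would then be fed to a classifier; active IR illumination is what keeps this thresholding stable across visible-light conditions.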


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8186
Author(s):  
Peter Beshara ◽  
David B. Anderson ◽  
Matthew Pelletier ◽  
William R. Walsh

Advancements in motion sensing technology can potentially allow clinicians to make more accurate range-of-motion (ROM) measurements and informed decisions regarding patient management. The aim of this study was to systematically review and appraise the literature on the reliability of the Kinect, inertial sensors, smartphone applications and digital inclinometers/goniometers to measure shoulder ROM. Eleven databases were screened (MEDLINE, EMBASE, EMCARE, CINAHL, SPORTSDiscus, Compendex, IEEE Xplore, Web of Science, Proquest Science and Technology, Scopus, and PubMed). The methodological quality of the studies was assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Reliability assessment used intra-class correlation coefficients (ICCs) and the criteria from Swinkels et al. (2005). Thirty-two studies were included. A total of 24 studies scored “adequate” and 2 scored “very good” for the reliability standards. Only one study scored “very good” and just over half of the studies (18/32) scored “adequate” for the measurement error standards. Good intra-rater reliability (ICC > 0.85) and inter-rater reliability (ICC > 0.80) were demonstrated with the Kinect, smartphone applications and digital inclinometers. Overall, the Kinect and ambulatory sensor-based human motion tracking devices demonstrate moderate-to-good levels of intra- and inter-rater reliability to measure shoulder ROM. Future reliability studies should focus on improving study design with larger sample sizes and recommended time intervals between repeated measurements.
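The reliability figures above are intra-class correlation coefficients. As a sketch, ICC(2,1) (two-way random effects, absolute agreement, single measurement) can be computed from an n-subjects x k-raters matrix; the shoulder-flexion readings below are invented for illustration:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    ss_total = np.sum((x - grand) ** 2)
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between raters
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Invented shoulder-flexion readings (degrees): 4 subjects x 2 raters
readings = np.array([[160, 162], [150, 149], [170, 171], [155, 156]])
print(round(icc_2_1(readings), 3))  # 0.989
```

High ICCs like this arise when between-subject variation dominates rater disagreement, which is what the review's thresholds (ICC > 0.80, > 0.85) are probing.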


Author(s):  
Souhila Kahlouche ◽  
Mahmoud Belhocine ◽  
Abdallah Menouar

In this work, an efficient human activity recognition (HAR) algorithm based on a deep learning architecture is proposed to classify activities into seven different classes. To learn spatial and temporal features from only the 3D skeleton data captured by a Microsoft Kinect camera, the proposed algorithm combines convolutional neural network (CNN) and long short-term memory (LSTM) architectures. This combination takes advantage of LSTM in modeling temporal data and of CNN in modeling spatial data. The captured skeleton sequences are used to create a specific dataset of interactive activities; these data are then transformed according to a view-invariance and a symmetry criterion. To demonstrate the effectiveness of the developed algorithm, it was tested on several public datasets, where it achieved, and sometimes surpassed, state-of-the-art performance. To verify the reliability of the proposed algorithm, tools are provided and discussed to ensure its efficiency for continuous human action recognition in real time.
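A view-invariant transform of skeleton data typically removes the subject's position and the camera's yaw before features are learned. A minimal sketch of one such normalisation; the joint indices and frame convention are assumptions, not the authors' exact criterion:

```python
import numpy as np

def normalize_skeleton(joints, hip=0, l_sh=1, r_sh=2):
    """Translate the hip to the origin and rotate about the vertical (y) axis
    so the shoulder line points along +x, removing position and camera yaw."""
    j = np.asarray(joints, float)
    j = j - j[hip]
    dx, _, dz = j[r_sh] - j[l_sh]
    theta = np.arctan2(dz, dx)               # yaw of the shoulder line
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])           # rotation by -theta about y
    return j @ rot.T

# Hypothetical hip, left-shoulder, right-shoulder positions (metres)
pose = [[0.5, 0.0, 2.0], [-0.5, 1.4, 2.5], [1.5, 1.4, 1.5]]
norm = normalize_skeleton(pose)
print(np.round(norm[2] - norm[1], 3))        # shoulder line now along +x
```

Applied per frame, this makes the same action look alike regardless of where the subject stood relative to the Kinect.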


2021 ◽  
Author(s):  
◽  
Callum Robinson

<p>MARVIN (Mobile Autonomous Robotic Vehicle for Indoor Navigation) was once the flagship of Victoria University’s mobile robotic fleet. However, over the years MARVIN has become obsolete. This thesis continues the redevelopment of MARVIN, transforming it into a fully autonomous research platform for human-robot interaction (HRI). MARVIN utilises a Segway RMP, a self-balancing mobility platform. This provides agile locomotion but increases sensor processing complexity due to its dynamic pitch. MARVIN’s existing sensing systems (including a laser rangefinder and ultrasonic sensors) are augmented with tactile sensors and a Microsoft Kinect v2 RGB-D camera for 3D sensing. This allows the detection of the obstacles often found in MARVIN’s unmodified, office-like operating environment. The sensor data are processed using novel techniques that account for the Segway’s dynamic pitch, and a newly developed navigation stack uses the processed data for localisation, obstacle detection and motion planning. MARVIN’s inherited humanoid robotic torso is augmented with a touch screen and voice interface, enabling HRI. MARVIN’s HRI capabilities are demonstrated by implementing it as a robotic guide, an implementation evaluated through a usability study and found to be successful. Through evaluations of MARVIN’s locomotion, sensing, localisation and motion planning systems, in addition to the usability study, MARVIN is found to be capable of both autonomous navigation and engaging HRI. These developed features open a diverse range of research directions and HRI tasks that MARVIN can be used to explore.</p>
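Accounting for the Segway's dynamic pitch amounts to rotating camera-frame points into a gravity-aligned frame before obstacle extraction. A hedged sketch of that idea; the frame convention, cell size and height band are assumptions, not MARVIN's actual parameters:

```python
import numpy as np

def depth_to_costmap(points, pitch, cell=0.05, size=4.0, z_band=(0.05, 1.8)):
    """Undo platform pitch on camera-frame points (x fwd, y left, z up), then
    mark grid cells whose points fall inside the obstacle height band."""
    c, s = np.cos(pitch), np.sin(pitch)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])           # rotation about the pitch (y) axis
    p = np.asarray(points, float) @ rot.T
    hits = p[(p[:, 2] > z_band[0]) & (p[:, 2] < z_band[1])]
    n = int(size / cell)
    grid = np.zeros((n, n), dtype=bool)
    ix = (hits[:, 0] / cell).astype(int)
    iy = (hits[:, 1] / cell + n // 2).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    grid[ix[ok], iy[ok]] = True
    return grid

# A wall point 1 m ahead at chest height, and a floor return that gets filtered
grid = depth_to_costmap([[1.0, 0.0, 1.2], [1.0, 0.0, 0.0]], pitch=0.0)
print(grid.sum())  # 1
```

Without the pitch correction, a forward-tilted camera would project parts of the floor into the obstacle band, which is exactly the failure mode a self-balancing base creates.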


2021 ◽  
Author(s):  
◽  
Emily Steel

<p>Natural, wearable game controllers explores how people interact with games and their potential uses. Since the early days of personal computing, video games have been used for more than just fun. Such uses include exploration, education, simulation of real-world environments and the study of human thought processes (Wolf, 2008). As well as video games being used in a wide variety of settings, there has also been considerable variation in the way we interact with them - from basic mouse and keyboard interaction to the introduction of non-traditional gaming systems such as the Nintendo Wii and Microsoft Kinect. These different inputs fall within a spectrum of abstract and natural game controllers. This thesis looks at the difference between the two and applies this to the creation of a natural wearable game controller. The aim of this thesis was to create a customised human-computer interface (HCI) input device, using a reliable piece of hardware with accompanying software a user could interact with. Through design experiments a wearable game controller was created in the form of a wrap band. Once the wrap band was developed, the next step was to see how it could be used as a game controller. Design experiments were conducted, focusing on integration with a pre-existing game, using it as an exercise assessment tool and developing a specific game which could be used for rehabilitation. The area of rehabilitation gaming is broad, so this thesis focuses on Weight Bearing Asymmetry (WBA), a condition where a person does not evenly distribute their weight between their feet. This thesis explores a range of hardware and software design experiments to see how wearable technology can be used to create a new way of interacting with video games. It looks at the benefits of using wearable technology and gaming for rehabilitation, its limitations and future applications of this technology.
The thesis concludes that natural wearable game controllers do have potential real-world application in both gaming and rehabilitation.</p>
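Weight Bearing Asymmetry is commonly quantified as a symmetry index over the load measured under each foot. A minimal sketch of one common definition, which is not necessarily the exact metric used in this thesis:

```python
def symmetry_index(left_n, right_n):
    """Percent weight-bearing asymmetry from per-foot load (e.g. newtons):
    0 = even loading, positive = more on the left, negative = more on the right."""
    return 100.0 * (left_n - right_n) / (left_n + right_n)

print(symmetry_index(400.0, 400.0))  # 0.0
print(symmetry_index(480.0, 320.0))  # 20.0
```

A rehabilitation game built on a wearable sensor could drive feedback directly from such an index, rewarding the player for keeping it near zero.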






2021 ◽  
Vol 17 (34) ◽  
pp. 170-180
Author(s):  
Juan Camilo Hernandez-Gomez ◽  
Alejandro Restrepo-Martínez ◽  
Juliana Valencia-Aguirre

Classifying human movement has become a technological necessity: defining the position of a subject requires identifying the trajectories of the limbs and the trunk of the body, together with the ability to differentiate that position from other subjects or movements, which creates the need for data and algorithms that support classification. This work therefore evaluates the discriminant capacity of motion capture data in physical rehabilitation, where the position of the subjects is acquired with the Microsoft Kinect and with optical markers, and attributes of the movement are generated with the Frenet-Serret frame. Their discriminant capacity is evaluated by means of support vector machine, neural network, and k-nearest-neighbours algorithms. The results show a classification accuracy of 93.5% with data obtained from the Kinect, and a success rate of 100% for movements where the position is defined with optical markers.
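The Frenet-Serret attributes mentioned above can be approximated numerically from a sampled marker trajectory. A sketch of the tangent and curvature computation, using kappa = |r' x r''| / |r'|^3; the circular trajectory below is synthetic, not the study's data:

```python
import numpy as np

def frenet_features(traj, dt=1.0):
    """Discrete unit tangents and curvature along a sampled 3D trajectory."""
    r = np.asarray(traj, float)
    d1 = np.gradient(r, dt, axis=0)          # velocity r'
    d2 = np.gradient(d1, dt, axis=0)         # acceleration r''
    speed = np.linalg.norm(d1, axis=1)
    tangent = d1 / speed[:, None]
    kappa = np.linalg.norm(np.cross(d1, d2), axis=1) / speed ** 3
    return tangent, kappa

# Points on a circle of radius 2: interior curvature should be 1/2
t = np.linspace(0, np.pi, 50)
circle = np.stack([2 * np.cos(t), 2 * np.sin(t), np.zeros_like(t)], axis=1)
_, kappa = frenet_features(circle, dt=t[1] - t[0])
print(round(float(kappa[25]), 2))  # 0.5
```

Per-frame curvature (and, analogously, torsion) gives pose-independent descriptors of a limb's path, which is what makes them useful inputs to the SVM, neural network and k-NN classifiers compared here.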

