A Platform for Mechanical Assembly Education Using the Microsoft Kinect

Author(s):  
Yizhe Chang ◽  
El-Sayed Aziz ◽  
Zhou Zhang ◽  
Mingshao Zhang ◽  
Sven Esche ◽  
...  

Mechanical assembly activities involve multiple factors including humans, mechanical parts, tools and assembly environments. In order to simulate assembly processes by computers for educational purposes, all these factors should be considered. Virtual reality (VR) technology, which aims to integrate natural human motion into real-world scenarios, provides an ideal simulation medium. Novel VR devices such as 3D glasses, motion-tracking gloves, haptic sensors, etc. are able to fulfill fundamental assembly simulation needs. However, most of these implementations focus on assembly simulations for computer-aided design, which are geared toward professionals rather than students, thus leading to complicated assembly procedures not suitable for students. Furthermore, the costs of these novel VR devices and specifically designed VR platforms represent an untenable financial burden for most educational institutions. In this paper, a virtual platform for mechanical assembly education based on the Microsoft Kinect sensor and Garry’s Mod (GMod) is presented. With the help of the Kinect’s body tracking function and voice recognition technology in conjunction with the graphics and physics simulation capabilities of GMod, a low-cost VR platform that enables educators to author their own assembly simulations was implemented. This platform utilizes the Kinect as the sole input device. Students can use voice commands to navigate their avatars inside a GMod-powered virtual laboratory as well as use their body motions to integrate pre-defined mechanical parts into assemblies. With this platform, assembly procedures involving the picking, placing and attaching of parts can be performed collaboratively by multiple users. In addition, the platform allows collaborative learning without the need for the learners to be co-located.
A pilot study for this platform showed that, with the instructor’s assistance, mechanical engineering undergraduate students are able to complete basic assembly operations.
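The pick-and-place interaction described above can be sketched as a proximity test between a tracked hand joint and a virtual part. This is only an illustration of the idea, not the authors' implementation; the joint coordinates, part position and distance threshold below are assumptions.

```python
import math

def is_grabbing(hand_pos, part_pos, threshold=0.15):
    """Hypothetical grab test: the tracked hand joint is considered to be
    picking a virtual part when it comes within `threshold` meters of it."""
    return math.dist(hand_pos, part_pos) < threshold

# Skeleton joints as (x, y, z) in meters, Kinect camera space (assumed frame)
hand = (0.42, 1.05, 1.80)
part = (0.45, 1.10, 1.78)
grabbed = is_grabbing(hand, part)  # hand is ~6 cm away, within the threshold
```

A real platform would run such a check per frame against every part and combine it with a gesture or voice command to confirm the pick.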

Author(s):  
Daniele Regazzoni ◽  
Andrea Vitali ◽  
Filippo Colombo Zefinetti ◽  
Caterina Rizzi

Nowadays, healthcare centers are not familiar with quantitative approaches for patients’ gait evaluation. There is a clear need for methods to obtain objective figures characterizing patients’ performance. At present, there are no widespread methods for comparing the pre- and post-operative conditions of the same patient, integrating clinical information and representing a measure of the efficiency of functional recovery, especially in the short term after the surgical intervention. To this aim, human motion tracking for medical analysis is creating new frontiers for potential clinical and home applications. Motion Capture (Mocap) systems allow the detection and tracking of human body movements, such as gait or any other gesture or posture in a specific context. In particular, low-cost portable systems can be adopted for the tracking of patients’ movements. The pipeline going from tracking the scene to the creation of performance scores and indicators has its main challenge in the data elaboration, which depends on the specific context and on the particular performance to be evaluated. The main objective of this research is to investigate whether the evaluation of the patient’s gait through markerless optical motion capture technology can be added to clinical evaluation scores and whether it is able to provide a quantitative measure of recovery in the short postoperative period. A system has been conceived that includes commercial sensors and a procedure for elaborating the captured data according to caregivers’ requirements. This allows transforming the real gait of a patient right before and/or after the surgical procedure into a set of scores of medical relevance for his/her evaluation. The technical solution developed in this research will be the base for a large acquisition and data elaboration campaign performed in collaboration with an orthopedic team of surgeons specialized in hip arthroplasty.
This will also allow assessing and comparing the short-run results obtained by adopting different state-of-the-art surgical approaches for hip replacement.
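Gait scores of the kind described above are typically derived from a few basic spatio-temporal quantities. As a minimal sketch (not the authors' pipeline; the heel-strike extraction and the sample values are assumptions), cadence, mean step length and gait speed can be computed from successive heel contacts detected in the Mocap data:

```python
def gait_scores(heel_strikes):
    """heel_strikes: list of (time_s, forward_position_m) at successive
    heel contacts, e.g. extracted from markerless Mocap ankle trajectories."""
    times = [t for t, _ in heel_strikes]
    xs = [x for _, x in heel_strikes]
    n_steps = len(heel_strikes) - 1
    duration = times[-1] - times[0]
    distance = xs[-1] - xs[0]
    return {
        "cadence": 60.0 * n_steps / duration,   # steps per minute
        "step_length": distance / n_steps,      # mean step length (m)
        "speed": distance / duration,           # gait speed (m/s)
    }

# Hypothetical pre-operative walk: 3 steps over 1.8 s and 1.65 m
scores = gait_scores([(0.0, 0.0), (0.6, 0.55), (1.2, 1.10), (1.8, 1.65)])
```

Comparing such scores right before and after surgery is one way to quantify short-term functional recovery.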


2016 ◽  
Vol 138 (9) ◽  
Author(s):  
Arash Atrsaei ◽  
Hassan Salarieh ◽  
Aria Alasty

Due to various applications of human motion capture techniques, developing low-cost methods that would be applicable in nonlaboratory environments is under consideration. MEMS inertial sensors and Kinect are two low-cost devices that can be utilized in home-based motion capture systems, e.g., home-based rehabilitation. In this work, an unscented Kalman filter approach was developed based on the complementary properties of Kinect and the inertial sensors to fuse the orientation data of these two devices for human arm motion tracking, both when the shoulder joint is stationary and during whole-body movement. A new measurement model of the fusion algorithm was obtained that can compensate for the inertial sensors’ drift problem in highly dynamic motions and also for joint occlusion in Kinect. The efficiency of the proposed algorithm was evaluated against an optical motion tracking system. The errors were reduced by almost 50% compared to cases when either inertial sensor or Kinect measurements were utilized.
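The complementary character of the two devices can be illustrated without the full unscented Kalman filter. The sketch below is a deliberately simplified complementary blend, not the authors' UKF: it weights the drift-prone integrated-gyro angle against the drift-free but occlusion-prone Kinect angle (the blending factor and fallback logic are assumptions):

```python
def fuse(theta_imu, theta_kinect, kinect_valid, alpha=0.98):
    """Complementary blend of two orientation estimates (degrees).
    The Kinect estimate, though noisy, is drift-free and slowly pulls the
    integrated-gyro angle back; when the joint is occluded in the Kinect
    view, the IMU estimate is used alone."""
    if not kinect_valid:
        return theta_imu
    return alpha * theta_imu + (1.0 - alpha) * theta_kinect

# Drifted IMU says 10 deg, Kinect says 0 deg: the fused angle is nudged back
corrected = fuse(10.0, 0.0, kinect_valid=True)
occluded = fuse(10.0, 0.0, kinect_valid=False)  # falls back to the IMU
```

A UKF generalizes this idea by propagating sigma points through the motion model and weighting the correction by the estimated covariances instead of a fixed alpha.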


Author(s):  
Daniele Regazzoni ◽  
Andrea Vitali ◽  
Caterina Rizzi ◽  
Giorgio Colombo

A number of pathologies impact the way a patient can either move or control the movements of the body. Traumas, articulation arthritis or generic orthopedic diseases affect the way a person can walk or perform everyday movements; brain or spine issues can lead to a complete or partial impairment, affecting both muscular response and sensitivity. Each of these disorders shares the need to assess the patient’s condition while doing specific tests and exercises or accomplishing everyday life tasks. Moreover, high-level sports activities may also benefit from digital tools that acquire physical performance data to be improved. The assessment can be done for several purposes, such as creating a custom physical rehabilitation plan, monitoring improvements or worsening over time, correcting wrong postures or bad habits and, in the sports domain, optimizing the effectiveness of gestures or the related energy consumption. The paper shows the use of low-cost motion capture techniques to acquire human motion, the transfer of motion data to a digital human model and the extraction of desired information according to each specific medical or sportive purpose. We adopted the well-known and widespread Mocap technology implemented by Microsoft Kinect devices and we used iPisoft tools to perform the acquisition and the preliminary data elaboration on the virtual skeleton of the patient. The focus of the paper is on the working method, which can be generalized to any medical, rehabilitative or sportive context in which the analysis of motion is crucial. The acquisition scene can be optimized in terms of the size and shape of the working volume and the number and positioning of sensors. However, the most important and decisive phase consists of knowledge acquisition and management. For each application, and even for each single exercise or task, a set of evaluation rules and thresholds must be extracted from the literature or, more often, directly from experienced personnel.
This operation is generally time consuming and requires further iterations to be refined, but it is the core of generating an effective metric and correctly assessing patients’ and athletes’ performances. Once the rules are defined, proper algorithms are designed and implemented to automatically extract only the relevant data in specific time frames and calculate performance indexes. Finally, a report is generated according to the final users’ requests and skills.
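The rule-and-threshold evaluation described above can be sketched as a table of expert-supplied ranges applied to metrics computed from the virtual skeleton. This is an illustrative toy, not the authors' software; the metric names, angle series and threshold values are all assumptions:

```python
def evaluate_motion(angles_deg, rules):
    """angles_deg: per-frame joint angle (degrees) from the virtual skeleton.
    rules: {metric_name: (min_ok, max_ok)} thresholds elicited from
    experienced clinical or sports personnel."""
    metrics = {
        "peak": max(angles_deg),    # maximum flexion reached
        "trough": min(angles_deg),  # return toward the neutral position
    }
    return {name: lo <= metrics[name] <= hi
            for name, (lo, hi) in rules.items()}

# Hypothetical squat exercise: peak knee flexion should reach 90-130 deg
# and the knee should return to within 0-15 deg between repetitions
report = evaluate_motion([5, 40, 95, 60, 10],
                         {"peak": (90, 130), "trough": (0, 15)})
```

In a full pipeline, each exercise would carry its own metric definitions and time-windowing logic, and the pass/fail flags would be aggregated into the final report.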


Sensor Review ◽  
2019 ◽  
Vol 39 (2) ◽  
pp. 233-245 ◽  
Author(s):  
Ying Huang ◽  
Chao Hao ◽  
Jian Liu ◽  
Xiaohui Guo ◽  
Yangyang Zhang ◽  
...  

Purpose The purpose of this study is to present a highly stretchable and flexible strain sensor with a simple, low-cost fabrication process and excellent dynamic characteristics, which make it suitable for human motion monitoring under large strain and high frequency. Design/methodology/approach The strain sensor was fabricated using the rubber/latex polymer as an elastic carrier and single-walled carbon nanotubes (SWCNTs)/carbon black (CB) as a synergistic conductive network. The rubber/latex polymer was pre-treated in naphtha and then soaked in a SWCNTs/CB/silicone rubber composite solution. The strain sensing behavior and other performance characteristics of the sensor were measured and human motion tracking applications were explored. Findings These strain sensors based on the aforementioned materials display high stretchability (500 per cent), excellent flexibility, fast response (approximately 45 ms), low creep (3.1 per cent at 100 per cent strain), temperature and humidity independence, and superior stability and reproducibility during approximately 5,000 stretch/release cycles. Furthermore, the authors used these composites as human motion sensors, effectively monitoring joint motion, indicating that the stretchable strain sensor based on the rubber/latex polymer and the synergetic effects of mixed SWCNTs and CB could have promising applications in flexible and wearable devices for human motion tracking. Originality/value This paper presents a low-cost, new type of strain sensor with excellent performance that can open up new fields of applications in flexible, stretchable and wearable electronics, especially in human motion tracking applications where very large strain should be accommodated by the strain sensor.
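The standard figure of merit for a resistive strain sensor of this kind is the gauge factor, the relative resistance change per unit strain. As a minimal sketch (the resistance values below are hypothetical, not measurements from the paper):

```python
def gauge_factor(r0, r_strained, strain):
    """Gauge factor GF = (dR / R0) / strain for a resistive strain sensor.
    strain is dimensionless, e.g. 5.0 for 500 per cent elongation."""
    return ((r_strained - r0) / r0) / strain

# Hypothetical reading: resistance rises from 1 kOhm to 6 kOhm at 500 % strain
gf = gauge_factor(1000.0, 6000.0, 5.0)
```

For human motion tracking, the measured resistance is mapped back through such a calibration to recover the joint-induced strain.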


Author(s):  
Mingshao Zhang ◽  
Zhou Zhang ◽  
El-Sayed Aziz ◽  
Sven K. Esche ◽  
Constantin Chassapis

The Microsoft Kinect is part of a wave of new sensing technologies. Its RGB-D camera is capable of providing high quality synchronized video of both color and depth data. Compared to traditional 3-D tracking techniques that use two separate RGB cameras’ images to calculate depth data, the Kinect is able to produce more robust and reliable results in object recognition and motion tracking. Also, due to its low cost, the Kinect provides more opportunities for use in many areas compared to traditional, more expensive 3-D scanners. In order to use the Kinect as a range sensor, algorithms must be designed to first recognize objects of interest and then track their motions. Although a large number of algorithms for both 2-D and 3-D object detection have been published, reliable and efficient algorithms for 3-D object motion tracking are rare, especially ones using the Kinect as a range sensor. In this paper, algorithms for object recognition and tracking that can make use of both RGB and depth data in different scenarios are introduced. Subsequently, efficient methods for scene segmentation including background and noise filtering are discussed. Taking advantage of these two kinds of methods, a prototype system that is capable of working efficiently and stably in various applications related to educational laboratories is presented.
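A first step in the scene segmentation mentioned above is usually a depth-range mask that separates foreground objects from the background and discards invalid readings. The sketch below is a generic illustration of that idea, not the paper's algorithm; the depth range and the toy frame are assumptions (Kinect reports 0 for pixels with no depth data):

```python
import numpy as np

def segment_foreground(depth_mm, near=500, far=1500):
    """Boolean mask of pixels whose depth lies in [near, far] millimeters.
    Zero-depth pixels (Kinect 'no data') fall outside the range and are
    filtered out together with the distant background."""
    return (depth_mm >= near) & (depth_mm <= far)

# Toy 2x3 depth frame: 0 = invalid, 2000 = background wall
frame = np.array([[0, 800, 2000],
                  [600, 1200, 0]])
mask = segment_foreground(frame)
```

Connected regions of the mask can then be fed to the recognition stage, and the corresponding RGB pixels used for appearance-based matching.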


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8186
Author(s):  
Peter Beshara ◽  
David B. Anderson ◽  
Matthew Pelletier ◽  
William R. Walsh

Advancements in motion sensing technology can potentially allow clinicians to make more accurate range-of-motion (ROM) measurements and informed decisions regarding patient management. The aim of this study was to systematically review and appraise the literature on the reliability of the Kinect, inertial sensors, smartphone applications and digital inclinometers/goniometers to measure shoulder ROM. Eleven databases were screened (MEDLINE, EMBASE, EMCARE, CINAHL, SPORTSDiscus, Compendex, IEEE Xplore, Web of Science, Proquest Science and Technology, Scopus, and PubMed). The methodological quality of the studies was assessed using the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist. Reliability assessment used intra-class correlation coefficients (ICCs) and the criteria from Swinkels et al. (2005). Thirty-two studies were included. A total of 24 studies scored “adequate” and 2 scored “very good” for the reliability standards. Only one study scored “very good” and just over half of the studies (18/32) scored “adequate” for the measurement error standards. Good intra-rater reliability (ICC > 0.85) and inter-rater reliability (ICC > 0.80) were demonstrated with the Kinect, smartphone applications and digital inclinometers. Overall, the Kinect and ambulatory sensor-based human motion tracking devices demonstrate moderate–good levels of intra- and inter-rater reliability to measure shoulder ROM. Future reliability studies should focus on improving study design with larger sample sizes and recommended time intervals between repeated measurements.
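The ICCs referenced above come in several forms; a common choice for inter-rater agreement is ICC(2,1) (two-way random effects, absolute agreement, single measure). As an illustration of how such a coefficient is computed from a subjects-by-raters table (the rating values below are made up, and the reviewed studies may have used other ICC forms):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    ratings: (n_subjects, k_raters) table of ROM measurements."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters measuring three subjects: rater 2 reads 1 deg higher throughout,
# so consistency is perfect but absolute agreement is penalized
icc = icc_2_1([[1, 2], [2, 3], [3, 4]])
```

The systematic bias between raters lowers ICC(2,1) even though the ranking of subjects is identical, which is exactly why absolute-agreement forms are preferred for comparing measurement devices.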


Author(s):  
Robert Bogue

Purpose This paper aims to provide a technical insight into a selection of robotic people detection technologies and applications. Design/methodology/approach Following an introduction, this paper first discusses people-sensing technologies that seek to extend the capabilities of human-robot collaboration by allowing humans to operate alongside conventional, industrial robots. It then provides examples of developments in people detection and tracking in unstructured, dynamic environments. Developments in people sensing and monitoring by assistive robots are then considered and finally, brief concluding comments are drawn. Findings Robotic people detection technologies are the topic of an extensive research effort and are becoming increasingly important, as growing numbers of robots interact directly with humans. These are being deployed in industry, in public places and in the home. The sensing requirements vary according to the application and range from simple person detection and avoidance to human motion tracking, behaviour and safety monitoring, individual recognition and gesture sensing. Sensing technologies include cameras, lasers and ultrasonics, and low-cost RGB-D cameras are having a major impact. Originality/value This article provides details of a range of developments involving people sensing in the important and rapidly developing field of human-robot interactions.


Author(s):  
Fabiana Di Ciaccio ◽  
Paolo Russo ◽  
Salvatore Troisi

The use of Attitude and Heading Reference Systems (AHRS) for orientation estimation is now common practice in a wide range of applications, e.g., robotics and human motion tracking, aerial vehicles and aerospace, gaming and virtual reality, indoor pedestrian navigation and maritime navigation. The integration of the high-rate measurements can provide very accurate estimates, but these can suffer from error accumulation due to sensor drift over longer time scales. To overcome this issue, inertial sensors are typically combined with additional sensors and techniques. As an example, camera-based solutions have drawn large attention from the community, thanks to their low cost and easy hardware setup; moreover, impressive results have been demonstrated in the context of Deep Learning. This work presents the preliminary results obtained by DOES, a supportive Deep Learning method specifically designed for maritime navigation, which aims at improving the roll and pitch estimations obtained by common AHRS. DOES recovers these estimations through the analysis of frames acquired by a low-cost camera pointing at the horizon at sea. The training has been performed on the novel ROPIS dataset, presented in the context of this work and acquired using the FrameWO application developed for this purpose. Promising results encourage testing other network backbones and further expanding the dataset, improving the accuracy of the results and the range of applications of the method.
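The geometric relation underlying horizon-based attitude estimation can be stated simply: roll follows from the horizon line's slope in the image, and pitch from its vertical offset from the principal point. The sketch below is this classical pinhole-camera relation, not the DOES network itself; the endpoint coordinates, principal point and focal length are assumptions, and sign conventions depend on the chosen image axes:

```python
import math

def horizon_to_attitude(x1, y1, x2, y2, cy, focal_px):
    """Roll (deg) from the slope of the detected horizon segment
    (x1, y1)-(x2, y2); pitch (deg) from the horizon midpoint's vertical
    offset from the principal-point row cy, for focal length focal_px."""
    roll = math.degrees(math.atan2(y2 - y1, x2 - x1))
    y_mid = 0.5 * (y1 + y2)
    pitch = math.degrees(math.atan((y_mid - cy) / focal_px))
    return roll, pitch

# Hypothetical 640x480 frame, horizon level and through the image center
roll, pitch = horizon_to_attitude(0, 240, 640, 240, cy=240, focal_px=800)
```

A learned method such as DOES can remain robust where this explicit detection fails, e.g. under glare, haze or a partially occluded horizon.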

