Detecting humans in the robot workspace

Author(s):  
Robert Bogue

Purpose – This paper aims to provide a technical insight into a selection of robotic people-detection technologies and applications.

Design/methodology/approach – Following an introduction, this paper first discusses people-sensing technologies which seek to extend the capabilities of human-robot collaboration by allowing humans to operate alongside conventional industrial robots. It then provides examples of developments in people detection and tracking in unstructured, dynamic environments. Developments in people sensing and monitoring by assistive robots are then considered and, finally, brief concluding comments are drawn.

Findings – Robotic people-detection technologies are the topic of an extensive research effort and are becoming increasingly important as growing numbers of robots interact directly with humans. These are being deployed in industry, in public places and in the home. The sensing requirements vary according to the application and range from simple person detection and avoidance to human motion tracking, behaviour and safety monitoring, individual recognition and gesture sensing. Sensing technologies include cameras, lasers and ultrasonics, and low-cost RGB-D cameras are having a major impact.

Originality/value – This article provides details of a range of developments involving people sensing in the important and rapidly developing field of human-robot interaction.

Sensor Review, 2019, Vol 39 (2), pp. 233-245
Author(s):  
Ying Huang
Chao Hao
Jian Liu
Xiaohui Guo
Yangyang Zhang
...  

Purpose – The purpose of this study is to present a highly stretchable and flexible strain sensor with a simple, low-cost fabrication process and excellent dynamic characteristics, making it suitable for human motion monitoring under large strain and at high frequency.

Design/methodology/approach – The strain sensor was fabricated using a rubber/latex polymer as the elastic carrier and single-walled carbon nanotubes (SWCNTs)/carbon black (CB) as a synergistic conductive network. The rubber/latex polymer was pre-treated in naphtha and then soaked in a SWCNTs/CB/silicone rubber composite solution. The strain-sensing and other performance of the sensor were measured, and human motion tracking applications were trialled.

Findings – The strain sensors display high stretchability (500 per cent), excellent flexibility, fast response (approximately 45 ms), low creep (3.1 per cent at 100 per cent strain), temperature and humidity independence, and superior stability and reproducibility over approximately 5,000 stretch/release cycles. Furthermore, the authors used these composites as human motion sensors, effectively monitoring joint motion, indicating that a stretchable strain sensor based on the rubber/latex polymer and the synergetic effects of mixed SWCNTs and CB could have promising applications in flexible and wearable devices for human motion tracking.

Originality/value – This paper presents a low-cost, new type of strain sensor with excellent performance that can open up new fields of application in flexible, stretchable and wearable electronics, especially in human motion tracking applications where very large strains must be accommodated by the sensor.


Author(s):  
Daniele Regazzoni
Andrea Vitali
Filippo Colombo Zefinetti
Caterina Rizzi

Nowadays, healthcare centers are not familiar with quantitative approaches to patients' gait evaluation, and there is a clear need for methods that yield objective figures characterizing patients' performance. In practice, there are no widespread methods for comparing the pre- and post-operative conditions of the same patient that integrate clinical information and provide a measure of the efficiency of functional recovery, especially in the short term after the surgical intervention. To this aim, human motion tracking for medical analysis is creating new frontiers for potential clinical and home applications. Motion capture (mocap) systems are used to detect and track human body movements, such as gait or any other gesture or posture in a specific context. In particular, low-cost portable systems can be adopted for tracking patients' movements. The pipeline going from tracking the scene to the creation of performance scores and indicators has its main challenge in the data elaboration, which depends on the specific context and on the particular performance to be evaluated. The main objective of this research is to investigate whether the evaluation of the patient's gait through markerless optical motion capture technology can be added to clinical evaluation scores, and whether it can provide a quantitative measure of recovery in the short post-operative period. A system has been conceived, including commercial sensors and a way to elaborate the captured data according to caregivers' requirements. This allows transforming the real gait of a patient right before and/or after the surgical procedure into a set of scores of medical relevance for his/her evaluation. The technical solution developed in this research will be the basis for a large acquisition and data-elaboration campaign performed in collaboration with an orthopedic team of surgeons specialized in hip arthroplasty. This will also allow assessing and comparing the short-term results obtained by adopting different state-of-the-art surgical approaches for hip replacement.
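As a much-simplified illustration of the pipeline's final step, turning tracked gait into a score of medical relevance, the sketch below computes a step-time symmetry index from left/right step durations. The index, function name and sample values are hypothetical and are not the authors' clinical scores.

```python
# Hypothetical gait score: percent asymmetry between mean left and right
# step times, as might be extracted from markerless motion capture.
def symmetry_index(left_steps, right_steps):
    """Percent asymmetry between mean left and right step durations."""
    ml = sum(left_steps) / len(left_steps)
    mr = sum(right_steps) / len(right_steps)
    return 200.0 * abs(ml - mr) / (ml + mr)

# Step durations in seconds from an invented pre-operative walking trial
left = [0.62, 0.64, 0.63]
right = [0.55, 0.54, 0.56]
print(round(symmetry_index(left, right), 1))  # percent asymmetry
```

A value near zero would indicate a symmetric gait; larger values flag the kind of left/right imbalance a surgeon might track before and after hip arthroplasty.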


2016, Vol 138 (9)
Author(s):  
Arash Atrsaei
Hassan Salarieh
Aria Alasty

Given the wide range of applications of human motion capture techniques, there is growing interest in low-cost methods that are applicable in non-laboratory environments. MEMS inertial sensors and the Kinect are two low-cost devices that can be utilized in home-based motion capture systems, e.g., home-based rehabilitation. In this work, an unscented Kalman filter approach was developed, based on the complementary properties of the Kinect and inertial sensors, to fuse the orientation data of these two devices for human arm motion tracking during both stationary shoulder joint position and whole-body movement. A new measurement model for the fusion algorithm was obtained that can compensate for the drift of the inertial sensors in highly dynamic motions as well as for joint occlusion in the Kinect. The efficiency of the proposed algorithm was evaluated against an optical motion tracker system: the errors were reduced by almost 50% compared with cases in which either the inertial sensor or Kinect measurements alone were utilized.
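The complementarity the paper exploits can be conveyed with a much-simplified 1-D complementary filter: the integrated inertial angle is high-rate but drifts, while the Kinect angle is drift-free but intermittently occluded. The paper itself uses an unscented Kalman filter with a purpose-built measurement model; the gain and angles below are invented for illustration.

```python
# Toy complementary fusion of a drifting IMU angle (degrees) with a
# drift-free but occasionally occluded Kinect angle. Not the paper's UKF.
def fuse(imu_angle, kinect_angle, kinect_visible, gain=0.02):
    """Pull the integrated IMU angle toward Kinect when the joint is visible."""
    if not kinect_visible:
        return imu_angle                       # Kinect occluded: trust inertia
    return imu_angle + gain * (kinect_angle - imu_angle)

# IMU has drifted to 47 deg while Kinect, briefly re-acquired, reads 45 deg
print(round(fuse(47.0, 45.0, kinect_visible=True), 2))   # -> 46.96
print(fuse(47.0, 45.0, kinect_visible=False))            # -> 47.0
```

Run per sample, the correction term slowly removes the accumulated drift whenever the Kinect reports a valid joint, which is the same division of labour the UKF measurement model formalizes.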


Author(s):  
Gilbert Tang
Seemal Asif
Phil Webb

Purpose – The purpose of this paper is to describe the integration of a gesture control system for an industrial collaborative robot. Human-robot collaborative systems can be a viable manufacturing solution, but efficient control and communication are required for operations to be carried out effectively and safely.

Design/methodology/approach – The integrated system consists of facial recognition, static pose recognition and dynamic hand motion tracking. Each sub-system was tested in isolation before integration and demonstration of a sample task.

Findings – It is demonstrated that the combination of multiple gesture control methods can broaden the potential applications of industrial robots.

Originality/value – The novelty of the system is the combination of dual gesture control methods, which allows operators to command an industrial robot by posing hand gestures as well as to control the robot's motion by moving one of their hands in front of the sensor. A facial verification system is integrated to improve the robustness, reliability and security of the control system, and also allows the assignment of permission levels to different users.
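The integration logic, a facial-verification gate in front of the gesture channels plus per-user permission levels, can be sketched as below. The function names, permission sets and gesture labels are invented for illustration and are not from the paper.

```python
# Hypothetical sketch: facial verification gates all gesture input, and
# permission levels decide which static-pose commands each user may issue.
PERMISSIONS = {
    "operator":   {"start", "stop"},
    "supervisor": {"start", "stop", "jog"},
}

def handle_gesture(user, verified, gesture):
    """Return the robot command for a recognized gesture, or None if blocked."""
    if not verified:                  # face not verified: ignore all input
        return None
    allowed = PERMISSIONS.get(user, set())
    if gesture in allowed:            # static pose mapped to discrete command
        return gesture
    return None                       # recognized but not permitted

print(handle_gesture("operator", True, "jog"))    # -> None (no permission)
print(handle_gesture("supervisor", True, "jog"))  # -> jog
```

In the described system a second, dynamic channel would run alongside this one, streaming hand positions to the robot controller; the gate above would sit in front of both.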


2021, Vol ahead-of-print (ahead-of-print)
Author(s):  
Rajshree Varma
Yugandhara Verma
Priya Vijayvargiya
Prathamesh P. Churi

Purpose – The rapid advancement of technology in online communication and fingertip access to the Internet has resulted in the expedited dissemination of fake news, engaging a global audience at a low cost, by news channels, freelance reporters and websites. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are inflicted with these false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; therefore, there is a growing need for serious consideration of automated strategies to combat fake news that traverses these platforms at an alarming rate. This paper systematically reviews the existing fake news detection technologies by exploring various machine learning and deep learning techniques pre- and post-pandemic, which, to the best of the authors' knowledge, has never been done before.

Design/methodology/approach – The detailed literature review on fake news detection is divided into three major parts. The authors searched for papers on machine learning and deep learning approaches to fake news detection published from 2017 onward. The papers were initially searched through the Google Scholar platform and scrutinized for quality, with "Scopus" and "Web of Science" kept as quality indexing parameters. All research gaps and available databases, data pre-processing and feature extraction techniques, and evaluation methods for current fake news detection technologies have been explored and illustrated using tables, charts and trees.

Findings – The paper is dissected into two approaches, namely machine learning and deep learning, to present a better understanding and a clear objective. Next, the authors present a viewpoint on which approach is better, along with future research trends, issues and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. The paper also delves into fake news detection during COVID-19, from which it can be inferred that research and modeling are shifting toward ensemble approaches.

Originality/value – The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven successful, although currently reported accuracy has not yet reached consistent levels in the real world.
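To make the machine learning branch of such surveys concrete, the sketch below shows the simplest possible supervised text classifier: a bag-of-words naive Bayes model with Laplace smoothing. The corpus, labels and wording are invented toy examples; the systems surveyed use far richer features (TF-IDF, embeddings, deep and ensemble models) and large labelled datasets.

```python
# Minimal bag-of-words naive Bayes classifier, purely illustrative of the
# supervised-learning setup used in fake news detection research.
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs; returns word counts and doc priors."""
    counts = {}          # label -> Counter of word occurrences
    priors = Counter()   # label -> number of documents
    for text, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts, priors

def classify(text, counts, priors):
    vocab = set().union(*counts.values())
    total_docs = sum(priors.values())
    scores = {}
    for label, words in counts.items():
        total = sum(words.values())
        score = math.log(priors[label] / total_docs)
        for w in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the score
            score += math.log((words[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy labelled corpus (entirely invented headlines)
corpus = [
    ("miracle cure defeats virus overnight", "fake"),
    ("secret lab leak proven by anonymous post", "fake"),
    ("trial shows vaccine reduces severe illness", "real"),
    ("health agency publishes peer reviewed study", "real"),
]
counts, priors = train(corpus)
print(classify("anonymous post claims miracle cure", counts, priors))  # -> fake
```

Real pipelines add the stages the review catalogues, data cleaning, feature extraction and systematic evaluation, but the train/classify split above is the skeleton they all share.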


2020, Vol 37 (5), pp. 1683-1701
Author(s):  
Xin Wang
Jie Yan
Dongzhu Feng
Yonghua Fan
Dongsheng Yang

Purpose – This paper aims to describe a novel hybrid inertial measurement unit (IMU) for motion capture via a new configuration of strategically distributed inertial sensors, together with a calibration approach for the accelerometers and gyroscope mounted in a flight-vehicle motion tracker built on the inertial navigation system.

Design/methodology/approach – The hybrid IMU is designed with five accelerometers and one auxiliary gyroscope instead of the accelerometer and gyroscope triads of a conventional IMU.

Findings – Simulation studies of tracking both the attitude angles and the translational movement of a flight vehicle are conducted to illustrate the effectiveness of the proposed method.

Originality/value – The cross-quadratic terms of angular velocity are selected to process the direct measurements of the body-frame angular velocities and to avoid integrating the angular acceleration vector, in contrast with gyro-free configurations based only on accelerometers. The inertial sensors are selected from commercial microelectromechanical system devices to enable low-cost applications.
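The reason a distributed-accelerometer IMU can sense rotation at all is the rigid-body relation: the acceleration at a point offset r from the body origin is a_O + alpha × r + omega × (omega × r), where the last term carries the cross-quadratic angular-velocity products the abstract refers to. The sketch below evaluates that relation with invented values; it is background kinematics, not the paper's estimator.

```python
# Rigid-body acceleration at an offset point: a_O + alpha x r + w x (w x r).
# The centripetal term w x (w x r) contains the cross-quadratic products
# (w_i * w_j) that a distributed-accelerometer IMU can exploit.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def point_accel(a_origin, omega, alpha, r):
    """Acceleration sensed at body-frame offset r from the origin."""
    tangential = cross(alpha, r)                 # alpha x r
    centripetal = cross(omega, cross(omega, r))  # w x (w x r)
    return tuple(a + t + c for a, t, c in zip(a_origin, tangential, centripetal))

# Pure spin about z at 2 rad/s, accelerometer 0.1 m along x:
# only the centripetal term survives, -w^2 * r = -0.4 m/s^2 along x
print(point_accel((0.0, 0.0, 0.0), (0.0, 0.0, 2.0), (0.0, 0.0, 0.0), (0.1, 0.0, 0.0)))
```

With several accelerometers at known offsets, these equations can be inverted for the angular terms, which is why the hybrid design needs only one auxiliary gyroscope.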


Author(s):  
Fabiana Di Ciaccio
Paolo Russo
Salvatore Troisi

The use of Attitude and Heading Reference Systems (AHRS) for orientation estimation is now common practice in a wide range of applications, e.g., robotics and human motion tracking, aerial vehicles and aerospace, gaming and virtual reality, indoor pedestrian navigation and maritime navigation. The integration of high-rate measurements can provide very accurate estimates, but these can suffer from error accumulation due to sensor drift over longer time scales. To overcome this issue, inertial sensors are typically combined with additional sensors and techniques. As an example, camera-based solutions have drawn considerable attention from the community thanks to their low cost and easy hardware setup; moreover, impressive results have been demonstrated in the context of deep learning. This work presents the preliminary results obtained by DOES, a supportive deep learning method specifically designed for maritime navigation which aims at improving the roll and pitch estimations obtained by common AHRS. DOES recovers these estimations through the analysis of frames acquired by a low-cost camera pointed at the horizon at sea. The training was performed on the novel ROPIS dataset, presented in this work and acquired using the FrameWO application developed for this purpose. The promising results encourage testing other network backbones and further expanding the dataset, improving the accuracy of the results and the range of applications of the method.
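The geometric idea behind camera-aided roll correction is simple: if the sea horizon appears as a line through two image points, the camera roll is that line's angle from horizontal. DOES itself learns roll and pitch with a deep network; the closed form below only conveys the underlying geometry, with hypothetical pixel coordinates.

```python
# Roll angle from a detected horizon line; image y grows downward, so the
# vertical difference is flipped. Coordinates are invented for illustration.
import math

def roll_from_horizon(x1, y1, x2, y2):
    """Roll angle (degrees) implied by a horizon line through two pixels."""
    return math.degrees(math.atan2(y1 - y2, x2 - x1))

# Horizon sloping from (0, 300) on the left to (640, 260) on the right
print(round(roll_from_horizon(0, 300, 640, 260), 2))
```

A learned model like DOES replaces the explicit line detection, which is brittle under waves, haze and glare, while producing the same kind of roll/pitch correction for the AHRS.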


Author(s):  
Yizhe Chang
El-Sayed Aziz
Zhou Zhang
Mingshao Zhang
Sven Esche
...  

Mechanical assembly activities involve multiple factors, including humans, mechanical parts, tools and assembly environments. In order to simulate assembly processes by computer for educational purposes, all of these factors should be considered. Virtual reality (VR) technology, which aims to integrate natural human motion into real-world scenarios, provides an ideal simulation medium. Novel VR devices such as 3D glasses, motion-tracking gloves, haptic sensors, etc. are able to fulfill fundamental assembly simulation needs. However, most implementations focus on assembly simulations for computer-aided design, which are geared toward professionals rather than students, leading to complicated assembly procedures not suitable for students. Furthermore, the costs of these novel VR devices and specifically designed VR platforms represent an untenable financial burden for most educational institutions. In this paper, a virtual platform for mechanical assembly education based on the Microsoft Kinect sensor and Garry's Mod (GMod) is presented. With the help of the Kinect's body-tracking and voice recognition capabilities, in conjunction with the graphics and physics simulation capabilities of GMod, a low-cost VR platform that enables educators to author their own assembly simulations was implemented. This platform utilizes the Kinect as the sole input device. Students can use voice commands to navigate their avatars inside a GMod-powered virtual laboratory, as well as use their body motions to integrate pre-defined mechanical parts into assemblies. Under this platform, assembly procedures involving the picking, placing and attaching of parts can be performed collaboratively by multiple users. In addition, the platform allows collaborative learning without the need for the learners to be co-located. A pilot study of this platform showed that, with the instructor's assistance, mechanical engineering undergraduate students are able to complete basic assembly operations.
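Because the Kinect is the sole input device, the platform must route two event streams, spoken commands for avatar navigation and tracked joints for part placement. The dispatcher below is a hypothetical sketch of that split; the event shapes, command words and joint names are invented, and the real system scripts Garry's Mod rather than returning tuples.

```python
# Invented sketch of single-device input routing: voice events drive
# navigation, skeleton events drive placement of the grabbed part.
NAV_COMMANDS = {"forward", "back", "left", "right", "stop"}

def route_input(event):
    """Dispatch a (kind, payload) Kinect event to navigation or assembly."""
    kind, payload = event
    if kind == "voice" and payload in NAV_COMMANDS:
        return ("navigate", payload)
    if kind == "skeleton":
        # payload: dict of joint name -> (x, y, z); use the right hand
        # to position the currently grabbed mechanical part
        return ("place_part", payload["hand_right"])
    return ("ignore", payload)

print(route_input(("voice", "forward")))
print(route_input(("skeleton", {"hand_right": (0.4, 1.1, 2.0)})))
```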


Author(s):  
Sajeev C. Puthenveetil
Chinmay P. Daphalapurkar
Wenjuan Zhu
Ming C. Leu
Xiaoqing F. Liu
...  

To generate graphic simulation of human motion, marker-based optical motion capture technology is widely used because of the accuracy and reliability of motion data provided by this technology. However, tracking of human motion without markers is very desirable on the factory floor because the human operator does not need to wear a special suit mounted with markers and there is no physical interference with the manufacturing or assembly operation during the motion tracking. In this paper, we compare marker-based and marker-less motion capture systems. First, the operational principles of these two different types of systems are compared. Then the quality of motion data obtained by a marker-less system using Kinect is compared with that obtained by a marker-based system using Optitrack cameras. The comparison also includes the accuracy of body joint angles and variations in body segment lengths measured by the two different systems. Furthermore, we compare the human motion simulation developed in the Jack digital human modeling software using the data captured by these two systems when a person is performing a fastening operation on a physical mockup of the belly section of an aircraft fuselage.
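One quantity the comparison rests on, a body joint angle, reduces to the same computation for both systems once each has produced 3D joint positions: the angle at a joint formed by its two adjacent segments. The sketch below shows that computation with invented shoulder/elbow/wrist coordinates; it is standard vector geometry, not either system's proprietary pipeline.

```python
# Joint angle from three tracked 3D positions (e.g. shoulder-elbow-wrist),
# applicable to marker-based and marker-less data alike.
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# Upper arm along x, forearm bent straight upward at the elbow
shoulder, elbow, wrist = (0, 0, 0), (0.3, 0, 0), (0.3, 0.25, 0)
print(round(joint_angle(shoulder, elbow, wrist), 1))  # -> 90.0
```

Feeding both systems' joint streams through a function like this, frame by frame, yields the paired angle series whose differences the paper reports, along with the body-segment-length variations computed from the same positions.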

