Development of a Biomechatronic Device for Motion Analysis Through a RGB-D Camera

2020 ◽  
Vol 2 (3) ◽  
pp. 35-44
Author(s):  
Francesca Pristerà ◽  
Alessandro Gallo ◽  
Salvatore Fregola ◽  
Alessio Merola

This work investigates the validity and reliability of a novel biomechatronic device providing an interactive environment in Augmented Reality (AR) for neuromotor rehabilitation. An RGB-depth camera and a telemonitoring/remote-signaling module are the main components of the device, together with a PC-based interface. The interactive environment, which implements optimized body motion capture algorithms and novel methodologies for human body motion analysis, enables neuromotor rehabilitation treatments that are adaptable to the performance and individual characteristics of the patient. The RGB-depth camera module is implemented through Microsoft Kinect and ORBBEC ZED2K devices; the telemonitoring module for teleassistance and therapy supervision is implemented as a cloud service. Within the body motion tracking module, the abduction and adduction movements of the limbs of the full-body structure are tracked and the joint angles are measured in real time; the most distinctive feature of the tracking module is the control of trunk and shoulder posture during the exercises performed by the patient. Indeed, the device recognizes an incorrect position of the patient's body that could compromise the objective of the exercise to be performed. The recognition of an incorrect exercise is associated with the generation of an alert to both the patient and the physician, in order to maximize the effectiveness of the treatment based on the user's potential and to increase the chances of better biofeedback. The experimental tests, which have been carried out by reproducing several neuromotor exercises in the interactive environment, show that the recognition and feature extraction of the joints and segments of the patient's musculoskeletal structure, and of incorrect posture during exercises, can achieve good performance under different experimental conditions.
The developed device is a valid tool for patients affected by chronic disability, but it could also be applied to neurodegenerative diseases in their early stages. Thanks to the enhanced interactivity in augmented reality, the patient can overcome some difficulties in interacting with the most common IT tools and technologies; meanwhile, he or she can perform rehabilitation at home. The physician can also check the results in real time and customize the care pathway. The enhanced interactivity provided by the device during rehabilitation sessions increases both the patient's motivation and the continuity of care, and it supports low-cost remote assistance and telemedicine by optimizing therapy costs. The key points are: i) making rehabilitation motivating for the patient, who becomes a "player"; ii) optimizing effectiveness and costs; iii) enabling low-cost remote assistance and telemedicine.
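The abstract reports real-time measurement of joint angles from the tracked full-body structure but does not publish the computation. As a minimal illustration (not the authors' implementation), an abduction/adduction angle at a joint can be derived from three 3D joint positions returned by a skeleton tracker; the function name and example coordinates below are purely illustrative:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c,
    e.g. shoulder abduction from hip, shoulder and elbow positions."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Arm raised sideways to horizontal: ~90 degrees at the shoulder.
angle = joint_angle((0, -1, 0), (0, 0, 0), (1, 0, 0))  # 90.0
```

Comparing such angles frame by frame against the expected range of an exercise is one simple way an incorrect posture could trigger the alert described above.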

2017 ◽  
Vol 79 (3) ◽  
pp. 176-183 ◽  
Author(s):  
Cristina Manrique-Juan ◽  
Zaira V. E. Grostieta-Dominguez ◽  
Ricardo Rojas-Ruiz ◽  
Moises Alencastre-Miranda ◽  
Lourdes Muñoz-Gómez ◽  
...  

In this paper, we present an augmented reality learning system that uses the input of a depth camera to interactively teach anatomy to high school students. The objective is to exemplify human anatomy by displaying 3D models over the body of a person in real time, using the Microsoft Kinect depth camera. The users can see how bones, muscles, or organs are distributed in their bodies without the use of targets for tracking.


2020 ◽  
Vol 13 (6) ◽  
pp. 512-521
Author(s):  
Mohamed Taha ◽  
Mohamed Ibrahim ◽  
Hala Zayed ◽  
...  

Vein detection is an important issue in the medical field. There are some commercial devices for detecting veins using infrared radiation; however, most of these commercial solutions are cost-prohibitive. Recently, vein detection has attracted much attention from research teams. The main focus is on developing real-time systems with low-cost hardware. Systems developed to reduce costs suffer from low frame rates, which in turn makes them unsuitable for real-world applications. On the other hand, systems that use powerful processors to produce high frame rates suffer from high costs and a lack of mobility. In this paper, a real-time vein mapping prototype using augmented reality is proposed. The proposed prototype provides a compromise solution that produces high frame rates with a low-cost system. It consists of a USB camera attached to an Android smartphone used for real-time detection. Infrared radiation is employed to differentiate the veins using 20 infrared Light Emitting Diodes (LEDs). The captured frames are processed to enhance vein detection using computationally light algorithms that improve real-time processing and increase the frame rate. Finally, the enhanced view of the veins appears on the smartphone screen. Portability and economic cost were taken into consideration while developing the proposed prototype. The prototype was tested with people of different ages and genders, as well as with mobile devices of different specifications. The results show a high vein detection rate and a high frame rate compared to other existing systems.
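The abstract does not detail the "light computational algorithms" used. One plausible low-cost step is a simple intensity threshold on the IR frame, since deoxygenated blood absorbs near-infrared light and veins therefore appear dark. A hypothetical sketch (pure Python; the threshold fraction is an illustrative tuning constant, not a value from the paper):

```python
def vein_mask(ir_frame, fraction=0.3):
    """Mark the darkest pixels of a grayscale IR frame (list of rows)
    as candidate veins: pixels below a fraction of the intensity range."""
    pixels = [p for row in ir_frame for p in row]
    lo, hi = min(pixels), max(pixels)
    cutoff = lo + fraction * (hi - lo)   # intensity threshold
    return [[p < cutoff for p in row] for row in ir_frame]

# Synthetic 4x4 IR patch with one dark "vein" column.
frame = [[200, 50, 200, 200] for _ in range(4)]
mask = vein_mask(frame)   # column 1 flagged as vein
```

A fixed-cost per-pixel rule like this keeps the per-frame work linear in the image size, which is consistent with the paper's goal of high frame rates on a smartphone.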


Robotics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 3
Author(s):  
Marlon Aguero ◽  
Dilendra Maharjan ◽  
Maria del Pilar Rodriguez ◽  
David Dennis Lee Mascarenas ◽  
Fernando Moreu

Wireless sensor networks (WSN) are used by engineers to record the behavior of structures. The sensors provide data that engineers use to make informed choices and prioritize decisions concerning maintenance procedures, required repairs, and potential infrastructure replacements. However, reliable data collection in the field remains a challenge. The information obtained by the sensors in the field frequently needs further processing, either at the decision-making headquarters or in the office. Although WSN allows data collection and analysis, there is often a gap between WSN data analysis results and the way decisions are made in industry. The industry depends on inspectors' decisions, so it is vital to improve inspectors' access in the field to data collected from sensors. This paper presents the results of an experiment that shows how Augmented Reality (AR) may improve the availability of WSN data to inspectors. AR is a tool that overlays the known attributes of an object at the corresponding position on the headset screen; in this way, it allows the integration of reality with a virtual representation provided by a computer in real time. These synthetic overlays supply data that may be unavailable otherwise, and they may also display additional contextual information. The experiment reported in this paper involves the application of a smart Strain Gauge Platform, which automatically measures strain for different applications using a wireless sensor. In this experiment, an AR headset was used to improve actionable data visualization. The results of the reported experiment indicate that, since the AR headset makes it possible to visualize information collected from the sensors in graphic form in real time, it enables automatic, effective, reliable, and instant communication from a smart low-cost strain gauge sensor to a database. Moreover, it allows inspectors to observe augmented data and compare it across time and space, which leads to appropriate prioritization of infrastructure management decisions based on accurate observations.
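The abstract does not give the strain computation used by the Strain Gauge Platform. As background only, a resistance change read from a strain gauge is conventionally converted to strain through the gauge-factor relation GF = (ΔR/R)/ε; a hedged sketch with assumed example values:

```python
def microstrain(r_unstrained, r_strained, gauge_factor=2.0):
    """Convert a strain gauge resistance change to microstrain using
    the gauge-factor relation GF = (dR/R) / strain. The gauge factor
    of 2.0 is a typical value for metal-foil gauges, assumed here."""
    d_r = r_strained - r_unstrained
    strain = (d_r / r_unstrained) / gauge_factor
    return strain * 1e6

# A 120-ohm gauge drifting to 120.12 ohm corresponds to 500 microstrain.
reading = microstrain(120.0, 120.12)
```

Values like this, stamped with a timestamp and sensor ID, are the kind of record the headset would plot for the inspector in real time.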


2017 ◽  
Vol 17 (02) ◽  
pp. e20 ◽  
Author(s):  
Kevin E. Soulier ◽  
Matías Nicolás Selzer ◽  
Martín Leonardo Larrea

In recent years, Augmented Reality has become a very popular topic, both as a research and a commercial field. This trend originated with the use of mobile devices as the computational core and display. The appearance of virtual objects and their interaction with the real world is a key element in the success of Augmented Reality software. A common issue in this type of software is visual inconsistency between the virtual and real objects due to incorrect illumination. Although illumination is a common research topic in Computer Graphics, few studies have addressed real-time estimation of illumination direction. In this work we present a low-cost approach to detect the direction of the environment illumination, allowing virtual objects to be lit according to the real ambient light and improving the integration of the scene. Our solution is open source and based on Arduino hardware, and the presented system was developed on Android.
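The abstract does not describe the estimation algorithm itself. One common low-cost approach with photoresistors mounted on Arduino hardware is to weight each sensor's known mounting orientation by its light reading and normalize the sum; the sketch below follows that assumption (sensor layout and readings are hypothetical, not from the paper):

```python
import math

def light_direction(readings):
    """Estimate the dominant light direction from light sensors mounted
    at known orientations. `readings` maps a unit normal (x, y, z) to
    that sensor's intensity; the result is a unit direction vector."""
    sx = sum(n[0] * v for n, v in readings.items())
    sy = sum(n[1] * v for n, v in readings.items())
    sz = sum(n[2] * v for n, v in readings.items())
    norm = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
    return (sx / norm, sy / norm, sz / norm)

# Light mostly from above (+y), a little from the right (+x).
d = light_direction({(1, 0, 0): 200, (-1, 0, 0): 0,
                     (0, 1, 0): 800, (0, -1, 0): 0})
```

The resulting vector can be handed to the renderer as the direction of a virtual directional light so the virtual objects' shading matches the room.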


Author(s):  
Abhinav Biswas ◽  
Soumalya Dutta ◽  
Nilanjan Dey ◽  
Ahmad Taher Azar

The Virtual Trial Room (VTR) application software simulates an apparel dressing room by implementing a virtual mirror that portrays an augmented view of the user with virtual superimposed clothes. Traditional approaches to the design and implementation of virtual dressing rooms have widely used either normal webcams with tag/marker-based tracking or expensive 3D depth- and motion-sensing cameras such as the Microsoft Kinect. The main idea of this paper is to methodically devise a novel VTR solution deploying ubiquitous 2D webcams with tag-less tracking, in a real-time live video mode, using open source tools and technologies. The solution model implements a tag-less, or marker-less, Augmented Reality (AR) technique with face detection technology and provides an intuitive motion-augmented User Interface (UI) to the VTR application, in the form of an interactive, human-friendly Virtual Mirror controlled by simple hand gestures. A qualitative performance analysis of the application is presented at the end of the paper to determine the fundamental susceptibility of the VTR system to varied illumination conditions.
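The abstract states that face detection anchors the marker-less overlay but gives no placement rule. A plausible heuristic, sketched below, scales and positions the garment from the detected face bounding box; the ratios are illustrative tuning constants, not values from the paper:

```python
def shirt_placement(face_box, width_ratio=3.0, neck_offset=0.6):
    """Heuristic marker-less overlay placement from a detected face
    bounding box (x, y, w, h), with y growing downward in image space.
    Returns (shirt_x, shirt_y, shirt_w) for the garment sprite."""
    x, y, w, h = face_box
    shirt_w = w * width_ratio             # shoulders span ~3 face widths
    shirt_x = x + w / 2 - shirt_w / 2     # center the shirt under the face
    shirt_y = y + h * (1 + neck_offset)   # anchor just below the chin/neck
    return shirt_x, shirt_y, shirt_w

sx, sy, sw = shirt_placement((100, 50, 60, 60))
```

Because the face box shrinks and grows as the user moves toward or away from the webcam, scaling from it keeps the superimposed clothes roughly proportional without any markers.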


2020 ◽  
pp. 147592172097701
Author(s):  
D Maharjan ◽  
M Agüero ◽  
D Mascarenas ◽  
R Fierro ◽  
F Moreu

Decaying infrastructure maintenance cost allocation depends heavily on accurate and safe inspection in the field. New tools for conducting inspections can assist in prioritizing investments in maintenance and repairs. The industrial revolution termed "Industry 4.0" is based on the intelligence of machines working with humans in a collaborative workspace. In contrast, infrastructure management has relied on humans for making day-to-day decisions. New emerging technologies can assist during infrastructure inspections by quantifying structural condition with more objective data. However, today's owners tend to trust the inspector's decision in the field over data collected with sensors. If data collected in the field is accessible during inspections, inspectors' decisions can be improved with sensor information. New research opportunities in the human-infrastructure interface would allow researchers to improve human awareness of the surrounding environment during inspections. This article studies the role of Augmented Reality (AR) technology as a tool to increase human awareness of infrastructure during inspection work. The domains of interest of this research include both infrastructure inspections (with an emphasis on collecting data on structures to inform management decisions) and emergency management (with a focus on collecting data on the environment to inform human actions). This article describes the use of a head-mounted device to access real-time data and information during field inspections. The authors leverage low-cost smart sensors and QR code scanners integrated with Augmented Reality applications for an augmented human interface with the physical environment. This article presents a novel interface architecture for developing Augmented Reality-enabled inspection to assist the inspector's workflow in conducting infrastructure inspection work, introduces two new applications, and summarizes the results from various experiments.
The main contributions of this work to the computer-aided inspection community are enabling inspectors to visualize data files from a database and to access real-time data using an Augmented Reality environment.
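The abstract describes QR code scanners resolving physical sensors to database records but does not specify the lookup step. A minimal sketch of the idea, with a hypothetical in-memory database stub standing in for the real back end (all names are illustrative):

```python
def lookup_sensor(qr_payload, database):
    """Resolve a scanned QR payload (assumed to encode a sensor ID) to
    that sensor's most recent record, or None if the ID is unknown."""
    sensor_id = qr_payload.strip()
    records = database.get(sensor_id, [])
    return records[-1] if records else None

# Stub database: sensor ID -> time-ordered list of readings.
db = {"SG-17": [{"t": 0, "microstrain": 410},
                {"t": 1, "microstrain": 425}]}
latest = lookup_sensor("SG-17", db)   # newest reading for gauge SG-17
```

In an AR headset workflow, the returned record would be rendered as an overlay next to the physical gauge the inspector just scanned.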


Author(s):  
Roanna Lun ◽  
Wenbing Zhao

Microsoft Kinect is one of the most popular inexpensive gadgets released in recent years. Kinect is equipped with a color camera, a depth camera, and a microphone array. The device allows users to interact with a computer via a natural user interface based on gestures or voice commands. The authors believe that research and development on using Kinect technology in healthcare will gain more momentum. The demand for Kinect-based applications is high, due to Kinect's low cost and portability, and its accurate and robust motion detection capability. In this chapter, the authors survey current applications of Kinect technology in healthcare. Furthermore, they outline a number of open research issues that could overcome the limitations of the current Kinect technology.
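The chapter surveys gesture-based interaction without listing recognition code; the simplest Kinect-style gesture checks used in healthcare applications are rule-based predicates over tracked joint positions. An illustrative sketch (joint names and coordinates are assumed, with y pointing up):

```python
def hand_raised(joints):
    """Rule-based gesture check over skeleton joints, given as a dict
    mapping joint name -> (x, y, z): is the right hand above the head?"""
    return joints["hand_right"][1] > joints["head"][1]

# Two hypothetical poses: hand raised overhead vs. hand at the side.
raised = {"head": (0.0, 1.6, 2.0), "hand_right": (0.3, 1.9, 2.0)}
lowered = {"head": (0.0, 1.6, 2.0), "hand_right": (0.3, 1.0, 2.0)}
```

Rule-based checks like this are easy to validate clinically, which is one reason they appear so often in Kinect rehabilitation prototypes; more robust systems replace them with trained classifiers.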


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Sasadara B. Adikari ◽  
Naleen C. Ganegoda ◽  
Ravinda G. N. Meegama ◽  
Indika L. Wanniarachchi

Busy lifestyles have led people to buy ready-made clothes from retail stores, with or without fitting them on, expecting a perfect match. Existing online clothes-shopping systems are capable of providing only 2D images of the clothes, which does not lead to a perfect fit for the individual user. To overcome this problem, the apparel industry has conducted many studies to reduce the time gap between clothes selection and final purchase by introducing "virtual dressing rooms." This paper discusses the design and implementation of an augmented reality "virtual dressing room" for real-time simulation of 3D clothes. The system is developed using a single Microsoft Kinect V2 sensor as the depth sensor to obtain the user's body measurements, including 3D measurements such as the circumferences of the chest, waist, hip, thigh, and knee, in order to develop a unique model for each user. The size category of the clothes is chosen based on the measurements of each customer. The Unity3D game engine was incorporated to overlay 3D clothes virtually on the user in real time. The system is also equipped with gender identification and gesture controls for selecting clothes. The developed application successfully augmented the selected dress model with physics-based motions that follow the physical movements made by the user, providing a realistic fitting experience. The performance evaluation reveals that a single depth sensor can be applied to the real-time simulation of 3D clothes with an average measurement error of less than 10%.
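The abstract reports circumference measurements (chest, waist, hip, thigh, knee) obtained from a single depth sensor but does not give the formula. A common approximation, sketched here under that assumption rather than as the authors' method, models each body cross-section as an ellipse from its frontal width and sagittal depth and applies Ramanujan's perimeter formula:

```python
import math

def circumference(width, depth):
    """Approximate a body circumference (e.g. the waist) from the frontal
    width and sagittal depth of its cross-section, modelled as an ellipse
    with semi-axes a and b (Ramanujan's first perimeter approximation)."""
    a, b = width / 2, depth / 2
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Sanity check: a circular cross-section reduces to pi * diameter.
waist = circumference(0.30, 0.30)   # 0.30 m wide and deep
```

The frontal width is directly visible to the sensor, while the sagittal depth can be read from the depth map, which is why a single Kinect suffices for this style of estimate.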

