Biomechanical Evaluation and Strength Test of 3D-Printed Foot Orthoses

2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Kuang-Wei Lin ◽  
Chia-Jung Hu ◽  
Wen-Wen Yang ◽  
Li-Wei Chou ◽  
Shun-Hwa Wei ◽  
...  

Foot orthoses (FOs) are commonly used as interventions for individuals with flatfoot. Advances in technologies such as three-dimensional (3D) scanning and 3D printing have facilitated the fabrication of custom FOs. However, few studies have been conducted on the mechanical properties and biomechanical effects of 3D-printed FOs. The purposes of this study were to evaluate the mechanical properties of 3D-printed FOs and determine their biomechanical effects in individuals with flexible flatfoot. During mechanical testing, a total of 18 FO samples with three orientations (0°, 45°, and 90°) were fabricated and tested. The maximum compressive load and stiffness were calculated. During a motion capture experiment, 12 individuals with flatfoot were enrolled, and the 3D-printed FOs were used as interventions. Kinematic and kinetic data were collected during walking by using an optical motion capture system. A one-way analysis of variance was performed to compare the mechanical parameters among the three build orientations. A paired t-test was conducted to compare the biomechanical variables under two conditions: walking in standard shoes (Shoe) and walking in shoes embedded with FOs (Shoe+FO). The results indicated that the 45° build orientation produced the strongest FOs. In addition, the maximum ankle evertor and external rotator moments under the Shoe+FO condition were significantly reduced by 35% and 16%, respectively, but the maximum ankle plantar flexor moments increased by 3%, compared with the Shoe condition. No significant difference in ground reaction force was observed between the two conditions. This study demonstrated that 3D-printed FOs could alter the ankle joint moments during gait.
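The statistical comparison described above (one-way ANOVA across the three build orientations, paired t-test between the Shoe and Shoe+FO conditions) can be sketched as follows. All sample values and effect sizes below are illustrative placeholders, not the study's data.

```python
# Sketch of the reported analyses: one-way ANOVA across build orientations,
# paired t-test between walking conditions. Values are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Maximum compressive load (N) for 6 hypothetical samples per orientation.
load_0  = rng.normal(1200, 50, 6)
load_45 = rng.normal(1400, 50, 6)   # 45 deg assumed strongest, as reported
load_90 = rng.normal(1250, 50, 6)
f_stat, p_anova = stats.f_oneway(load_0, load_45, load_90)

# Max ankle evertor moment for the same 12 participants in both conditions;
# the Shoe+FO values mimic the reported ~35% reduction.
moment_shoe = rng.normal(0.31, 0.05, 12)
moment_shoe_fo = 0.65 * moment_shoe
t_stat, p_paired = stats.ttest_rel(moment_shoe, moment_shoe_fo)

print(f"ANOVA: F={f_stat:.1f}, p={p_anova:.2e}")
print(f"Paired t-test: t={t_stat:.1f}, p={p_paired:.2e}")
```

The paired design matters here: each participant serves as their own control, so `ttest_rel` (not an independent-samples test) is the appropriate comparison between the two walking conditions.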

2014 ◽  
Vol 568-570 ◽  
pp. 676-680
Author(s):  
Si Xi Chen ◽  
Shu Chen

The application of digital technology to the protection of intangible cultural heritage has been a major research topic in recent years. Motion capture technology will gradually replace traditional recording methods such as texts, pictures, and videos. It is valuable to build a high-fidelity, highly modular, and low-cost digital platform for choreographic data collection and extended applications. This paper studies the Quanzhou breast-clapping dance, one of the most famous choreographic intangible cultural heritages of China, using a standard optical motion capture method. After the dance motion was captured, the data were acquired and processed; the motion data were bound to a three-dimensional model using MotionBuilder, and a digital demonstration platform was built on the OGRE engine to display the movements. The viewer can observe the dance from any angle and distance. The system can easily be applied in projects for the protection of motion-based intangible cultural heritage. Furthermore, the system can provide versatile motion data for additional uses.


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4799
Author(s):  
Calvin Young ◽  
Sarah DeDecker ◽  
Drew Anderson ◽  
Michele L. Oliver ◽  
Karen D. Gordon

Wrist motion provides an important metric for disease monitoring and occupational risk assessment. The collection of wrist kinematics in occupational or other real-world environments could augment traditional observational or video-analysis based assessment. We have developed a low-cost 3D-printed wearable device, capable of being produced on consumer-grade desktop 3D printers. Here we present a preliminary validation of the device against a gold-standard optical motion capture system. Data were collected from 10 participants performing a static angle matching task while seated at a desk. The wearable device output was significantly correlated with the optical motion capture system, yielding coefficients of determination (R2) of 0.991 and 0.972 for flexion/extension (FE) and radial/ulnar deviation (RUD), respectively (p < 0.0001). Error was similarly low, with a root mean squared error of 4.9° (FE) and 3.9° (RUD). Agreement between the two systems was quantified using Bland–Altman analysis, with bias and 95% limits of agreement of 3.1° ± 7.4° and −0.16° ± 7.7° for FE and RUD, respectively. These results compare favourably with current methods for occupational assessment, suggesting strong potential for field implementation.
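The Bland–Altman agreement analysis reported above can be reproduced with a short sketch: the bias is the mean difference between the two methods, and the 95% limits of agreement are bias ± 1.96 standard deviations of the differences. The angle values below are hypothetical, not the study's measurements.

```python
import numpy as np

def bland_altman(device, reference):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(device, float) - np.asarray(reference, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits
    return bias, bias - half_width, bias + half_width

# Hypothetical flexion/extension angles (degrees) from both systems.
wearable = [10.2, 25.6, 41.1, 55.0, 70.3]
mocap    = [ 8.0, 22.9, 38.5, 52.1, 66.8]
bias, lower, upper = bland_altman(wearable, mocap)
```

Unlike a correlation coefficient, which only measures how well the two outputs co-vary, the Bland–Altman bias and limits of agreement expose systematic offset and the expected spread of disagreement, which is why the study reports both.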


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Agnieszka Szczęsna ◽  
Monika Błaszczyszyn ◽  
Magdalena Pawlyta

Human motion capture is commonly used in various fields, including sport, to analyze, understand, and synthesize kinematic and kinetic data. Specialized computer vision and marker-based optical motion capture techniques constitute the gold standard for accurate and robust human motion capture. The dataset presented consists of recordings of 37 Kyokushin karate athletes of different ages (children, young people, and adults) and skill levels (from 4th dan to 9th kyu) executing the following techniques: reverse lunge punch (Gyaku-Zuki), front kick (Mae-Geri), roundhouse kick (Mawashi-Geri), and spinning back kick (Ushiro-Mawashi-Geri). Each technique was performed approximately three times per recording (i.e., to create a single data file), and under three conditions where participants kicked or punched (i) in the air, (ii) a training shield, or (iii) an opponent. Each participant undertook a minimum of two trials per condition. The data presented were captured using a Vicon optical motion capture system with Plug-In Gait software. Three-dimensional trajectories of 39 reflective markers were recorded. The resultant dataset contains a total of 1,411 recordings, with 3,229 single kicks and punches. The recordings are available in C3D file format. The dataset provides the opportunity for kinematic analysis of different combat sport techniques in attacking and defensive situations.
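A minimal sketch of the kind of kinematic analysis such marker trajectories enable: computing a joint angle per frame from three 3D marker positions. The marker names and coordinates below are illustrative assumptions, not values from the dataset.

```python
import numpy as np

def joint_angle(a, b, c):
    """Per-frame angle (degrees) at marker b, formed by segments b->a and b->c.
    a, b, c: (n_frames, 3) arrays of 3D marker trajectories."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = (u * v).sum(axis=-1) / (
        np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# One hypothetical frame: hip, knee, ankle markers of a fully extended leg.
hip   = np.array([[0.0, 0.0, 1.0]])
knee  = np.array([[0.0, 0.0, 0.5]])
ankle = np.array([[0.0, 0.0, 0.0]])
knee_angle = joint_angle(hip, knee, ankle)  # ~180 deg for a straight leg
```

Applied to a kick recording, the same function over the full trajectory arrays yields a knee-angle time series, from which peak flexion or angular velocity of a Mae-Geri can be extracted.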


Author(s):  
Guan Rong Tan ◽  
Nina Robson ◽  
Gim Song Soh

This paper describes a dimensional synthesis method used in the design of a passively actuated finger exoskeleton that takes into account the user's limb anthropometric dimensions and the contact requirements for grasping objects. The paper is the first step in our current efforts on the design of wearable devices that use a common slider at the hand to passively actuate each exo-finger. The finger exoskeleton comprises a 3R serial limb and is constrained to an eight-bar slider mechanism. To design the exo-limb, the pose of the index finger was captured using an optical motion capture system, and its dimensions were determined using a constrained least-squares optimization of its center of rotation. To facilitate the data capture, a 3D-printed wearable infrared (IR) marker system was designed and placed on the finger's phalanx. To illustrate the approach, an example of the design of an index exo-finger is described.
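Center-of-rotation estimation from marker positions can be sketched as an algebraic linear least-squares sphere fit: a marker on a rigid segment rotating about a fixed point traces a sphere, and |p − c|² = r² linearizes into a system solvable with `lstsq`. This is a generic unconstrained formulation, not necessarily the authors' exact constrained optimization.

```python
import numpy as np

def fit_center_of_rotation(points):
    """Algebraic least-squares sphere fit: estimate the fixed center that a
    marker rotates about. points: (N, 3) array of marker positions.
    Linearizes |p - c|^2 = r^2 into A @ [cx, cy, cz, r^2 - |c|^2] = |p|^2."""
    p = np.asarray(points, float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius

# Synthetic check: points on a sphere of radius 5 centered at (1, 2, 3).
rng = np.random.default_rng(1)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 5.0 * dirs
center, radius = fit_center_of_rotation(pts)
```

With real finger data, the recovered centers of adjacent joints give the phalanx link lengths needed for the dimensional synthesis.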


2020 ◽  
Vol 86 ◽  
pp. 29-34
Author(s):  
Marion Mundt ◽  
Arnd Koeppe ◽  
Sina David ◽  
Franz Bamer ◽  
Wolfgang Potthast ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3496
Author(s):  
Li Wang ◽  
Yajun Li ◽  
Fei Xiong ◽  
Wenyu Zhang

Human identification based on motion capture data has received significant attention for its wide applications in authentication and surveillance systems. An optical motion capture system (OMCS) can dynamically capture the high-precision three-dimensional locations of optical trackers placed on a human body, but its potential for gait recognition has not been studied in existing works. On the other hand, a typical OMCS can only support one subject at a time, which limits its capability and efficiency. In this paper, our goals are to investigate the performance of OMCS-based gait recognition and to realize gait recognition in an OMCS such that it can support multiple subjects at the same time. We develop a gait recognition method based on decision fusion that comprises four steps: feature extraction, unreliable feature calibration, classification of single motion frames, and decision fusion of multiple motion frames. We use a kernel extreme learning machine (KELM) for single-frame classification, and in particular we propose a reliability weighted sum (RWS) decision fusion method to combine the fuzzy decisions of the motion frames. We demonstrate the performance of the proposed method using walking gait data collected from 76 participants. The results show that KELM significantly outperforms support vector machines (SVM) and random forests in the single-frame classification task, and that the proposed RWS decision fusion rule achieves better fusion accuracy than conventional fusion rules. Our results also show that, with 10 motion trackers placed on lower-body locations, the proposed method can achieve 100% validation accuracy with fewer than 50 gait motion frames.
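The exact form of the paper's RWS rule is not given in the abstract, but a generic reliability-weighted-sum fusion of per-frame fuzzy decisions might look like the sketch below. The function name, reliability weights, and probability vectors are all assumptions for illustration.

```python
import numpy as np

def rws_fusion(frame_probs, reliabilities):
    """Generic reliability-weighted-sum fusion sketch: combine per-frame
    class-probability ("fuzzy decision") vectors using reliability weights."""
    probs = np.asarray(frame_probs, float)   # shape (n_frames, n_classes)
    w = np.asarray(reliabilities, float)
    w = w / w.sum()                          # normalize reliabilities to sum to 1
    fused = w @ probs                        # weighted sum over frames
    return int(fused.argmax()), fused

# Two hypothetical frames voting over three candidate subjects:
# frame 0 is considered much more reliable than frame 1.
probs = [[0.6, 0.3, 0.1],
         [0.2, 0.5, 0.3]]
label, fused = rws_fusion(probs, reliabilities=[0.9, 0.1])
```

The design intent of such a rule is that noisy or occluded frames contribute less to the final identity decision than clean ones, which is what lets fusion over many frames outperform any single-frame classifier.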


Author(s):  
Taisuke Ito ◽  
Yuichi Ota

AYUMI EYE is an accelerometer-based gait analysis device that measures the 3D accelerations of the human trunk. This study investigated the measurement accuracy of the AYUMI EYE hardware, as well as the accuracy of its gait cycle extraction program, via simultaneous measurements using the AYUMI EYE, ground reaction force (GRF) measurements, and an optical motion capture system (VICON). The study was conducted with four healthy individuals as participants. The gait data were obtained by simulating four different patterns for three trials each: normal walking, anterior-tilt walking, hemiplegic walking, and shuffling walking. The AYUMI EYE and VICON showed good agreement for both the acceleration and displacement data. The durations of subsequent stride cycles calculated using the AYUMI EYE and the GRF data were in good agreement based on the calculated cross-correlation coefficients (r = 0.896, p < 0.05), and the accuracy of these results was sufficient.
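Agreement between stride-cycle durations from the two systems can be quantified with the Pearson correlation coefficient (equivalent to the zero-lag normalized cross-correlation of the mean-removed series). The durations below are hypothetical, not the study's data.

```python
import numpy as np

def stride_agreement(durations_a, durations_b):
    """Pearson r between stride-cycle durations from two systems."""
    return float(np.corrcoef(durations_a, durations_b)[0, 1])

# Hypothetical stride durations (s) from the accelerometer and the GRF data.
ayumi = [1.00, 1.05, 0.98, 1.10, 1.02]
grf   = [1.01, 1.07, 0.99, 1.11, 1.03]
r = stride_agreement(ayumi, grf)
```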


Proceedings ◽  
2020 ◽  
Vol 49 (1) ◽  
pp. 10 ◽  
Author(s):  
Tomohito Wada ◽  
Ryu Nagahara ◽  
Sam Gleadhill ◽  
Tatsuro Ishizuka ◽  
Hayato Ohnuma ◽  
...  

The purpose of this study was to elucidate pelvic orientation angles using a single lower-back-mounted inertial sensor during sprinting. A single inertial sensor was attached to each sprinter's lower back and used to measure continuous pelvic movements, including pelvic obliquity (roll), anterior-posterior tilt (pitch), and rotation (yaw), during sprinting from a straight section into a bend. The pelvic orientation angles were estimated from the three-dimensional sensor orientation using a sensor fusion algorithm. Absolute angles derived from the sensor were compared with angles obtained from an optical motion capture system over a 15 m length. The root mean squared error between the sensor and motion capture data was 4.1° for roll, 2.8° for pitch, and 3.6° for yaw. Therefore, the sensor was comparable to the motion capture system for tracking pelvic angle changes. The inertial sensor is now supported as a valid tool to measure movements of the pelvis during sprinting.
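The sensor-versus-mocap comparison reduces to a root mean squared error over each angle time series. A minimal sketch with synthetic traces (the values are illustrative, not the study's data):

```python
import numpy as np

def rmse(estimate, reference):
    """Root mean squared error between two angle time series (degrees)."""
    e = np.asarray(estimate, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(e ** 2)))

# Synthetic traces: a sensor trace offset from the mocap trace by a constant
# 3 degrees gives an RMSE of exactly 3.
mocap_roll  = np.linspace(-10.0, 10.0, 50)
sensor_roll = mocap_roll + 3.0
err = rmse(sensor_roll, mocap_roll)
```

Run per axis, this yields the reported 4.1° (roll), 2.8° (pitch), and 3.6° (yaw) style of summary figures.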

