Dance motion capture and composition using multiple RGB and depth sensors

2017 ◽  
Vol 13 (2) ◽  
pp. 155014771769608 ◽  
Author(s):  
Yejin Kim

Dynamic human movements such as dance are difficult to capture without external markers due to the high complexity of a dancer’s body. This article introduces a marker-free motion capture and composition system for dance motion that uses multiple RGB and depth sensors. Our motion capture system utilizes a set of high-speed RGB and depth sensors to generate skeletal motion data from an expert dancer. During motion acquisition, a skeleton tracking method based on a particle filter estimates the motion parameters for each frame from a sequence of color images and depth features retrieved from the sensors. The expert motion data are archived in a database. The authoring methods in our composition system automate most of the motion editing process for general users by providing an online motion search with an input posture and then performing motion synthesis on an arbitrary motion path. Using the proposed system, we demonstrate that various dance performances can be composed in an intuitive and efficient way on client devices such as tablets and kiosk PCs.
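
The particle-filter tracking step described above can be pictured with a short sketch. The following Python snippet is a minimal, generic predict-update-resample cycle over pose-parameter particles; the likelihood function, noise level, and pose parameterization are placeholder assumptions, not the authors' actual formulation.

import numpy as np

def particle_filter_step(particles, weights, observation, likelihood_fn,
                         motion_noise=0.02):
    """One predict-update-resample cycle over pose-parameter particles."""
    # Predict: diffuse each pose hypothesis with Gaussian motion noise.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)

    # Update: weight each hypothesis by how well it explains the color/depth
    # observation for the current frame (likelihood_fn must return > 0).
    weights = np.array([likelihood_fn(p, observation) for p in particles])
    weights /= weights.sum()

    # Resample: draw a new particle set proportional to the weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))

    # The pose estimate for the frame is the mean of the resampled particles.
    estimate = particles.mean(axis=0)
    return particles, weights, estimate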

2018 ◽  
Vol 7 (3.34) ◽  
pp. 521
Author(s):  
Yejin Kim

Background/Objectives: Human movements in dance are difficult to train without taking an actual class. In this paper, an interactive dance guidance system is proposed to teach dance motions using examples. Methods/Statistical analysis: In the proposed system, a set of example motions is captured from experts through a marker-free motion capture method that consists of multiple Kinect cameras. The captured motions are calibrated and optimally reconstructed into a motion database. For the efficient exchange of motion data between a student and an instructor, a posture-based motion search and multi-mode views are provided for online lessons. Findings: To capture accurate example motions, the proposed system solves the joint occlusion problem by using multiple Kinect cameras. An iterative closest point (ICP) method is used to unify the multiple camera data into the same coordinate system, which generates an output motion in real time. Compared to a commercial system, our system can capture various dance motions with an average accuracy of over 85%, as shown in the experimental results. Using touch-screen devices, a student can browse a desired motion from the database to start a dance practice and send their own motion to an instructor for feedback. By conducting online dance lessons such as ballet, K-pop, and traditional Korean dance, our experimental results show that the participating students can improve their dance skills over a given period. Improvements/Applications: Our system is applicable to any student who wants to learn dance motions without taking an actual class and to receive online feedback from a distant instructor.
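
The ICP step used to bring the multiple Kinect coordinate systems into a common frame can be sketched as follows. This Python example is a minimal point-to-point ICP with an SVD (Kabsch) rigid-transform solve; the convergence handling and lack of outlier rejection are simplifications, not the system's exact implementation.

import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Estimate rotation R and translation t mapping source points onto target points."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        # Find the closest target point for every source point.
        _, idx = tree.query(src)
        matched = target[idx]

        # Solve the rigid transform with the SVD (Kabsch) method.
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c

        # Apply the incremental transform and accumulate it.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total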


Dance is a bodily activity that unites body movements, art, and meaning. Dance performances are sometimes staged only at certain times, so they are not well known to the public, especially to today's young people, and this applies in particular to classical dance. Young audiences are more interested in modern culture as times and technology advance. The lack of public knowledge about these dance movements encouraged the researchers to conduct motion capture research on dance using the Kinect sensor. This paper applies motion capture to the movements of Golek Menak, one of the classical dances of Indonesia. Conventional motion capture techniques require special input devices such as cameras able to capture motion at up to 2000 frames per second. The Kinect provides an RGB camera and a depth sensor; its advantage over other tools is that it can capture and track the movements of three-dimensional (3D) objects (humans and animals) accurately, without markers, under certain lighting conditions by utilizing its depth sensor. The results show that the Kinect sensor is able to perform motion capture (MOCAP) of dance movements accurately, producing a correct body frame together with the movement of dance props. These results can be developed further in various fields, one of which is the creation of motion characters for animation. In this study, the synchronized dance motion data obtained with motion capture were developed into an animation of the dance movements.
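
As a small illustration of how captured 3D joint positions can be turned into values usable for animating a character, the Python sketch below computes the angle at a joint from three joint positions in a single frame; the joint names and coordinates are illustrative and are not part of the Kinect SDK or of this study's pipeline.

import numpy as np

def joint_angle(parent, joint, child):
    """Angle (degrees) at `joint` formed by the parent->joint and joint->child bones."""
    v1 = np.asarray(parent, float) - np.asarray(joint, float)
    v2 = np.asarray(child, float) - np.asarray(joint, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: elbow angle from shoulder, elbow, and wrist positions in one frame.
print(joint_angle([0.0, 1.4, 0.0], [0.3, 1.1, 0.0], [0.5, 0.9, 0.2]))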


Author(s):  
Natapon Pantuwong

Recently, motion data have become increasingly available for creating computer animation. Motion capture is one of the systems that can generate such motion data. However, it is not practical to capture large amounts of motion due to the cost of the motion capture technique and the difficulty of its post-processing. This paper presents a timeline-based motion-editing system that enables users to perform motion-editing tasks easily and quickly. A motion sequence is summarized and displayed in the 3D environment as a set of editable icons. Users can edit the motion data by performing a sequence of operations on a single key frame or over an interval. The recorded sequence is then propagated automatically to a set of target key frames or intervals, which can be either user defined or system defined. In addition, we provide a simple interaction method for manipulating the duration of specific intervals in the motion data. Methods for combining and synchronizing two different motions are also provided in this system. In contrast to previous work that allows only temporal editing, the proposed system provides editing functions for both geometric and temporal editing. We describe a user study that demonstrated the efficiency of the proposed system.
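
As a rough illustration of the propagation and retiming ideas, the Python sketch below applies an offset recorded on one key frame to a set of target key frames and stretches an interval by resampling; the one-dimensional curve layout is an assumption made for brevity, not the system's actual data model.

import numpy as np

def propagate_offset(curve, source_key, target_keys, offset):
    """Apply the offset recorded at one key frame to a set of target key frames."""
    edited = curve.copy()
    for k in [source_key, *target_keys]:
        edited[k] += offset
    return edited

def stretch_interval(curve, start, end, scale):
    """Retime frames [start, end] of a 1-D value curve by `scale` using linear resampling."""
    segment = curve[start:end + 1]
    new_len = max(2, int(round(len(segment) * scale)))
    xs = np.linspace(0, len(segment) - 1, new_len)
    resampled = np.interp(xs, np.arange(len(segment)), segment)
    return np.concatenate([curve[:start], resampled, curve[end + 1:]])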


Author(s):  
Jibum Jung et al.

Development of wearable robots is accelerating. Walking robots mimic human behavior and must operate without accidents. Human motion data are needed to train these robots. We developed a system for extracting human motion data and displaying them graphically. We extracted motion data using a Perception Neuron motion capture system and used the Unity engine for the simulation. Several experiments were performed to demonstrate the accuracy of the extracted motion data. Of the various methods used to collect human motion data, markerless motion capture is highly inaccurate, while optical motion capture is very expensive, requiring several high-resolution cameras and a large number of markers. Motion capture using a magnetic field sensor is subject to environmental interference. Therefore, we used an inertial motion capture system. Each movement sequence involved four and was repeated 10 times. The data were stored and standardized. The motions of three individuals were compared to those of a reference person; the similarity exceeded 90% in all cases. Our rehabilitation robot accurately simulated human movements: individually tailored wearable robots could be designed based on our data. Safe and stable robot operation can be verified in advance via simulation. Walking stability can be increased using walking robots trained via machine learning algorithms.
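
As an illustration of how such a comparison might be scored, the following Python sketch computes a simple per-frame similarity between a subject's joint-angle sequence and the reference person's; the metric actually behind the reported >90% figure is not specified here and may differ.

import numpy as np

def motion_similarity(subject, reference):
    """Mean per-frame similarity between two (frames x joints) joint-angle arrays in degrees."""
    subject = np.asarray(subject, float)
    reference = np.asarray(reference, float)
    assert subject.shape == reference.shape
    # Normalized absolute angle error per joint, mapped to a 0..1 similarity score.
    err = np.abs(subject - reference) / 180.0
    return float((1.0 - err).mean())

# Example with two short synthetic sequences (10 frames, 3 joints).
ref = np.random.uniform(0, 90, (10, 3))
print(motion_similarity(ref + np.random.normal(0, 2, ref.shape), ref))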


2021 ◽  
Vol 62 (9) ◽  
Author(s):  
Patrick M. Seltner ◽  
Sebastian Willems ◽  
Ali Gülhan ◽  
Eric C. Stern ◽  
Joseph M. Brock ◽  
...  

The influence of flight attitude on the aerodynamic coefficients and static stability of cylindrical bodies in hypersonic flows is of interest for understanding the re-entry of space debris, meteoroid fragments, launch-vehicle stages, and other rotating objects. Experiments were therefore carried out in the hypersonic wind tunnel H2K at the German Aerospace Center (DLR) in Cologne. A free-flight technique was employed in H2K, which enables a continuous rotation of the cylinder without any sting interference over a broad angular range from 0° to 90°. A high-speed stereo-tracking technique measured the model motion during free flight, and high-speed schlieren provided documentation of the flow topology. Aerodynamic coefficients were determined in careful post-processing based on the measured six-degrees-of-freedom (6DoF) motion data. Numerical simulations with NASA’s flow solvers Cart3D and US3D were performed for comparison purposes. The experimental and numerical data show good agreement. The inclination of the cylinder strongly affects both the flowfield and the aerodynamic loads. Experiments and simulations with concave cylinders showed marked differences in aerodynamic behavior due to the presence of a shock–shock interaction (SSI) near the middle of the model.
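
As a much-simplified illustration of this kind of post-processing, the Python sketch below recovers a drag coefficient from a measured free-flight deceleration via the drag-equation balance; the single-axis treatment and the numbers are illustrative assumptions only and far simpler than the full 6DoF data reduction used at H2K.

def drag_coefficient(mass, deceleration, rho, velocity, ref_area):
    """C_D from the balance m*a = 0.5 * rho * V^2 * A * C_D."""
    return 2.0 * mass * deceleration / (rho * velocity**2 * ref_area)

# Illustrative values (kg, m/s^2, kg/m^3, m/s, m^2), not measured H2K data.
print(drag_coefficient(mass=0.05, deceleration=180.0, rho=0.02,
                       velocity=1000.0, ref_area=5e-4))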


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Debo Qi ◽  
Chengchun Zhang ◽  
Jingwei He ◽  
Yongli Yue ◽  
Jing Wang ◽  
...  

The fast swimming speed, flexible cornering, and high propulsion efficiency of diving beetles are achieved primarily by their two powerful hind legs. Unlike other aquatic organisms such as turtles, jellyfish, fish, and frogs, the diving beetle can complete a retreating motion without turning around, and the turning radius of this propulsion mode is small. However, most bionic vehicles do not have these advantages, so studying this propulsion method is useful for the design of bionic robots. In this paper, swimming videos of the diving beetle, including forward, turning, and retreating motions, were captured by two synchronized high-speed cameras and analyzed via SIMI Motion. The analysis revealed that the swimming speed initially increased quickly to a maximum at 60% of the power stroke and then decreased. During the power stroke, the diving beetle stretched its tibiae and tarsi, the bristles on both sides of which were shaped like paddles, to maximize the cross-sectional area against the water and achieve maximum thrust. During the recovery stroke, the diving beetle rotated its tarsi and folded the bristles to minimize the cross-sectional area and reduce the drag force. A turning motion (a right turn of about 90 degrees) took the diving beetle only one motion cycle to complete. During the retreating motion, the average acceleration was close to 9.8 m/s² in the first 25 ms. Finally, based on the diving beetle’s hind-leg movement pattern, a kinematic model was constructed, and from this model and the measured joint angles, the motion trajectories of the hind legs were obtained using MATLAB. Given the advantages of this propulsion method, it may become a new bionic propulsion method, and the motion data and kinematic model of the hind legs will be helpful in the design of bionic underwater unmanned vehicles.
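
As a simplified illustration of such a kinematic model, the Python sketch below computes the tip trajectory of a two-segment planar leg from joint-angle time series; the segment lengths and angle profiles are illustrative assumptions, not the paper's measured data (which were processed in MATLAB).

import numpy as np

def leg_tip_trajectory(theta_femur, theta_tibia, l_femur=1.0, l_tibia=1.2):
    """Tip positions of a two-segment planar leg given joint-angle time series (radians)."""
    theta_femur = np.asarray(theta_femur)
    theta_tibia = np.asarray(theta_tibia)
    # Planar forward kinematics: sum the contributions of both segments.
    x = l_femur * np.cos(theta_femur) + l_tibia * np.cos(theta_femur + theta_tibia)
    y = l_femur * np.sin(theta_femur) + l_tibia * np.sin(theta_femur + theta_tibia)
    return np.stack([x, y], axis=1)

# One synthetic stroke cycle of joint angles over 100 frames.
t = np.linspace(0, 2 * np.pi, 100)
trajectory = leg_tip_trajectory(0.6 * np.sin(t), 0.8 + 0.4 * np.cos(t))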


Author(s):  
Alireza Marzbanrad ◽  
Jalil Sharafi ◽  
Mohammad Eghtesad ◽  
Reza Kamali

This is a report on the design, construction, and control of “Ariana-I”, an underwater remotely operated vehicle (ROV) built in the Shiraz University Robotics Lab. The ROV is equipped with roll, pitch, heading, and depth sensors, which provide sufficient feedback signals to actuate the system in six degrees of freedom. Although its center of gravity and center of buoyancy are positioned so that the Ariana-I ROV is self-stabilized, the combination of sensors and speed-controlled drivers provides additional stability without operator involvement. Video vision is provided with an Ethernet link to the operation unit. Control commands and sensor feedback are transferred over an RS485 bus; the video signal, water-leakage alarm, and battery-charging wires are carried on the same multi-core cable. While simple PI controllers improve the pitch and roll stability of the system, various control schemes can be applied to the heading to track different paths. The net weight of the ROV out of water is about 130 kg, with frame dimensions of 130 × 100 × 65 cm. The Ariana-I ROV is designed so that it can be equipped with different tools, such as mechanical arms, thanks to a microprocessor-based control system provided with two directional high-speed communication cables for online vision and the operation unit.
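
As an illustration of the simple PI loop mentioned for pitch and roll stabilization, the Python sketch below shows a generic PI controller; the gains, time step, and interface are placeholders rather than Ariana-I's tuned values.

class PIController:
    """Generic proportional-integral controller."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        # Accumulate the error over time and return the actuator command.
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Example: drive measured pitch (degrees) back toward zero at 50 Hz.
pitch_controller = PIController(kp=4.0, ki=0.5, dt=0.02)
thruster_cmd = pitch_controller.update(setpoint=0.0, measurement=2.5)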


2021 ◽  
Author(s):  
Adithya Balasubramanyam ◽  
Ashok Kumar Patil ◽  
Bharatesh Chakravarthi ◽  
Jaeyeong Ryu ◽  
Young Ho Chai

2018 ◽  
Vol 89 (16) ◽  
pp. 3401-3410 ◽  
Author(s):  
Hong Liu ◽  
R Hugh Gong ◽  
Pinghua Xu ◽  
Xuemei Ding ◽  
Xiongying Wu

Textile motion in a front-loading washer has been characterized via video capture, and a processing system has been developed based on image geometric moments. Textile motion significantly contributes to the mass transfer of the wash solution in porous materials, particularly in the radial direction (perpendicular to the rotational axis of the inner drum). In this paper, the velocity profiles and residence time distributions of tracer textiles are investigated to characterize the textile dynamics in a front-loading washer. The results show that the textile motion varies significantly with the water volume and rotational speed and that the motion path follows certain patterns. Two regions are observed in the velocity plots: a passive region, where the textile moves up with low velocity, and an active region, where the textile falls down with relatively high speed. A stagnant area is observed in the residence time profile; it corresponds to the passive region in the velocity profile. The stagnant area affects the mechanical action and thus influences washing efficiency and textile performance. These findings on textile dynamics will help in the development of better front-loading washers.
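
As an illustration of tracking a tracer textile with image geometric moments, the Python sketch below computes the centroid of a segmented video frame using OpenCV's moment function; the segmentation step and the authors' actual processing pipeline are not shown here and may differ.

import cv2
import numpy as np

def tracer_centroid(binary_mask):
    """Centroid (x, y) of the segmented tracer textile from zeroth- and first-order moments."""
    # binary_mask is a single-channel uint8 image where the tracer pixels are nonzero.
    m = cv2.moments(binary_mask, binaryImage=True)
    if m["m00"] == 0:
        return None                      # no tracer visible in this frame
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

# Centroids over consecutive frames yield the velocity profile by finite differences.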


Author(s):  
M. Necip Sahinkaya ◽  
Yanzhi Li

Inverse dynamic analysis of a three-degree-of-freedom parallel mechanism driven by three electrical motors is carried out to study the effect of motion speed on the system dynamics and control input requirements. The availability of inverse dynamics models offers many advantages, but controllers based on real-time inverse dynamic simulations are not practical for many applications due to computational limitations. An off-line linearisation of the system and error dynamics based on the inverse dynamic analysis is developed. It is shown that accurate linear models can be obtained even at high motion speeds, eliminating the need for computationally intensive inverse dynamics models. A point-to-point motion path for the mechanism platform is formulated using a third-order exponential function. It is shown that the linearised model parameters vary significantly at high motion speeds; hence, adaptive controllers are necessary for high performance.
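
As an illustration, the Python sketch below evaluates one plausible smooth point-to-point profile of the exponential type mentioned above; the exact functional form and parameters are assumptions, not necessarily the authors' formulation.

import numpy as np

def exp3_path(t, x0, xf, tau):
    """Position along a smooth point-to-point path from x0 to xf; tau sets the motion speed."""
    s = 1.0 - np.exp(-(t / tau) ** 3)    # rises smoothly from 0 toward 1
    return x0 + (xf - x0) * s

# Example: a 100 mm platform move over 2 s, sampled at 200 points (illustrative units).
t = np.linspace(0.0, 2.0, 200)
x = exp3_path(t, x0=0.0, xf=0.1, tau=0.5)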

