Volumetric Motion Magnification: Subtle Motion Extraction from 4D Data

Measurement ◽  
2021 ◽  
pp. 109211
Author(s):  
Matthew Southwick ◽  
Zhu Mao ◽  
Christopher Niezrecki

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6248
Author(s):  
Jau-Yu Chou ◽  
Chia-Ming Chang

Vibrational measurements play an important role in structural health monitoring, e.g., modal extraction and damage diagnosis. Moreover, the condition of civil structures can largely be assessed from displacement responses. However, installing displacement transducers between the ground and the floors of real-world buildings is unrealistic due to the lack of reference points and the scale and complexity of the structures. Alternatively, structural displacements can be acquired using computer vision-based motion extraction techniques. These extracted motions not only provide vibrational responses but are also useful for identifying modal properties. In this study, three methods, namely optical flow with the Lucas–Kanade method, digital image correlation (DIC) with bilinear interpolation, and in-plane phase-based motion magnification using the Riesz pyramid, are introduced and experimentally verified on a four-story steel-frame building using a commercially available camera. First, the three displacement-acquisition methods are introduced in detail. Next, displacements are obtained experimentally with these methods and compared to those measured by linear variable displacement transducers. These displacement responses are then converted into modal properties by system identification. As seen in the experimental results, the DIC method has the lowest average root mean squared error (RMSE) of 1.2371 mm among the three methods. Although the phase-based motion magnification method has a larger RMSE of 1.4132 mm due to variations in edge detection, it is capable of providing full-field mode shapes over the building.
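As an illustration of the optical-flow branch of this comparison, the sketch below extracts a horizontal displacement time history from video using Lucas–Kanade tracking in OpenCV. The video path, feature-detection parameters, and pixel-to-millimeter scale factor are assumptions for illustration, not values reported in the paper.

```python
# Minimal sketch: vision-based displacement extraction with Lucas-Kanade
# optical flow (OpenCV). Parameters and the calibration factor are assumed.
import cv2
import numpy as np

cap = cv2.VideoCapture("building_test.mp4")    # hypothetical recording
ok, frame0 = cap.read()
prev_gray = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)

# Detect corner-like features to track (e.g., on a target fixed to one floor).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=10)
ref_x = pts[:, 0, 0].copy()                    # reference x positions (pixels)
scale_mm_per_px = 0.5                          # assumed calibration factor
displacement = []                              # horizontal displacement (mm)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    # Mean horizontal motion relative to the first frame, in millimeters.
    dx_px = np.mean(new_pts[good, 0, 0] - ref_x[good])
    displacement.append(dx_px * scale_mm_per_px)
    prev_gray, pts = gray, new_pts

# The displacement history can then be compared against an LVDT signal,
# e.g. rmse = np.sqrt(np.mean((np.array(displacement) - lvdt) ** 2)).
```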


2021 ◽  
Vol 13 (4) ◽  
pp. 796
Author(s):  
Long Zhang ◽  
Xuezhi Yang ◽  
Jing Shen

The locations and breathing signals of people in disaster areas are vital information for search and rescue missions when prioritizing operations to save more lives. To detect living people who are lying on the ground and covered with dust, debris, or ashes, a motion magnification-based method has recently been proposed. This method estimates the locations and breathing signals of people from a drone video by assuming that only human breathing-related motions exist in the video. In natural disasters, however, background motions, such as trees and grass swaying in the wind, are mixed with human breathing, which violates this assumption and results in misleading or even missing life-sign locations. Life signs in disaster areas are therefore challenging to detect because of these undesired background motions. Note that human breathing is a natural physiological phenomenon and a periodic motion with a steady peak frequency, whereas background motions involve complex space-time behaviors whose peak frequencies vary over time. In this work, we therefore analyze the frequency properties of motions and model a frequency variability feature that extracts only human breathing while eliminating irrelevant background motions in the video, easing the detection and localization of life signs. The proposed method was validated on both drone and camera videos recorded in the wild. The average precision of our method was 0.94 for drone videos and 0.92 for camera videos, higher than those of the compared methods, demonstrating that our method is more robust and accurate against background motions. The implications and limitations of the frequency variability feature are discussed.
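To illustrate the idea of a frequency variability feature, the following sketch classifies a 1-D motion signal as breathing-like when its per-window peak frequency stays inside an assumed breathing band and varies little over time. The window length, band limits, and variability threshold are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch: frequency-variability test separating periodic breathing
# motion from time-varying background motion. Thresholds are assumed.
import numpy as np
from scipy.signal import periodogram

def peak_frequency(signal, fs):
    """Dominant frequency of a 1-D motion signal."""
    freqs, power = periodogram(signal, fs=fs)
    return freqs[np.argmax(power[1:]) + 1]    # skip the DC bin

def is_breathing_like(signal, fs, win_s=10.0, band=(0.1, 0.7), max_std=0.05):
    """True if the per-window peak frequency stays in the breathing band
    and varies little over time (low frequency variability)."""
    win = int(win_s * fs)
    peaks = np.array([peak_frequency(signal[i:i + win], fs)
                      for i in range(0, len(signal) - win + 1, win)])
    in_band = np.all((peaks >= band[0]) & (peaks <= band[1]))
    stable = np.std(peaks) <= max_std
    return bool(in_band and stable)

# Example: a 0.3 Hz breathing-like signal vs. drifting background motion.
fs = 30.0                                      # 30 fps video
t = np.arange(0, 30, 1 / fs)                   # 30 s clip
breathing = np.sin(2 * np.pi * 0.3 * t)
background = np.sin(2 * np.pi * (0.2 + 0.05 * t) * t)   # chirping sway
print(is_breathing_like(breathing, fs))        # expected: True
print(is_breathing_like(background, fs))       # expected: False
```

In a full pipeline this test would be applied per pixel (or per region) to the temporal motion signals recovered from the video, keeping only locations that pass it as candidate life signs.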


Author(s):  
Kwok-Yun Yeung ◽  
Tsz-Ho Kwok ◽  
Charlie C. L. Wang

Recent per-frame motion extraction methods can generate the skeleton of a human motion in real time with the help of RGB-D cameras such as the Kinect. This provides an economical device for supplying human motion as input to real-time applications. Because it is generated from a single-view image plus depth information, the extracted skeleton usually suffers from unwanted vibration, bone-length variation, self-occlusion, etc. This paper presents an approach that overcomes these problems by synthesizing the skeletons generated by duplex Kinects, which capture the human motion from different views. The major technical difficulty of this synthesis comes from the inconsistency of the two skeletons. Our algorithm is formulated as a constrained optimization that uses the bone lengths as hard constraints and the tradeoff between inconsistent joint positions as soft constraints. Schemes are developed to detect and re-position the problematic joints generated by the per-frame method from the duplex Kinects. As a result, we obtain an easy, cheap, and fast approach that can improve the skeleton of human motion at an average speed of 5 ms per frame.
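The sketch below illustrates the general idea of fusing two noisy per-view joint estimates under bone-length constraints with an off-the-shelf constrained optimizer. The toy three-joint skeleton, confidence weights, and bone lengths are hypothetical and do not reproduce the paper's formulation or solver.

```python
# Minimal sketch: fuse two noisy skeletons with bone lengths as hard
# equality constraints and per-view joint distances as a soft cost.
import numpy as np
from scipy.optimize import minimize

bones = [(0, 1), (1, 2)]               # parent-child joint pairs (toy chain)
bone_len = np.array([0.30, 0.25])      # assumed known bone lengths (meters)

# Joint positions (3 joints x 3 coords) estimated by each Kinect view.
skel_a = np.array([[0.00, 0.00, 0.00], [0.00, 0.31, 0.00], [0.00, 0.58, 0.02]])
skel_b = np.array([[0.01, 0.00, 0.00], [0.02, 0.28, 0.00], [0.05, 0.52, 0.00]])
w_a, w_b = 0.6, 0.4                    # per-view confidence weights (assumed)

def soft_cost(x):
    """Weighted distance to both per-view skeletons (soft constraints)."""
    p = x.reshape(-1, 3)
    return w_a * np.sum((p - skel_a) ** 2) + w_b * np.sum((p - skel_b) ** 2)

def bone_residuals(x):
    """Hard constraints: fused bone lengths must match the known lengths."""
    p = x.reshape(-1, 3)
    return np.array([np.linalg.norm(p[j] - p[i]) - l
                     for (i, j), l in zip(bones, bone_len)])

x0 = (0.5 * (skel_a + skel_b)).ravel()          # initialize at the average
res = minimize(soft_cost, x0,
               constraints={"type": "eq", "fun": bone_residuals})
fused = res.x.reshape(-1, 3)                    # fused, length-consistent skeleton
print(fused)
```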


Author(s):  
Luc Florack ◽  
Bart Janssen ◽  
Frans Kanters ◽  
Remco Duits

Author(s):  
A. Jonathan McLeod ◽  
John S. H. Baxter ◽  
Uditha Jayarathne ◽  
Stephen Pautler ◽  
Terry M. Peters ◽  
...  
