Motion Extraction
Recently Published Documents

TOTAL DOCUMENTS: 65 (five years: 10)
H-INDEX: 10 (five years: 0)

2021
Author(s): Roshan Reddy Upendra, S. M. Kamrul Hasan, Richard Simon, Brian Jamison Wentz, Suzanne M. Shontz, ...

Sensors, 2021, Vol 21 (18), pp. 6248
Author(s): Jau-Yu Chou, Chia-Ming Chang

Vibrational measurements play an important role in structural health monitoring, e.g., modal extraction and damage diagnosis. Moreover, the condition of civil structures can largely be assessed from displacement responses. However, installing displacement transducers between the ground and the floors of real-world buildings is impractical due to the lack of reference points and the scale and complexity of the structures. Alternatively, structural displacements can be acquired using computer-vision-based motion extraction techniques. These extracted motions not only provide vibrational responses but are also useful for identifying modal properties. In this study, three methods, namely optical flow with the Lucas–Kanade method, digital image correlation (DIC) with bilinear interpolation, and in-plane phase-based motion magnification using the Riesz pyramid, are introduced and experimentally verified on a four-story steel-frame building using a commercially available camera. First, the three displacement-acquisition methods are introduced in detail. Next, the displacements obtained from these methods are compared to those measured by linear variable displacement transducers. Moreover, these displacement responses are converted into modal properties through system identification. As seen in the experimental results, the DIC method has the lowest average root mean squared error (RMSE) of 1.2371 mm among the three methods. Although the phase-based motion magnification method has a larger RMSE of 1.4132 mm due to variations in edge detection, it is capable of providing full-field mode shapes over the building.
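The optical-flow branch of this comparison can be illustrated with a short sketch. The example below is a minimal, hypothetical recovery of a displacement time history using OpenCV's pyramidal Lucas–Kanade tracker; the video file name, the pixel-to-millimetre scale factor, and the feature-tracking parameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: extracting a displacement time history from video with
# pyramidal Lucas-Kanade optical flow (OpenCV). Assumes a fixed camera and a
# known pixel-to-millimetre scale factor; SCALE_MM_PER_PX and the input file
# are illustrative, not taken from the study.
import cv2
import numpy as np

SCALE_MM_PER_PX = 0.5                          # assumed calibration factor

cap = cv2.VideoCapture("building_test.mp4")    # hypothetical input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Track strong corner features (e.g., targets visible on each floor).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=10)
ref = pts.copy()                               # reference positions, frame 0
history = []                                   # per-frame displacement in mm

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    # Mean horizontal drift of tracked points relative to the first frame.
    dx_px = (new_pts[good, 0, 0] - ref[good, 0, 0]).mean()
    history.append(dx_px * SCALE_MM_PER_PX)
    prev_gray, pts = gray, new_pts

cap.release()
displacement_mm = np.asarray(history)          # ready for system identification
```

The resulting displacement series could then be passed to a system-identification routine to estimate modal properties, which is the role the vision-based measurements play in the study.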


Author(s): Anwesha Khasnobish, Arindam Ray, Arijit Chowdhury, Smriti Rani, Tapas Chakravarty, ...

2021, Vol 13 (4), pp. 2250
Author(s): Heechan Kim, Soowon Lee

Video captioning is the task of generating a natural-language sentence that describes a video. A video description includes not only words that express the objects in the video but also words that express the relationships between those objects, as well as grammatically necessary words. To reflect this characteristic explicitly in a deep learning model, we propose a multi-representation switching method. The proposed method consists of three components: entity extraction, motion extraction, and textual feature extraction. The multi-representation switching mechanism enables these three components to efficiently extract the information that matters for a given video–description pair. In experiments on the Microsoft Research Video Description dataset, the proposed method recorded scores that exceed the performance of most existing video captioning methods. This result was achieved without any computer-vision or natural-language preprocessing and without any additional loss function. Consequently, the proposed method has high generality and can be extended to various domains in terms of sustainable computing.
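The switching idea can be pictured as a gated fusion of the three feature streams at each decoding step. The PyTorch snippet below is a minimal sketch under that assumption; the layer sizes, the softmax gate, and the GRU decoder are hypothetical choices for illustration, not the paper's exact architecture.

```python
# Sketch of a three-stream "multi-representation switching" decoding step,
# assuming entity, motion, and textual features are already extracted
# (e.g., frame-level CNN features, clip-level 3D-CNN features, and word
# embeddings). All dimensions and the gating scheme are assumptions.
import torch
import torch.nn as nn

class MultiRepresentationSwitch(nn.Module):
    def __init__(self, entity_dim=2048, motion_dim=1024, text_dim=300,
                 hidden_dim=512, vocab_size=10000):
        super().__init__()
        # Project each representation into a shared hidden space.
        self.entity_proj = nn.Linear(entity_dim, hidden_dim)
        self.motion_proj = nn.Linear(motion_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Gate deciding, per decoding step, how much each stream contributes.
        self.switch = nn.Linear(3 * hidden_dim, 3)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, entity_feat, motion_feat, text_feat, hidden):
        e = torch.tanh(self.entity_proj(entity_feat))   # (B, H)
        m = torch.tanh(self.motion_proj(motion_feat))   # (B, H)
        t = torch.tanh(self.text_proj(text_feat))       # (B, H)
        # Softmax weights select which representation drives this word.
        w = torch.softmax(self.switch(torch.cat([e, m, t], dim=-1)), dim=-1)
        fused = w[:, 0:1] * e + w[:, 1:2] * m + w[:, 2:3] * t
        hidden = self.rnn(fused, hidden)
        return self.out(hidden), hidden

# Toy usage with random features for a batch of two videos.
model = MultiRepresentationSwitch()
h = torch.zeros(2, 512)
logits, h = model(torch.randn(2, 2048), torch.randn(2, 1024),
                  torch.randn(2, 300), h)
print(logits.shape)   # torch.Size([2, 10000])
```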


Measurement, 2021, pp. 109211
Author(s): Matthew Southwick, Zhu Mao, Christopher Niezrecki

Author(s): Ikmal Faiq Albakri, Nik Wafiy, Norhaida Mohd Suaib, Mohd Shafry Mohd Rahim, Hongchuan Yu
