Semi-Automatic Video Frame Annotation for Construction Equipment Automation Using Scale-Models

Author(s):  
Carl Borngrund ◽  
Tom Hammarkvist ◽  
Ulf Bodin ◽  
Fredrik Sandin
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Jinyue Zhang ◽  
Lijun Zi ◽  
Yuexian Hou ◽  
Mingen Wang ◽  
Wenting Jiang ◽  
...  

To support smart construction, the digital twin has become a well-recognized concept for virtually representing a physical facility. It is equally important to recognize human actions and the movement of construction equipment in virtual construction scenes. Compared to the extensive research on human action recognition (HAR) that can be applied to identify construction workers, research in the field of construction equipment action recognition (CEAR) is very limited, mainly due to the lack of available datasets with videos showing the actions of construction equipment. The contributions of this research are as follows: (1) the development of a comprehensive video dataset of 2,064 clips with five action types for excavators and dump trucks; (2) a new deep learning-based CEAR approach (known as a simplified temporal convolutional network, or STCN) that combines a convolutional neural network (CNN) with long short-term memory (LSTM, an artificial recurrent neural network), where the CNN extracts image features and the LSTM extracts temporal features from video frame sequences; and (3) a comparison between the proposed approach, a similar CEAR method, and two of the best-performing HAR approaches, namely three-dimensional (3D) convolutional networks (ConvNets) and two-stream ConvNets, to evaluate the performance of STCN and investigate the possibility of directly transferring HAR approaches to the field of CEAR.
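
For readers unfamiliar with this architecture family, the minimal sketch below shows a CNN+LSTM video action classifier in the spirit of the approach described above. It is not the authors' STCN implementation; the ResNet-18 backbone, hidden size, clip length, and the five-class head are illustrative assumptions, with the CNN extracting per-frame features and the LSTM aggregating them over the frame sequence as the abstract describes.

# Minimal sketch of a CNN+LSTM video action classifier (illustrative, not the authors' STCN).
import torch
import torch.nn as nn
import torchvision.models as models

class CnnLstmActionClassifier(nn.Module):
    def __init__(self, num_classes=5, hidden_size=256):
        super().__init__()
        # Per-frame feature extractor: ResNet-18 with its classifier head removed.
        # Pretrained weights could be loaded here; weights=None keeps the sketch offline.
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # (B*T, 512, 1, 1)
        # Temporal model: an LSTM over the sequence of per-frame feature vectors.
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)  # (B, T, 512)
        _, (h_n, _) = self.lstm(feats)   # final hidden state summarizes the clip
        return self.fc(h_n[-1])          # (B, num_classes) action logits

# Example: classify a batch of two 16-frame clips of 224x224 RGB frames.
logits = CnnLstmActionClassifier()(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])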


Author(s):  
Tim Oliver ◽  
Michelle Leonard ◽  
Juliet Lee ◽  
Akira Ishihara ◽  
Ken Jacobson

We are using video-enhanced light microscopy to investigate the pattern and magnitude of forces that fish keratocytes exert on flexible silicone rubber substrata. Our goal is a clearer understanding of the way molecular motors acting through the cytoskeleton co-ordinate their efforts into locomotion at cell velocities up to 1 μm/sec. Cell traction forces were previously observed as wrinkles (Fig. 1) in strong silicone rubber films by Harris (1). These forces are now measurable by two independent means. In the first of these assays, weakly crosslinked films are made, into which latex beads have been embedded (Fig. 2). These films report local cell-mediated traction forces as bead displacements in the plane of the film (Fig. 3), which recover when the applied force is released. Calibrated flexible glass microneedles are then used to reproduce the translation of individual beads. We estimate the force required to distort these films to be 0.5 mdyne/μm of bead movement. Video-frame analysis of bead trajectories is providing data on the relative localisation, dissipation and kinetics of traction forces.
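
As a concrete illustration of how the quoted calibration turns bead tracking into force estimates, the short sketch below converts a tracked bead displacement into a traction-force value. It is not the authors' analysis code; only the 0.5 mdyne/μm calibration comes from the text, while the pixel scale and the example coordinates are hypothetical.

# Sketch: estimate local traction force from a tracked bead displacement (illustrative).
import math

CALIBRATION_MDYNE_PER_UM = 0.5   # film calibration from the microneedle measurement above
PIXEL_SIZE_UM = 0.2              # assumed microscope scale (um per pixel), hypothetical

def traction_force(rest_xy, displaced_xy):
    """Estimate the traction force (mdyne) from a bead's in-plane displacement."""
    dx = (displaced_xy[0] - rest_xy[0]) * PIXEL_SIZE_UM
    dy = (displaced_xy[1] - rest_xy[1]) * PIXEL_SIZE_UM
    displacement_um = math.hypot(dx, dy)
    return CALIBRATION_MDYNE_PER_UM * displacement_um

# Example: a bead tracked from (120, 85) to (132, 80) pixels between frames
# moves about 2.6 um, corresponding to roughly 1.3 mdyne of traction force.
print(f"{traction_force((120, 85), (132, 80)):.2f} mdyne")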


AIAA Journal ◽  
2000 ◽  
Vol 38 ◽  
pp. 1340-1350 ◽  
Author(s):  
E. Lenormand ◽  
P. Sagaut ◽  
L. Ta Phuoc ◽  
P. Comte
