Real-Time, Curvature-Sensitive Surface Simplification Using Depth Images

2018
Vol 20 (6)
pp. 1489-1498
Author(s): Kanchan Bahirat, Suraj Raghuraman, Balakrishnan Prabhakaran
Sensors
2015
Vol 15 (6)
pp. 12410-12427
Author(s): Hanguen Kim, Sangwon Lee, Dongsung Lee, Soonmin Choi, Jinsun Ju, ...

Author(s): Pooja Verlani, Aditi Goswami, P.J. Narayanan, Shekhar Dwivedi, Sashi Kumar Penta

Author(s): Jamie Shotton, Andrew Fitzgibbon, Mat Cook, Toby Sharp, Mark Finocchio, ...

Author(s): Jiazhen Pang, Yuan Li, Jie Zhang, Jianfeng Yu

Abstract Manual work is a weak link in intelligent manufacturing, yet it plays an important role in highly customized, multi-variety assembly. Assisted by intelligent assembly technologies such as augmented reality, a manual worker can be integrated into the cyber-physical system to improve efficiency and reduce errors, which is of great engineering significance for assembly in Industry 4.0. Assembly recognition is the first step of progress analysis: the assembly advances through predictable stages that can be matched against the digital model to provide recognition constraints. Therefore, based on the similarity between spatial increment information and the part model, a real-time assembly recognition method is proposed in this paper. First, depth images from a multi-camera system capture the assembly scene. Then, by comparison with the previous assembly scene, spatial increment information quantitatively represents the newly assembled part. Both the spatial increment information and the digital model are described with distance distributions. Finally, using the Earth mover's distance algorithm, matching the spatial increment information against the part models identifies which part has just been assembled, realizing real-time assembly recognition. In a case study, an assembly process for a 3D-printed assembly corresponding to its digital model was used to validate the feasibility of the real-time assembly recognition method.
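The core matching step described in the abstract, comparing distance distributions with the Earth mover's distance, can be sketched roughly as follows. This is a minimal illustration, not the authors' pipeline: the point clouds, the D2-style pairwise-distance descriptor, and the part names ("cube", "bar") are all hypothetical stand-ins, and SciPy's 1-D `wasserstein_distance` serves as the Earth mover's distance.

```python
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D Earth mover's distance

rng = np.random.default_rng(0)

def distance_distribution(points, n_pairs=2000):
    """Sample pairwise point distances as a simple shape descriptor."""
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    return np.linalg.norm(points[i] - points[j], axis=1)

# Hypothetical part models as point clouds: a unit cube and an elongated bar.
cube = rng.uniform(0, 1, size=(500, 3))
bar = rng.uniform(0, 1, size=(500, 3)) * np.array([4.0, 0.5, 0.5])

# Spatial increment observed between two depth frames (here: a noisy bar).
increment = bar + rng.normal(0, 0.02, size=bar.shape)

models = {"cube": distance_distribution(cube),
          "bar": distance_distribution(bar)}
obs = distance_distribution(increment)

# The part model whose distance distribution is closest (in EMD)
# to the observed increment is reported as the part just assembled.
best = min(models, key=lambda name: wasserstein_distance(models[name], obs))
print(best)  # the bar model is the closer match
```

The distance distribution makes the comparison invariant to the increment's pose, which is presumably why the authors describe both the increment and the model this way before matching.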


2018
Vol 2018
pp. 1-22
Author(s): Pongsagorn Chalearnnetkul, Nikom Suvonvorn

Vision-based action recognition encounters several challenges in practice, including recognition of the subject from any viewpoint, real-time processing, and privacy in real-world settings. Even recognizing profile-based human actions, a subset of vision-based action recognition, is a considerable challenge in computer vision, yet it forms the basis for understanding complex actions, activities, and behaviors, especially in healthcare applications and video surveillance systems. Accordingly, we introduce a novel method to construct a layer feature model for a profile-based solution that allows the fusion of features from multiview depth images. This model enables recognition from several viewpoints with low complexity at a real-time running speed of 63 fps for four profile-based actions: standing/walking, sitting, stooping, and lying. The experiment on the Northwestern-UCLA 3D dataset resulted in an average precision of 86.40%. On the i3DPost dataset, the experiment achieved an average precision of 93.00%. On the PSU multiview profile-based action dataset, a new multi-viewpoint dataset of profile-based action RGB-D images built by our group, we achieved an average precision of 99.31%.
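The general idea of fusing per-view features from multiview depth images can be sketched in miniature. This is only an assumed illustration of multiview fusion, not the paper's layer feature model: the depth-histogram descriptor, the averaging fusion, the nearest-centroid classifier, and the synthetic "standing"/"lying" frames are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def view_feature(depth_image, bins=16):
    """Simple per-view descriptor: a normalized depth histogram."""
    h, _ = np.histogram(depth_image, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def fused_feature(views):
    """Fuse multiview features by averaging, so the descriptor depends
    less on any single viewpoint."""
    return np.mean([view_feature(v) for v in views], axis=0)

# Synthetic depth frames of two 'actions', each seen from three viewpoints:
# an upright subject occupies far depths, a lying subject near depths.
standing = [rng.uniform(0.6, 0.9, (32, 32)) for _ in range(3)]
lying = [rng.uniform(0.1, 0.4, (32, 32)) for _ in range(3)]

centroids = {"standing": fused_feature(standing),
             "lying": fused_feature(lying)}

# Classify unseen multiview frames by nearest fused centroid.
query = [rng.uniform(0.1, 0.4, (32, 32)) for _ in range(3)]
label = min(centroids,
            key=lambda k: np.linalg.norm(centroids[k] - fused_feature(query)))
print(label)
```

Fusing before classification is what lets a single model cover arbitrary viewpoints, which matches the viewpoint-independence goal the abstract emphasizes.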


2007
Vol 18 (4)
pp. 1531
Author(s): Bao-Quan LIU
