silhouette extraction
Recently Published Documents


TOTAL DOCUMENTS: 66 (five years: 6)

H-INDEX: 9 (five years: 1)

2022 ◽  
pp. 1-12
Author(s):  
Md Rajib M Hasan ◽  
Noor H. S. Alani

Moving (dynamic) object analysis remains an increasingly active research field in computer vision, with work investigating methods for motion tracking, object recognition, pose estimation, and motion evaluation (e.g. in sports science). Many techniques exist for measuring the forces and motion of people, such as force plates that record ground reaction forces in jumping or running sports. In training and commercial settings, an athlete's detailed motion can be captured with motion capture systems based on optical markers attached to the athlete's body and multiple calibrated fixed cameras placed around the capture volume. In some situations, however, it is not practical to attach markers or transducers to the athletes or to the equipment in use, and a purely vision-based approach must rely on the natural appearance of the person or object. During a sporting event, computer vision can also help referees and other officials keep track of incidents and can provide viewers with full coverage and detailed analysis of the event. This research aims to use computer vision methods designed for monocular recordings to measure sports activities such as the high jump, long jump, or running. To indicate the complexity of the problem: a single camera must estimate height at a particular distance using silhouette extraction. Moving object analysis benefits from silhouette extraction, which has been applied in many domains, including sports. This paper comparatively discusses two significant techniques for extracting silhouettes of a moving object (a jumping person) from monocular video data in different scenarios. The results show that the performance of silhouette extraction depends on the quality of the video data used.
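A minimal sketch of one common silhouette-extraction baseline for monocular video, background subtraction with OpenCV's MOG2 model, is shown below. It is illustrative only and not necessarily one of the two techniques compared in the paper; the input path "jump.mp4" is a placeholder.

```python
# Illustrative silhouette extraction from monocular video via background
# subtraction (OpenCV MOG2). Placeholder input file: "jump.mp4".
import cv2
import numpy as np

cap = cv2.VideoCapture("jump.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Drop shadow pixels (MOG2 marks them as 127) and clean up the mask.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep the largest connected component as the jumper's silhouette.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        silhouette = np.zeros_like(mask)
        cv2.drawContours(silhouette, [largest], -1, 255, thickness=cv2.FILLED)
        cv2.imshow("silhouette", silhouette)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```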


2021 ◽  
Vol E104.D (7) ◽  
pp. 992-1001
Author(s):  
Masakazu IWAMURA ◽  
Shunsuke MORI ◽  
Koichiro NAKAMURA ◽  
Takuya TANOUE ◽  
Yuzuko UTSUMI ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1393
Author(s):  
Luis Brandon Garcia-Ortiz ◽  
Jose Portillo-Portillo ◽  
Aldo Hernandez-Suarez ◽  
Jesus Olivares-Mercado ◽  
Gabriel Sanchez-Perez ◽  
...  

This paper proposes the use of the FASSD-Net model for semantic segmentation of human silhouettes. These silhouettes can later be used in applications that require specific characteristics of human interaction observed in video sequences, such as understanding human activities or identifying people; such applications are classified as high-level semantic-understanding tasks. Since semantic segmentation is presented as one solution for human silhouette extraction, convolutional neural networks (CNNs) have a clear advantage over traditional computer vision methods thanks to their ability to learn feature representations suited to the segmentation task. In this work, the FASSD-Net model is used as a novel proposal that promises real-time segmentation of high-resolution images at more than 20 FPS. To evaluate the proposed scheme, we use the Cityscapes database, which consists of varied scenarios representing human interaction with the environment; these scenarios make the semantic segmentation of people difficult to solve and therefore favor the evaluation of our proposal. To adapt the FASSD-Net model to human silhouette segmentation, the indices of the 19 classes traditionally defined for Cityscapes were remapped, leaving only two labels: one for the class of interest, labeled person, and one for the background. The Cityscapes database includes a "human" category composed of the "rider" and "person" classes, where the rider class contains incomplete human silhouettes due to self-occlusions caused by the activity or means of transport used. For this reason, we train the model using only the person class rather than the whole human category. The implementation of the FASSD-Net model with only two classes shows promising results, both qualitatively and quantitatively, for the segmentation of human silhouettes.
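The class remapping described in this abstract can be illustrated with a short sketch, assuming the standard Cityscapes trainId convention in which "person" has trainId 11 (and "rider" trainId 12, deliberately left as background here). This is an illustration of the described label reduction, not the authors' code.

```python
# Collapse a Cityscapes trainId label map (19 classes) to a binary
# person/background mask, keeping only the "person" class (trainId 11).
import numpy as np

PERSON_TRAIN_ID = 11        # Cityscapes trainId for "person"
BACKGROUND, PERSON = 0, 1   # the two remaining labels

def to_binary_mask(label_map: np.ndarray) -> np.ndarray:
    """Map an (H x W) Cityscapes trainId label map to person/background."""
    binary = np.full(label_map.shape, BACKGROUND, dtype=np.uint8)
    binary[label_map == PERSON_TRAIN_ID] = PERSON
    return binary

# Tiny usage example: a dummy 4x4 label map containing one person pixel.
dummy = np.zeros((4, 4), dtype=np.uint8)
dummy[2, 2] = PERSON_TRAIN_ID
print(to_binary_mask(dummy))
```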


Author(s):  
Luis Brandon Garcia-Ortiz ◽  
Gabriel Sanchez-Perez ◽  
Aldo Hernandez-Suarez ◽  
Jesus Olivares-Mercado ◽  
Hector Manuel Perez-Meana ◽  
...  

The intention of this article is to implement a system for the detection and segmentation of human silhouettes. These tasks pose a great challenge in security and innovation, particularly in the automated video surveillance systems developed in recent years, which must understand the presence of humans and their interactions in video sequences, e.g. for Human-Computer Interaction (HCI), human behaviour understanding, human fall detection and, most importantly, behavioural biometrics. This paper tackles the step these research areas have in common: human silhouette extraction through the bounding box. To evaluate the proposed system, standardized databases were used and additional videos were recorded to emulate real-world scenarios, where quality and distance are factors that have proved challenging for detection with computer vision and machine learning.
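A minimal sketch of a generic bounding-box-driven silhouette-extraction pipeline is given below, using OpenCV's HOG pedestrian detector followed by GrabCut initialized from each detected box. The paper's actual detector and segmentation method may differ; "frame.jpg" is a placeholder input.

```python
# Illustrative pipeline: detect people, then extract each silhouette from
# the detected bounding box with GrabCut. Placeholder input: "frame.jpg".
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")

# 1. Detect people and collect their bounding boxes.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))

silhouettes = np.zeros(frame.shape[:2], dtype=np.uint8)
for (x, y, w, h) in boxes:
    # 2. Segment the person inside each box with GrabCut.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(frame, mask, (x, y, w, h), bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    silhouettes = np.maximum(silhouettes, fg.astype(np.uint8))

cv2.imwrite("silhouettes.png", silhouettes)
```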


Author(s):  
Guido Ascenso ◽  
Moi Hoon Yap ◽  
Thomas Allen ◽  
Simon S. Choppin ◽  
Carl Payton

2019 ◽  
Vol 27 (1) ◽  
Author(s):  
Josef Kobrtek ◽  
Tomáš Milet ◽  
Adam Herout

2017 ◽  
Vol 93 ◽  
pp. 182-191 ◽  
Author(s):  
Christophe Coniglio ◽  
Cyril Meurie ◽  
Olivier Lézoray ◽  
Marion Berbineau
