3D Pose Estimation
Recently Published Documents


Total documents: 279 (last five years: 123)
H-index: 21 (last five years: 6)

2021
Author(s): Yongpeng Wu, Dehui Kong, Shaofan Wang, Jinghua Li, Baocai Yin

2021
Author(s): Jesse D Marshall, Ugne Klibaite, Amanda J Gellis, Diego E Aldarondo, Bence P Olveczky, ...

Understanding the biological basis of social and collective behaviors in animals is a key goal of the life sciences, and may yield important insights for engineering intelligent multi-agent systems. A critical step in interrogating the mechanisms underlying social behaviors is a precise readout of the 3D pose of interacting animals. While approaches for multi-animal pose estimation are beginning to emerge, they remain challenging to compare due to the lack of standardized training and benchmark datasets. Here we introduce the PAIR-R24M (Paired Acquisition of Interacting oRganisms - Rat) dataset for multi-animal 3D pose estimation, which contains 24.3 million frames of RGB video and 3D ground-truth motion capture of dyadic interactions in laboratory rats. PAIR-R24M contains data from 18 distinct pairs of rats and 24 different viewpoints. We annotated the data with 11 behavioral labels and 3 interaction categories to facilitate benchmarking in rare but challenging behaviors. To establish a baseline for markerless multi-animal 3D pose estimation, we developed a multi-animal extension of DANNCE, a recently published network for 3D pose estimation in freely behaving laboratory animals. As the first large multi-animal 3D pose estimation dataset, PAIR-R24M will help advance 3D animal tracking approaches and aid in elucidating the neural basis of social behaviors.
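Benchmarking on a dataset like PAIR-R24M typically reduces to comparing predicted and ground-truth 3D keypoints per animal, for example with the mean per-joint position error (MPJPE). The sketch below is a minimal illustration of that metric; the array shapes, keypoint count, and units are assumptions, not the dataset's actual format.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error (in the ground-truth units, e.g. mm).

    pred, gt: arrays of shape (n_frames, n_joints, 3).
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Hypothetical example: two interacting rats, 20 keypoints each, 100 frames.
rng = np.random.default_rng(0)
gt = rng.normal(size=(2, 100, 20, 3))              # (animal, frame, joint, xyz)
pred = gt + rng.normal(scale=5.0, size=gt.shape)   # noisy "predictions"

for animal_id in range(gt.shape[0]):
    print(f"animal {animal_id}: MPJPE = {mpjpe(pred[animal_id], gt[animal_id]):.2f}")
```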


2021, pp. 108439
Author(s): Ji Yang, Youdong Ma, Xinxin Zuo, Sen Wang, Minglun Gong, ...

2021
Author(s): Saddam Abdulwahab, Hatem A. Rashwan, Armin Masoumian, Najwa Sharaf, Domenec Puig

Pose estimation is typically performed on 3D data, and estimating the pose from a single RGB image remains a difficult task. An RGB image encodes not only an object's shape but also intensity, which depends on viewpoint, texture, and lighting, whereas a depth image represents shape alone, which makes 3D pose estimation from depth a promising approach. The question is therefore which method can predict a depth image from a single 2D RGB image so that it can then be used for 3D pose estimation. In this paper, we propose a deep learning model for depth estimation aimed at improving 3D pose estimation. The model consists of two successive networks: an autoencoder that maps from the RGB domain to the depth domain, and a discriminator that compares a real depth image to a generated one, pushing the autoencoder to produce accurate depth images. We do not use real depth images corresponding to the input color images; instead, our contribution is to use 3D CAD models of the objects appearing in the color images to render depth images from different viewpoints. These rendered images serve as ground truth and guide the autoencoder to learn the mapping from the image domain to the depth domain. The proposed model outperforms state-of-the-art models on the public PASCAL 3D+ dataset.
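A minimal sketch of the adversarial setup this abstract describes: an encoder-decoder generator maps RGB to depth, and a discriminator contrasts generated depth maps with CAD-rendered ones, so the generator learns an accurate RGB-to-depth mapping. The layer sizes, loss weighting, and image resolution below are illustrative assumptions in PyTorch, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Toy generator: RGB (3 channels) -> depth (1 channel). Real models are much deeper.
gen = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
# Toy discriminator: depth map -> single real/fake logit per image.
disc = nn.Sequential(
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)

g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(rgb, rendered_depth):
    """One adversarial step; rendered_depth stands in for CAD-rendered ground truth."""
    fake_depth = gen(rgb)

    # Discriminator: rendered depth is labeled real, generated depth fake.
    d_opt.zero_grad()
    d_real, d_fake = disc(rendered_depth), disc(fake_depth.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator while staying close to the rendered depth.
    g_opt.zero_grad()
    d_fake = disc(fake_depth)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + 10.0 * l1(fake_depth, rendered_depth)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Hypothetical batch: 4 RGB images with matching rendered depth maps, 64x64 pixels.
print(train_step(torch.rand(4, 3, 64, 64), torch.rand(4, 1, 64, 64)))
```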


2021, Vol. 69 (10), pp. 880-891
Author(s): Simon Bäuerle, Moritz Böhland, Jonas Barth, Markus Reischl, Andreas Steimer, ...

Image processing techniques are widely used in automotive series production, including the production of electronic control units (ECUs). Deep learning approaches have advanced rapidly in recent years, but they are not yet prominent in these industrial settings. One major obstacle is the lack of suitable training data. We adapt the recently developed method of domain randomization to our use case of 3D pose estimation of ECU housings, creating purely synthetic data with high visual diversity to train artificial neural networks (ANNs). This enables the ANNs to estimate the 3D pose of a real sample part with high accuracy from a single low-resolution RGB image in a production-like setting, with very low requirements on measurement hardware. Our entire setup is fully automated and can be transferred to related industrial use cases.
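Domain randomization in this sense means training only on synthetic renders whose nuisance factors (lighting, textures, background, camera placement, distractors) are sampled at random, so that the real camera image falls inside the training distribution. The sketch below illustrates only the parameter-sampling side; the parameter names and ranges are assumptions, and each sampled set would be passed to whatever rendering pipeline produces the training image.

```python
import random

def sample_scene_params():
    """Sample one randomized synthetic scene; names and ranges are illustrative assumptions."""
    return {
        # Object pose of the part: this is the regression target.
        "rotation_deg": [random.uniform(0.0, 360.0) for _ in range(3)],
        "translation_mm": [random.uniform(-50.0, 50.0) for _ in range(3)],
        # Nuisance factors, randomized so the network cannot rely on them.
        "light_count": random.randint(1, 4),
        "light_intensity": random.uniform(0.2, 2.0),
        "background_texture": random.choice(["noise", "checker", "photo", "flat"]),
        "object_hue_shift": random.uniform(-0.1, 0.1),
        "camera_distance_mm": random.uniform(200.0, 600.0),
        "distractor_objects": random.randint(0, 5),
    }

# Each training image is rendered from one such parameter set; the pose fields
# become the label, everything else is variation the network must learn to ignore.
for _ in range(3):
    print(sample_scene_params())
```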


Author(s): João Diogo Falcão, Carlos Ruiz, Adeola Bannis, Hae Young Noh, Pei Zhang

90% of retail sales occur in physical stores, and 40% of shoppers in those stores leave based on wait time. Autonomous stores can remove customer waiting time by providing a receipt without the need to scan items. Prior approaches use computer vision only, combine computer vision with weight sensors, or combine computer vision with sensors and human product recognition. These approaches generally suffer from low accuracy, delays of up to an hour for receipt generation, or an inability to scale to store-level deployments due to computation requirements and real-world multiple-shopper scenarios. We present ISACS, which combines a physical store model (e.g., customers, shelves, and item interactions), multi-human 3D pose estimation, and live inventory monitoring to accurately match multiple people to multiple products. ISACS uses only shelf weight sensors and does not require visual inventory monitoring, which drastically reduces computational requirements and makes it scalable to store-level deployment. In addition, ISACS generates receipts instantly, without human intervention. To fully evaluate ISACS, we deployed it in an operating convenience store covering 800 square feet, with 1,653 distinct products and more than 20,000 items. Over 13 months of operation, ISACS achieved a daily receipt accuracy of up to 96.4%, a 3.5x reduction in error compared to self-checkout stations.
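One way to picture the people-to-products matching described above: when a shelf load cell reports a weight change, attribute it to the tracked shopper whose hand keypoint, from multi-human 3D pose estimation, is closest to that shelf at that instant. The sketch below is a simplified illustration of that idea; the data layout, keypoint choice, and reach threshold are assumptions, not ISACS's actual algorithm.

```python
import numpy as np

def attribute_weight_event(shelf_xyz, shopper_hands, max_reach_m=0.8):
    """Return the index of the shopper most plausibly responsible for a weight change.

    shelf_xyz:     (3,) world position of the shelf section whose weight changed.
    shopper_hands: (n_shoppers, 2, 3) left/right wrist keypoints from 3D pose estimation.
    Returns None if no shopper has a hand within max_reach_m of the shelf.
    """
    dists = np.linalg.norm(shopper_hands - shelf_xyz, axis=-1)  # (n_shoppers, 2)
    per_shopper = dists.min(axis=1)                             # closest hand per shopper
    best = int(per_shopper.argmin())
    return best if per_shopper[best] <= max_reach_m else None

# Hypothetical snapshot: two shoppers near a shelf at (1.0, 0.5, 1.2) meters.
shelf = np.array([1.0, 0.5, 1.2])
hands = np.array([
    [[0.9, 0.45, 1.1], [1.3, 0.60, 1.0]],  # shopper 0: one hand within reach
    [[2.5, 0.50, 1.2], [2.6, 0.40, 1.1]],  # shopper 1: out of reach
])
print(attribute_weight_event(shelf, hands))  # -> 0
```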


2021
Author(s): Minghao Wang, Long Ye, Fei Hu, Li Fang, Wei Zhong, ...

Cell Reports, 2021, Vol. 36 (13), pp. 109730
Author(s): Pierre Karashchuk, Katie L. Rupp, Evyn S. Dickinson, Sarah Walling-Bell, Elischa Sanders, ...
