Reduction of the effect of arm position variation on real-time performance of motion classification

2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽  
Author(s):  
Yanjuan Geng ◽  
Oluwarotimi Williams Samuel ◽  
Yue Wei ◽  
Guanglin Li

Previous studies have shown that arm position variations significantly degrade the classification performance of myoelectric pattern-recognition-based prosthetic control, and the cascade classifier (CC) and multiposition classifier (MPC) have been proposed to minimize such degradation in offline scenarios. However, it remains unknown whether these approaches also perform well in the clinical use of multifunctional prosthesis control. In this study, the online effect of arm position variation on motion identification was evaluated using a motion-test environment (MTE) developed to mimic the real-time control of myoelectric prostheses. The performance of different classifier configurations in reducing the impact of arm position variation was investigated using four real-time metrics on datasets obtained from transradial amputees. The results showed that, compared to the commonly used motion classification method, the CC and MPC configurations improved the real-time performance across seven classes of movements in five different arm positions (8.7% and 12.7% increases in motion completion rate, respectively). The results also indicated that high offline classification accuracy does not ensure good real-time performance under variable arm positions, which necessitates investigating real-time control performance to gain proper insight into the clinical implementation of EMG-pattern-recognition-based controllers for limb amputees.
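The cascade-classifier (CC) configuration mentioned above can be sketched as a two-stage pipeline: a first stage estimates the arm position from the EMG feature vector, and a second stage routes the vector to a motion classifier trained only on data from that position. The nearest-centroid classifiers, 2-D features, and labels below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

class NearestCentroid:
    """Toy stand-in for the per-stage classifiers."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                          for c in self.labels}
        return self

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.labels, key=lambda c: np.linalg.norm(x - self.centroids[c]))

class CascadeClassifier:
    """Stage 1 picks the arm position; stage 2 picks the motion."""
    def __init__(self, position_clf, motion_clfs):
        self.position_clf = position_clf   # trained across all positions
        self.motion_clfs = motion_clfs     # one motion classifier per position

    def predict(self, x):
        position = self.position_clf.predict(x)
        return self.motion_clfs[position].predict(x)

# Tiny illustrative data: two arm positions, two motions each.
pos_clf = NearestCentroid().fit(
    [[0, 0], [1, 1], [10, 10], [11, 11]], ["low", "low", "high", "high"])
motion_clfs = {
    "low":  NearestCentroid().fit([[0, 0], [1, 1]], ["rest", "grasp"]),
    "high": NearestCentroid().fit([[10, 10], [11, 11]], ["rest", "grasp"]),
}
cc = CascadeClassifier(pos_clf, motion_clfs)
print(cc.predict([10.2, 10.1]))  # routed to the "high" classifier -> "rest"
```

The MPC variant differs mainly in how the per-position classifiers are trained and selected; the routing structure is the same.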



2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image and to predict the 3D keypoints of the hand. With most layers shared between the two tasks, computation cost is reduced, preserving real-time performance. A hybrid dataset is constructed to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depths of the two targets and the keypoints are used in a unified optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also maintains real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
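A tangential contact constraint of the kind the abstract credits with resolving ambiguities can be sketched as a least-squares energy: at each detected contact, relative displacement between a hand point and the matched object surface point is penalized along the surface normal, leaving tangential sliding free. The exact energy in the paper may differ; the function and point data below are illustrative.

```python
import numpy as np

def tangential_contact_energy(hand_pts, obj_pts, normals):
    """Sum of squared normal-direction offsets at contact points.

    hand_pts, obj_pts: (N, 3) matched contact points on hand and object.
    normals: (N, 3) unit object-surface normals at the contacts.
    Tangential motion contributes nothing; separation/penetration along
    the normal is penalized quadratically.
    """
    d = hand_pts - obj_pts                     # relative displacement
    normal_comp = np.sum(d * normals, axis=1)  # projection onto normals
    return float(np.sum(normal_comp ** 2))

hand = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.5]])
obj  = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
n    = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
# First contact touches exactly; second is lifted 0.5 along the normal.
print(tangential_contact_energy(hand, obj, n))  # 0.25
```

In a full solver, a term like this would be added to the optimization objective alongside the depth and keypoint data terms.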




Author(s):  
Jop Vermeer ◽  
Leonardo Scandolo ◽  
Elmar Eisemann

Ambient occlusion (AO) is a popular rendering technique that enhances depth perception and realism by darkening locations that are less exposed to ambient light (e.g., corners and creases). In real-time applications, screen-space variants, which rely on the depth buffer, are used for their high performance and good visual quality. However, these take only visible surfaces into account, resulting in inconsistencies, especially during motion. Stochastic-Depth Ambient Occlusion is a novel AO algorithm that accounts for occluded geometry by relying on a stochastic depth map that captures multiple scene layers per pixel at random. In this way, we efficiently gather the missing information and improve the accuracy and spatial stability of conventional screen-space approximations while maintaining real-time performance. Our approach integrates well into existing rendering pipelines and improves the robustness of many different AO techniques, including multi-view solutions.
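The stochastic depth map idea can be sketched as follows: instead of keeping only the front-most fragment per pixel (an ordinary depth buffer), each pixel stores one scene layer chosen uniformly at random, so that across many pixels all hidden layers are represented. Reservoir sampling over the incoming fragments is one way to pick a uniform random fragment in a single pass; the paper's GPU implementation differs, so this is only a CPU-side illustration.

```python
import random

def stochastic_depth(fragments, rng=random):
    """fragments: depths rasterized to this pixel, in submission order.
    Returns one depth chosen uniformly at random via reservoir sampling:
    the i-th fragment replaces the current choice with probability 1/(i+1),
    which makes every fragment equally likely regardless of count."""
    chosen = None
    for i, depth in enumerate(fragments):
        if rng.randrange(i + 1) == 0:
            chosen = depth
    return chosen

print(stochastic_depth([0.3]))  # a single fragment is always kept -> 0.3
```

An AO pass can then sample neighboring pixels of this stochastic map to test candidate occluders that a front-most-only depth buffer would miss.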


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 15
Author(s):  
Filippo Aleotti ◽  
Giulio Zaccaroni ◽  
Luca Bartolomei ◽  
Matteo Poggi ◽  
Fabio Tosi ◽  
...  

Depth perception is paramount for tackling real-world problems, ranging from autonomous driving to consumer applications. For the latter, depth estimation from a single image is the most versatile solution, since a standard camera is available on almost any handheld device. Nonetheless, two main issues limit the practical deployment of monocular depth estimation methods on such devices: (i) low reliability when deployed in the wild and (ii) the resources needed to achieve real-time performance, often beyond what low-power embedded systems can provide. Therefore, in this paper, we investigate both issues in depth, showing that each can be addressed by adopting appropriate network design and training strategies. Moreover, we outline how to map the resulting networks onto handheld devices to achieve real-time performance. Our thorough evaluation highlights the ability of such fast networks to generalize well to new environments, a crucial feature for the extremely varied contexts faced in real applications. To further support this evidence, we report experimental results concerning real-time, depth-aware augmented reality and image blurring with smartphones in the wild.
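The depth-aware image blurring application mentioned above can be sketched as a simple post-process: pixels whose estimated depth exceeds a focus threshold are replaced by a blurred version, keeping the foreground sharp. A 3x3 box blur and a hard threshold are deliberate simplifications; real pipelines use smoother, depth-dependent falloffs.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge-replicated borders."""
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + h, dx:dx + w]
    return out / 9.0

def depth_aware_blur(img, depth, focus_max=1.0):
    """Keep pixels within the focus range sharp; blur the rest."""
    blurred = box_blur(img.astype(float))
    return np.where(depth <= focus_max, img, blurred)

img = np.zeros((3, 3))
img[1, 1] = 9.0
print(depth_aware_blur(img, np.full((3, 3), 2.0))[1, 1])  # far away -> blurred: 1.0
```

The same structure applies on-device: the network supplies `depth`, and only the cheap per-pixel blend runs per frame.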

