Flexible constraint hierarchy during the visual encoding of tool‐object interactions

Author(s):  
Kristel Yu Tiamco Bayani ◽  
Nikhilesh Natraj ◽  
Mary Kate Gale ◽  
Danielle Temples ◽  
Neel Atawala ◽  
...  
2021 ◽  
Vol 40 (3) ◽  
pp. 1-12

Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image and to predict the 3D keypoints of the hand. Because most layers are shared between the two tasks, the computation cost is low enough for real-time performance. A hybrid dataset is constructed to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depth maps of the two targets and the keypoints are used in a unified optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also maintains real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
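To make the tangential contact constraint concrete, here is a minimal numerical sketch, not the paper's actual formulation: assuming points in contact with the object surface may slide tangentially but should neither penetrate nor separate, an energy term can penalize only the normal component of each contact point's velocity. All names and the quadratic form below are illustrative.

```python
import numpy as np

def tangential_contact_energy(contact_vels, surface_normals, stiffness=1.0):
    """Penalize motion along the object's surface normal at each contact
    point while leaving tangential sliding unconstrained (hypothetical
    sketch of a tangential contact constraint; the paper's exact energy
    term may differ).

    contact_vels:    (N, 3) velocities of hand points currently in contact
    surface_normals: (N, 3) unit object-surface normals at those points
    """
    # Normal speed at each contact: per-row dot product v_i . n_i.
    normal_speed = np.einsum('ij,ij->i', contact_vels, surface_normals)
    # Quadratic penalty on the normal component only; tangential motion is free.
    return stiffness * np.sum(normal_speed ** 2)

# Toy usage: one point sliding tangentially (no cost), one pressing inward.
vels = np.array([[0.2, 0.0, 0.0], [0.0, 0.0, -0.3]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(tangential_contact_energy(vels, normals))  # ~0.09: only the normal motion is penalized
```

A term of this kind keeps contact points on the surface during manipulation while still allowing the hand to slide over the object, which is one way such a constraint can disambiguate poses under occlusion.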


2021 ◽  
Vol 3 (1) ◽  
pp. 1-46
Author(s):  
Alexander Krüger ◽  
Jan Tünnermann ◽  
Lukas Stratmann ◽  
Lucas Briese ◽  
Falko Dressler ◽  
...  

As a formal theory, Bundesen's theory of visual attention (TVA) enables the estimation of several theoretically meaningful parameters involved in attentional selection and visual encoding. To date, TVA has been used almost exclusively in restricted empirical scenarios such as whole and partial report, and with strictly controlled stimulus material. We present a series of experiments that test whether the advantages of TVA can be exploited in more realistic scenarios with varying degrees of stimulus control, including brief experimental sessions conducted on different mobile devices, computer games, and a driving simulator. Overall, six experiments demonstrate that the TVA parameters for processing capacity and attentional weight can be measured with sufficient precision in less controlled scenarios, and that the results do not deviate strongly from typical laboratory results, although some systematic differences were found.
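For orientation, the TVA parameters mentioned here, processing capacity C and attentional weights w, combine in a simple exponential race model: each display item x receives the processing rate v_x = C * w_x / Σ_z w_z. The sketch below is a minimal illustration under that standard assumption, not the authors' estimation procedure; parameter names and default values are illustrative.

```python
import numpy as np

def tva_encoding_probs(attentional_weights, C=50.0, exposure=0.1, t0=0.01):
    """Probability that each display item finishes encoding within the
    exposure, under TVA's fixed-capacity exponential race (illustrative
    sketch; not the estimation procedure used in the article).

    attentional_weights: relative weights w_x of the display items
    C:        processing capacity in items per second (assumed value)
    exposure: stimulus exposure duration in seconds
    t0:       perceptual threshold before processing starts, in seconds
    """
    w = np.asarray(attentional_weights, dtype=float)
    rates = C * w / w.sum()          # v_x = C * w_x / sum(w), items/second
    t_eff = max(exposure - t0, 0.0)  # effective processing time
    return 1.0 - np.exp(-rates * t_eff)

# Two targets with weight 1.0 and two distractors with weight 0.4.
print(tva_encoding_probs([1.0, 1.0, 0.4, 0.4], C=40.0, exposure=0.08))
```

Estimating C, w, and t0 from report data runs in the opposite direction, fitting this race model to observed report accuracies, which is what TVA-based assessment does in practice.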


PLoS ONE ◽  
2013 ◽  
Vol 8 (12) ◽  
pp. e82936 ◽  
Author(s):  
Garreth Prendergast ◽  
Eve Limbrick-Oldfield ◽  
Ed Ingamells ◽  
Susan Gathercole ◽  
Alan Baddeley ◽  
...  

2016 ◽  
Vol 22 (9) ◽  
pp. 2200-2213 ◽  
Author(s):  
Quirijn W. Bouts ◽  
Tim Dwyer ◽  
Jason Dykes ◽  
Bettina Speckmann ◽  
Sarah Goodwin ◽  
...  

2017 ◽  
Vol 30 (7-8) ◽  
pp. 763-781 ◽  
Author(s):  
Jenni Heikkilä ◽  
Kimmo Alho ◽  
Kaisa Tiippana

Audiovisual semantic congruency during memory encoding has been shown to facilitate later recognition memory performance. However, it is still unclear whether this improvement is due to multisensory semantic congruency or to semantic congruency per se. We investigated whether dual visual encoding facilitates recognition memory in the same way as audiovisual encoding. During encoding, participants memorized auditory or visual stimuli paired with a semantically congruent, incongruent, or non-semantic stimulus presented in either the same modality or the other modality. Subsequent recognition memory performance was better when the stimulus had initially been paired with a semantically congruent stimulus than when it had been paired with a non-semantic stimulus. This congruency effect was observed with both audiovisual and dual visual stimuli. The present results indicate that not only multisensory but also unisensory semantically congruent stimuli can improve memory performance. Thus, the semantic congruency effect is not solely a multisensory phenomenon, as has previously been suggested.

