Active Perception
Recently Published Documents


TOTAL DOCUMENTS: 231 (five years: 80)
H-INDEX: 18 (five years: 5)

Author(s): Martin Jacquet, Max Kivits, Hemjyoti Das, Antonio Franchi

2021
Author(s): Yu-feng Su, Tai-Hsin Tsai, Keng-Liang Kuo, Chieh-Hsin Wu, Cheng-Yu Tsai, et al.

Abstract
Background: The aim of this study was to quantitatively investigate the learning curve of robotic spine surgery using the well-described power law of practice.
Methods: Kaohsiung Medical University Hospital's neurosurgery department established a robotic spine surgery team in 2013, and the orthopedic department joined the well-established team in 2014. A total of 150 cases and 841 transpedicular screws were enrolled into 3 groups: the first 50 cases performed by neurosurgeons, the first 50 cases performed by orthopedic surgeons, and 50 cases performed by neurosurgeons after the orthopedic surgeons joined the team. The time per screw and accuracy for each group and each individual surgeon were analyzed.
Results: The time per screw for each group was 9.56±4.19, 7.29±3.64, and 8.74±5.77 minutes, respectively. The accuracy was 99.6% (253/254), 99.5% (361/363), and 99.1% (222/224), respectively. The first group took significantly more time per screw, but the difference was not significant on the nonlinear parallelism test. Analysis of 5 surgeons and their first 10 cases of short-segment surgery showed that the time per screw for each surgeon was 12.28±5.21, 6.38±1.54, 8.68±3.10, 6.33±1.90, and 6.73±1.81 minutes. The surgeon who initiated the robotic spine surgery program took significantly more time per screw, and the nonlinear parallelism test also revealed that only this first surgeon had a steeper learning curve.
Conclusions: This is the first study to demonstrate the differences in learning curves between individual surgeons and teams. The roles of teamwork and the unmet needs due to the lack of active perception are discussed.
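The abstract cites the power law of practice, T(n) = a·n^(−b), but does not show how such a curve is fitted. A minimal sketch of one common approach (linear regression in log-log space) is given below; the timing data are hypothetical illustration values, not the study's measurements.

```python
import numpy as np

def fit_power_law(times):
    """Fit the power law of practice T(n) = a * n**(-b) by linear
    regression in log-log space; returns (a, b)."""
    n = np.arange(1, len(times) + 1)
    slope, intercept = np.polyfit(np.log(n), np.log(times), 1)
    # slope of log T vs. log n is -b; intercept is log a
    return float(np.exp(intercept)), float(-slope)

# Hypothetical per-screw times (minutes) over a surgeon's first 10 cases.
times = [12.0, 10.1, 9.0, 8.3, 7.8, 7.4, 7.1, 6.9, 6.7, 6.5]
a, b = fit_power_law(times)
# b > 0 indicates a decreasing (learning) curve; a larger b means a
# steeper learning curve, which is what the parallelism test compares.
```

A nonlinear parallelism test, as used in the study, would then ask whether the fitted exponents b differ significantly between surgeons or teams.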


2021
Vol. 132, pp. 103939
Author(s): Jiahao Jin, Weimin Zhang, Fangxing Li, Mingzhu Li, Yongliang Shi, et al.

2021
Vol. 8
Author(s): Takato Horii, Yukie Nagai

During communication, humans express their emotional states using various modalities (e.g., facial expressions and gestures), and they estimate the emotional states of others by paying attention to multimodal signals. For a communication robot with limited resources, the main challenge is to select the most effective modalities among those expressed. In this study, we propose an active perception method that selects the most informative modalities using a criterion based on energy minimization. This energy-based model learns the probability of the network state through energy values, whereby a lower energy value represents a higher probability of the state. A multimodal deep belief network, which is an energy-based model, was employed to represent the relationships between emotional states and multimodal sensory signals. Compared to other active perception methods, the proposed approach demonstrated improved accuracy using limited information in several contexts associated with affective human-robot interaction. We present the differences and advantages of our method relative to other methods through mathematical formulations using, for example, information gain as a criterion. Furthermore, we evaluate the performance of our method against active inference, which is based on the free-energy principle. Consequently, we establish that our method demonstrates superior performance in tasks involving mutually correlated multimodal information.
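The paper's actual model is a multimodal deep belief network, which the abstract does not specify in detail. As a toy stand-in only, the sketch below illustrates the core selection-by-energy-minimization idea with a hand-rolled quadratic energy and random weights; every name, weight, and signal here is a hypothetical placeholder, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy energy function E(x) = -x^T W x over a joint multimodal state x.
# Lower energy = higher (unnormalized) probability, matching the
# energy-based formulation described in the abstract. W stands in for
# learned coupling weights and is random here purely for illustration.
W = rng.normal(scale=0.1, size=(6, 6))
W = (W + W.T) / 2  # symmetric coupling

def energy(x):
    return float(-x @ W @ x)

def select_modality(observed, candidates):
    """Pick the candidate modality whose observation yields the
    lowest-energy (most probable) completed state -- a simplified
    stand-in for the paper's energy-minimization selection criterion."""
    best, best_e = None, np.inf
    for name, signal in candidates.items():
        x = np.concatenate([observed, signal])
        e = energy(x)
        if e < best_e:
            best, best_e = name, e
    return best

observed = rng.normal(size=3)              # signals already attended to
candidates = {"face": rng.normal(size=3),  # hypothetical unattended modalities
              "voice": rng.normal(size=3)}
choice = select_modality(observed, candidates)
```

In the actual method, the energy would come from a trained deep belief network, so the selected modality is the one whose expected observation best disambiguates the robot's estimate of the interlocutor's emotional state.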


2021
Vol. 2082 (1), pp. 012002
Author(s): Rui Yang, Qinglong Mo, Yuhong Li, Lin Gan, Ruihan Hu

Abstract: Three-dimensional vision systems can improve a robot's active perception ability and thereby guide its flexible operation. Such systems are widely used in industrial production processes such as disordered sorting, assembly, flexible welding, and defect detection. In sorting, assembly, and similar applications, accurate perception in a complex and changeable industrial environment is essential, and control and other operations must be completed under the guidance of feedback derived from the collected three-dimensional perception results. Nonetheless, improvements are still required, particularly in the accurate three-dimensional detection and positioning of work-in-progress and in autonomous guidance within a continuously changing industrial context.


2021
Author(s): Evgenii Safronov, Nicola Piga, Michele Colledanchise, Lorenzo Natale

eLife
2021
Vol. 10
Author(s): Lukas Klimmasch, Johann Schneider, Alexander Lelais, Maria Fronius, Bertram Emil Shi, et al.

The development of binocular vision is an active learning process comprising the development of disparity-tuned neurons in visual cortex and the establishment of precise vergence control of the eyes. We present a computational model for the learning and self-calibration of active binocular vision based on the Active Efficient Coding framework, an extension of classic efficient coding ideas to active perception. Under normal rearing conditions with naturalistic input, the model develops disparity-tuned neurons and precise vergence control, allowing it to correctly interpret random dot stereograms. Under altered rearing conditions modeled after neurophysiological experiments, the model qualitatively reproduces key experimental findings on changes in binocularity and disparity tuning. Furthermore, the model makes testable predictions regarding how altered rearing conditions impede the learning of precise vergence control. Finally, the model predicts a surprising new effect: impaired vergence control alters the statistics of orientation tuning in visual cortical neurons.
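The model's vergence control is learned jointly with a sparse code under Active Efficient Coding, which the abstract does not detail. As a much simpler illustrative stand-in (not the paper's method), the sketch below estimates horizontal disparity between two 1-D image rows by cross-correlation; a vergence controller would then rotate the eyes to null this estimate. The signals are synthetic.

```python
import numpy as np

def estimate_disparity(left, right, max_shift=5):
    """Estimate horizontal disparity between 1-D left/right image rows
    by maximizing normalized cross-correlation over integer shifts."""
    best_shift, best_corr = 0, -np.inf
    for d in range(-max_shift, max_shift + 1):
        r = np.roll(right, d)
        c = float(left @ r / (np.linalg.norm(left) * np.linalg.norm(r)))
        if c > best_corr:
            best_shift, best_corr = d, c
    return best_shift

# Synthetic scene: the right eye's image is the left's shifted by a
# known disparity (circular shift for simplicity).
rng = np.random.default_rng(1)
scene = rng.normal(size=64)
true_disparity = 3
left = scene
right = np.roll(scene, -true_disparity)
d = estimate_disparity(left, right)
# Nulling d by a vergence movement is the behavior the model learns;
# in Active Efficient Coding this emerges from maximizing coding
# efficiency rather than from an explicit correlation search like this.
```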

