Camera realignment imposes a cost on laparoscopic performance

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christopher L. Hewitson ◽  
Sinan T. Shukur ◽  
John Cartmill ◽  
Matthew J. Crossley ◽  
David M. Kaplan

Abstract: There is an unresolved question about whether realigned visual feedback is beneficial or costly to laparoscopic task performance. We provide evidence that camera realignment imposes a reliable cost on performance across both naive controls and experienced surgeons. This finding clarifies an important ongoing discussion in the literature about the effects of camera realignment, which could inform the strategies that laparoscopic surgeons use in the operating room.

2008 ◽  
Vol 1 (3) ◽  
pp. 311
Author(s):  
S. Hotz-Boendermaker ◽  
J. Petersen ◽  
M. Laubacher ◽  
M.-C. Hepp-Reymond ◽  
M. Schubert

2005 ◽  
Vol 14 (6) ◽  
pp. 677-696 ◽  
Author(s):  
Christoph W. Borst ◽  
Richard A. Volz

We present a haptic feedback technique that combines feedback from a portable force-feedback glove with feedback from direct contact with rigid passive objects. This approach is a haptic analogue of visual mixed reality, since it can be used to haptically combine real and virtual elements in a single display. We discuss device limitations that motivated this combined approach and summarize technological challenges encountered. We present three experiments to evaluate the approach for interactions with buttons and sliders on a virtual control panel. In our first experiment, this approach resulted in better task performance and better subjective ratings than the use of only a force-feedback glove. In our second experiment, visual feedback was degraded and the combined approach resulted in better performance than the glove-only approach and in better ratings of slider interactions than both glove-only and passive-only approaches. A third experiment allowed subjective comparison of approaches and provided additional evidence that the combined approach provides the best experience.


2012 ◽  
Vol 87 (5-6) ◽  
pp. 808-812 ◽  
Author(s):  
Gwendolijn Y.R. Schropp ◽  
Cock J.M. Heemskerk ◽  
Astrid M.L. Kappers ◽  
Wouter M. Bergmann Tiest ◽  
Ben S.Q. Elzendoorn ◽  
...  

2018 ◽  
Vol 25 (3) ◽  
pp. 280-285 ◽  
Author(s):  
Tobias Huber ◽  
Markus Paschold ◽  
Christian Hansen ◽  
Hauke Lang ◽  
Werner Kneist

Introduction. Immersive virtual reality (VR) laparoscopy simulation combines VR simulation with head-mounted displays to increase presence during VR training. The goal of the present study was to compare 2 different surroundings with respect to performance and users' preference. Methods. Using a custom immersive VR laparoscopy simulator, an artificially created VR operating room (AVR) and a highly immersive VR operating room (IVR) were compared. Participants (n = 30) performed 3 tasks (peg transfer, fine dissection, and cholecystectomy) in AVR and IVR in a crossover study design. Results. No overall difference in virtual laparoscopic performance was found between AVR and IVR. Most participants preferred the IVR surrounding (n = 24). Experienced participants (n = 10) performed significantly better than novices (n = 10) in all tasks regardless of the surrounding (P < .05). Participants with limited experience (n = 10) showed differing results. Presence, immersion, and exhilaration were rated significantly higher in IVR. Two thirds of participants assumed that IVR would positively influence their laparoscopic simulator use. Conclusion. This first study comparing AVR and IVR did not reveal differences in virtual laparoscopic performance. IVR was considered the more realistic surrounding and was therefore preferred by the participants.
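The crossover design described above lends itself to a paired, within-subject analysis. The sketch below is illustrative only: it uses synthetic placeholder scores, a generic paired t-test for the AVR-vs-IVR comparison, and a Mann-Whitney test for the experienced-vs-novice contrast; the abstract does not specify the study's actual metrics or statistical procedures.

```python
# Illustrative sketch with synthetic placeholder data (not from the study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 30

# Hypothetical task scores per surrounding for the same participants
avr_scores = rng.normal(loc=70, scale=10, size=n_participants)            # artificial VR OR
ivr_scores = avr_scores + rng.normal(loc=0, scale=5, size=n_participants)  # immersive VR OR

# Paired (within-subject) comparison across the two surroundings
t_stat, p_value = stats.ttest_rel(avr_scores, ivr_scores)
print(f"AVR vs IVR paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Between-group comparison: hypothetical experienced vs novice split
experienced = ivr_scores[:10]
novices = ivr_scores[20:]
u_stat, p_group = stats.mannwhitneyu(experienced, novices)
print(f"Experienced vs novice: U = {u_stat:.1f}, p = {p_group:.3f}")
```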


2020 ◽  
Vol 132 (6) ◽  
pp. 1930-1937 ◽  
Author(s):  
Alexander A. Aabedi ◽  
EunSeon Ahn ◽  
Sofia Kakaizada ◽  
Claudia Valdivia ◽  
Jacob S. Young ◽  
...  

OBJECTIVE: Maximal safe tumor resection in language areas of the brain relies on a patient's ability to perform intraoperative language tasks. Assessing the performance of these tasks during awake craniotomies allows the neurosurgeon to identify and preserve brain regions that are critical for language processing. However, receiving sedation and analgesia just prior to experiencing an awake craniotomy may reduce a patient's wakefulness, leading to transient language and/or cognitive impairments that do not completely subside before language testing begins. At present, the degree to which wakefulness influences intraoperative language task performance is unclear. Therefore, the authors sought to determine whether any of 5 brief measures of wakefulness predicts such performance during awake craniotomies for glioma resection.
METHODS: The authors recruited 21 patients with dominant hemisphere low- and high-grade gliomas. Each patient performed baseline wakefulness measures in addition to picture-naming and text-reading language tasks 24 hours before undergoing an awake craniotomy. The patients performed these same tasks again in the operating room following the cessation of anesthesia medications. The authors then conducted statistical analyses to investigate potential relationships between wakefulness measures and language task performance.
RESULTS: Relative to baseline, performance on 3 of the 4 objective wakefulness measures (rapid counting, button pressing, and vigilance) declined in the operating room. Moreover, these declines appeared in the complete absence of self-reported changes in arousal. Performance on language tasks similarly declined in the intraoperative setting, with patients experiencing greater declines in picture naming than in text reading. Finally, performance declines on rapid counting and vigilance wakefulness tasks predicted performance declines on the picture-naming task.
CONCLUSIONS: Current subjective methods for assessing wakefulness during awake craniotomies may be insufficient. The administration of objective measures of wakefulness just prior to language task administration may help to ensure that patients are ready for testing. It may also allow neurosurgeons to identify patients who are at risk for poor intraoperative performance.
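The reported relationship between wakefulness declines and picture-naming declines could, for example, be examined by regressing one decline score on the other. The sketch below is illustrative only, using synthetic placeholder data and a simple linear regression; the abstract does not detail the authors' actual statistical models.

```python
# Illustrative sketch with synthetic placeholder data (not from the study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients = 21

# Hypothetical decline scores: baseline performance minus intraoperative performance
vigilance_decline = rng.normal(loc=5, scale=2, size=n_patients)
naming_decline = 0.8 * vigilance_decline + rng.normal(scale=1.5, size=n_patients)

# Does the decline on a wakefulness measure predict the picture-naming decline?
slope, intercept, r, p, se = stats.linregress(vigilance_decline, naming_decline)
print(f"picture-naming decline ~ vigilance decline: r = {r:.2f}, p = {p:.4f}")
```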


2006 ◽  
Vol 15 (6) ◽  
pp. 613-626 ◽  
Author(s):  
Ying Zhang ◽  
Terrence Fernando ◽  
Hannan Xiao ◽  
Adrian R. L. Travis

This paper presents the creation of an assembly simulation environment with multisensory (auditory and visual) feedback, and an evaluation of the effects of auditory and visual feedback on task performance during assembly simulation in a virtual environment (VE). The experimental system platform brings together constraint-based assembly simulation, optical motion tracking, and real-time 3D sound generation around a virtual reality workbench and a common software platform. A peg-in-a-hole task and a Sener electronic box assembly task were used as task cases in a human factors experiment with sixteen participants. Both objective performance data (task completion time, TCT, and human performance error rate, HPER) and subjective opinions (questionnaires) on the use of auditory and visual feedback in the virtual assembly environment (VAE) were gathered. Results showed that introducing auditory and/or visual feedback into the VAE improved assembly task performance, and that integrated feedback (auditory plus visual) yielded better performance than either form of feedback used in isolation. Most participants preferred integrated feedback to either individual feedback (auditory or visual) or no feedback. Participants' comments indicated that unrealistic or inappropriate feedback degraded task performance and quickly led to frustration.
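The within-subject comparison of feedback conditions on a measure such as TCT could, for example, be analyzed with a repeated-measures test like the Friedman test. The sketch below is illustrative only, with synthetic placeholder data and hypothetical condition means; the paper's actual analysis procedure is not specified in this abstract.

```python
# Illustrative sketch with synthetic placeholder data (not from the study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_participants = 16

# Hypothetical task completion times (seconds) per feedback condition,
# one value per participant in each condition
tct_none = rng.normal(60, 8, n_participants)
tct_audio = tct_none - rng.normal(4, 2, n_participants)
tct_visual = tct_none - rng.normal(5, 2, n_participants)
tct_both = tct_none - rng.normal(8, 2, n_participants)

# Friedman test: do completion times differ across the four feedback conditions?
chi2, p = stats.friedmanchisquare(tct_none, tct_audio, tct_visual, tct_both)
print(f"Friedman chi-square = {chi2:.2f}, p = {p:.4f}")
```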


2007 ◽  
Vol 14 (2) ◽  
pp. 122-126 ◽  
Author(s):  
Lily Chang ◽  
Nancy J. Hogle ◽  
Brianna B. Moore ◽  
Mark J. Graham ◽  
Mika N. Sinanan ◽  
...  
