Force Feedback to Assist Active Contour Modelling for Tracheal Stenosis Segmentation

2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Lode Vanacken ◽  
Rômulo Pinho ◽  
Jan Sijbers ◽  
Karin Coninx

Manual segmentation of structures for the diagnosis and treatment of various diseases is a very time-consuming procedure. Therefore, some level of automation during segmentation is desired, as it often significantly reduces the segmentation time. A typical solution is to allow manual interaction to steer the segmentation process, which is known as semiautomatic segmentation. In 2D, such interaction is usually achieved with click-and-drag operations, but in 3D a more sophisticated interface is called for. In this paper, we propose a semiautomatic Active Contour Modelling approach for the delineation of medical structures in 3D tomographic images. Interaction is implemented with a 3D haptic device, which is used to steer the contour deformation towards the correct boundaries. In this way, valuable haptic feedback is provided about the 3D surface and its deformation. Experiments on simulated and real tracheal CT data showed that the proposed technique is an intuitive and effective segmentation mechanism.
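As a rough sketch of how a user-applied force could enter a discrete active contour (snake) update, the following assumes a simple gradient-descent step; it is not the authors' implementation, and all names, parameters, and values are illustrative:

    import numpy as np

    def snake_step(points, image_force, user_force, alpha=0.1, beta=0.05, step=0.5):
        """points, image_force, user_force: (N, 3) arrays of contour vertices and force vectors."""
        # Elasticity: second-difference (tension) term pulling each vertex toward its neighbours
        elastic = np.roll(points, -1, axis=0) - 2 * points + np.roll(points, 1, axis=0)
        # Rigidity: fourth-difference term penalising bending
        rigid = (np.roll(points, -2, axis=0) - 4 * np.roll(points, -1, axis=0)
                 + 6 * points - 4 * np.roll(points, 1, axis=0) + np.roll(points, 2, axis=0))
        # Image force attracts the contour to boundaries; user_force is the haptic steering input
        total = alpha * elastic - beta * rigid + image_force + user_force
        return points + step * total

    # Example: 20 vertices on a circle, nudged downwards by a user force
    theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
    pts = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
    pull = np.zeros_like(pts); pull[:, 1] = -0.05
    pts = snake_step(pts, image_force=np.zeros_like(pts), user_force=pull)

In such a scheme, the haptic device would both supply the user force and render the resulting surface deformation back to the user.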

2000 ◽  
Author(s):  
Michael L. Turner ◽  
Ryan P. Findley ◽  
Weston B. Griffin ◽  
Mark R. Cutkosky ◽  
Daniel H. Gomez

Abstract This paper describes the development of a system for dexterous telemanipulation and presents the results of tests involving simple manipulation tasks. The user wears an instrumented glove augmented with an arm-grounded haptic feedback apparatus. A linkage attached to the user’s wrist measures gross motions of the arm. The movements of the user are transferred to a two-fingered dexterous robot hand mounted on the end of a 4-DOF industrial robot arm. Forces measured at the robot fingers can be transmitted back to the user via the haptic feedback apparatus. The results obtained in block-stacking and object-rolling experiments indicate that the addition of force feedback to the user did not improve the speed of task execution. In fact, in some cases the presence of incomplete force information was detrimental to performance speed compared to no force information. There are indications that the presence of force feedback did aid in task learning.
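A minimal sketch of the force-reflection step in such a loop, assuming measured fingertip forces are scaled and clamped before being sent to the arm-grounded display (gain, limit, and array layout are assumptions, not the system described above):

    import numpy as np

    def reflect_forces(measured_forces, force_gain=0.8, max_force=5.0):
        """Scale robot fingertip forces and clamp them to a safe display range (newtons)."""
        forces = force_gain * np.asarray(measured_forces, dtype=float)
        return np.clip(forces, -max_force, max_force)

    # Example: forces measured at two robot fingertips (x, y, z components in newtons)
    fingertip_forces = np.array([[0.4, -1.2, 6.3], [0.1, 0.9, -0.3]])
    display_forces = reflect_forces(fingertip_forces)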


2018 ◽  
Vol 35 (2) ◽  
pp. 149-160 ◽  
Author(s):  
Mustufa H. Abidi ◽  
Abdulrahman M. Al-Ahmari ◽  
Ali Ahmad ◽  
Saber Darmoul ◽  
Wadea Ameen

Abstract The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. The benefits of building and using Virtual Reality (VR) models in assembly process verification are discussed in this paper. We present the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sound, and ample and intuitive interaction with the developed models. A special software architecture is suggested to describe the assembly parts and the assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, and tactile as well as force feedback. The system is shown to be effective and efficient for validating the design of assembly, part design, and operations planning.
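The abstract does not specify the collision detection algorithm; as one common possibility, an axis-aligned bounding-box (AABB) overlap test could trigger the visual interference feedback. A minimal sketch, with purely illustrative part boxes:

    def aabb_overlap(min_a, max_a, min_b, max_b):
        """Return True if two boxes given by their min/max corners intersect."""
        return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

    # Example: a hypothetical blade bounding box tested against a hub bounding box
    blade_min, blade_max = (0.00, 0.00, 0.0), (0.20, 0.05, 0.4)
    hub_min, hub_max = (0.15, 0.00, 0.1), (0.50, 0.50, 0.3)
    interference = aabb_overlap(blade_min, blade_max, hub_min, hub_max)  # True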


2005 ◽  
Vol 128 (2) ◽  
pp. 216-226 ◽  
Author(s):  
M. A. Vitrani ◽  
J. Nikitczuk ◽  
G. Morel ◽  
C. Mavroidis ◽  
B. Weinberg

Force-feedback mechanisms have been designed to simplify and enhance the human-vehicle interface. The increase in secondary controls within vehicle cockpits has created a desire for a simpler, more efficient human-vehicle interface. By consolidating various controls into a single haptic feedback control device, information can be transmitted to the operator without requiring the driver’s visual attention. In this paper, experimental closed-loop torque control of electro-rheological fluid (ERF) based resistive actuators for haptic applications is performed. ERFs are liquids that respond mechanically to electric fields by electroactively changing their properties, such as viscosity and shear stress. Using the electrically controlled rheological properties of ERFs, we developed resistive actuators for haptic devices that can resist human operator forces in a controlled and tunable fashion. In this study, the ERF resistive-actuator analytical model is derived and experimentally verified, and accurate closed-loop torque control is experimentally achieved using a nonlinear proportional-integral controller with a feedforward loop.
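The abstract names a proportional-integral controller with a feedforward loop; the sketch below shows only that general control structure, with gains, sampling time, and the feedforward map chosen purely for illustration (the authors' identified actuator model is not reproduced here):

    class PIFeedforwardTorqueController:
        def __init__(self, kp=2.0, ki=5.0, dt=0.001):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.integral = 0.0

        def feedforward(self, torque_ref):
            # Placeholder inverse actuator map: command roughly proportional to desired torque
            return 50.0 * torque_ref

        def update(self, torque_ref, torque_measured):
            # PI action on the torque error plus the feedforward term
            error = torque_ref - torque_measured
            self.integral += error * self.dt
            return self.feedforward(torque_ref) + self.kp * error + self.ki * self.integral

    controller = PIFeedforwardTorqueController()
    command = controller.update(torque_ref=0.5, torque_measured=0.35)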


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
L. Meziou ◽  
A. Histace ◽  
F. Precioso ◽  
O. Romain ◽  
X. Dray ◽  
...  

Visualization of the entire length of the gastrointestinal tract through natural orifices is a challenge for endoscopists. Videoendoscopy is currently the “gold standard” technique for the diagnosis of different pathologies of the intestinal tract. Wireless capsule endoscopy (WCE) was developed in the 1990s as an alternative to videoendoscopy to allow direct examination of the gastrointestinal tract without any need for sedation. Nevertheless, the systematic post-examination by the specialist of the 50,000 (for the small bowel) to 150,000 (for the colon) images of a complete WCE acquisition remains time-consuming and challenging due to the poor quality of WCE images. In this paper, a semiautomatic segmentation method for the analysis of WCE images is proposed. Based on active contour segmentation, the proposed method introduces alpha-divergences, a flexible statistical similarity measure that adapts to different types of gastrointestinal pathologies. Results of segmentation using the proposed approach are shown on different types of real-case examinations, from (multi)polyp(s) segmentation to radiation enteritis delineation.
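For reference, one common parametrization of the alpha-divergence between two densities p and q (the paper may use a different but equivalent form) is

    D_\alpha(p \,\|\, q) = \frac{4}{1-\alpha^2} \left( 1 - \int p(x)^{\frac{1-\alpha}{2}} \, q(x)^{\frac{1+\alpha}{2}} \, dx \right), \qquad \alpha \neq \pm 1,

which recovers the Kullback-Leibler divergence in the limits \alpha \to \pm 1 and is proportional to the squared Hellinger distance at \alpha = 0; tuning \alpha is what provides the flexibility across pathologies mentioned above.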


2019 ◽  
Vol 121 (4) ◽  
pp. 1398-1409 ◽  
Author(s):  
Vonne van Polanen ◽  
Robert Tibold ◽  
Atsuo Nuruki ◽  
Marco Davare

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying multisensory integration serially at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants’ movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight the organization of multisensory signals online for controlling action and perception. NEW & NOTEWORTHY Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and find out their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
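A standard way to express such a relative weighting (the paper's actual model may differ) is a reliability-weighted combination of the haptic and visual estimates:

    \hat{s} = w_h s_h + w_v s_v, \qquad w_h = \frac{1/\sigma_h^2}{1/\sigma_h^2 + 1/\sigma_v^2}, \quad w_v = 1 - w_h,

where s_h and s_v are the haptic and visual estimates and \sigma_h^2, \sigma_v^2 their variances; desynchronizing the visual feedback effectively probes how large w_v is at each sensorimotor event.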


Author(s):  
Jinling Wang ◽  
Wen F. Lu

Virtual reality technology plays an important role in the fields of product design, computer animation, medical simulation, cloth motion, and many others. Especially with the emergence of haptics technology, virtual simulation systems provide an intuitive way of human-computer interaction, which allows the user to feel and touch the virtual environment. For a real-time simulation system, a physically based deformable model that includes complex material properties at high resolution is required. However, such a deformable model can hardly satisfy the update rate of interactive haptic rendering, which exceeds 1 kHz. To tackle this challenge, a real-time volumetric model with haptic feedback is developed in this paper. This model, named the Adaptive S-chain model, extends the S-chain model and integrates the energy-based wave propagation method through the proposed adaptive re-meshing method to achieve realistic graphic and haptic deformation results. The implemented results show that nonlinear, heterogeneous, anisotropic, shape-retaining material properties and large-range deformations are well modeled. In a case study, the proposed Adaptive S-chain model generates accurate force feedback that closely matches the experimental data.
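One common way to meet the 1 kHz haptic requirement while the deformable model updates more slowly is to decouple the two loops into separate threads that share the latest computed force. A minimal, purely illustrative sketch (thread structure, rates, and the placeholder force are assumptions, not the Adaptive S-chain implementation):

    import threading, time

    shared = {"force": (0.0, 0.0, 0.0), "running": True}

    def haptic_loop(rate_hz=1000):
        period = 1.0 / rate_hz
        while shared["running"]:
            fx, fy, fz = shared["force"]        # latest force from the slower deformable model
            # device.apply_force(fx, fy, fz)    # hypothetical haptic device call
            time.sleep(period)

    def deformation_loop(rate_hz=60, steps=30):
        period = 1.0 / rate_hz
        for _ in range(steps):
            shared["force"] = (0.0, -0.1, 0.0)  # placeholder result of the deformation solve
            time.sleep(period)
        shared["running"] = False

    threading.Thread(target=haptic_loop, daemon=True).start()
    deformation_loop()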


Author(s):  
Avi Fisch ◽  
Jason Nikitczuk ◽  
Brian Weinberg ◽  
Juan Melli-Huber ◽  
Constantinos Mavroidis ◽  
...  

Force-feedback mechanisms have been designed to simplify and enhance the human-vehicle interface. The increase in secondary controls within vehicle cockpits has created a desire for a simpler, more efficient human-vehicle interface. Haptic systems, or systems that interact with the operator’s sense of touch, can be used to consolidate various controls into fewer haptic feedback control devices, so that information can be transmitted to the operator and the operator can change control settings without requiring the driver’s visual attention. In this paper, an Electro-Rheological Fluid (ERF) based actuator and mechanisms that provide haptic feedback are presented. ERFs are fluids that change their viscosity in response to an electric field. Using the electrically controlled rheological properties of ERFs, haptic devices have been developed that can resist human operator forces in a controlled and tunable fashion. The design of an ERF-based actuator and its application to a haptic knob and a haptic joystick are presented. The analytical model is given, analyses are performed, and experimental systems and data are presented for the actuator. Conceptual methods for the application to the haptic devices are presented.
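ERF behaviour of this kind is often described (as an assumption here, not a model given in the abstract) by a Bingham-plastic relation in which the applied field controls the yield stress:

    \tau = \tau_y(E)\,\mathrm{sgn}(\dot{\gamma}) + \mu \, \dot{\gamma}, \qquad \tau_y(E) \propto E^{2} \ \text{(approximately)},

so raising the field E raises the yield stress \tau_y and hence the resistive torque the actuator can present to the operator.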


2002 ◽  
Vol 26 (1) ◽  
pp. 9-17 ◽  
Author(s):  
Piotr Makowski ◽  
Thomas Sangild Sørensen ◽  
Søren Vorre Therkildsen ◽  
Andrzej Materka ◽  
Hans Stødkilde-Jørgensen ◽  
...  

2005 ◽  
Vol 14 (6) ◽  
pp. 677-696 ◽  
Author(s):  
Christoph W. Borst ◽  
Richard A. Volz

We present a haptic feedback technique that combines feedback from a portable force-feedback glove with feedback from direct contact with rigid passive objects. This approach is a haptic analogue of visual mixed reality, since it can be used to haptically combine real and virtual elements in a single display. We discuss device limitations that motivated this combined approach and summarize technological challenges encountered. We present three experiments to evaluate the approach for interactions with buttons and sliders on a virtual control panel. In our first experiment, this approach resulted in better task performance and better subjective ratings than the use of only a force-feedback glove. In our second experiment, visual feedback was degraded and the combined approach resulted in better performance than the glove-only approach and in better ratings of slider interactions than both glove-only and passive-only approaches. A third experiment allowed subjective comparison of approaches and provided additional evidence that the combined approach provides the best experience.

