Real-time orthognathic surgical simulation using a mandibular motion tracking system

2007 ◽  
Vol 12 (2) ◽  
pp. 91-104 ◽  
Author(s):  
Kenji Fushima ◽  
Masaru Kobayashi ◽  
Hiroaki Konishi ◽  
Kennichi Minagichi ◽  
Takeshi Fukuchi

Author(s):  
J. S. Goddard ◽  
J. S. Baba ◽  
S. J. Lee ◽  
A. G. Weisenberger ◽  
A. Stolin ◽  
...  

2013 ◽  
Vol 74 (1) ◽  
pp. 11-16 ◽  
Author(s):  
Monique N. Mayer ◽  
Joel L. Lanovaz ◽  
Michael J. Smith ◽  
Narinder Sidhu ◽  
Cheryl L. Waldner

2012 ◽  
Vol 22 (05) ◽  
pp. 1250019 ◽  
Author(s):  
Luis Quesada ◽  
Alejandro J. León

Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either extensive knowledge of the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and requires only a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled, unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, provided that it is opaque, evenly colored, contrasts sufficiently with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object on the screen to track. Our proposal imposes no additional constraints and therefore allows a market-wide implementation of applications that require estimating the three positional degrees of freedom of an object.
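
A minimal sketch of how such a single-camera, three-DOF tracker could look in practice is given below. It uses OpenCV to segment an evenly colored blob, takes the blob centroid as the (x, y) position, and reads depth off the blob's apparent area, which for a non-rotating opaque object scales with 1/z². The HSV color range, the area-based depth heuristic, and all function names are illustrative assumptions, not the authors' algorithm.

```python
import cv2
import numpy as np

# Assumed HSV range of the evenly colored target; not from the paper.
LOWER = np.array([35, 80, 80])
UPPER = np.array([85, 255, 255])

def track_3dof(frame, ref_area, ref_depth):
    """Return (x, y, z): image-plane centroid plus a depth estimate,
    or None if no target blob is found."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # Morphological opening removes speckle before blob extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # "Most relevant object": here simply the largest contiguous blob.
    blob = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(blob)
    if area < 1.0:
        return None
    m = cv2.moments(blob)
    x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # For a non-rotating opaque object, apparent area scales with 1/z^2,
    # so depth follows from a calibrated (ref_area, ref_depth) pair.
    z = ref_depth * np.sqrt(ref_area / area)
    return x, y, z
```

Calibrating once against a known distance (ref_area at ref_depth) is what lets a single camera recover the third, depth, degree of freedom without any object model.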


2012 ◽  
Vol 605-607 ◽  
pp. 1391-1394
Author(s):  
Youngouk Kim ◽  
Sewoong Jun

This paper presents a new real-time system for acquiring motion information about articulated human body parts such as the arms and head. The system requires no markers or wearable devices and adopts a stereo camera to remain robust against varying illumination and complex backgrounds, without requiring position initialization of the articulated parts. We present a solution for estimating self-occluded body parts when the subject performs normal actions facing the camera. The main idea is to apply component labeling to a sliced disparity map: the arm is located directly when it lies in front of the body's basis distance, and morphological methods recover its location when it lies at the basis distance. With this approach, we can obtain the full body shape while accounting for self-occlusion. The method is simple and fast compared with other approaches, satisfying real-time performance and object tracking accuracy at the same time.
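
The sliced-disparity idea lends itself to a short sketch: pixels whose disparity places them in front of the body's basis distance are sliced out, cleaned up morphologically, and grouped by connected-component labeling, so a foreground arm emerges as its own blob. The thresholds and function below are illustrative assumptions under that reading of the abstract, not the authors' code.

```python
import cv2
import numpy as np

def find_arm_components(disparity, basis_disp, margin=8, min_area=500):
    """Label blobs whose disparity exceeds the body's basis disparity,
    returning (x, y) centroids of candidate arm regions."""
    # Slice: keep only pixels clearly in front of the body plane.
    near = (disparity > basis_disp + margin).astype(np.uint8)
    # Morphological opening removes disparity speckle before labeling.
    near = cv2.morphologyEx(near, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(near)
    arms = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            arms.append(tuple(centroids[i]))
    return arms
```

When the arm sits at the basis distance it no longer separates from the torso in the disparity slice, which is where the abstract's morphological step would take over instead.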


2018 ◽  
Vol 30 (3) ◽  
pp. 453-466 ◽  
Author(s):  
Shaopeng Hu ◽  
Mingjun Jiang ◽  
Takeshi Takaki ◽  
Idaku Ishii

In this study, we developed a monocular stereo tracking system to be used as a marker-based, three-dimensional (3-D) motion capture system. This system aims to localize dozens of markers on multiple moving objects in real time by switching between five hundred different views in 1 s. The ultrafast mirror-drive active vision used in our catadioptric stereo tracking system can complete a series of operations for multithread gaze control, including video shooting, computation, and actuation, within 2 ms. By switching between five hundred different views in 1 s, with real-time video processing for marker extraction, our system can function as J virtual left and right pan-tilt tracking cameras, operating at 250/J fps to simultaneously capture and process J pairs of 512 × 512 stereo images with different views via the catadioptric mirror system. We conducted several real-time 3-D motion experiments to capture multiple fast-moving objects with markers. The results demonstrated the effectiveness of our monocular 3-D motion tracking system.
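
A toy sketch of the time-division multiplexing behind these virtual pan-tilt cameras follows. Only the 500 views/s figure and the 250/J fps relation come from the abstract; the round-robin view scheduler and the naive marker-centroid extractor are illustrative assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

SWITCHES_PER_SEC = 500  # mirror view switches per second (from the abstract)

def virtual_pair_rate(num_targets):
    """Stereo frame rate per target: J targets, two views (L/R) each,
    sharing 500 switches/s gives 250/J fps per virtual stereo pair."""
    return SWITCHES_PER_SEC / (2 * num_targets)

def view_schedule(num_targets):
    """Endless round-robin of (target, side) views for the mirror drive."""
    views = [(t, side) for t in range(num_targets) for side in ("L", "R")]
    return itertools.cycle(views)

def marker_centroid(gray, thresh=200):
    """Centroid of bright marker pixels in one 512x512 grayscale view."""
    ys, xs = np.nonzero(gray > thresh)
    return None if xs.size == 0 else (xs.mean(), ys.mean())

# Example: three targets are each tracked at 250/3 ≈ 83.3 stereo fps.
sched = view_schedule(3)
print(virtual_pair_rate(3))             # 83.33...
print([next(sched) for _ in range(6)])  # one full round of L/R views
```

The appeal of the design is that adding targets costs frame rate per target (250/J fps) rather than extra cameras, since one ultrafast camera behind the switching mirror serves every virtual view.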

