3D Vision
Recently Published Documents


TOTAL DOCUMENTS

598
(FIVE YEARS 153)

H-INDEX

22
(FIVE YEARS 3)

2022 ◽  
Vol 149 ◽  
pp. 106834
Author(s):  
Xiaodong Wang ◽  
Bin Liu ◽  
Xuesong Mei ◽  
Wenjun Wang ◽  
Wenqiang Duan ◽  
...  

Author(s):  
Dongri Shan ◽  
Chenglong Zhang ◽  
Peng Zhang ◽  
Xiaofang Wang ◽  
Dongmei He ◽  
...  

Light pen 3D vision coordinate measurement systems are increasingly widely used owing to their small size, convenient portability, and wide applicability. The pose of the light pen is an important factor affecting accuracy, so its pose domain must be specified to give the measurement system a suitable measurement range and better-qualified parameters. The advantage of the self-calibration method is that the entire self-calibration process can be completed at the measurement site without any auxiliary equipment. After the system camera is calibrated, several pictures of the same measurement point are taken with different poses to obtain each picture's transformation matrix; a combined stylus-tip-center self-calibration method, using spherical fitting, the generalized inverse method of least squares, and the principle of position invariance within the pose domain, then calculates the actual position of the light pen probe. The experimental results show that the absolute error is stable below 0.0737 mm and the relative error stable below 0.0025 mm, verifying the effectiveness of the method; the measurement accuracy of the system meets basic industrial measurement requirements.
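The spherical-fitting step can be illustrated concretely: rotating the pen about a fixed stylus tip makes each marker trace a sphere whose center is the tip, and that sphere can be recovered with a linear least-squares (pseudo-inverse) fit. A minimal sketch assuming NumPy; `fit_sphere` is an illustrative name, not the authors' code:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit via the generalized inverse.

    |p - c|^2 = r^2 linearises to  2 p . c + (r^2 - |c|^2) = |p|^2,
    which is solved in one lstsq call (Moore-Penrose pseudo-inverse).
    Returns the fitted center (the stylus-tip estimate) and radius.
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])  # unknowns: cx, cy, cz, t
    b = (p ** 2).sum(axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)        # t = r^2 - |c|^2
    return center, radius
```

With exact points on a sphere the fit recovers center and radius to machine precision; with noisy marker positions it returns the least-squares optimum.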


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Feng Shan ◽  
Youya Wang

The depth synthesis of image texture is neglected in current image visual communication technology, which leads to poor visual effects. Therefore, a design method for film and TV animation based on 3D visual communication technology is proposed. Film and television animation videos are collected through 3D visual communication content production, server processing, and client processing. The animation video images are then projected through stitching, projection mapping, and frame texture synthesis. To ensure that scaling factors vary continuously between adjacent triangles of the animation video images, a scaling factor field is constructed. Deep learning is used to extract deep features and to reconstruct multiframe animation video images based on visual communication. On this basis, frame features of the video images are identified and extracted under gray projection, completing the animation design based on 3D visual communication technology. Experimental results show that the proposed method significantly enhances the visual communication of animation video images and achieves high-precision reconstruction of video images in a short time.
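The gray-projection step mentioned above is commonly implemented by collapsing each frame into row and column gray-value profiles and matching the profiles between frames to estimate global motion. A minimal sketch assuming NumPy and a brute-force sum-of-squared-differences search; the function names and the search convention are illustrative, not taken from the paper:

```python
import numpy as np

def _best_shift(a, b, max_shift):
    """Return s minimising mean((a[i+s] - b[i])^2) over the overlap."""
    n = len(a)
    best_s, best_e = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            d = a[s:] - b[:n - s]
        else:
            d = a[:n + s] - b[-s:]
        e = float(np.mean(d ** 2))
        if e < best_e:
            best_s, best_e = s, e
    return best_s

def gray_projection_shift(ref, cur, max_shift=20):
    """Estimate (dy, dx) such that cur is ref shifted by (dy, dx).

    Each frame is collapsed into row and column gray-value profiles;
    matching the 1D profiles is far cheaper than matching full frames.
    """
    dy = -_best_shift(ref.mean(axis=1), cur.mean(axis=1), max_shift)
    dx = -_best_shift(ref.mean(axis=0), cur.mean(axis=0), max_shift)
    return dy, dx
```

The recovered shift can then be used to align frames before deep-feature extraction.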


2022 ◽  
Vol 52 (1) ◽  
pp. E13

OBJECTIVE A clear, stable, suitably located visual field is essential for port surgery. A scope is usually held by hand or by a fixing device; the former causes fatigue and requires lengthy training, while the latter is inconvenient because the scope must be repeatedly adjusted. The authors therefore developed a novel robotic system that can recognize the port and automatically place the scope in an optimized position, and executed a preliminary in vitro experiment to test this system's technical feasibility and accuracy. METHODS A collaborative robotic (CoBot) system consisting of a mechatronic arm and a 3D camera was developed. With the 3D camera and programmed machine vision, CoBot searches for a marker attached to the opening of the surgical port and then automatically aligns the scope's axis with the port's longitudinal axis so that optimal illumination and visual observation are achieved. Three tests were conducted. In test 1, the robot positioned a laser range finder attached to its arm to align with the sheath's central axis; successful positioning was defined as the laser passing through two holes on the sheath's central axis. The researchers recorded the finder's readings, which give the actual distance between the finder and the sheath. In test 2, the robot held a high-definition exoscope and relocated it to the set position. Test 3 was similar to test 2, but a metal holder, adjusted manually by trained neurosurgeons, was substituted for the robot. The manipulation time was recorded. Additionally, a grading system was designed to score each image captured by the exoscope at the set position, and the scores in the two tests were compared using the rank-sum test. RESULTS The CoBot system positioned the finder successfully in all rounds of test 1; the mean height errors ± SD were 1.14 mm ± 0.38 mm (downward) and 1.60 mm ± 0.89 mm (upward). The grading scores of the images in tests 2 and 3 differed significantly: in the total score and all four subgroups, test 2 showed a more precise, better-positioned, and more stable visual field. The total manipulation time was 20 minutes in test 2 and 52 minutes in test 3. CONCLUSIONS The CoBot system acted as a robust scope-holding system providing a stable and optimized surgical view during simulated port surgery, offering further evidence for the substitution of human hands and leading to a more efficient, user-friendly, and precise operation.
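The rank-sum comparison of the image grading scores corresponds to the Wilcoxon rank-sum (Mann-Whitney U) statistic. A minimal pure-Python sketch with average ranks for ties; this is an illustration of the test, not the authors' statistical software:

```python
def rank_sum_u(x, y):
    """Mann-Whitney U statistic for two independent score samples,
    e.g. robot-held (x) vs hand-held (y) exoscope image grades.

    Pools the samples, assigns average ranks to ties, and returns
    U = W - n1*(n1+1)/2, where W is the rank sum of sample x.
    """
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    values = [v for v, _ in pooled]
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(values):                      # group ties, average ranks
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        avg = (i + 1 + j) / 2                   # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, lab) in zip(ranks, pooled) if lab == 0)
    return w - len(x) * (len(x) + 1) / 2
```

Under the null hypothesis U is close to n1*n2/2; extreme values in either direction indicate a significant difference between the two score distributions.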


2021 ◽  
Vol 11 (1) ◽  
pp. 223
Author(s):  
Nicola Montemurro ◽  
Alba Scerrati ◽  
Luca Ricciardi ◽  
Gianluca Trevisi

Background: Exoscopes are a safe and effective alternative or adjunct to the existing binocular operative microscope (OM) for brain tumor surgery, skull base surgery, aneurysm clipping, and complex cervical and lumbar spine surgery, and will probably open a new era of tools and techniques in neurosurgery. Methods: A PubMed and Ovid EMBASE search was performed to identify papers describing surgical experiences with the exoscope in neurosurgery. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed. Results: A total of 86 articles and 1711 cases were included and analyzed in this review. Of the 86 included papers, 74 (86%) were published in the last 5 years. Of the 1711 surgical procedures, 1534 (89.6%) were performed in the operating room, whereas 177 (10.9%) were performed in the laboratory on cadavers. In more detail, 1251 (72.7%) were reported as brain surgeries, whereas 274 (16%) and 9 (0.5%) were reported as spine and peripheral nerve surgeries, respectively. Considering only the clinical series (40 studies and 1328 patients), the overall surgical complication rate during use of the exoscope was 2.6%; these patients experienced complication profiles similar to those of patients who underwent the same treatments with the OM. The overall rate of switching from the exoscope to the OM during surgery was 5.8%. Conclusions: The exoscope seems to be a safe alternative to the operative microscope for the most common brain and spinal procedures, with several advantages, such as simplicity of use and better 3D vision and magnification of the surgical field. Moreover, it offers better interaction with other members of the surgical staff. These points set the first step for short-term changes in the field of neurosurgery and offer new educational possibilities for young neurosurgeons and medical students.


2021 ◽  
Vol 12 (1) ◽  
pp. 286
Author(s):  
Radovan Holubek ◽  
Marek Vagaš

In advanced manufacturing technologies (including complex automated processes) and related branches of industry, perception and evaluation of object parameters are critical factors. Many production machines and workplaces are now equipped as standard with high-quality sensing devices based on vision systems to detect these parameters. This article focuses on designing an affordable and fully functional vision system based on two standard CCD cameras, with emphasis on the RS-232C communication interface between the two sides (the vision and robotic systems). To this end, we combine the principles of the 1D photogrammetric calibration method, using two known points in a stable point field, with the packages available inside the vision system's processing unit (filtering, enhancing and extracting edges, weak and robust smoothing, etc.). The camera system's correlation factor for reliable recognition of the sensed object was set to between 84 and 100%. Pilot communication between the two systems was then proposed and tested through CREAD/CWRITE commands according to the 3964R protocol (used for the data transfer), and the system was proven by successful transfer of the data into the robotic system. Since research gaps in this field still exist and many vision systems are based on PC processing or intelligent cameras, our research aims to provide a price-performance solution for those who cannot regularly invest in the newest vision technology but still need to stay competitive.
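A correlation factor threshold of this kind typically refers to normalized cross-correlation (NCC) template matching, where a detection is accepted only when the best correlation score meets the configured minimum (84% here). A minimal brute-force sketch assuming NumPy; this stands in for the vision system's built-in recognition package, whose internals are not described:

```python
import numpy as np

def ncc_match(image, template, threshold=0.84):
    """Zero-mean normalised cross-correlation search.

    Slides the template over the image, scores each window in
    [-1, 1], and accepts the best position only if its score
    reaches the threshold (the 84 % correlation factor).
    Returns ((y, x), score) on success, (None, score) otherwise.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -1.0, None
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = tn * np.sqrt((wz ** 2).sum())
            if denom == 0:
                continue                      # flat window, undefined NCC
            score = float((wz * t).sum() / denom)
            if score > best:
                best, best_pos = score, (y, x)
    return (best_pos, best) if best >= threshold else (None, best)
```

Production systems use FFT-based or integral-image NCC for speed, but the acceptance logic against the configured correlation factor is the same.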


2021 ◽  
Author(s):  
Omar Alfarisi ◽  
Aikifa Raza ◽  
Djamel Ouzzane ◽  
Mohamed Sassi ◽  
Hongtao Zhang ◽  
...  

<p>Permeability has a dominant influence on the flow behavior of a natural fluid, and without proper quantification, biological fluids (hydrocarbons) and water resources go to waste. During the first decades of the 21<sup>st</sup> century, permeability quantification from nano- and micro-porous media images emerged, aided by 3D pore network flow simulation, primarily using the Lattice Boltzmann simulator. Earth scientists realized that the simulation process involves millions of flow dynamics calculations with accumulated errors and high computing power consumption, so accuracy and efficiency challenges obstruct planetary exploration. To efficiently and consistently predict permeability with high quality, we propose the Morphology Decoder: a parallel and serial flow reconstruction of machine-learning-driven, semantically segmented heterogeneous rock texture images from 3D X-ray micro computerized tomography (μCT) and magnetic resonance imaging (MRI). For 3D vision, we introduce the controllable-measurable-volume as a new supervised semantic segmentation, in which a unique set of voxel intensities corresponds to grain and pore throat sizes. The Morphology Decoder demarks and aggregates the morphologies' boundaries in a novel way to quantify permeability. The method consists of five novel processes, described in this paper: (1) Geometrical: 3D Permeability Governing Equation; (2) Machine Learning: Guided 3D Properties Recognition of Rock Morphology; (3) Analytical: 3D Image Properties Integration Model for Permeability; (4) Experimental: MRI Permeability Imager; and (5) Morphology Decoder, the process that integrates the other four.</p>
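The paper's own 3D permeability governing equation is not reproduced in the abstract. As a hedged stand-in for the analytical step that maps segmented voxel labels to a permeability estimate, the classical Kozeny-Carman relation illustrates the idea; the function name, the 0/1 label convention, and the constant c = 180 are illustrative assumptions, not the authors' model:

```python
import numpy as np

def kozeny_carman_permeability(labels, grain_diameter, c=180.0):
    """Illustrative permeability estimate from a segmented volume.

    `labels` is a 3D array in which 0 marks pore voxels and 1 marks
    grain voxels (a semantic-segmentation output); porosity phi is
    the pore-voxel fraction.  Kozeny-Carman (a stand-in for the
    paper's governing equation):  k = phi^3 d^2 / (c (1 - phi)^2).
    Returns (phi, k) with k in the units of grain_diameter squared.
    """
    phi = float((labels == 0).mean())                       # porosity
    k = phi ** 3 * grain_diameter ** 2 / (c * (1.0 - phi) ** 2)
    return phi, k
```

For example, a volume with 25% pore voxels and a 100 μm grain diameter yields a permeability on the order of 1.5e-12 m² (about 1.5 darcy), a plausible magnitude for a permeable sandstone.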


2021 ◽  
Vol 14 ◽  
pp. 469-480
Author(s):  
Nima Motahariasl ◽  
Sayed Borna Farzaneh ◽  
Sina Motahariasl ◽  
Ilya Kokotkin ◽  
Sara Sousi ◽  
...  
