Stitching Errors: Recently Published Documents

Total documents: 25 (five years: 6)
H-index: 3 (five years: 1)

2021 · Vol 27 (3) · pp. 636-642
Author(s): Qin Qin, Jigang Huang, Jin Yao, Wenxiang Gao

Purpose: Scanning projection-based stereolithography (SPSL) is a powerful additive manufacturing technology that offers high resolution together with a large building area. However, the surface quality of the stitching boundary in an SPSL system has rarely been studied, and no effective solution to the poor stitching quality has been proposed. This paper aims to propose a multi-pass scanning approach, together with a compensation algorithm for the multi-pass scanning process, to address the poor stitching quality of SPSL systems.
Design/methodology/approach: Multi-pass scanning is realized by scanning the regions repeatedly; because the repeat-exposure time is very short and the scanning is very fast, the regions cure almost simultaneously, eliminating the poor stitching quality caused by non-simultaneous curing. In addition, a compensation algorithm is designed for multi-pass scanning to reduce the stitching errors. The validity of multi-pass scanning is verified by a curing-depth test, while the performance of multi-pass scanning and the proposed compensation algorithm is demonstrated by comparison with a previous SPSL system.
Findings: The results lead to the conclusion that multi-pass scanning, combined with its compensation algorithm, is an effective approach to improving the stitching quality of an SPSL system.
Practical implications: This study can guide researchers toward achieving a satisfactory surface finish with SPSL technology.
Originality/value: The authors propose a multi-pass scanning process and a compensation algorithm for an SPSL additive manufacturing system to improve the stitching quality, which has rarely been studied in previous work.
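For a concrete picture of the two ideas above, here is a minimal Python sketch of splitting a region's curing dose over several fast passes and damping the dose in the doubly exposed boundary strip. The region names, pass count, dose values and the simple gain-based compensation are illustrative assumptions, not the authors' algorithm.

```python
# A minimal sketch, assuming a layer split into two scanning regions that
# share a stitching boundary.  Values are illustrative, not from the paper.
import numpy as np

def multi_pass_schedule(regions, total_dose, n_passes):
    """Split each region's curing dose over several quick passes so that
    neighbouring regions receive their energy almost simultaneously."""
    dose_per_pass = total_dose / n_passes
    schedule = []
    for p in range(n_passes):
        for name in regions:
            schedule.append((p, name, dose_per_pass))
    return schedule

def boundary_compensation(dose_map, boundary_cols, gain=0.9):
    """Illustrative compensation: scale down the dose in the overlap columns
    where the two regions meet, so the doubly exposed strip is not over-cured."""
    compensated = dose_map.copy()
    compensated[:, boundary_cols] *= gain
    return compensated

if __name__ == "__main__":
    schedule = multi_pass_schedule(["left", "right"], total_dose=120.0, n_passes=4)
    print(schedule[:4])                      # first pass over both regions

    dose = np.full((8, 16), 30.0)
    dose[:, 7:9] += 30.0                     # strip exposed by both regions
    print(boundary_compensation(dose, slice(7, 9)).max())
```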


2020 · Vol 10 (23) · pp. 8679
Author(s): Jaehyun Lee, Sungjae Ha, Philippe Gentet, Leehwan Hwang, Soonchul Kwon, ...

As highly immersive virtual reality (VR) content, 360° video allows users to observe all viewpoints in the desired direction from the position where the video was recorded. In 360° video content, virtual objects are inserted into recorded real scenes to provide a higher sense of immersion; this technique is called 3D composition. For realistic 3D composition in a 360° video, it is important to obtain the internal (focal length) and external (position and rotation) parameters of the 360° camera. Traditional methods estimate the trajectory of the camera by extracting feature points from the recorded video. However, incorrect results may occur owing to stitching errors, because a 360° camera combines several high-resolution cameras whose outputs must be stitched together, and a large amount of time is spent on feature tracking owing to the high resolution of the video. We propose a new method for pre-visualization and 3D composition that overcomes the limitations of existing methods. The system achieves real-time position tracking of the attached camera using a ZED camera (a stereo-vision sensor), and real-time stabilization using a Kalman filter. The proposed system shows high time efficiency and accurate 3D composition.
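As a rough illustration of the stabilization step mentioned above, the following is a minimal constant-velocity Kalman filter that smooths noisy 3D position samples such as those a stereo-vision tracker might emit. The state layout, noise levels and the synthetic measurement stream are assumptions for illustration only; this is not the authors' implementation.

```python
# A minimal constant-velocity Kalman filter sketch for smoothing noisy
# 3-D camera-position measurements.  All parameters are assumed values.
import numpy as np

class PositionKalman:
    def __init__(self, dt=1 / 30, process_var=1e-3, meas_var=1e-2):
        # state: [x, y, z, vx, vy, vz]
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)            # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = process_var * np.eye(6)
        self.R = meas_var * np.eye(3)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured 3-D position z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]

if __name__ == "__main__":
    kf = PositionKalman()
    rng = np.random.default_rng(0)
    truth = np.array([0.0, 0.0, 1.0])
    for _ in range(90):                            # ~3 s of 30 fps tracking
        smoothed = kf.step(truth + rng.normal(0, 0.1, 3))
    print(np.round(smoothed, 3))
```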


Sensors · 2020 · Vol 20 (22) · pp. 6457
Author(s): Hayat Ullah, Muhammad Irfan, Kyungjin Han, Jong Weon Lee

Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Likewise, the recent record-breaking performance of deep learning in various domains of artificial intelligence has drawn researchers' attention to different fields of computer vision. To ensure the quality of immersive media content using these advanced deep learning technologies, several learning-based stitched image quality assessment methods with reasonable performance have been proposed. However, these methods cannot localize, segment, or extract the stitching errors in panoramic images, and they rely on computationally complex procedures for quality assessment of panoramic images. With these motivations, in this paper we propose a novel three-fold Deep Learning based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tune the state-of-the-art Mask R-CNN (region-based convolutional neural network) on manually annotated cropped images of various stitching errors taken from two publicly available datasets. In the second fold, we segment and localize the stitching errors present in the immersive content. Finally, based on the distorted regions, we measure the overall quality of the stitched images. Unlike existing methods that only measure image quality from deep features, the proposed method can efficiently segment and localize stitching errors and estimate the image quality by examining the segmented regions. We also carried out an extensive qualitative and quantitative comparison with full-reference image quality assessment (FR-IQA) and no-reference image quality assessment (NR-IQA) methods on the two publicly available datasets, where the proposed system outperformed the existing state-of-the-art techniques.
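To make the final scoring stage more concrete, here is a minimal sketch of how an overall quality score could be derived from segmented stitching-error regions (e.g. masks produced by a fine-tuned Mask R-CNN). The weighting formula is an illustrative assumption, not the DLNR-SIQA metric from the paper.

```python
# A minimal sketch: score a stitched image from the area and confidence of
# its segmented stitching-error regions.  The formula is an assumption.
import numpy as np

def quality_from_masks(image_shape, masks, scores):
    """masks: list of boolean HxW arrays; scores: detection confidences."""
    h, w = image_shape
    total_px = h * w
    if not masks:
        return 1.0                                  # no detected distortion
    distorted = np.zeros((h, w), dtype=bool)
    weight = 0.0
    for m, s in zip(masks, scores):
        distorted |= m                              # union of error regions
        weight += s * m.sum() / total_px            # confidence-weighted area
    area_frac = distorted.sum() / total_px
    return float(np.clip(1.0 - 0.5 * area_frac - 0.5 * weight, 0.0, 1.0))

if __name__ == "__main__":
    h, w = 512, 1024
    m = np.zeros((h, w), dtype=bool)
    m[:, 500:524] = True                            # a vertical stitching seam
    print(round(quality_from_masks((h, w), [m], [0.92]), 3))
```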


Author(s): Harikrishnan Madhusudanan, Xingjian Liu, Wenyuan Chen, Dahai Li, Linghao Du, ...

Author(s): Xingjian Liu, Harikrishnan Madhusudanan, Wenyuan Chen, Dahai Li, Ji Ge, ...

2019 · Vol 66 (7) · pp. 739-746
Author(s): Meng Ding, Qi Fan, Yin Su, Baiyu Yang, Changhui Tian, ...

2018 · Vol 35 (2) · pp. 331
Author(s): Maxim Neradovskiy, Elizaveta Neradovskaia, Dmitry Chezganov, Evgeny Vlasov, Vladimir Ya. Shur, ...

2017 · Vol 207 (5) · pp. 224-224
Author(s): Clare Faurie, Nicole Williams, Peter J Cundy

2017 · Vol 870 · pp. 95-101
Author(s): Shi Wei Ye, Ping Yang, Zhen Zhong Wang, Yan Ting Zhang, Yun Feng Peng

A multi-segment stitching method is proposed to obtain the two-dimensional profile of large-scale aspheric components during the grinding process. We first analyze the relation between the surface features of the asphere and the measurement range of the profilometer. Based on multi-body system theory and the slope difference, a mathematical model of the multi-segment stitching method is constructed. The multi-segment stitching errors for a 400 mm aspheric profile are then simulated with different translation amounts, rotation angles, translation errors and rotation errors. The simulation results indicate that the translation error is the key factor affecting the measurement accuracy of the multi-segment stitching method. In addition, the standard deviations of the multi-segment stitching errors are between 0.2 μm and 0.4 μm when the four factors are set to proper values. To verify the simulation results, an experimental setup consisting of a Talysurf PGI 1240 profilometer and a fixture is used to measure a 150 mm profile line on the aspheric surface. The experimental results show that the standard deviations of the multi-segment stitching errors remain almost within 1 μm when the rotation angle is under 7° and the translation amount is under 11 mm. The multi-segment stitching method thus enables a small-range profilometer to measure large-scale aspheric surfaces with sub-micrometer accuracy.
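The following minimal sketch illustrates the general idea of stitching two overlapping one-dimensional profile segments by estimating and removing their relative tilt and offset in the overlap zone. It is a simplified least-squares illustration, not the paper's multi-body-system model, and the toy profile and sampling values are assumptions.

```python
# A minimal sketch of stitching two overlapping 1-D profile segments by
# removing the relative tilt and offset estimated in the overlap zone.
import numpy as np

def stitch_segments(x1, z1, x2, z2):
    """x*, z* are sampled positions/heights; x1 and x2 must overlap."""
    lo, hi = max(x1.min(), x2.min()), min(x1.max(), x2.max())
    xo = np.linspace(lo, hi, 200)                  # common overlap grid
    d = np.interp(xo, x2, z2) - np.interp(xo, x1, z1)
    slope, offset = np.polyfit(xo, d, 1)           # relative tilt + piston
    z2_corr = z2 - (slope * x2 + offset)           # bring segment 2 onto 1
    keep = x2 > x1.max()                           # extend beyond segment 1
    return np.concatenate([x1, x2[keep]]), np.concatenate([z1, z2_corr[keep]])

if __name__ == "__main__":
    def asphere(x):
        return 1e-4 * x**2                         # toy aspheric profile (mm)

    xa = np.linspace(0, 80, 400)
    xb = np.linspace(60, 150, 450)
    # segment 2 carries an artificial offset and tilt to be stitched out
    xs, zs = stitch_segments(xa, asphere(xa), xb, asphere(xb) + 0.002 + 1e-4 * xb)
    print(np.abs(zs - asphere(xs)).max())          # residual stitching error
```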


Author(s): J.R. Fridmann, J.E. Sanabia, M. Rasche

For large-area, high-resolution SEM imaging applications such as integrated circuit (IC) reverse engineering and connectomics [1-3], SEM instruments are limited by small, uncalibrated fields of view (FOVs) and imprecise sample positioning. These limitations reduce image-capture throughput, requiring more stage-drive time and larger image overlaps. Furthermore, they introduce stitching errors in four dimensions of the image data: X, Y, Z and I (signal intensity). Throughput and stitching errors are cited challenges [2], and software alone cannot feasibly correct stitching errors in large image datasets [3]. Moreover, software corrections can introduce additional errors into the image data through scaling, rotation and twisting of the images, so software has proven insufficient for reverse engineering modern integrated circuits. Our methodology addresses the challenges of small, uncalibrated FOVs and imprecise sample positioning by combining the resolution and flexibility of the SEM instrument with the accuracy (on the order of 10 nm), stability and automation of the electron beam lithography (EBL) instrument. With its unique combination of high-resolution SEM imaging (up to 50,000 × 50,000 pixels per image), laser-interferometer stage positioning and FOV mapping, the reverse engineering scanning electron microscope (RE-SEM) produces the most accurate large-area, high-resolution images directly acquired by an SEM instrument [4]. Since the absolute position of each pixel is ultimately known to the accuracy afforded by the laser interferometer stage, these images can be stacked (3D-stitched) with the highest possible accuracy. Thus, the RE-SEM has been used to successfully reconstruct a current PC CPU at the 22 nm node.
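As a simple illustration of position-based mosaicking, the sketch below places image tiles directly into a global canvas using their known stage coordinates, which is the advantage that interferometer-accurate positioning offers over feature-based registration. Tile sizes, pixel size and coordinates are illustrative assumptions, not RE-SEM parameters.

```python
# A minimal sketch of position-based mosaicking: tiles whose absolute stage
# coordinates are known can be placed directly into a global canvas.
# All sizes and coordinates below are assumed example values.
import numpy as np

def place_tiles(tiles, stage_positions_nm, pixel_size_nm, canvas_shape):
    """tiles: list of 2-D arrays; stage_positions_nm: (x, y) of each tile origin."""
    canvas = np.zeros(canvas_shape, dtype=np.float32)
    for tile, (x_nm, y_nm) in zip(tiles, stage_positions_nm):
        r = int(round(y_nm / pixel_size_nm))       # stage coords -> pixel offsets
        c = int(round(x_nm / pixel_size_nm))
        h, w = tile.shape
        canvas[r:r + h, c:c + w] = tile            # direct placement, no blending
    return canvas

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tile_a, tile_b = rng.random((100, 100)), rng.random((100, 100))
    mosaic = place_tiles(
        [tile_a, tile_b],
        stage_positions_nm=[(0, 0), (950, 0)],     # 950 nm step -> 5-pixel overlap
        pixel_size_nm=10,
        canvas_shape=(100, 195),
    )
    print(mosaic.shape)
```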

