range image
Recently Published Documents


TOTAL DOCUMENTS

1015
(FIVE YEARS 99)

H-INDEX

37
(FIVE YEARS 3)

2021 ◽  
Author(s):  
Yirui Wu ◽  
Benze Wu ◽  
Yunfei Zhang ◽  
Shaohua Wan

Abstract With the development of 5G/6G, IoT, and cloud systems, the volume of data generated, transmitted, and processed keeps growing, and fast, effective close-range image classification becomes more and more important. However, many methods require a large number of samples to perform well, forcing the entire network to scale up to extract enough effective features, which reduces the efficiency of small-sample classification to a certain extent. To address these problems, we propose an image enhancement method for few-shot classification: a dilated convolutional network with a built-in data augmentation stage. This network can not only supply the features required for image classification without increasing the number of samples, but also exploit a large number of effective features without sacrificing efficiency. The cutout component augments the input image matrix by applying a zero mask over a fixed-size region. The FAU component uses dilated convolution and exploits sequence characteristics to improve the efficiency of the network. We conduct comparative experiments on the miniImageNet and CUB datasets, and the proposed method is superior to the compared methods in both effectiveness and efficiency in the 1-shot and 5-shot cases.
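The cutout augmentation described above amounts to zeroing out a fixed-size region of the input matrix. A minimal NumPy sketch of that idea (the square mask shape and its random placement are assumptions; the abstract only specifies a fixed-area zero mask):

```python
import numpy as np

def cutout(image: np.ndarray, size: int, rng: np.random.Generator) -> np.ndarray:
    """Apply one square zero-mask of side `size` at a random location.

    A minimal sketch of cutout-style augmentation: the fixed-area zero mask
    is applied directly to the input matrix, and the input is left untouched.
    """
    h, w = image.shape[:2]
    out = image.copy()
    # Sample the mask centre, then clip the square to the image bounds.
    cy = int(rng.integers(0, h))
    cx = int(rng.integers(0, w))
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out[y0:y1, x0:x1] = 0
    return out
```

Because the mask is applied on the fly, each epoch sees a differently occluded copy of the same sample, which is how the method enlarges the effective training set without adding images.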


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7024
Author(s):  
Marcos Alonso ◽  
Daniel Maestro ◽  
Alberto Izaguirre ◽  
Imanol Andonegui ◽  
Manuel Graña

Surface flatness assessment is necessary for quality control of metal sheets manufactured from steel coils by roll leveling and cutting. Mechanical-contact-based flatness sensors are being replaced by modern laser-based optical sensors that deliver accurate and dense reconstruction of metal sheet surfaces for flatness index computation. However, the surface range images captured by these optical sensors are corrupted by very specific kinds of noise due to vibrations caused by mechanical processes such as degreasing, cleaning, polishing, shearing, and transporting roll systems. Therefore, high-quality flatness optical measurement systems strongly depend on the quality of the image denoising methods applied to extract the true surface height image. This paper presents a deep learning architecture for removing these specific kinds of noise from the range images obtained by a laser-based range sensor installed in a rolling and shearing line, in order to allow accurate flatness measurements from the clean range images. The proposed convolutional blind residual denoising network (CBRDNet) is composed of a noise estimation module and a noise removal module implemented by specific adaptation of semantic convolutional neural networks. The CBRDNet is validated on both synthetic and real noisy range image data that exhibit the most critical kinds of noise arising throughout the metal sheet production process. Real data were obtained from a single laser line triangulation flatness sensor installed in a roll leveling and cut-to-length line. Computational experiments over both synthetic and real datasets clearly demonstrate that CBRDNet achieves superior performance in comparison to traditional 1D and 2D filtering methods and state-of-the-art CNN-based denoising techniques.
The experimental validation results show an error reduction of up to 15% relative to solutions based on traditional 1D and 2D filtering methods, and between 3% and 10% relative to other deep learning denoising architectures recently reported in the literature.
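The blind residual formulation behind CBRDNet can be illustrated without the learned modules: a noise-estimation step produces a per-pixel noise map, and a removal step predicts the noise itself, which is then subtracted from the input. In the sketch below, both modules are replaced by a crude local-mean heuristic purely for illustration; the paper implements them as convolutional networks, and the heuristic is my assumption, not the authors' method.

```python
import numpy as np

def local_mean(img: np.ndarray, k: int = 3) -> np.ndarray:
    """k x k box filter with edge padding (a stand-in for a learned filter)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_noise_map(noisy: np.ndarray, k: int = 3) -> np.ndarray:
    """Crude per-pixel noise-level estimate: deviation from the local mean.

    Stands in for CBRDNet's noise-estimation module, which is a CNN.
    """
    return np.abs(noisy - local_mean(noisy, k))

def blind_residual_denoise(noisy: np.ndarray, k: int = 3) -> np.ndarray:
    """Residual scheme: predict the noise, then subtract it from the input.

    The prediction is only applied where the estimated noise level is high,
    mimicking a removal module conditioned on the noise map.
    """
    noise_map = estimate_noise_map(noisy, k)
    high_noise = noise_map > noise_map.mean()
    predicted_noise = (noisy - local_mean(noisy, k)) * high_noise
    return noisy - predicted_noise
```

The key structural point carried over from the paper is the two-stage split: the output of the estimation stage conditions the removal stage, and the final image is the input minus the predicted residual.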


Author(s):  
Di Xu ◽  
Zhen Li ◽  
Qi Cao

Abstract In applications of augmented reality or mixed reality, rendering virtual objects in real scenes with consistent illumination is crucial for a realistic visualization experience. Prior learning-based methods reported in the literature usually attempt to reconstruct complicated high dynamic range environment maps from limited input and rely on a separate rendering pipeline to light the virtual object. In this paper, an object-based illumination transferring and rendering algorithm is proposed to tackle this problem within a unified framework. Given a single low dynamic range image, instead of recovering the lighting environment of the entire scene, the proposed algorithm directly infers the relit virtual object. This is achieved by transferring implicit illumination features extracted from its nearby planar surfaces. A generative adversarial network is adopted in the proposed algorithm for implicit illumination feature extraction and transfer. Compared to previous works in the literature, the proposed algorithm is more robust, as it is able to efficiently recover spatially varying illumination in both indoor and outdoor scene environments. Experiments conducted in different environments yield notable quantitative and qualitative results, demonstrating the algorithm's effectiveness and robustness for realistic virtual object insertion and improved realism.
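As a point of reference for the illumination-transfer idea, the simplest conceivable baseline extracts first-order statistics from a nearby planar patch and applies them as a per-channel gain on the virtual object's albedo. This is a hypothetical stand-in for intuition only, not the paper's GAN-based implicit feature transfer:

```python
import numpy as np

def transfer_illumination(obj_albedo: np.ndarray, scene_patch: np.ndarray) -> np.ndarray:
    """Relight a virtual object's albedo from a nearby planar surface patch.

    Hypothetical baseline: the "illumination feature" here is just the
    patch's per-channel mean intensity, used as a multiplicative gain.
    Both arrays are H x W x 3 images with values in [0, 1].
    """
    # Per-channel mean of the observed patch approximates incident lighting.
    gain = scene_patch.reshape(-1, scene_patch.shape[-1]).mean(axis=0)
    return np.clip(obj_albedo * gain, 0.0, 1.0)
```

A single gain vector obviously cannot capture the spatially varying illumination the paper targets; that gap is exactly what motivates learning richer implicit features with a GAN instead of hand-picked statistics.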


2021 ◽  
Vol 15 (03) ◽  
pp. 293-312
Author(s):  
Fabian Duerr ◽  
Hendrik Weigel ◽  
Jürgen Beyerer

One of the key tasks for autonomous vehicles or robots is a robust perception of their 3D environment, which is why they are equipped with a wide range of different sensors. Building upon a robust sensor setup, understanding and interpreting the 3D environment is the next important step. Semantic segmentation of 3D sensor data, e.g. point clouds, provides valuable information for this task and is often seen as a key enabler for 3D scene understanding. This work presents an iterative deep fusion architecture for semantic segmentation of 3D point clouds, which builds upon a range image representation of the point clouds and additionally exploits camera features to increase accuracy and robustness. In contrast to other approaches, which fuse lidar and camera features only once, the proposed fusion strategy iteratively combines and refines lidar and camera features at different scales inside the network architecture. Additionally, the proposed approach can deal with camera failure as well as jointly predict lidar and camera segmentation. We demonstrate the benefits of the presented iterative deep fusion approach on two challenging datasets, outperforming all range image-based lidar and fusion approaches. An in-depth evaluation underlines the effectiveness of the proposed fusion strategy and the potential of camera features for 3D semantic segmentation.
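The range image representation such architectures build on is typically obtained by spherical projection of the lidar point cloud. A common sketch of that projection (the image size and the 3° to -25° vertical field of view are assumptions for a typical rotating lidar, not values from the paper):

```python
import numpy as np

def spherical_projection(points: np.ndarray, h: int = 64, w: int = 1024,
                         fov_up: float = 3.0, fov_down: float = -25.0) -> np.ndarray:
    """Project an (N, 3) point cloud to an h x w range image of distances."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    # Normalise azimuth and elevation to [0, 1], then scale to pixels.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (fov_up_r - pitch) / (fov_up_r - fov_down_r) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)
    img = np.full((h, w), -1.0)  # -1 marks pixels with no return
    # Write far-to-near so the closest return wins on pixel collisions.
    order = np.argsort(-r)
    img[v[order], u[order]] = r[order]
    return img
```

Once the cloud lives on this regular grid, ordinary 2D convolutions apply, and camera features can be fused pixel-wise after projecting them into the same image plane.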


Author(s):  
P. Debus ◽  
V. Rodehorst

Abstract. The application of image-based methods in inspection and monitoring has increased significantly in recent years. This is especially the case for the inspection of large structures that are not easily accessible to human inspectors. Here, unmanned aircraft systems (UAS) can support the task by generating high-quality images that contain valuable information about the structure's condition. To guarantee high quality and completeness of the acquired data, inspection missions are planned in advance by computing a flight path for the UAS that covers the entire structure at the required quality. Many approaches exist that aim to solve this planning task. Nevertheless, each publication on the matter largely stands on its own, working with its own criteria and offering no comparison to other approaches. It is therefore currently not possible to compare different approaches and select the most suitable one for a specific scenario. To solve this problem, this work proposes an evaluation pipeline that applies well-defined quality criteria to flight paths for close-range image-based inspections. These criteria are limited to fundamental aspects, so that paths created for diverse scenarios with diverse criteria can still be evaluated on common ground. As experiments show, this pipeline allows the comparison of different approaches, objectifying their performance and working towards a common understanding of the current state of the art. Finally, the Bauhaus Path Planning Challenge is presented, inviting submissions to a comparison based on this pipeline to collaborate on an objective ranking, available under https://uni-weimar.de/pathplanning.
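One plausible flavour of such a quality criterion is surface coverage: the fraction of sampled surface points that at least one planned viewpoint sees within distance and viewing-angle limits. A hypothetical sketch, ignoring occlusion and camera intrinsics (the specific thresholds and the criterion itself are assumptions, not the paper's definitions):

```python
import numpy as np

def coverage_ratio(surface_pts: np.ndarray, surface_normals: np.ndarray,
                   viewpoints: np.ndarray, max_dist: float,
                   max_angle_deg: float) -> float:
    """Fraction of surface samples seen by at least one viewpoint.

    surface_pts, surface_normals: (N, 3); normals must be unit length.
    viewpoints: (M, 3) camera positions. A point counts as covered when a
    viewpoint lies within `max_dist` and within `max_angle_deg` of its normal.
    """
    max_cos = np.cos(np.radians(max_angle_deg))
    covered = np.zeros(len(surface_pts), dtype=bool)
    for vp in viewpoints:
        to_vp = vp - surface_pts                      # (N, 3) point-to-camera
        dist = np.linalg.norm(to_vp, axis=1)
        # Cosine of the angle between the surface normal and the view ray.
        cos_ang = np.einsum("ij,ij->i", to_vp, surface_normals) / np.maximum(dist, 1e-9)
        covered |= (dist <= max_dist) & (cos_ang >= max_cos)
    return float(covered.mean())
```

A pipeline along these lines lets flight paths produced by entirely different planners be scored on the same sampled surface, which is the common-ground comparison the work argues for.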


2021 ◽  
Author(s):  
Yuning Chai ◽  
Pei Sun ◽  
Jiquan Ngiam ◽  
Weiyue Wang ◽  
Benjamin Caine ◽  
...  

2021 ◽  
Author(s):  
Chia-Ni Lu ◽  
Ya-Chu Chang ◽  
Wei-Chen Chiu

2021 ◽  
Author(s):  
Zhidong Liang ◽  
Zehan Zhang ◽  
Ming Zhang ◽  
Xian Zhao ◽  
Shiliang Pu
Keyword(s):  
