outdoor scenes
Recently Published Documents


TOTAL DOCUMENTS: 237 (FIVE YEARS: 63)

H-INDEX: 21 (FIVE YEARS: 4)

Author(s):  
Hsin-Lin Ho ◽  
Jun-Da Chen ◽  
Ching-An Yang ◽  
Chia-Chi Liu ◽  
Cheng-Ting Lee ◽  
...  

Abstract: We characterize a new chaos lidar system configuration and demonstrate its capability for high-speed 3D imaging. Compared with a homodyne scheme employing single-element avalanche photodetectors (APDs), the proposed scheme utilizes a fiber Bragg grating and quadrant APDs to substantially increase the system throughput, frame rate, and field of view (FOV). By quantitatively analyzing the signal-to-noise ratio, the peak-to-standard-deviation of the sidelobe level, the precision, and the detection probability, we show that the proposed scheme has better detection performance suitable for practical applications. To show the feasibility of the chaos lidar system under the constraint of eye-safe regulations, we demonstrate, for the first time, high-speed 3D imaging of indoor and outdoor scenes at a throughput of 100 kHz, a frame rate of 10 Hz, and a FOV of 24.5° × 11.5°.
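Although the paper's processing chain is not given here, chaos lidar ranging is commonly described as cross-correlating the received echo with the transmitted chaotic reference waveform and judging detections by metrics such as the SNR and the peak-to-standard-deviation of the sidelobe level (PSL). The sketch below illustrates that idea only; the sample rate, delay, attenuation, and noise levels are invented stand-ins, not the system parameters reported above.

```python
import numpy as np

# Minimal illustration of correlation-based chaos lidar ranging (not the authors' code).
rng = np.random.default_rng(0)
fs = 10e9                                  # assumed sample rate (10 GS/s)
n = 4096
reference = rng.standard_normal(n)         # stand-in for the transmitted chaotic waveform

true_delay = 250                           # round-trip delay in samples (assumed)
echo = np.zeros(n)
echo[true_delay:] = 0.05 * reference[:n - true_delay]   # weak, delayed return
echo += 0.02 * rng.standard_normal(n)                   # detector and background noise

# Cross-correlate the echo against the reference and keep non-negative lags.
corr = np.correlate(echo, reference, mode="full")[n - 1:]
peak_idx = int(np.argmax(np.abs(corr)))
peak = np.abs(corr[peak_idx])

# Peak-to-standard-deviation of the sidelobe level (PSL): exclude samples around the peak.
sidelobes = np.ones_like(corr, dtype=bool)
sidelobes[max(0, peak_idx - 5):peak_idx + 6] = False
psl = peak / corr[sidelobes].std()

range_m = 3e8 * peak_idx / (2 * fs)        # range = c * delay / 2
print(f"estimated delay: {peak_idx} samples, range ~ {range_m:.2f} m, PSL ~ {psl:.1f}")
```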


Author(s):  
Uoc Quang Ngo ◽  
Duong Tri Ngo ◽  
Hoc Thai Nguyen ◽  
Thanh Dang Bui

Emerging technologies in agriculture, such as computer vision and artificial intelligence, not only make it possible to increase production but also help to minimize the negative impact on climate and the environment and to conserve resources. A key task for these technologies is to monitor plant growth online, accurately and non-destructively. Leaf area (LA) is one of the most important growth indexes in plant growth monitoring systems. Unfortunately, estimating LA accurately in natural outdoor scenes, where leaves occlude and overlap one another, is not easy and remains a major challenge in eco-physiological studies. In this paper, two accurate and non-destructive approaches for estimating LA are proposed, using top-view and side-view images, respectively. The proposed approaches extract the skeleton of cucumber plants from red, green, and blue (RGB) images and estimate their LA with high precision. The results were validated against manual measurements. The proposed algorithms achieve 97.64% accuracy in leaf segmentation, and the relative error in LA estimation ranges from 3.76% to 13.00%, which meets the requirements of plant growth monitoring systems.
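The abstract does not spell out how leaves are separated from the background before skeleton extraction; a common baseline for vegetation segmentation in RGB images is excess-green (ExG) thresholding. The sketch below uses that baseline purely for illustration, so the index, the threshold, and the pixel-to-area scale are assumptions rather than the authors' method.

```python
import numpy as np

def segment_leaves_exg(rgb: np.ndarray, thresh: float = 0.05) -> np.ndarray:
    """Binary leaf mask from an RGB image using the excess-green (ExG) index.

    rgb: H x W x 3 array with values in [0, 255]. Returns a boolean H x W mask.
    The ExG index and the fixed threshold are illustrative choices, not the
    segmentation method described in the paper.
    """
    rgb = rgb.astype(np.float64) / 255.0
    total = rgb.sum(axis=2) + 1e-8                       # avoid division by zero on dark pixels
    r, g, b = (rgb[..., i] / total for i in range(3))    # chromatic coordinates
    exg = 2.0 * g - r - b                                # vegetation appears strongly green
    return exg > thresh

def leaf_area_from_mask(mask: np.ndarray, mm2_per_pixel: float) -> float:
    """Projected leaf area in mm^2 given the ground area covered by one pixel (assumed known)."""
    return float(mask.sum()) * mm2_per_pixel
```

Note that projected area from a single view under-estimates true leaf area when leaves overlap, which is exactly the occlusion problem the paper targets with its two-view approach.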


2021 ◽  
Vol 2112 (1) ◽  
pp. 012017
Author(s):  
Chutian Gao ◽  
Ming Guo ◽  
Zexin Fu ◽  
Dengke Li ◽  
Xian Ren ◽  
...  

Abstract: Obtaining architectural engineering drawings is a crucial aspect of upgrading and repairing structures. Traditional elevation measurement is inefficient and results in a poor rate of restoration. Current building elevation measurement solutions based on 3D scanning technology obtain the building's 3D point cloud data from a single type of laser scanner and therefore cannot capture indoor and outdoor scenes at the same time. This paper presents a scanning strategy that combines SLAM with ground-based LiDAR to solve this problem. The point cloud data for the building's indoor and outdoor scenes are obtained independently, and the ground-based LiDAR point clouds are registered locally using the iterative closest point (ICP) algorithm. The SLAM point clouds and the ground-based LiDAR point clouds are then registered as a whole, using point-constrained error equations, to form an overall model of the building. Depending on the application, the building can be trimmed into a planar point cloud model, from which engineering drawings of the building can finally be produced. The method's viability was demonstrated in a 3D scanning project at a scenic site in Beijing. This approach improves model interpretability and scanning efficiency and provides powerful data support for building rehabilitation and repair; it is highly relevant to urban planning, rehabilitation, and historic preservation.
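As one concrete reference point for the local registration step, the sketch below runs point-to-point ICP on two station scans with the Open3D library. The file names, correspondence distance, and identity initialization are placeholders; the paper's actual pipeline (including the point-constrained whole-model adjustment) is not reproduced here.

```python
import numpy as np
import open3d as o3d

# Illustrative pairwise ICP registration with Open3D; parameters are assumptions.
source = o3d.io.read_point_cloud("station_scan_a.pcd")   # hypothetical ground-based LiDAR station
target = o3d.io.read_point_cloud("station_scan_b.pcd")

threshold = 0.05          # max correspondence distance in metres (assumed)
init = np.eye(4)          # a coarse alignment would normally come from targets or feature matching

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
source.transform(result.transformation)   # apply the estimated rigid transform to the source scan
```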


2021 ◽  
Vol 13 (21) ◽  
pp. 4357
Author(s):  
Yu Hou ◽  
Meida Chen ◽  
Rebekka Volk ◽  
Lucio Soibelman

As-is building modeling plays an important role in energy audits and retrofits. However, to understand the source(s) of energy loss, researchers need semantic information about the buildings and the surrounding outdoor scenes. Thermal information can potentially be used to distinguish objects that have similar surface colors but are composed of different materials. To utilize both the red–green–blue (RGB) color model and thermal information for the semantic segmentation of buildings and outdoor scenes, we deployed and adapted several pioneering deep convolutional neural network (DCNN) tools that combine RGB and thermal information to improve semantic and instance segmentation. When both types of information are available, the resulting DCNN models achieve better segmentation performance. In three case studies, we experimented with the proposed DCNN framework on datasets of building components and outdoor scenes and tested whether segmentation performance improved. We observed that fusing RGB and thermal information can help segmentation in specific cases, but it can also make the networks harder to train or degrade their prediction performance in others. Additionally, different algorithms perform differently in semantic and instance segmentation.
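The abstract does not fix a particular fusion architecture; one simple scheme is early fusion, in which the thermal channel is stacked onto the RGB channels and the first convolution of a standard segmentation backbone is widened to four input channels. The sketch below shows that scheme with torchvision's DeepLabV3; the class count and input sizes are arbitrary, and this is an illustration of the general idea, not the authors' framework.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Early fusion sketch: 4-channel (RGB + thermal) input to a standard segmentation backbone.
model = deeplabv3_resnet50(num_classes=5)        # 5 classes is an assumed example
old_conv = model.backbone.conv1
new_conv = nn.Conv2d(4, old_conv.out_channels,
                     kernel_size=old_conv.kernel_size,
                     stride=old_conv.stride,
                     padding=old_conv.padding,
                     bias=False)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight                              # reuse the RGB filters
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)    # initialize the thermal channel
model.backbone.conv1 = new_conv

rgb = torch.rand(2, 3, 256, 256)      # batch of RGB images
thermal = torch.rand(2, 1, 256, 256)  # co-registered thermal images
logits = model(torch.cat([rgb, thermal], dim=1))["out"]   # (2, 5, 256, 256) per-pixel class scores
```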


2021 ◽  
Vol 60 (10) ◽  
Author(s):  
Meredith Kupinski ◽  
Christine Bradley ◽  
David Diner ◽  
Feng Xu ◽  
Russell Chipman

2021 ◽  
Vol 12 (7) ◽  
pp. 373-384
Author(s):  
D. D. Rukhovich ◽  

In this article, we introduce the task of multi-view RGB-based 3D object detection as an end-to-end optimization problem. In a multi-view formulation of the 3D object detection problem, several images of a static scene are used to detect objects in the scene. To address the 3D object detection problem in a multi-view formulation, we propose a novel 3D object detection method named ImVoxelNet. ImVoxelNet is based on a fully convolutional neural network. Unlike existing 3D object detection methods, ImVoxelNet works directly with 3D representations and does not mediate 3D object detection through 2D object detection. The proposed method accepts multi-view inputs. The number of monocular images in each multi-view input can vary during training and inference; in fact, this number may differ for each multi-view input. Moreover, we propose to treat a single RGB image as a special case of a multi-view input. Accordingly, the proposed method can also accept monocular inputs with no modifications. Through extensive evaluation, we demonstrate that the proposed method successfully handles a variety of outdoor scenes. Specifically, it achieves state-of-the-art results in car detection on the KITTI (monocular) and nuScenes (multi-view) benchmarks among all methods that accept RGB images. The proposed method operates in real time, which makes it possible to integrate it into the navigation systems of autonomous devices. The results of this study can be used to address tasks of navigation, path planning, and semantic scene mapping.
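ImVoxelNet operates on a 3D voxel representation built from image features rather than on 2D detections. A much simplified way to picture that is to project each voxel centre into the image with the camera intrinsics and extrinsics and gather the feature at that pixel, as in the sketch below; the function, its arguments, and the nearest-neighbour sampling are illustrative assumptions, not the published implementation.

```python
import torch

def backproject_features(feat, K, R, t, grid):
    """Gather 2D image features for each 3D voxel centre (simplified sketch).

    feat: (C, H, W) image feature map; K: (3, 3) intrinsics scaled to the feature map;
    R, t: world-to-camera rotation (3, 3) and translation (3,); grid: (N, 3) voxel centres.
    Returns (N, C) features, zeroed for voxels that project outside the image.
    """
    C, H, W = feat.shape
    cam = R @ grid.T + t[:, None]                  # (3, N) points in camera coordinates
    uvz = K @ cam                                  # pinhole projection
    u, v, z = uvz[0] / uvz[2], uvz[1] / uvz[2], uvz[2]
    ui, vi = u.round().long(), v.round().long()
    valid = (z > 0) & (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    out = torch.zeros(grid.shape[0], C)
    out[valid] = feat[:, vi[valid], ui[valid]].T   # nearest-neighbour feature sampling
    return out
```

In the multi-view case, features gathered this way would typically be aggregated (for example, averaged) over all views in which a voxel is visible before the 3D detection head is applied.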


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xintong Liu ◽  
Jianyu Wang ◽  
Zhupeng Li ◽  
Zuoqiang Shi ◽  
Xing Fu ◽  
...  

Abstract: Non-line-of-sight imaging aims to recover obscured objects from multiply scattered light. It has recently received widespread attention due to its potential applications, such as autonomous driving, rescue operations, and remote sensing. However, in cases with high measurement noise, obtaining high-quality reconstructions remains a challenging task. In this work, we establish a unified regularization framework that can be tailored for different scenarios, including indoor and outdoor scenes with substantial background noise under both confocal and non-confocal settings. The proposed regularization framework incorporates the sparseness and non-local self-similarity of the hidden objects as well as the smoothness of the signals. We show that the estimated signals, albedo, and surface normal of the hidden objects can be reconstructed robustly even with high measurement noise under the proposed framework. Reconstruction results on synthetic and experimental data show that our approach recovers the hidden objects faithfully and outperforms state-of-the-art reconstruction algorithms in terms of both quantitative criteria and visual quality.
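The abstract does not state the exact objective, but a regularization framework of this kind can be pictured, purely as an illustrative assumption, as a least-squares inversion of the light-transport operator $$A$$ with penalties for sparseness, non-local self-similarity, and signal smoothness:

$$\min_{\rho}\ \tfrac{1}{2}\left\|A\rho-\tau\right\|_2^2+\lambda_1\left\|\rho\right\|_1+\lambda_2\,\Phi_{\mathrm{NL}}(\rho)+\lambda_3\left\|\nabla(A\rho)\right\|_2^2,$$

where $$\tau$$ is the measured transient data, $$\rho$$ the hidden albedo volume, $$\Phi_{\mathrm{NL}}$$ a non-local self-similarity penalty, and the $$\lambda_i$$ are weights; the actual terms and weights used in the paper may differ.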


2021 ◽  
Author(s):  
Zhuohan Jiang ◽  
D. Merika W. Sanders ◽  
Rosemary Cowell

We collected visual and semantic similarity norms for a set of photographic images comprising 120 recognizable objects/animals and 120 indoor/outdoor scenes. Human observers rated the similarity of pairs of images within four categories of stimulus ‒ inanimate objects, animals, indoor scenes and outdoor scenes ‒ via Amazon's Mechanical Turk. We performed multi-dimensional scaling (MDS) on the collected similarity ratings to visualize the perceived similarity for each image category, for both visual and semantic ratings. The MDS solutions revealed the expected similarity relationships between images within each category, along with intuitively sensible differences between visual and semantic similarity relationships for each category. Stress tests performed on the MDS solutions indicated that the MDS analyses captured meaningful levels of variance in the similarity data. These stimuli, associated norms and naming data are made publicly available, and should provide a useful resource for researchers of vision, memory and conceptual knowledge wishing to run experiments using well-parameterized stimulus sets.
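For readers who want to run a comparable analysis on their own ratings, the sketch below applies metric multi-dimensional scaling to a precomputed dissimilarity matrix with scikit-learn; the random ratings, the 1–7 scale, and the similarity-to-dissimilarity conversion are stand-ins, not the published data or analysis code.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_items = 120                                      # e.g. the 120 scene images
sim = rng.uniform(1, 7, size=(n_items, n_items))   # stand-in for mean pairwise ratings (1-7 scale)
sim = (sim + sim.T) / 2                            # pairwise ratings are symmetric
np.fill_diagonal(sim, 7)                           # an item is maximally similar to itself

dissim = sim.max() - sim                           # convert similarity to dissimilarity
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)                 # (n_items, 2) configuration to plot and inspect
print("stress:", mds.stress_)                      # lower stress = better fit of the 2D solution
```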


2021 ◽  
Author(s):  
Donik Vrsnak ◽  
Ilija Domislovic ◽  
Marko Subasic ◽  
Sven Loncaric
