dense point
Recently Published Documents

TOTAL DOCUMENTS: 323 (FIVE YEARS: 147)
H-INDEX: 16 (FIVE YEARS: 5)

Author(s):  
Kenny Chen ◽  
Brett Lopez ◽  
Ali-akbar Agha-mohammadi ◽  
Ankur Mehta
Keyword(s):  

2022 ◽  
pp. 1-1
Author(s):  
Chaofeng Ren ◽  
Haixing Shang ◽  
Zhengdong Zha ◽  
Fuqiang Zhang ◽  
Yuchi Pu

2021 ◽  
Vol 13 (12) ◽  
pp. 315
Author(s):  
Lev Shilov ◽  
Semen Shanshin ◽  
Aleksandr Romanov ◽  
Anastasia Fedotova ◽  
Anna Kurtukova ◽  
...  

Reconstructed 3D foot models can be used for 3D printing and the subsequent manufacturing of individual orthopedic shoes, as well as in medical research and online shoe shopping. This study presents a technique based on photogrammetric algorithms for reconstructing a 3D model of the foot shape, including the lower arch, from smartphone images. The technique relies on modern computer vision and artificial intelligence algorithms for image processing and for obtaining sparse and dense point clouds, depth maps, and the final 3D model. Foot images were segmented with a Mask R-CNN neural network trained on foot data from 40 people; the obtained accuracy was 97.88%. The result of the study was a high-quality reconstructed 3D model: the standard deviation of the linear indicators in length and width was 0.95 mm, and the average creation time was 1 min 35 s. Integrating this technique into the business models of orthopedic enterprises, online stores, and medical organizations will allow basic manufacturing and shoe-fitting services to be carried out, and will help medical research to be performed, via the Internet.
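As an illustration of how such a segmentation score can be computed (the paper does not state its exact metric, so pixel accuracy and IoU here are assumptions), a minimal sketch for binary foot masks:

```python
import numpy as np

def mask_metrics(pred, gt):
    """Pixel accuracy and intersection-over-union for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    acc = (pred == gt).mean()                          # fraction of matching pixels
    iou = (pred & gt).sum() / max((pred | gt).sum(), 1)  # overlap / union, guarded against empty masks
    return acc, iou
```

A perfect prediction yields (1.0, 1.0); in practice the score would be averaged over a held-out set of annotated foot images.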


2021 ◽  
Author(s):  
Melkamu Demelash ◽  
Binyam Tesfaw ◽  
Degefie Tibebe

Abstract Accurate crop classification from satellite imagery alone remains challenging because the spectral signatures of different crops mix. Employing Unmanned Aerial Vehicle (UAV) imagery together with satellite imagery is believed to improve crop classification at the field level. Accordingly, this study evaluates the potential of blending UAV images with Sentinel 2A satellite images for crop field classification in an Ethiopian agricultural context. The main purpose of the blending is to improve the coarser resolution of the Sentinel 2A data (10 m). The UAV data were preprocessed through camera calibration, photo alignment, and dense point cloud generation based on the estimated camera positions over the scouted crop fields; an orthomosaic UAV image was then generated from the single dense point cloud. The processed UAV data were fused with the medium-resolution Sentinel 2A data using the Gram Schmidt pan sharpening method, an approach that handles large datasets across spatial resolutions well. For crop classification, the Random Forest (RF) machine-learning algorithm and the Maximum Likelihood method were applied. Apart from the UAV and S2A data, field data were collected for training the classifiers; point field data were collected from Teff, Wheat, Faba bean, Barley, and Sorghum crop fields. The results show that the RF classifier achieved 94% overall accuracy, versus 90% for the Maximum Likelihood classifier. This implies that the fused image, together with a suitable classification technique, can be used for crop type classification with a high level of accuracy.
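Gram-Schmidt pan sharpening substitutes a high-resolution pan band for a simulated low-resolution one inside a Gram-Schmidt transform of the multispectral bands. A minimal numpy sketch of the component-substitution idea follows; it is not the study's implementation, and the mean-of-bands simulated pan is an assumption (operational tools derive it from sensor spectral response):

```python
import numpy as np

def gs_pansharpen(ms, pan):
    """Gram-Schmidt component-substitution pan sharpening (simplified sketch).

    ms  -- array (bands, H, W): multispectral bands upsampled to the pan grid
    pan -- array (H, W): high-resolution panchromatic band
    """
    B = ms.shape[0]
    X = ms.reshape(B, -1).astype(float)
    p = pan.ravel().astype(float)

    # First GS component: simulated low-resolution pan (zero-mean band average)
    gs = [X.mean(axis=0) - X.mean()]
    mus, phis = [], []
    for k in range(B):
        mu = X[k].mean()
        v = X[k] - mu
        row = []
        for g in gs:                      # deflate against earlier components
            c = (v @ g) / (g @ g)         # projection coefficient
            row.append(c)
            v = v - c * g
        mus.append(mu)
        phis.append(row)
        gs.append(v)                      # orthogonal residual becomes next component

    # Substitute GS1 with the variance-matched real pan band
    p0 = p - p.mean()
    gs[0] = p0 * (gs[0].std() / p0.std())

    # Inverse GS transform back to band space
    out = np.empty_like(X)
    for k in range(B):
        rec = gs[k + 1] + mus[k]
        for c, g in zip(phis[k], gs[:k + 1]):
            rec = rec + c * g
        out[k] = rec
    return out.reshape(ms.shape)
```

If the real pan band equals the simulated one, the transform round-trips exactly, which is a useful sanity check.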


2021 ◽  
Vol 9 ◽  
Author(s):  
Zhenyu Wu ◽  
Xiangyu Deng ◽  
Shengming Li ◽  
Yingshun Li

Visual Simultaneous Localization and Mapping (SLAM) systems are mainly used for the real-time localization and mapping tasks of robots in various complex environments, while traditional monocular vision algorithms struggle to cope with weak textures and dynamic scenes. To solve these problems, this work presents an object detection and clustering assisted SLAM algorithm (OC-SLAM), which adopts a fast object detection algorithm to add semantic information to the image and applies geometric constraints to the dynamic keypoints inside the prediction boxes to optimize the camera pose. It also uses an RGB-D camera to perform dense point cloud reconstruction with dynamic objects rejected, applying Euclidean clustering to the dense point clouds, combined with the object detection results, to jointly eliminate dynamic features. Experiments on the TUM dataset indicate that OC-SLAM improves the localization accuracy of the SLAM system in dynamic environments compared with the original algorithm, and that it can build a more precise dense point cloud map in dynamic scenes.
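Euclidean clustering groups point cloud points whose mutual distance stays below a tolerance; clusters overlapping a detection box can then be dropped as dynamic. A naive O(n²) numpy sketch follows (production systems such as PCL use a KD-tree for the neighbor search):

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, tol):
    """Label (N, 3) points by connected components of the distance-<=tol graph."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cid = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cid
        queue = deque([i])
        while queue:                      # breadth-first flood fill
            j = queue.popleft()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero((d <= tol) & (labels == -1))[0]:
                labels[k] = cid
                queue.append(k)
        cid += 1
    return labels
```

Two points end up in the same cluster exactly when a chain of tolerance-length hops connects them.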


2021 ◽  
Vol 13 (23) ◽  
pp. 4811
Author(s):  
Rudolf Urban ◽  
Martin Štroner ◽  
Lenka Línková

Lately, affordable unmanned aerial vehicle (UAV)-lidar systems have started to appear on the market, highlighting the need for methods facilitating proper verification of their accuracy. However, the dense point cloud produced by such systems makes the identification of individual points that could be used as reference points difficult. In this paper, we propose such a method utilizing accurately georeferenced targets covered with high-reflectivity foil, which can be easily extracted from the cloud; their centers can be determined and used for the calculation of the systematic shift of the lidar point cloud. Subsequently, the lidar point cloud is cleaned of such systematic shift and compared with a dense SfM point cloud, thus yielding the residual accuracy. We successfully applied this method to the evaluation of an affordable DJI ZENMUSE L1 scanner mounted on the UAV DJI Matrice 300 and found that the accuracies of this system (3.5 cm in all directions after removal of the global georeferencing error) are better than manufacturer-declared values (10/5 cm horizontal/vertical). However, evaluation of the color information revealed a relatively high (approx. 0.2 m) systematic shift.
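The target-based shift estimation described above can be sketched as follows; the intensity threshold, the search radius, and the use of the mean of nearby bright returns as a target center are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def estimate_shift(cloud, intensity, refs, thr=0.8, radius=0.5):
    """Estimate the systematic shift of a lidar cloud from reflective targets.

    cloud     -- (N, 3) lidar points
    intensity -- (N,) normalized reflectivity per point
    refs      -- (M, 3) surveyed reference centers of the foil targets
    """
    bright = cloud[intensity >= thr]                 # keep high-reflectivity returns
    shifts = []
    for ref in refs:
        near = bright[np.linalg.norm(bright - ref, axis=1) <= radius]
        if len(near):                                # a target was found near this reference
            shifts.append(near.mean(axis=0) - ref)   # target center minus surveyed center
    return np.mean(shifts, axis=0)                   # mean offset = systematic shift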


2021 ◽  
Vol 13 (22) ◽  
pp. 4569
Author(s):  
Liyang Zhou ◽  
Zhuang Zhang ◽  
Hanqing Jiang ◽  
Han Sun ◽  
Hujun Bao ◽  
...  

This paper presents DP-MVS, an accurate and robust dense 3D reconstruction system for detail-preserving surface modeling of large-scale scenes from multi-view images. Our system performs high-quality large-scale dense reconstruction that preserves geometric details of thin structures, especially linear objects. The framework begins with a sparse reconstruction carried out by incremental Structure-from-Motion. Based on the reconstructed sparse map, a novel detail-preserving PatchMatch approach is applied for depth estimation in each image view. The estimated depth maps of multiple views are then fused into a dense point cloud in a memory-efficient way, followed by a detail-aware surface meshing method that extracts the final surface mesh of the captured scene. Experiments on the ETH3D benchmark show that the proposed method outperforms other state-of-the-art methods on F1-score while running more than four times faster. Further experiments on large-scale photo collections demonstrate the effectiveness of the proposed framework for large-scale scene reconstruction in terms of accuracy, completeness, memory saving, and time efficiency.
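The depth-map fusion step can be illustrated with a minimal back-projection sketch; pinhole intrinsics K and camera-to-world poses (R, t) are assumed, and the paper's memory-efficient fusion and consistency filtering are omitted:

```python
import numpy as np

def backproject(depth, K):
    """Back-project an (H, W) depth map into camera-frame 3D points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - K[0, 2]) * depth / K[0, 0]   # pinhole model: x = (u - cx) * z / fx
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def fuse(depths, poses, K):
    """Fuse per-view depth maps into one world-frame point cloud."""
    clouds = []
    for depth, (R, t) in zip(depths, poses):
        pts = backproject(depth, K)
        pts = pts[pts[:, 2] > 0]          # keep valid (positive) depths only
        clouds.append(pts @ R.T + t)      # transform camera frame -> world frame
    return np.concatenate(clouds)
```

A real multi-view stereo system would additionally reject points whose depth disagrees across views before meshing.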


2021 ◽  
Vol 11 (22) ◽  
pp. 10531
Author(s):  
Chenrui Wu ◽  
Long Chen ◽  
Shiqing Wu

6D pose estimation of objects is essential for intelligent manufacturing. Current methods mainly emphasize single-object pose estimation, which limits their use in real-world applications. In this paper, we propose a multi-instance framework for 6D pose estimation of textureless objects in an industrial environment, using a two-stage pipeline. In the detection stage, EfficientDet detects target instances in the image. In the pose estimation stage, the cropped images are first interpolated to a fixed size and then fed into a pseudo-siamese graph matching network to calculate dense point correspondences. A modified circle loss is defined to measure the differences between positive and negative correspondences. Experiments on the antenna support demonstrate the effectiveness and advantages of the proposed method.
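The abstract does not specify its modification, but the standard circle loss it builds on (Sun et al.) weights positive and negative similarity scores adaptively; a numpy sketch with assumed margin m and scale gamma:

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=64.0):
    """Standard circle loss over similarity scores.

    sp -- similarities of positive correspondences (pushed toward 1)
    sn -- similarities of negative correspondences (pushed toward 0)
    """
    ap = np.clip(1.0 + m - sp, 0.0, None)        # adaptive positive weighting
    an = np.clip(sn + m, 0.0, None)              # adaptive negative weighting
    dp, dn = 1.0 - m, m                          # decision margins
    lp = np.exp(-gamma * ap * (sp - dp)).sum()
    ln = np.exp(gamma * an * (sn - dn)).sum()
    return np.log1p(lp * ln)
```

Well-separated correspondences (high positive, low negative similarity) drive the loss toward zero, while inverted scores inflate it sharply.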


2021 ◽  
Author(s):  
Jinhan Hu ◽  
Aashiq Shaikh ◽  
Alireza Bahremand ◽  
Robert LiKamWa

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6888
Author(s):  
Lei Pang ◽  
Yanfeng Gai ◽  
Tian Zhang

Synthetic aperture radar (SAR) tomography (TomoSAR) can obtain 3D imaging models of observed urban areas and can also discriminate different scatterers within an azimuth–range pixel unit. Recently, compressive sensing (CS) has been applied to TomoSAR imaging using the very-high-resolution (VHR) SAR images delivered by modern SAR systems, such as TerraSAR-X and TanDEM-X. Compared with the traditional Fourier transform and spectrum estimation methods, using sparse information for TomoSAR imaging can achieve super-resolution and robustness and is only slightly affected by sidelobes. However, due to the tight control of SAR satellite orbits, the number of acquisitions is usually too low to form a synthetic aperture in the elevation direction, and the baseline distribution of the acquisitions is also uneven. In addition, artificial outliers may easily arise in later TomoSAR processing, leading to poor mapping products. Focusing on these problems, and drawing on the work of various experts and scholars, this paper briefly reviews the state of research on sparse TomoSAR imaging. Then, a joint sparse imaging algorithm based on building points of interest (POIs) and maximum likelihood estimation is proposed to reduce the number of acquisitions required and reject scatterer outliers. We applied the proposed workflow to TerraSAR-X datasets acquired in staring spotlight (ST) mode. The experiments on simulated data and TerraSAR-X data stacks not only indicate the effectiveness of the proposed approach, but also demonstrate the great potential of producing a high-precision dense point cloud from staring spotlight (ST) data.
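The paper's own estimator combines building POIs with maximum likelihood estimation; as a generic illustration of the sparse-recovery idea underlying CS TomoSAR (few acquisitions, few scatterers per pixel along elevation), here is a standard Orthogonal Matching Pursuit sketch on a toy dictionary, not the authors' algorithm:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse x with y ≈ A @ x."""
    r, support = y.astype(float), []
    for _ in range(k):
        # pick the dictionary atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ r))))
        # least-squares refit over all selected atoms
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In a TomoSAR setting, A would be the elevation steering matrix built from the baselines, y the pixel's stack of complex measurements, and the recovered support the scatterer elevations; the toy example below uses a real orthonormal dictionary instead.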

