depth maps
Recently Published Documents

Total documents: 567 (five years: 131)
H-index: 34 (five years: 5)
2021, pp. 3942-3951
Author(s): Ali K. Jaheed, Hussein H. Karim

The structure of the Amarah oil field was studied and interpreted using 2-D seismic data obtained from the Oil Exploration Company. The study concerns the Maysan Group (Kirkuk Group), which is located in southeastern Iraq and is of Tertiary age. Two reflectors (the top and bottom of the Maysan Group) were detected based on synthetic seismograms and well logs. Structural maps were derived from the seismic reflection interpretation to obtain the location and direction of the sedimentary basin. Two-way time and depth maps were constructed from the structural interpretation of the picked reflectors to reveal several structural features: three closures, namely two anticlines extending S-SW and NE and one nose structure (anticline) in the middle of the study area, as well as structural faults in the northeastern part of the area, consistent with the general fault pattern. The seismic interpretation also showed the presence of stratigraphic features. A stratigraphic trap at the eastern part of the field, along with other phenomena such as a flat spot (mound), lenses, onlap, and toplap, was detected as an indication of potential hydrocarbon accumulation in the region.
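The two-way-time-to-depth conversion behind such maps rests on a simple velocity relation. A minimal sketch, assuming a single average velocity (the paper's actual velocity model is not given):

```python
def twt_to_depth(twt_s, v_avg_mps):
    """Depth (m) from two-way travel time (s): the wave travels down and
    back, so depth = v_avg * t / 2."""
    return v_avg_mps * twt_s / 2.0
```

In practice velocity varies with depth and laterally, so interval or RMS velocities derived from well logs would replace the single constant.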


Entropy, 2021, Vol 23 (12), pp. 1561
Author(s): Sheng Zeng, Guohua Geng, Mingquan Zhou

Automatically selecting a set of representative views of a 3D virtual cultural relic is crucial for constructing wisdom museums. There is no consensus regarding the definition of a good view in computer graphics; the same is true of multiple views. View-based methods play an important role in the field of 3D shape retrieval and classification. However, it is still difficult to select views that not only conform to subjective human preferences but also provide a good feature description. In this study, we define two novel measures based on information entropy, named depth variation entropy and depth distribution entropy. These measures quantify the amount of information about the depth swings and the different depth quantities of each view. Firstly, a canonical pose of the 3D cultural relic was generated using principal component analysis. A set of depth maps was then captured by orthographic cameras placed at the dense vertices of a geodesic unit sphere obtained by subdividing a regular unit octahedron. Afterwards, the two measures were calculated separately on the depth maps obtained from the vertices, with the results on each one-eighth sphere forming a group. The views with maximum depth variation entropy and depth distribution entropy were selected, followed by further scattered viewpoints. Finally, the threshold word histogram derived from the vector quantization of salient local descriptors on the selected depth maps represented the 3D cultural relic. The viewpoints obtained by the proposed method coincide with an arbitrary pose of the 3D model, which eliminates the manual adjustment of the model's pose and provides acceptable display views. In addition, it was verified on several datasets that the proposed method, which uses the Bag-of-Words mechanism and a deep convolutional neural network, also performs well in retrieval and classification when dealing with only four views.
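Both measures are entropies computed over a view's depth map. As an illustrative sketch (the paper's exact histogramming and normalization are assumptions here), depth distribution entropy can be read as the Shannon entropy of the depth-value histogram, and depth variation entropy as the entropy of local depth changes:

```python
import numpy as np

def depth_distribution_entropy(depth, bins=32):
    """Shannon entropy of the depth-value histogram: how many distinct
    depth levels the view exposes (illustrative definition)."""
    hist, _ = np.histogram(depth, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def depth_variation_entropy(depth, bins=32):
    """Entropy of local depth changes (gradient magnitudes): how strongly
    depth swings across the view (illustrative definition)."""
    gy, gx = np.gradient(depth.astype(float))
    hist, _ = np.histogram(np.hypot(gx, gy), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A flat depth map scores zero on both measures; views exposing many depth levels and strong depth swings score higher, matching the selection criterion described above.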


2021, Vol 13 (22), pp. 4569
Author(s): Liyang Zhou, Zhuang Zhang, Hanqing Jiang, Han Sun, Hujun Bao, ...

This paper presents an accurate and robust dense 3D reconstruction system, named DP-MVS, for detail-preserving surface modeling of large-scale scenes from multi-view images. Our system performs high-quality large-scale dense reconstruction that preserves geometric details for thin structures, especially linear objects. The framework begins with a sparse reconstruction carried out by incremental Structure-from-Motion. Based on the reconstructed sparse map, a novel detail-preserving PatchMatch approach is applied for depth estimation of each image view. The estimated depth maps of multiple views are then fused into a dense point cloud in a memory-efficient way, followed by a detail-aware surface meshing method that extracts the final surface mesh of the captured scene. Experiments on the ETH3D benchmark show that the proposed method outperforms other state-of-the-art methods in F1-score while running more than four times faster. Further experiments on large-scale photo collections demonstrate the effectiveness of the proposed framework for large-scale scene reconstruction in terms of accuracy, completeness, memory saving, and time efficiency.
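The depth-map fusion step starts from the standard back-projection of each per-view depth map through the camera model. A minimal sketch, assuming pinhole intrinsics K and a camera-to-world pose as inputs (DP-MVS's memory-efficient fusion and detail-aware meshing are not reproduced here):

```python
import numpy as np

def depth_to_points(depth, K, cam_to_world):
    """Back-project a depth map to 3D world points.
    depth: HxW depths; K: 3x3 pinhole intrinsics; cam_to_world: 4x4 pose."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix          # rays in the camera frame
    pts_cam = rays * depth.reshape(1, -1)  # scale each ray by its depth
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    return (cam_to_world @ pts_h)[:3].T    # Nx3 world points
```

Concatenating the point sets from all views yields the raw dense cloud, which a fusion stage would then filter for multi-view consistency.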


2021, Vol 2095 (1), pp. 012074
Author(s): Qiang He, Jiawei Yu

Abstract Recently, unmanned mobile robots have found broad application in areas such as industrial and security inspection, disinfection and epidemic prevention, warehousing logistics, and agricultural picking. In order to drive autonomously from departure to destination, an unmanned mobile robot mounts different sensors to collect information around it and to understand its surrounding environment from these perceptions. Here we propose a method to generate a high-resolution depth map from a given sparse LiDAR point cloud. Our method fits the point cloud to a 3D curved surface, projects the LiDAR data onto that surface, makes appropriate interpolations on it, and finally applies the Delaunay triangulation algorithm to all the data points on the surface. The experimental results show that our approach can effectively improve the resolution of depth maps obtained from sparse LiDAR measurements.
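Delaunay-based interpolation of sparse depth samples can be sketched with SciPy, whose `griddata` triangulates the sample points internally; this is an illustrative stand-in for the densification idea, not the authors' surface-fitting pipeline:

```python
import numpy as np
from scipy.interpolate import griddata

def densify_depth(sparse_uv, sparse_depth, height, width):
    """Interpolate sparse depth samples (pixel coords, depths) to a dense
    map; griddata performs a Delaunay triangulation of the samples and
    linear (barycentric) interpolation inside each triangle."""
    gu, gv = np.meshgrid(np.arange(width), np.arange(height))
    dense = griddata(sparse_uv, sparse_depth, (gu, gv), method="linear")
    # Pixels outside the samples' convex hull come back as NaN; fall back
    # to nearest-neighbour values there.
    nearest = griddata(sparse_uv, sparse_depth, (gu, gv), method="nearest")
    return np.where(np.isnan(dense), nearest, dense)
```

For real LiDAR sweeps the sample points would be the projections of the point cloud into the image plane.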


2021
Author(s): Kuan-Ting Lee, En-Rwei Liu, Jar-Ferr Yang, Li Hong

Abstract With the rapid development of 3D coding and display technologies, numerous applications are emerging that target immersive human entertainment. To achieve a prime 3D visual experience, high-accuracy depth maps play a crucial role. However, depth maps retrieved from most devices still suffer from inaccuracies at object boundaries, so a depth enhancement system is usually needed to correct these errors. Recent work applying deep learning to depth enhancement has shown promising improvements. In this paper, we propose a deep depth enhancement network that effectively corrects inaccurate depth using color images as a guide. The proposed network contains both depth and image branches, where a new set of features from the image branch is combined with those from the depth branch. Experimental results show that the proposed system achieves better depth correction performance than state-of-the-art networks. The ablation study reveals that the proposed loss functions, which exploit image information, effectively enhance depth map accuracy.
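A classical color-guided baseline for this task, which learned approaches improve upon, is the joint bilateral filter: it smooths depth while preserving edges wherever the guide (color) image has edges. A sketch of that baseline (not the proposed network):

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Each output pixel is a weighted average of its depth neighbourhood;
    weights combine spatial closeness and guide-intensity similarity, so
    averaging stops at guide edges."""
    h, w = depth.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    pad_d = np.pad(depth.astype(float), radius, mode="edge")
    pad_g = np.pad(guide.astype(float), radius, mode="edge")
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-(g_win - pad_g[y + radius, x + radius]) ** 2
                             / (2.0 * sigma_r ** 2))
            wgt = spatial * range_w
            out[y, x] = (wgt * d_win).sum() / wgt.sum()
    return out
```

The learned two-branch network plays an analogous role, with the image branch supplying the edge guidance that the range weights provide here.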


2021, Vol 11 (20), pp. 9585
Author(s): Honglin Lei, Yanqi Pan, Tao Yu, Zuoming Fu, Chongan Zhang, ...

Retrograde intrarenal surgery (RIRS) is a minimally invasive endoscopic procedure for the treatment of kidney stones. Traditionally, RIRS is performed by reconstructing a 3D model of the kidney from preoperative CT images in order to locate the kidney stones; the surgeon then finds and removes the stones in the endoscopic video based on experience. However, because of the many branches within the kidney, it can be difficult to relocate each lesion and to ensure that all branches are searched, which may result in the misdiagnosis of some kidney stones. To avoid this, we propose a convolutional neural network (CNN)-based method for matching preoperative CT images and intraoperative videos for the navigation of ureteroscopic procedures. First, pairs of synthetic images and depth maps reflecting preoperative information are obtained from a 3D model of the kidney. Then, a style transfer network is introduced to transfer the ureteroscopic images to the style of the synthetic images, from which the associated depth maps can be generated. Finally, the fusion and matching of depth maps from preoperative images and intraoperative video frames are realized based on semantic features. Compared with the traditional CT-video matching method, our method achieves a fivefold improvement in time performance and a 26% improvement in top-10 accuracy.
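The final step retrieves, for a depth map generated from video, the most similar preoperative candidate. The paper matches semantic CNN features; as a toy stand-in, plain normalized cross-correlation over depth maps illustrates the retrieval mechanics:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size depth maps."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def best_match(query_depth, candidate_depths):
    """Index of the candidate most similar to the query, plus all scores."""
    scores = [ncc(query_depth, c) for c in candidate_depths]
    return int(np.argmax(scores)), scores
```

Replacing the raw depth maps with CNN feature vectors (and NCC with cosine similarity) recovers the semantic-feature matching described above.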


2021, Vol 7 (2), pp. 335-338
Author(s): Sina Walluscheck, Thomas Wittenberg, Volker Bruns, Thomas Eixelberger, Ralf Hackner

Abstract For the image-based documentation of a colonoscopy procedure, a 3D reconstruction of the hollow colon structure from endoscopic video streams is desirable. To obtain this reconstruction, 3D information about the colon has to be extracted from monocular colonoscopy image sequences. This information can be provided by estimating depth through shape-from-motion approaches, using the image information from two successive frames and exact knowledge of their disparity. However, during a standard colonoscopy, the spatial offset between successive frames changes continuously. Thus, in this work, deep convolutional neural networks (DCNNs) are applied to obtain piecewise depth maps and point clouds of the colon, which can then be fused into a partial 3D reconstruction.
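Depth estimation from two offset frames rests on the classic two-view relation depth = f · B / d, which makes clear why a continuously changing offset (baseline B) complicates direct estimation. A minimal sketch:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Classic two-view relation: depth = f * B / d, where f is the focal
    length in pixels, B the spatial offset between the two viewpoints,
    and d the observed pixel disparity."""
    return focal_px * baseline_m / disparity_px
```

When B is unknown and varying, as between successive colonoscopy frames, this relation cannot be applied directly, motivating the learned depth estimation used in the paper.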

