local image features
Recently Published Documents

TOTAL DOCUMENTS: 68 (Five Years: 10)
H-INDEX: 12 (Five Years: 1)

Astrodynamics, 2020, Vol 4 (2), pp. 149-161
Author(s): Yoshiyuki Anzai, Takehisa Yairi, Naoya Takeishi, Yuichi Tsuda, Naoko Ogawa

Geophysics, 2020, Vol 85 (4), pp. WA87-WA100
Author(s): Zhicheng Geng, Xinming Wu, Yunzhi Shi, Sergey Fomel

Constructing a relative geologic time (RGT) image from a seismic image is crucial for seismic structural and stratigraphic interpretation. In conventional methods, automatic RGT estimation from a seismic image is typically based only on local image features, which makes it challenging to cope with discontinuous structures (e.g., faults and unconformities). We consider the estimation of 2D RGT images as a regression problem, in which a deep convolutional neural network (CNN) directly and automatically computes an RGT image from a 2D seismic image. This CNN consists of three parts: an encoder, a decoder, and a refinement module. We train the CNN on 2080 pairs of synthetic input seismic images and target RGT images, and then test it on 960 seismic images. Although trained with only synthetic images, the network generates accurate results on real seismic images. Multiple field examples show that our CNN-based method is significantly superior to conventional methods, especially in dealing with complex structures such as crossing faults and intricately folded horizons, without the need for any manual picking.
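The abstract describes the architecture only at a high level. A minimal PyTorch sketch of such an encoder-decoder network with a refinement module, trained as a per-pixel regression from a 2D seismic image to an RGT image, might look like the following; the layer widths, kernel sizes, and L1 loss are illustrative assumptions, not the authors' exact design.

```python
# Sketch only: encoder-decoder CNN with a refinement module for per-pixel
# regression of an RGT image from a single-channel 2D seismic image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGTNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: progressively downsample and widen the feature maps.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Refinement: same-resolution convolutions that sharpen the regressed map.
        self.refine = nn.Sequential(
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.refine(self.decoder(self.encoder(x)))

# Training step on a synthetic pair (random tensors stand in for real data).
model = RGTNet()
seismic = torch.randn(4, 1, 128, 128)    # batch of 2D seismic images
rgt_target = torch.rand(4, 1, 128, 128)  # corresponding synthetic RGT images
loss = F.l1_loss(model(seismic), rgt_target)
loss.backward()
```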


2019, Vol 19 (10), pp. 259b
Author(s): Elena Waidmann, Kenji W Koyano, Julie J Hong, Brian E Russ, David A Leopold

Sensors, 2019, Vol 19 (2), pp. 291
Author(s): Hamdi Sahloul, Shouhei Shirafuji, Jun Ota

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair in order to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint-invariant representation, enabling the detection and description of more viewpoint-invariant features. Our embedding can be used with different combinations of detector/descriptor pairs, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, both standalone and embedded. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels, depending on the scene geometry: objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average across all evaluated datasets was 45.4°. Similarly, out of 140 combinations of 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of a 60° viewpoint difference, and in just two combinations, whereas 19 different local image features succeeded in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, even at the noise levels of low-cost commodity depth sensors and beyond.
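The abstract describes the embedding only conceptually. A rough Python/OpenCV sketch of the idea, here simplified to a single dominant plane fitted to the depth map and ORB as the wrapped detector/descriptor pair, could look like the following; the pinhole intrinsics, file names, and single-plane simplification are placeholders, and the actual method segments multiple smooth surfaces and supports arbitrary detector/descriptor pairs.

```python
# Sketch only: extract features on a fronto-parallel rectification of a surface
# found in the depth map, then map the keypoints back to the original image.
import numpy as np
import cv2

def fit_plane(points):
    """Least-squares plane through Nx3 points; returns a unit normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return -n if n[2] > 0 else n      # make the normal point toward the camera

def rectifying_homography(n, K):
    """Homography K R K^-1 of a pure rotation aligning the optical axis with n."""
    r3 = n / np.linalg.norm(n)
    r1 = np.cross([0.0, 1.0, 0.0], r3); r1 /= np.linalg.norm(r1)
    r2 = np.cross(r3, r1)
    R = np.stack([r1, r2, r3])        # rows of the rotation matrix
    return K @ R @ np.linalg.inv(K)

def detect_on_rectified(image, depth, K, detector):
    """Run the wrapped detector/descriptor on the rectified view."""
    # Back-project valid depth pixels to 3D and fit one dominant plane
    # (a full system would segment several smooth surfaces instead).
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    pts = np.stack([(u - K[0, 2]) * z / K[0, 0],
                    (v - K[1, 2]) * z / K[1, 1], z], axis=1)
    n = fit_plane(pts)

    H = rectifying_homography(n, K)
    rect = cv2.warpPerspective(image, H, image.shape[1::-1])
    kps, desc = detector.detectAndCompute(rect, None)

    # Map keypoint locations back into the original image with H^-1.
    if kps:
        pts_rect = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
        pts_orig = cv2.perspectiveTransform(pts_rect, np.linalg.inv(H))
        for kp, p in zip(kps, pts_orig.reshape(-1, 2)):
            kp.pt = (float(p[0]), float(p[1]))
    return kps, desc

# Example usage; the intrinsics and file paths are placeholders.
K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1.0]])
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
depth = cv2.imread("frame_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0
kps, desc = detect_on_rectified(gray, depth, K, cv2.ORB_create())
```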

