Visual saliency and potential field data enhancements: Where is your attention drawn?

2014 ◽  
Vol 2 (4) ◽  
pp. SJ9-SJ21 ◽  
Author(s):  
Yathunanthan Sivarajah ◽  
Eun-Jung Holden ◽  
Roberto Togneri ◽  
Michael Dentith ◽  
Mark Lindsay

Interpretation of gravity and magnetic data for exploration applications may be based on pattern recognition in which geophysical signatures of geologic features associated with localized characteristics are sought within data. A crucial control on what comprises noticeable and comparable characteristics in a data set is how images displaying those data are enhanced. Interpreters are provided with various image enhancement and display tools to assist their interpretation, although the effectiveness of these tools to improve geologic feature detection is difficult to measure. We addressed this challenge by analyzing how image enhancement methods impact the interpreter’s visual attention when interpreting the data because features that are more salient to the human visual system are more likely to be noticed. We used geologic target-spotting exercises within images generated from magnetic data to assess commonly used magnetic data visualization methods for their visual saliency. Our aim was achieved in two stages. In the first stage, we identified a suitable saliency detection algorithm that can computationally predict visual attention of magnetic data interpreters. The computer vision community has developed various image saliency detection algorithms, and we assessed which algorithm best matches the interpreter’s data observation patterns for magnetic target-spotting exercises. In the second stage, we applied this saliency detection algorithm to understand potential visual biases for commonly used magnetic data enhancement methods. We developed a guide to choosing image enhancement methods, based on saliency maps that minimize unintended visual biases in magnetic data interpretation, and some recommendations for identifying exploration targets in different types of magnetic data.
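
To make the first stage concrete, here is a minimal sketch of one widely used bottom-up saliency detector from the computer vision literature, the spectral-residual method of Hou and Zhang; it is offered as an example of the class of algorithms such a study compares, not as the algorithm the authors ultimately selected.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_residual_saliency(image):
    """Spectral-residual saliency (Hou & Zhang, 2007): a generic
    bottom-up detector. `image` is a 2-D float array, e.g. a grayscale
    rendering of an enhanced magnetic grid."""
    spectrum = np.fft.fft2(image)
    log_amp = np.log(np.abs(spectrum) + 1e-12)
    phase = np.angle(spectrum)
    # Spectral residual: log amplitude minus its smoothed version
    # (a Gaussian stands in here for the original 3x3 box filter).
    residual = log_amp - gaussian_filter(log_amp, sigma=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # Smooth and rescale to [0, 1] for display as a saliency map.
    saliency = gaussian_filter(saliency, sigma=2.5)
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
```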

Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain; very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using 1%–5% of the data.
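
The selection rule below is a hedged sketch of the general idea on a 1-D profile: retain data points where local variability (a proxy for anomalous signal) is high and keep only a sparse backbone elsewhere. The variability measure, thresholds, and backbone spacing are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def adaptive_downsample(x, y, data, keep_fraction=0.05, window=5):
    """Illustrative adaptive downsampling in the data domain: keep
    points where the signal departs from its local trend (anomalies)
    and thin aggressively where the field is smooth."""
    n = data.size
    # Local variability proxy: absolute deviation from a moving median.
    pad = window // 2
    padded = np.pad(data, pad, mode='edge')
    local_med = np.array([np.median(padded[i:i + window]) for i in range(n)])
    roughness = np.abs(data - local_med)
    # Keep the roughest `keep_fraction` of points, plus a sparse
    # backbone (every 20th point) so smooth regions stay represented.
    n_keep = max(1, int(keep_fraction * n))
    keep = np.zeros(n, dtype=bool)
    keep[np.argsort(roughness)[-n_keep:]] = True
    keep[::20] = True
    return x[keep], y[keep], data[keep]
```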


2013 ◽  
Vol 333-335 ◽  
pp. 1171-1174
Author(s):  
Fan Hui ◽  
Ren Lu ◽  
Jin Jiang Li

Drawing on surveys of visual attention and its significance in psychology and physiology, researchers have in recent years proposed many visual attention models and algorithms, such as the Itti model and numerous saliency detection algorithms. These visual attention techniques have also been applied in many directions, such as salient region detection, visual tracking, and network-loss-based models for video quality evaluation. This paper summarizes the various visual attention algorithms, their applications, and their significance.
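
As a minimal illustration of the Itti-style models the survey covers, the sketch below computes a single center-surround (difference-of-Gaussians) contrast map; the full Itti model combines many such maps across color, intensity, and orientation channels at several scales, which is omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(image, center_sigma=1.0, surround_sigma=8.0):
    """One building block of Itti-style attention models: a
    center-surround (difference-of-Gaussians) contrast map."""
    center = gaussian_filter(image.astype(float), center_sigma)
    surround = gaussian_filter(image.astype(float), surround_sigma)
    response = np.abs(center - surround)
    return (response - response.min()) / (np.ptp(response) + 1e-12)
```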


2012 ◽  
Vol 239-240 ◽  
pp. 811-815
Author(s):  
Zhi Hai Sun ◽  
Teng Song ◽  
Wen Hui Zhou ◽  
Hua Zhang

Visual saliency detection has become an important step between computer vision and digital image processing. Most recent methods build computational models based on color alone, and these have difficulty overcoming the shortcomings of cluttered and textured backgrounds. This paper proposes a novel salient object detection algorithm that integrates region color contrast with histograms of oriented gradients (HoG). Extensive experiments show that our algorithm outperforms state-of-the-art saliency methods, yielding higher precision, better recall, and lower mean absolute error.
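
A hedged sketch of the central idea follows: combine a local contrast cue with an orientation-histogram (HoG-like) texture cue so that cluttered texture, whose gradient energy spreads across orientations, is suppressed. The window sizes, the use of intensity instead of region-level color contrast, and the combination rule are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def contrast_hog_saliency(gray):
    """Combine a contrast cue with a HoG-like orientation-dominance cue
    so that textured clutter scores low even where contrast is high."""
    gray = gray.astype(float)
    # Contrast cue: deviation from the local mean.
    contrast = np.abs(gray - uniform_filter(gray, size=31))
    # HoG-like cue: local orientation histograms of gradient magnitude,
    # summarized as the fraction of energy in the dominant orientation.
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientations in [0, pi)
    bins = (ang / np.pi * 8).astype(int).clip(0, 7)  # 8 orientation bins
    hists = np.stack([uniform_filter(mag * (bins == b), size=31)
                      for b in range(8)])
    dominance = hists.max(axis=0) / (hists.sum(axis=0) + 1e-12)
    saliency = gaussian_filter(contrast * dominance, sigma=3)
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
```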


2011 ◽  
Vol 403-408 ◽  
pp. 1927-1932
Author(s):  
Hai Peng ◽  
Hua Jun Feng ◽  
Ju Feng Zhao ◽  
Zhi Hai Xu ◽  
Qi Li ◽  
...  

We propose a new image fusion method to fuse the frames of infrared and visual image sequences more effectively. In our method, we introduce an improved salient feature detection algorithm to obtain the saliency map of the original frames. This improved method detects not only spatially but also temporally salient features, using interframe dynamic information. Images are then segmented into target regions and background regions based on the saliency distribution. We formulate fusion rules for different regions using a double-threshold method and finally fuse the image frames in the NSCT multiscale domain. Comparison with other methods shows that our result more effectively stresses the salient features of target regions while maintaining the background detail of the original image sequences.
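
The fragment below sketches only the double-threshold, region-based fusion rule described in the abstract, applied directly in the image domain; in the paper this step operates on NSCT subbands, and the saliency map is assumed to be given.

```python
import numpy as np

def double_threshold_fusion(ir, vis, saliency, t_low=0.3, t_high=0.6):
    """Double-threshold regional fusion: high-saliency pixels are
    treated as target (take the brighter IR/visual value), low-saliency
    pixels as background (average), and the band in between is blended
    by saliency. Thresholds here are illustrative."""
    target = saliency >= t_high
    background = saliency < t_low
    blend_zone = ~target & ~background
    w = (saliency - t_low) / (t_high - t_low)   # blend weight in [0, 1]
    fused = np.where(target, np.maximum(ir, vis), 0.5 * (ir + vis))
    fused[blend_zone] = (w * np.maximum(ir, vis)
                         + (1 - w) * 0.5 * (ir + vis))[blend_zone]
    return fused
```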


Author(s):  
Jila Hosseinkhani ◽  
Chris Joslin

A key factor in designing saliency detection algorithms for videos is to understand how different visual cues affect the human perceptual and visual system. To this end, this article investigated bottom-up features, including color, texture, and motion, in video sequences one at a time, to provide a ranking system stating the most dominant conditions for each feature. Individual features and various visual saliency attributes were investigated under conditions free of cognitive bias; human cognition refers to a systematic pattern of perceptual and rational judgments and decision-making actions. First, the test data were modeled as 2D videos in a virtual environment to avoid any cognitive bias. Then, an experiment was performed using human subjects to determine which colors, textures, motion directions, and motion speeds attract human attention more. The proposed benchmark ranking of salient visual attention stimuli was obtained using an eye-tracking procedure.
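
A hypothetical post-processing step of the kind such an experiment implies is sketched below: count eye-tracker fixations that land inside the screen region assigned to each stimulus condition, and rank conditions by fixation count. The data layout and region encoding are assumptions for illustration.

```python
import numpy as np

def rank_stimuli(fixations, regions):
    """Rank stimulus conditions by how many fixations they attract.
    `fixations` is an (N, 2) array of (x, y) gaze points; `regions`
    maps a condition name (a color, texture, or motion setting) to its
    on-screen bounding box (x0, y0, x1, y1)."""
    counts = {}
    for name, (x0, y0, x1, y1) in regions.items():
        inside = ((fixations[:, 0] >= x0) & (fixations[:, 0] < x1) &
                  (fixations[:, 1] >= y0) & (fixations[:, 1] < y1))
        counts[name] = int(inside.sum())
    return sorted(counts, key=counts.get, reverse=True)
```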


Geophysics ◽  
2006 ◽  
Vol 71 (2) ◽  
pp. L17-L23 ◽  
Author(s):  
Mark Pilkington ◽  
Duncan R. Cowan

Separating the fields produced by sources at different depths is a common requirement in the interpretation of potential field data. Approaches to this problem are generally data- or model-based. Data-based methods require clear linear segments in the logarithmic power spectrum of the data corresponding to different components of the field. Various types of filters can then be designed to carry out the separation. When the logarithmic power spectrum shows no identifiable linear spectral segments, other approaches are necessary. We outline a model-based method that does not depend on power-spectral information but requires independent estimates of the average depths of the source distributions, e.g., from seismic interpretations. An ensemble of models using fractal source distributions is computed based on these known values, and filter parameters are determined that produce the closest fit (in a least-squares sense) to the theoretical fields that each source distribution generates. This approach is used to separate basement effects from intrasedimentary sources in magnetic data collected over the Colville Hills, Northwest Territories, Canada. Seismic data interpretation places crystalline basement at ∼10 km depth and an intrasedimentary basaltic layer at ∼2 km. Our approach results in an optimal separation filter with a cutoff wavelength of ∼12 km that appears to provide an effective separation of the two source effects.
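
The essence of a wavenumber-domain separation filter can be sketched as follows: low-pass the gridded field at the chosen cutoff wavelength (here the paper's ~12 km) to estimate the deep-source component, and take the residual as the shallow component. The smooth Butterworth-style rolloff below stands in for the least-squares-optimized filter the authors derive from fractal source models.

```python
import numpy as np

def separation_filter(grid, dx, cutoff_wavelength_km=12.0):
    """Separate a gridded field into long-wavelength (deep) and
    short-wavelength (shallow) parts in the wavenumber domain.
    `dx` is the grid spacing in km."""
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=dx)           # cycles per km
    ky = np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))      # radial wavenumber
    kc = 1.0 / cutoff_wavelength_km         # cutoff in cycles/km
    lowpass = 1.0 / (1.0 + (k / kc) ** 8)   # smooth rolloff at cutoff
    spectrum = np.fft.fft2(grid)
    deep = np.real(np.fft.ifft2(spectrum * lowpass))
    shallow = grid - deep
    return deep, shallow
```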


Geophysics ◽  
1999 ◽  
Vol 64 (2) ◽  
pp. 452-460 ◽  
Author(s):  
Maurizio Fedi ◽  
Antonio Rapolla

Magnetization and density models with depth resolution are obtained by solving an inverse problem based on a 3-D set of potential field data. Such a data set is built from information on vertical and horizontal variations of the magnetic or gravity field. The a priori information consists of delimiting a source region and subdividing it in a set of blocks. In this case, the information related to a set of field data along the vertical direction is not generally redundant and is decisive in giving a depth resolution to the gravity and magnetic methods. Because of this depth resolution, which derives solely from the potential field data, an unconstrained and joint inversion of a multiobservation‐level data set is shown to provide surprising results for error‐free synthetic data. On the contrary, a single‐observation level data inversion produces an incorrect and too shallow model. Hence, a good depth resolution is likely to occur for the gravity and magnetic methods when based on the information along the vertical direction. This is also evidenced by an analysis of the kernel function versus the field altitude level and by a singular value analysis of the inversion operators for both the single and multilevel cases. Errors connected to numerical upward continuation do not affect the quality of the solution, provided that the data set extent is larger than that of the anomaly field. Application of the method to a 3-D magnetic data set relative to Vesuvius indicates that the method may significantly improve interpretation of potential fields.
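
The singular-value argument can be reproduced in miniature with a toy 2-D kernel, comparing a single-observation-level operator against one that stacks three observation levels; the geometry, kernel, and units below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def kernel(x_obs, z_obs, x_cells, z_cells):
    """Toy vertical-attraction kernel of 2-D point sources: one row per
    datum, one column per source block (a stand-in for full prism
    kernels). Arbitrary length units, z positive down."""
    dx = x_obs[:, None] - x_cells[None, :]
    dz = z_obs[:, None] - z_cells[None, :]
    return dz / (dx**2 + dz**2)

# Source region subdivided into blocks at depths 1..5.
xs, zs = np.meshgrid(np.arange(10.0), np.arange(1.0, 6.0))
xs, zs = xs.ravel(), zs.ravel()
x_obs = np.linspace(-5.0, 14.0, 40)

# Single observation level vs. three levels stacked into one system.
A1 = kernel(x_obs, np.zeros_like(x_obs), xs, zs)
A3 = np.vstack([kernel(x_obs, np.full_like(x_obs, h), xs, zs)
                for h in (0.0, -1.0, -2.0)])

# The paper's singular-value analysis in miniature: compare how the
# normalized spectra of the single- and multilevel operators decay.
for name, A in (("single-level", A1), ("multilevel", A3)):
    s = np.linalg.svd(A, compute_uv=False)
    print(name, np.round(s[:8] / s[0], 4))
```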


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Yuantao Chen ◽  
Weihong Xu ◽  
Fangjun Kuang ◽  
Shangbing Gao

Image segmentation driven by a visual saliency map depends heavily on the quality of the underlying saliency metric: most existing metrics produce only coarse saliency maps, and a coarse map degrades the segmentation results. This paper presents a randomized visual saliency detection algorithm that quickly generates a detailed saliency map at the same resolution as the original input image. The method is fast enough for real-time applications such as content-based image scaling, and because it requires only a small amount of memory it is also suitable for fast detection of salient regions in video. The presented results show that segmenting images with these detailed saliency maps yields near-ideal segmentation results.
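
Because the abstract gives few algorithmic details, the sketch below is only a guess at the flavor of such a method: a randomized approximation of per-pixel global contrast that keeps the full input resolution while comparing each pixel against a small random sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_global_contrast(img, n_samples=64):
    """Approximate each pixel's global contrast (its mean intensity
    distance to all other pixels) by comparing it against a small
    random sample; memory overhead stays O(n_samples) per pixel and
    the map keeps the input resolution."""
    flat = img.astype(float).ravel()
    sample = rng.choice(flat, size=n_samples, replace=False)
    sal = np.abs(flat[:, None] - sample[None, :]).mean(axis=1)
    sal = sal.reshape(img.shape)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)
```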


Geophysics ◽  
2017 ◽  
Vol 82 (4) ◽  
pp. G87-G100 ◽  
Author(s):  
Lorenzo Cascone ◽  
Chris Green ◽  
Simon Campbell ◽  
Ahmed Salem ◽  
Derek Fairhead

Geologic features, such as faults, dikes, and contacts appear as lineaments in gravity and magnetic data. The automated coherent lineament analysis and selection (ACLAS) method is a new approach to automatically compare and combine sets of lineaments or edges derived from two or more existing enhancement techniques applied to the same gravity or magnetic data set. ACLAS can be applied to the results of any edge-detection algorithms and overcomes discrepancies between techniques to generate a coherent set of detected lineaments, which can be more reliably incorporated into geologic interpretation. We have determined that the method increases spatial accuracy, removes artifacts not related to real edges, increases stability, and is quick to implement and execute. The direction of lower density or susceptibility can also be automatically determined, representing, for example, the downthrown side of a fault. We have evaluated ACLAS on magnetic anomalies calculated from a simple slab model and from a synthetic continental margin model with noise added to the result. The approach helps us to identify and discount artifacts of the different techniques, although the success of the combination is limited by the appropriateness of the individual techniques and their inherent assumptions. ACLAS has been applied separately to gravity and magnetic data from the Australian North West Shelf; displaying results from the two data sets together helps in the appreciation of similarities and differences between gravity and magnetic results and indicates the application of the new approach to large-scale structural mapping. Future developments could include refinement of depth estimates for ACLAS lineaments.
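
A hedged sketch of the ACLAS idea follows: run two independent edge enhancers on the same grid (here, ridges of the total horizontal gradient and near-zero contours of the tilt derivative) and keep only detections on which both agree within a tolerance. The specific enhancers, thresholds, and the FFT vertical derivative are illustrative choices; the published method can combine any existing techniques and also assigns a polarity (e.g., the downthrown side of a fault).

```python
import numpy as np
from scipy.ndimage import maximum_filter, binary_dilation

def coherent_edges(grid, tol_pixels=2):
    """Keep only lineament pixels on which two independent edge
    enhancers agree, suppressing artifacts unique to either method."""
    gy, gx = np.gradient(grid)
    thg = np.hypot(gx, gy)                 # total horizontal gradient
    # Vertical derivative via a simple FFT filter (no padding shown).
    ky = np.fft.fftfreq(grid.shape[0])[:, None]
    kx = np.fft.fftfreq(grid.shape[1])[None, :]
    k = 2 * np.pi * np.hypot(kx, ky)
    dz = np.real(np.fft.ifft2(np.fft.fft2(grid) * k))
    tilt = np.arctan2(dz, thg + 1e-12)     # tilt derivative
    # Method A: THG ridge maxima; method B: tilt near zero.
    edges_a = (thg == maximum_filter(thg, size=5)) & (thg > np.median(thg))
    edges_b = np.abs(tilt) < 0.05
    # Coherent lineaments: A detections within `tol_pixels` of a B one.
    near_b = binary_dilation(edges_b, iterations=tol_pixels)
    return edges_a & near_b
```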

