Radioisotope Identification and Nonintrusive Depth Estimation of Localized Low-Level Radioactive Contaminants Using Bayesian Inference

Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 95 ◽  
Author(s):  
Jinhwan Kim ◽  
Kyung Taek Lim ◽  
Kilyoung Ko ◽  
Eunbie Ko ◽  
Gyuseong Cho

Obtaining depth information on radioactive contaminants is crucial for determining the most cost-effective decommissioning strategy. The main limitations of a burial-depth analysis lie in the assumptions that foreknowledge of the buried radioisotopes present at the site is always available and that only a single radioisotope is present. We present an advanced depth estimation method using Bayesian inference that does not rely on these assumptions: low-level radioactive contaminants buried in a substance are first identified, and their depths and activities are then estimated. To evaluate the performance of the proposed method, several spectra were obtained using a 3 × 3 inch hand-held NaI(Tl) detector exposed to Cs-137, Co-60, Na-22, Am-241, Eu-152, and Eu-154 sources (less than 1 μCi) buried in a sandbox at depths of up to 15 cm. The experimental results showed that this method correctly detects not only a single radioisotope but also multiple radioisotopes buried in sand. Furthermore, it provides a good approximation of the burial depth and activity of the identified sources, reported as a mean and 95% credible interval, from a single measurement. Lastly, we demonstrate that the proposed technique is largely insensitive to short acquisition times and gain-shift effects.
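
To make the idea concrete, the following minimal sketch shows how a grid-based Bayesian posterior over burial depth and activity could be computed from photopeak counts under a simplified exponential-attenuation forward model; the constants (MU_SAND, EFF, T_ACQ), the single-energy-window likelihood, and the flat priors are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of grid-based Bayesian depth/activity estimation, NOT the
# authors' implementation. It assumes a simplified exponential-attenuation
# forward model and a single energy window; all constants are illustrative.
import numpy as np
from scipy.stats import poisson

MU_SAND = 0.17   # assumed linear attenuation coefficient of sand (1/cm) at 662 keV
EFF = 1.2e-3     # assumed detector efficiency factor (counts per decay)
T_ACQ = 60.0     # acquisition time (s)

def expected_counts(depth_cm, activity_uCi):
    """Expected photopeak counts for a point source under the toy model."""
    decays = activity_uCi * 3.7e4 * T_ACQ          # decays during acquisition
    return decays * EFF * np.exp(-MU_SAND * depth_cm)

def posterior(observed_counts, depths, activities):
    """Normalized posterior over a (depth, activity) grid with flat priors."""
    d, a = np.meshgrid(depths, activities, indexing="ij")
    like = poisson.pmf(observed_counts, expected_counts(d, a))
    return like / like.sum()

depths = np.linspace(0, 30, 301)          # cm
activities = np.linspace(0.01, 2.0, 200)  # uCi
post = posterior(observed_counts=1500, depths=depths, activities=activities)

# Marginal posterior of depth, its mean, and a 95% credible interval
p_depth = post.sum(axis=1)
mean_depth = np.sum(depths * p_depth)
cdf = np.cumsum(p_depth)
ci = (depths[np.searchsorted(cdf, 0.025)], depths[np.searchsorted(cdf, 0.975)])
print(f"depth = {mean_depth:.1f} cm, 95% CI {ci[0]:.1f} to {ci[1]:.1f} cm")
```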

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5365 ◽  
Author(s):  
Jinhwan Kim ◽  
Kyung Taek Lim ◽  
Kyeongjin Park ◽  
Gyuseong Cho

This study reports on the use of Bayesian inference to improve remote depth profiling of low-level radioactive contaminants with a low-resolution NaI(Tl) detector. In particular, we demonstrate that this approach yields more reliable results because it provides a mean value with a 95% credible interval by determining the probability distributions of the burial depth and activity of a radioisotope from a single measurement. To evaluate the proposed method, simulations were compared with experimental measurements. The simulation showed that the proposed method was able to detect the depth of a Cs-137 point source buried below 60 cm in sand, with a 95% credible interval. The experiment also showed that the maximum detectable depth for weakly active 0.94-μCi Cs-137 and 0.69-μCi Co-60 sources buried in sand was 21 cm, an improvement over existing methods. In addition, the maximum detectable depth hardly degraded even with a reduced acquisition time of less than 60 s or with gain-shift effects; the proposed method is therefore suitable for accurate and rapid non-intrusive localization of buried low-level radioactive contaminants during in situ measurement.
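
The sketch below, again an illustrative toy rather than the paper's code, simulates counts from a buried Cs-137 point source at several depths and reports the 95% credible interval of the depth posterior; this is one way the widening of the interval with depth, and hence a maximum usable depth, could be probed. The attenuation coefficient, efficiency, and known-activity assumption are all placeholders.

```python
# A self-contained toy study (not the paper's code): simulate photopeak counts
# for a buried point source and check when the 95% credible interval of the
# depth posterior stops being informative. All constants are assumptions.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
MU, EFF, T = 0.17, 1.2e-3, 60.0   # attenuation (1/cm), efficiency, time (s)
ACTIVITY = 0.94                   # uCi, matching the Cs-137 source in the paper

def expected(depth_cm):
    return ACTIVITY * 3.7e4 * T * EFF * np.exp(-MU * depth_cm)

depth_grid = np.linspace(0, 80, 801)
for true_depth in (10, 20, 40, 60):
    counts = rng.poisson(expected(true_depth))     # simulated measurement
    post = poisson.pmf(counts, expected(depth_grid))
    post /= post.sum()                             # flat prior over the grid
    cdf = np.cumsum(post)
    lo, hi = depth_grid[np.searchsorted(cdf, [0.025, 0.975])]
    print(f"true {true_depth:2d} cm -> 95% CI [{lo:5.1f}, {hi:5.1f}] cm")
```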


2013 ◽  
Vol 2013 ◽  
pp. 1-12
Author(s):  
Fang-Hsuan Cheng ◽  
Tze-Yun Sung

A method is proposed for estimating the depth information of a general monocular image sequence and then creating a 3D stereo video. Foreground and background can be distinguished without additional information, and foreground pixels are then shifted to create the binocular image. The proposed depth estimation method is based on a coarse-to-fine strategy. By applying the CID method in the spatial domain, the sharpness and contrast of an image can be improved according to the distance of each region based on its color, and a coarse depth map of the image can then be generated. An optical-flow method based on temporal information is then used to search and compare block motion between the previous and current frames, and the distance of each block is estimated from the amount of motion. Finally, the static and motion depth information are integrated to create the fine depth map. By shifting foreground pixels according to the depth information, a binocular image pair can be created, and a 3D stereo effect can be obtained without glasses on an autostereoscopic 3D display.
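
As a rough illustration of the motion-based half of such a coarse-to-fine pipeline, the sketch below computes dense optical flow between two consecutive frames with OpenCV and treats larger motion magnitude as nearer depth. The file names, flow parameters, and normalization are placeholders; a real system would fuse this cue with the static color/contrast-based depth map as described above.

```python
# A rough sketch (not the authors' code) of a motion-based coarse depth cue:
# larger apparent motion between consecutive frames is treated as closer.
import cv2
import numpy as np

prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow between the previous and current frames
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
magnitude = np.linalg.norm(flow, axis=2)

# Normalize motion magnitude into a coarse "motion depth" cue (near = bright)
motion_depth = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("motion_depth.png", motion_depth)
```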


Author(s):  
Shaocheng Jia ◽  
Xin Pei ◽  
Zi Yang ◽  
Shan Tian ◽  
Yun Yue

Depth information from still 2D images plays an important role in automated driving, driving safety, and robotics. Monocular depth estimation is generally considered an ill-posed and inherently ambiguous problem, and a key issue is how to obtain global information efficiently, since pure convolutional neural networks (CNNs) extract only local information. To that end, some previous works used conditional random fields (CRFs) to obtain global information, but CRFs are notoriously difficult to optimize. In this paper, a novel hybrid neural network is proposed to address this issue while predicting a dense depth map from a single monocular image. Specifically, a deep residual network is first used to obtain multi-scale local information, and feature correlation (FCL) blocks are then used to correlate these features. Finally, a feature-selection attention mechanism is adopted to fuse the multi-layer features, and multi-layer recurrent neural networks (RNNs) with bidirectional long short-term memory (Bi-LSTM) units are used as the output layer. Furthermore, a novel logarithm exponential average error (LEAE) is proposed to overcome the over-weighting problem. The multi-scale feature correlation network (MFCN) is evaluated on the large-scale KITTI benchmark (LKT), a subset of the KITTI raw dataset, and on NYU Depth v2. The experiments indicate that the proposed unified network outperforms existing methods and sets a new state of the art on the LKT dataset. Importantly, the depth estimation method can be widely used for collision risk assessment and avoidance in driver assistance or automated piloting systems, improving safety in a more economical and convenient way.
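
The following PyTorch sketch illustrates only the general structure implied here (multi-scale CNN features passed as a sequence through a bidirectional LSTM) and is not the published MFCN architecture; the ResNet-18 backbone, projection sizes, and the scalar depth head are assumptions made for brevity, and a dense depth map would additionally require a decoder.

```python
# A structural sketch of fusing multi-scale CNN features with a Bi-LSTM.
# This is an illustrative approximation, not the published MFCN architecture.
import torch
import torch.nn as nn
import torchvision

class MultiScaleBiLSTMDepth(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4])
        self.proj = nn.ModuleList([nn.Conv2d(c, feat_dim, 1) for c in (64, 128, 256, 512)])
        self.rnn = nn.LSTM(feat_dim, feat_dim, num_layers=2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * feat_dim, 1)        # coarse per-image depth scale

    def forward(self, x):
        feats, h = [], self.stem(x)
        for stage, proj in zip(self.stages, self.proj):
            h = stage(h)
            feats.append(proj(h).mean(dim=(2, 3)))    # global-pool each scale
        seq = torch.stack(feats, dim=1)               # (B, num_scales, feat_dim)
        out, _ = self.rnn(seq)                        # correlate scales with a Bi-LSTM
        return self.head(out[:, -1])                  # (B, 1)

depth_scale = MultiScaleBiLSTMDepth()(torch.randn(2, 3, 224, 224))
```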


Author(s):  
Miguel Saura-Herreros ◽  
Angeles Lopez ◽  
Jose Ribelles

Abstract In this paper, we propose to work in the 2.5D space of the scene to facilitate the composition of new spherical panoramas. To add depth to spherical panoramas, we extend an existing method originally designed to estimate relative depths from a single perspective image through user interaction. We analyze the difficulties of interactively providing such depth information for spherical panoramas through three different types of presentation. We then propose a set of basic tools for interactively managing the relative depths of the panoramas in order to obtain a composition in a very simple way. We conclude that the relative depths obtained by the extended depth estimation method are sufficient for compositing new photorealistic panoramas with a few elementary editing tools.
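
As a minimal illustration of compositing with relative depths, the sketch below paints RGBA layers from farthest to nearest (a painter's algorithm); it shows only the 2.5D idea, not the interactive tools described in the paper, and the layer representation is an assumption.

```python
# A minimal sketch of depth-ordered composition: layers with smaller relative
# depth (closer) are painted over farther ones wherever they are opaque.
import numpy as np

def composite(layers):
    """layers: list of (rgba_image, relative_depth); rgba in [0, 1], same size."""
    canvas = np.zeros_like(layers[0][0][..., :3])
    # Paint from farthest to nearest (painter's algorithm)
    for rgba, _ in sorted(layers, key=lambda layer: layer[1], reverse=True):
        alpha = rgba[..., 3:4]
        canvas = alpha * rgba[..., :3] + (1 - alpha) * canvas
    return canvas
```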


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 54
Author(s):  
Peng Liu ◽  
Zonghua Zhang ◽  
Zhaozong Meng ◽  
Nan Gao

Depth estimation is a crucial component in many 3D vision applications. Monocular depth estimation is gaining increasing interest because of its flexibility and extremely low system requirements, but its inherently ill-posed and ambiguous nature still leads to unsatisfactory estimation results. This paper proposes a new deep convolutional neural network for monocular depth estimation. The network applies joint-attention feature distillation and a wavelet-based loss function to recover the depth information of a scene. Two improvements are made over previous methods. First, we combine feature distillation and joint-attention mechanisms to boost the discriminative power of feature modulation: the network extracts hierarchical features using a progressive feature distillation and refinement strategy and aggregates them with a joint-attention operation. Second, we adopt a wavelet-based loss function for network training, which makes the loss more effective by capturing more structural detail. Experimental results on challenging indoor and outdoor benchmark datasets verify the proposed method's superiority over current state-of-the-art methods.
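
A hedged sketch of the wavelet-loss idea is given below: the discrete wavelet subbands of the predicted and ground-truth depth maps are compared at several scales so that structural detail (edges at multiple resolutions) contributes to the loss. The Haar wavelet, the number of levels, and the L1 comparison are assumptions; the paper's exact formulation may differ.

```python
# A hedged sketch of a wavelet-based loss between predicted and ground-truth
# depth maps; this approximates the idea, not the paper's exact formulation.
import numpy as np
import pywt

def wavelet_l1_loss(pred, gt, wavelet="haar", levels=3):
    """pred, gt: 2D numpy arrays of the same shape."""
    loss = 0.0
    for _ in range(levels):
        pred_ll, pred_hi = pywt.dwt2(pred, wavelet)
        gt_ll, gt_hi = pywt.dwt2(gt, wavelet)
        # High-frequency subbands (LH, HL, HH) carry edge/structure information
        for p_band, g_band in zip(pred_hi, gt_hi):
            loss += np.mean(np.abs(p_band - g_band))
        pred, gt = pred_ll, gt_ll
    loss += np.mean(np.abs(pred - gt))   # remaining low-frequency approximation
    return loss
```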


2021 ◽  
Vol 11 (12) ◽  
pp. 5383
Author(s):  
Huachen Gao ◽  
Xiaoyu Liu ◽  
Meixia Qu ◽  
Shijie Huang

In recent studies, self-supervised learning methods have been explored for monocular depth estimation; they minimize an image reconstruction loss, rather than depth supervision, as the training signal. However, existing methods usually assume that corresponding points in different views have the same color, which leads to unreliable unsupervised signals and ultimately harms the reconstruction loss during training. Meanwhile, in low-texture regions the disparity values of pixels cannot be predicted correctly because few features can be extracted. To address these issues, we propose a network, PDANet, that integrates perceptual consistency and data-augmentation consistency, which are more reliable unsupervised signals, into a regular unsupervised depth estimation model. Specifically, we apply a reliable data-augmentation mechanism that minimizes the discrepancy between the disparity maps generated from the original image and from the augmented image, which enhances the robustness of the prediction to color fluctuations. At the same time, we aggregate features from different layers of a pre-trained VGG16 network to capture higher-level perceptual differences between the input image and the generated one. Ablation studies demonstrate the effectiveness of each component, and PDANet achieves high-quality depth estimation results on the KITTI benchmark, improving the absolute relative error of the state-of-the-art method from 0.114 to 0.084.
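
The perceptual-consistency term can be sketched as a multi-layer VGG16 feature comparison between the input image and the synthesized one, as below; the chosen layers and the L1 distance are illustrative assumptions rather than PDANet's exact loss.

```python
# A sketch of a perceptual-consistency loss: compare multi-layer VGG16 features
# of the input image and the reconstructed (view-synthesized) image.
# Layer choices and the L1 distance are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision

class PerceptualLoss(nn.Module):
    def __init__(self, layer_ids=(3, 8, 15, 22)):   # relu1_2, relu2_2, relu3_3, relu4_3
        super().__init__()
        weights = torchvision.models.VGG16_Weights.DEFAULT
        self.vgg = torchvision.models.vgg16(weights=weights).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.layer_ids = set(layer_ids)

    def forward(self, x, y):
        loss, hx, hy = 0.0, x, y
        for i, layer in enumerate(self.vgg):
            hx, hy = layer(hx), layer(hy)
            if i in self.layer_ids:
                loss = loss + torch.mean(torch.abs(hx - hy))
            if i >= max(self.layer_ids):
                break
        return loss
```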


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Laurence Romain

Abstract This paper shows that low-level generalisations in argument structure constructions are crucial to understanding the concept of alternation: low-level generalisations inform and constrain more schematic generalisations and thus constructional meaning. On the basis of an analysis of the causative alternation in English, and more specifically of the theme (i.e., the entity undergoing the event denoted by the verb), I show that each construction has its own schematic meaning. This analysis is conducted on a dataset composed of 11,554 instances of the intransitive non-causative construction and the transitive causative construction. The identification of lower-level generalisations feeds into the idea that language acquisition is organic and abstractions are formed only gradually (if at all) from exposure to input. So far, most of the literature on argument structure constructions has focused on the verb itself, and thus fails to capture these generalisations. I make up for this deficit through an in-depth analysis of the causative alternation.

