Proximal Policy Optimization for Radiation Source Search

2021 ◽  
Vol 2 (4) ◽  
pp. 368-397
Author(s):  
Philippe Proctor ◽  
Christof Teuscher ◽  
Adam Hecht ◽  
Marek Osiński

Rapid search and localization for nuclear sources can be an important aspect in preventing human harm from illicit material in dirty bombs or from contamination. In the case of a single mobile radiation detector, there are numerous challenges to overcome, such as weak source intensity, multiple sources, background radiation, and the presence of obstructions, i.e., a non-convex environment. In this work, we investigate the sequential decision-making capability of deep reinforcement learning in the nuclear source search context. A novel neural network architecture (RAD-A2C) based on the advantage actor critic (A2C) framework and a particle filter gated recurrent unit for localization is proposed. Performance is studied in a randomized 20×20 m convex and non-convex simulation environment across a range of signal-to-noise ratios (SNRs) for a single detector and single source. RAD-A2C performance is compared to both an information-driven controller that uses a bootstrap particle filter and a gradient search (GS) algorithm. We find that RAD-A2C has comparable performance to the information-driven controller across SNRs in a convex environment, at lower computational complexity per action. RAD-A2C far outperforms the GS algorithm in the non-convex environment, with a greater than 95% median completion rate for up to seven obstructions.
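The bootstrap particle filter used by the baseline information-driven controller can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the known source strength, the idealized inverse-square Poisson count model, and the L-shaped detector path are all assumptions made here for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_counts(det_xy, src_xy, strength, bkg=1.0):
    """Mean detector counts: inverse-square source term plus background."""
    d2 = np.sum((det_xy - src_xy) ** 2, axis=-1) + 1e-6
    return strength / d2 + bkg

# True source (unknown to the filter) in a 20x20 m area.
true_src = np.array([14.0, 6.0])
true_strength = 500.0

# Bootstrap particle filter over candidate source positions.
n = 2000
particles = rng.uniform(0.0, 20.0, size=(n, 2))
weights = np.full(n, 1.0 / n)

# Detector walks an L-shaped path, taking Poisson count measurements.
for step in range(30):
    if step < 15:
        det = np.array([1.2 * step, 2.0])
    else:
        det = np.array([17.0, 2.0 + 1.2 * (step - 15)])
    z = rng.poisson(expected_counts(det, true_src, true_strength))
    lam = expected_counts(det, particles, true_strength)
    # Poisson log-likelihood of the observed count under each particle.
    logw = z * np.log(lam) - lam
    weights *= np.exp(logw - logw.max())
    weights /= weights.sum()
    # Resample (with a little jitter) when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx] + rng.normal(0, 0.1, size=(n, 2))
        weights = np.full(n, 1.0 / n)

estimate = np.average(particles, axis=0, weights=weights)
print(estimate)  # posterior mean should land near (14, 6)
```

The weighting-and-resampling loop is what "bootstrap" refers to; the paper's RAD-A2C instead learns a recurrent approximation to this localization step.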


2020 ◽  
Vol 641 ◽  
pp. A67
Author(s):  
F. Sureau ◽  
A. Lechat ◽  
J.-L. Starck

The deconvolution of large survey images with millions of galaxies requires developing a new generation of methods that can take a space-variant point spread function into account. These methods must also be accurate and fast. We investigate how deep learning might be used to perform this task. We employed a U-net deep neural network architecture to learn parameters adapted for galaxy image processing in a supervised setting and studied two deconvolution strategies. The first approach is a post-processing of a simple Tikhonov deconvolution with a closed-form solution, and the second is an iterative deconvolution framework based on the alternating direction method of multipliers (ADMM). Our numerical results, based on GREAT3 simulations with realistic galaxy images and point spread functions, show that our two approaches outperform standard techniques based on convex optimization, whether assessed on galaxy image reconstruction or shape recovery. The approach based on Tikhonov deconvolution leads to the most accurate results, except for ellipticity errors at high signal-to-noise ratio, where the ADMM approach performs slightly better. Considering that the Tikhonov approach is also more computationally efficient when processing a large number of galaxies, we recommend it in this scenario.
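The closed-form Tikhonov step that the first approach post-processes can be written directly in the Fourier domain. The toy Gaussian "galaxy" and PSF below are illustrative stand-ins, not the GREAT3 data, and the regularization weight is chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)

def tikhonov_deconvolve(y, psf, lam=1e-2):
    """Closed-form Tikhonov deconvolution in the Fourier domain:
    X = conj(H) Y / (|H|^2 + lam)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Toy "galaxy": a Gaussian blob, blurred by a Gaussian PSF plus noise.
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
galaxy = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

blurred = np.real(np.fft.ifft2(np.fft.fft2(galaxy)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
noisy = blurred + rng.normal(0, 1e-3, blurred.shape)

restored = tikhonov_deconvolve(noisy, psf, lam=1e-3)
```

The division by `|H|^2 + lam` is why the step is fast: one FFT pair per galaxy, with the regularizer `lam` damping frequencies where the PSF carries no information.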


2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Guang Pu Zhang ◽  
Ce Zheng ◽  
Wang Sheng Lin

Azimuth angle estimation using a single vector hydrophone is a well-known problem in underwater acoustics. In the presence of multiple sources, a conventional complex acoustic intensity estimator (CAIE) cannot distinguish the azimuth angle of each source. In this paper, we propose a steering acoustic intensity estimator (SAIE) for azimuth angle estimation in the presence of interference. The azimuth angle of the interference is known in advance from the global positioning system (GPS) and compass data. By constructing the steering acoustic energy fluxes in the x and y channels of the acoustic vector hydrophone, the azimuth angle of interest can be obtained when the steering azimuth angle is directed toward the interference. Simulation results show that the SAIE outperforms the CAIE and is insensitive to the signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR). A sea trial is presented that verifies the validity of the proposed method.
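The conventional intensity-based bearing estimate (CAIE) that the SAIE extends can be sketched for the single-source case. The plane-wave signal model and noise levels below are assumptions for illustration; the steering step that rejects interference is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
azimuth = np.deg2rad(40.0)  # true source bearing

# Plane wave on a vector hydrophone: pressure plus collinear particle velocity.
p = np.sin(2 * np.pi * 500 * t)
vx = p * np.cos(azimuth) + 0.05 * rng.normal(size=t.size)
vy = p * np.sin(azimuth) + 0.05 * rng.normal(size=t.size)
p = p + 0.05 * rng.normal(size=t.size)

# Conventional acoustic intensity estimate: time-averaged p*v energy fluxes
# in the x and y channels, then the bearing from their ratio.
Ix = np.mean(p * vx)
Iy = np.mean(p * vy)
est = np.rad2deg(np.arctan2(Iy, Ix))
print(est)  # close to 40 degrees
```

With a second source present, `Ix` and `Iy` mix both arrivals, which is exactly the failure mode the steering estimator in the paper is designed to remove.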


Geophysics ◽  
2006 ◽  
Vol 71 (4) ◽  
pp. SI177-SI187 ◽  
Author(s):  
Brad Artman

Imaging passive seismic data is the process of synthesizing the wealth of subsurface information available from reflection seismic experiments by recording ambient sound using an array of geophones distributed at the surface. Crosscorrelating the traces of such a passive experiment can synthesize data that are identical to actively collected reflection seismic data. With a correlation-based imaging condition, wave-equation shot-profile depth migration can use raw transmission wavefields as input for producing a subsurface image. Migration is even more important for passively acquired data than for active data because with passive data, the source wavefields are likely to be weak compared with background and instrument noise — a condition that leads to a low signal-to-noise ratio. Fourier analysis of correlating long field records shows that aliasing of the wavefields from distinct shots is unavoidable. Although this reduces the order of computations for correlation by the length of the original trace, the aliasing produces an output volume that may not be substantially more useful than the raw data because of the introduction of crosstalk between multiple sources. Direct migration of raw field data still can produce an accurate image, even when the transmission wavefields from individual sources are not separated. To illustrate direct migration, I use images from a shallow passive seismic investigation targeting a buried hollow pipe and the water-table reflection. These images show a strong anomaly at the 1-m depth of the pipe and faint events that could be the water table at a depth of around [Formula: see text]. The images are not clear enough to be irrefutable. I identify deficiencies in survey design and execution to aid future efforts.
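The core idea, that crosscorrelating passive traces synthesizes active-style data, can be illustrated in one dimension. This is a toy two-geophone example with synthetic noise (the delays and sampling rate are invented), not the field survey described above:

```python
import numpy as np

rng = np.random.default_rng(3)

noise = rng.normal(size=4000)  # unknown ambient source signature

d1, d2 = 30, 75  # traveltimes from the source to the two geophones, in samples
trace1 = np.concatenate([np.zeros(d1), noise])[:noise.size]
trace2 = np.concatenate([np.zeros(d2), noise])[:noise.size]

# Crosscorrelating the two passive transmission records turns the unknown
# noise source into a virtual source: the correlation peaks at the
# differential traveltime between the receivers.
xcorr = np.correlate(trace2, trace1, mode="full")
lag = np.argmax(xcorr) - (trace1.size - 1)
print(lag)  # 45 samples = d2 - d1
```

This is also where the crosstalk problem in the abstract comes from: with several simultaneous sources, the correlation contains cross terms between them in addition to the desired differential traveltimes.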


Geophysics ◽  
2011 ◽  
Vol 76 (6) ◽  
pp. WC27-WC36 ◽  
Author(s):  
Oleg V. Poliannikov ◽  
Alison E. Malcolm ◽  
Hugues Djikpesse ◽  
Michael Prange

Hydraulic fracturing is the process of injecting high-pressure fluids into a reservoir to induce fractures and thus improve reservoir productivity. Microseismic event localization is used to locate the created fractures. Traditionally, events are localized individually; available information about events already localized is not used to help estimate other source locations. Traditional localization methods yield an uncertainty that is inversely proportional to the square root of the number of receivers. However, in applications where multiple fractures are created, multiple sources in a reference fracture may provide redundant information about unknown events in subsequent fractures that can boost the signal-to-noise ratio, improving estimates of the event positions. We used sources in fractures closer to the monitoring well to help localize events farther away. It is known through seismic interferometry that, with a 2D array of receivers, the traveltime between two sources may be recovered from a crosscorrelogram of two common source gathers. This allowed an event in the second fracture to be localized relative to an event in the reference fracture. A difficulty became evident when receivers were located in a single monitoring well: when the receiver array is 1D, classical interferometry cannot be directly employed because the problem becomes underdetermined. In our approach, interferometry was used to partially redatum microseismic events from the second fracture onto the reference fracture so that they could be used as virtual receivers, providing additional information complementary to that provided by the physical receivers. Our error analysis showed that, in addition to the gain obtained by having multiple physical receivers, the location uncertainty is inversely proportional to the square root of the number of sources in the reference fracture.
Because the number of microseismic sources is usually high, the proposed method will usually result in more accurate location estimates than traditional methods.
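The concluding scaling, location uncertainty inversely proportional to the square root of the number of reference-fracture sources, can be illustrated numerically under an idealized assumption that each reference source contributes an independent, equally noisy constraint on the event position:

```python
import numpy as np

rng = np.random.default_rng(4)

def location_std(n_ref, sigma=1.0, trials=4000):
    """Std of a location estimate formed by averaging n_ref independent
    noisy constraints, each with noise standard deviation sigma."""
    est = rng.normal(0.0, sigma, size=(trials, n_ref)).mean(axis=1)
    return est.std()

s1, s16 = location_std(1), location_std(16)
print(s1 / s16)  # close to sqrt(16) = 4
```

The real geometry couples the constraints, so this is only the ideal-case bound; the abstract's point is that the usually large number of reference sources makes even a partial version of this gain worthwhile.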


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7253
Author(s):  
Xintao Duan ◽  
Mengxiao Gou ◽  
Nao Liu ◽  
Wenxin Wang ◽  
Chuan Qin

Traditional cover-modification steganography methods offer only low embedding capacity. To solve this problem, we propose a steganography method based on the Xception convolutional neural network architecture, which is built on depthwise separable convolutional layers. The Xception architecture is used for image steganography for the first time; it not only increases the width of the network but also improves the adaptability of network expansion, and it incorporates different receptive fields to capture multi-scale information. By introducing skip connections, we address the problems of vanishing gradients and network degradation in the Xception architecture. After cascading the secret image and the cover image, high-quality images can be reconstructed through the network, which greatly improves the speed of steganography. When hiding, only the secret image and the cover image are cascaded, and the secret image is then embedded in the cover image through the hiding network to obtain the stego image. During extraction, the secret image is reconstructed by passing the stego image through the extraction network. The results show that the images obtained by our model have high peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), and the average load capacity is 23.96 bpp (bits per pixel), thus realizing large-capacity image steganography.
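The PSNR metric reported above can be computed directly. The unit-level stego perturbation below is a hypothetical stand-in for the network's output (roughly what a least-significant-bit change looks like), not the Xception model itself:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(5)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
# A stego image deviating from the cover by at most +/-1 per pixel,
# as after embedding in the least significant bits.
stego = np.clip(cover.astype(int) + rng.integers(-1, 2, size=cover.shape), 0, 255)

print(round(psnr(cover, stego), 1))  # roughly 50 dB for unit-level changes
```

Capacity in bpp is the companion figure: bits hidden divided by cover pixels, so hiding a full 8-bit image of the same size in three color channels corresponds to the roughly 24 bpp regime the abstract reports.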


2007 ◽  
Vol 19 (7) ◽  
pp. 1798-1853 ◽  
Author(s):  
Kenji Morita ◽  
Masato Okada ◽  
Kazuyuki Aihara

Inspired by recent studies regarding dendritic computation, we constructed a recurrent neural network model incorporating dendritic lateral inhibition. Our model consists of an input layer and a neuron layer that includes excitatory cells and an inhibitory cell; this inhibitory cell is activated by the pooled activities of all the excitatory cells, and it in turn inhibits each dendritic branch of the excitatory cells that receive excitations from the input layer. The dendritic nonlinear operation, consisting of branch-specifically rectified inhibition and saturation, is described by imposing nonlinear transfer functions before summation over the branches. In this model, with sufficiently strong recurrent excitation, transiently presenting a stimulus that has a high correlation with the feedforward connections of one of the excitatory cells makes the corresponding cell highly active, and the activity is sustained after the stimulus is turned off, whereas all the other excitatory cells continue to have low activities. However, on transiently presenting a stimulus that does not have high correlations with the feedforward connections of any of the excitatory cells, all the excitatory cells continue to have low activities. Interestingly, such stimulus-selective sustained response is preserved over a wide range of stimulus intensity. We derive an analytical formulation of the model in the limit where individual excitatory cells have an infinite number of dendritic branches and prove the existence of an equilibrium point corresponding to the balanced low-level activity state observed in the simulations, whose stability depends solely on the signal-to-noise ratio of the stimulus. We propose this as a model of stimulus selectivity equipped simultaneously with self-sustainability and intensity invariance, which was difficult to achieve in conventional competitive neural networks of similar architectural complexity. 
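The branch-specific rectify-and-saturate operation, and the intensity invariance it buys, can be sketched in a single feedforward step. The binary feedforward patterns and the fixed inhibition level are illustrative assumptions made here; the paper's recurrent dynamics and pooled inhibitory feedback are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(6)

def cell_response(stimulus, weights, inhibition=0.2):
    """Each dendritic branch rectifies and saturates its own excitation
    minus the shared lateral inhibition, before summation over branches."""
    return np.clip(weights * stimulus - inhibition, 0.0, 1.0).sum()

n = 200
pref_a = rng.integers(0, 2, n)  # feedforward pattern of cell A
pref_b = rng.integers(0, 2, n)  # feedforward pattern of cell B

for intensity in (2.0, 5.0, 20.0):
    stim = pref_a * intensity  # stimulus correlated with cell A's weights
    ra = cell_response(stim, pref_a)
    rb = cell_response(stim, pref_b)
    print(intensity, ra > rb)  # A wins at every intensity
```

Because each branch saturates at the same ceiling, scaling the stimulus tenfold leaves both responses unchanged once the matching branches saturate, which is the one-step analogue of the intensity-invariant selectivity described above.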
We discuss the biological relevance of the model in a general framework of computational neuroscience.


1986 ◽  
Vol 40 (3) ◽  
pp. 401-405 ◽  
Author(s):  
M. Handke ◽  
N. J. Harrick

The principal problem in the measurement of emission IR spectra is the low signal-to-noise ratio resulting from the large background radiation relative to sample emission. One method of increasing the signal is to collect the emitted radiation over a very large solid angle using an ellipsoidal mirror. In this method, placing the sample at the short focal length of the ellipsoid both increases the amount of radiation collected, for an improved signal-to-noise ratio, and facilitates sampling of small areas. For locating the area of interest, a microscope is mounted on the emission accessory. The results of testing this emission accessory under different operating conditions, such as different samples, emission angles, and temperatures, are presented.


2020 ◽  
Vol 39 (5) ◽  
pp. 6773-6782
Author(s):  
Snekha Thakran

The electrocardiogram (ECG) signal records the electrical activity of the heart. It is very difficult for physicians to inspect the heart's condition from an ECG signal if noise is embedded during acquisition. This paper proposes denoising of electrocardiogram signals based on a genetic particle filter algorithm (GPFA) using fuzzy thresholding and ensemble empirical mode decomposition (EEMD), which efficiently removes noise from the ECG signal. The scheme has two phases. In the first phase, the noisy signal is decomposed into intrinsic mode functions (IMFs) with the help of EEMD; EEMD improves on EMD because it removes the mode-mixing effect. In the second phase, the IMFs corrupted by noise are identified using the spectral flatness of each IMF and fuzzy thresholding. The corrupted IMFs are filtered using the GPF method to remove the noise, and the signal is then reconstructed from the processed IMFs to obtain the denoised ECG. The proposed algorithm is evaluated on a local hospital database and gives better root mean square error and signal-to-noise ratio than other existing techniques: the wavelet transform (WT), EMD, the particle filter (PF) based method, extreme-point symmetric mode decomposition with nonlocal means (ESMD-NLM), and the discrete wavelet with Savitzky-Golay (DW-SG) filter.
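The spectral-flatness measure used to flag noise-dominated IMFs can be sketched as follows. The threshold values in the comments are illustrative, not the paper's fuzzy thresholds, and the two synthetic signals stand in for clean and noisy decomposition modes:

```python
import numpy as np

rng = np.random.default_rng(7)

def spectral_flatness(x):
    """Geometric mean over arithmetic mean of the power spectrum:
    near 1 for white noise, near 0 for a narrowband (tonal) signal."""
    psd = np.abs(np.fft.rfft(x)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

fs = 360.0  # a common ECG sampling rate
t = np.arange(0, 4.0, 1 / fs)
tonal_imf = np.sin(2 * np.pi * 8 * t)  # clean oscillatory mode
noisy_imf = rng.normal(size=t.size)    # noise-dominated mode

print(spectral_flatness(tonal_imf))  # near 0: spectrum concentrated
print(spectral_flatness(noisy_imf))  # well above 0: spectrum flat
```

An IMF whose flatness is high is treated as noise-corrupted and routed to the filtering stage, while low-flatness IMFs pass to reconstruction untouched.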


2018 ◽  
Vol 8 (12) ◽  
pp. 2604 ◽  
Author(s):  
Zhiguo Huang ◽  
Rui Huang ◽  
Xiaojun Xue

To determine the feasibility of observing high-orbit targets with a large-aperture telescope, we created an electronics-based simulation to evaluate the signal-to-noise ratio (SNR) model of an infrared ground-based photoelectric system. Atmospheric transmission and sky background radiation data were obtained using MODTRAN software, and the SNRs of the high-orbit target (HOT) at different temperatures and orbit heights were then calculated separately. The results showed that observation of the HOT in a short band was possible, and short-wave performance was excellent at low temperatures. On the basis of this model, some space targets were observed by a K-band photoelectric telescope for verification, with encouraging results. Thus, the model can be used as a basis for determining whether a HOT can be detected.
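A generic shot-noise-limited detector SNR budget of the kind such a model evaluates can be sketched as follows. The electron counts are hypothetical numbers chosen for illustration; the paper's MODTRAN-derived atmosphere and sky-background terms are not reproduced:

```python
import math

def detector_snr(signal_e, background_e, dark_e, read_noise_e, n_frames=1):
    """Shot-noise-limited SNR: signal electrons over the root-sum of
    signal, background, and dark shot noise plus read noise, improving
    with the square root of the number of co-added frames."""
    noise = math.sqrt(signal_e + background_e + dark_e + read_noise_e ** 2)
    return math.sqrt(n_frames) * signal_e / noise

# Hypothetical per-frame electron counts for a bright infrared background.
snr1 = detector_snr(signal_e=800, background_e=5000, dark_e=200,
                    read_noise_e=30)
snr9 = detector_snr(signal_e=800, background_e=5000, dark_e=200,
                    read_noise_e=30, n_frames=9)
print(snr1, snr9)  # co-adding 9 frames triples the single-frame SNR
```

The background term dominating the denominator is why the abstract's short-band, low-temperature regime helps: colder scenes push the thermal sky background down, raising the SNR for a fixed target signal.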

