An MLE Approach to Metrology of Laser Propagation Axes using a-Posteriori Uncertainty Estimates Determined Through Monte-Carlo Experimentation

2021 ◽  
Author(s):  
Sheldon Deeny ◽  
Daniel Champion
2018 ◽  
Vol 11 (8) ◽  
pp. 4627-4643 ◽  
Author(s):  
Simon Pfreundschuh ◽  
Patrick Eriksson ◽  
David Duncan ◽  
Bengt Rydberg ◽  
Nina Håkansson ◽  
...  

Abstract. A neural-network-based method, quantile regression neural networks (QRNNs), is proposed as a novel approach to estimating the a posteriori distribution of Bayesian remote sensing retrievals. The advantage of QRNNs over conventional neural network retrievals is that they learn to predict not only a single retrieval value but also the associated, case-specific uncertainties. In this study, the retrieval performance of QRNNs is characterized and compared to that of other state-of-the-art retrieval methods. A synthetic retrieval scenario is presented and used as a validation case for the application of QRNNs to Bayesian retrieval problems. The QRNN retrieval performance is evaluated against Markov chain Monte Carlo simulation and another Bayesian method based on Monte Carlo integration over a retrieval database. The scenario is also used to investigate how different hyperparameter configurations and training set sizes affect the retrieval performance. In the second part of the study, QRNNs are applied to the retrieval of cloud top pressure from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS). It is shown that QRNNs are not only capable of achieving similar accuracy to standard neural network retrievals but also provide statistically consistent uncertainty estimates for non-Gaussian retrieval errors. The results presented in this work show that QRNNs are able to combine the flexibility and computational efficiency of the machine learning approach with the theoretically sound handling of uncertainties of the Bayesian framework. Together with this article, a Python implementation of QRNNs is released through a public repository to make the method available to the scientific community.
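The ingredient that lets a QRNN predict case-specific uncertainties is training against the quantile (pinball) loss rather than the mean squared error. The sketch below shows that loss in plain NumPy for a single quantile level; the function and variable names are illustrative and are not taken from the released Python package.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss for a single quantile level tau in (0, 1).

    Minimizing this loss over a training set drives y_pred toward the
    tau-quantile of the conditional distribution of y_true.
    """
    error = y_true - y_pred
    return np.mean(np.maximum(tau * error, (tau - 1.0) * error))

# Example: evaluate the loss for the median (tau = 0.5) and the 90th percentile.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.8, 3.5])
print(pinball_loss(y_true, y_pred, 0.5))
print(pinball_loss(y_true, y_pred, 0.9))
```

Training one output neuron per quantile level against this loss yields a discretized estimate of the a posteriori cumulative distribution function, which is what allows the network to report case-specific uncertainties rather than a single retrieval value.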



2008 ◽  
Vol 37 (2) ◽  
pp. 261-272
Author(s):  
Tarcisio de Moraes Gonçalves ◽  
Ana Luísa Lopes da Costa ◽  
Juliana Salgado Laranjo ◽  
Mary Ana Petersen Rodriguez ◽  
Geovanne Ferreira Rebouças

A total of 1,129 animals were used: 298 F1 and 831 F2 for intramuscular fat (IMF, %) and weight gain (WG, g/day), and 324 F1 and 805 F2 for backfat thickness (BT, mm), obtained by crossing Meishan boars with Large White and Landrace sows. The animals were genotyped for molecular markers covering the whole genome. Chromosomes 1, 2, 4, 5, 6, 7, 13, 14, and 19 were studied for BT and IMF, and chromosomes 1, 2, 4, 6, 7, 8, 13, 17, and 19 for WG between 25 and 90 kg of live weight (LW). QTL analyses using Bayesian methodology were applied through a genetic-statistical model combining infinite polygenic (IPM), finite polygenic (FPM), and QTL effects. Summaries of the estimated parameters were based on the marginal posterior distributions obtained by Markov chain Monte Carlo (MCMC). Overall, the results provided evidence of a QTL for BT, regardless of the prior studied. It was not possible to detect QTL for IMF and WG with this methodology, which may be related to uninformative markers or to the absence of QTL segregating on the chromosomes studied. There are advantages in analyzing experimental data by fitting combined genetic models rather than considering only the polygenic or the oligogenic model. The analyses illustrate the usefulness and applicability of the Bayesian method in which finite models were used.
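The marginal posterior summaries referred to above are obtained by Markov chain Monte Carlo sampling. The snippet below is a minimal random-walk Metropolis sketch for a toy one-parameter posterior, not the authors' combined polygenic/QTL model; the likelihood, prior, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, data):
    # Toy example: normal likelihood with known unit variance and a flat prior.
    # The actual QTL analysis combines polygenic and QTL effects.
    return -0.5 * np.sum((data - theta) ** 2)

def metropolis(data, n_iter=5000, step=0.5):
    """Random-walk Metropolis sampler returning draws from the posterior of theta."""
    theta = 0.0
    draws = np.empty(n_iter)
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_posterior(proposal, data) - log_posterior(theta, data):
            theta = proposal  # accept the proposal
        draws[i] = theta      # otherwise keep the current state
    return draws

data = rng.normal(2.0, 1.0, size=50)
draws = metropolis(data)
# Posterior mean and 95% credible interval after discarding burn-in draws.
print(draws[1000:].mean(), np.quantile(draws[1000:], [0.025, 0.975]))
```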


Author(s):  
Trinh Tran Hong Duyen ◽  
Tran Anh Tu

Lasers and optics are now widely used in medicine, particularly in low-level laser therapy. Light with wavelengths from 633 nm to 1200 nm can penetrate and propagate deep into biological tissue. In developing a low-level laser therapy device, optimizing light delivery is critical to accurately stimulate the intended biological effects within the tissue. However, each tissue type at each zone of the body has different refractive index, absorption, scattering, and anisotropy coefficients. This paper describes Monte Carlo simulation results for low-level laser propagation from the skin surface at the lower spine, the knee, the femur, and the prostate gland at four wavelengths (633 nm, 780 nm, 850 nm, and 940 nm). These simulation results form the basis for developing a low-level laser therapy device that could be used clinically to treat fractures, knee osteoarthritis, spinal degeneration, and benign prostatic hypertrophy.
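For orientation, Monte Carlo photon-transport simulations of this kind track photon packets through repeated absorption and scattering events. The sketch below is a heavily simplified, depth-only random walk with placeholder optical coefficients (the paper's tissue-specific values are not reproduced here), and the Henyey-Greenstein sample is applied directly to the axial direction cosine rather than rotated about the current direction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative optical properties (per mm) at a near-infrared wavelength;
# these are placeholder values, not the coefficients used in the paper.
mu_a, mu_s, g = 0.02, 10.0, 0.9   # absorption, scattering, anisotropy
mu_t = mu_a + mu_s                # total attenuation coefficient

def simulate_photon(max_steps=1000):
    """Track one photon packet; return the depths and weights of absorption events."""
    z, cos_theta, weight = 0.0, 1.0, 1.0   # start at the surface, heading into the tissue
    depths, deposits = [], []
    for _ in range(max_steps):
        step = -np.log(rng.uniform()) / mu_t   # sample the free path length
        z += cos_theta * step
        if z < 0.0:                            # photon escaped back through the surface
            break
        deposit = weight * (mu_a / mu_t)       # fraction of the weight absorbed here
        depths.append(z)
        deposits.append(deposit)
        weight -= deposit
        if weight < 1e-4:                      # packet effectively absorbed
            break
        # Henyey-Greenstein sampling of the new direction cosine (simplified: applied to the z-axis).
        s = (1 - g * g) / (1 - g + 2 * g * rng.uniform())
        cos_theta = (1 + g * g - s * s) / (2 * g)
    return depths, deposits

all_d, all_w = [], []
for _ in range(2000):
    d, w = simulate_photon()
    all_d += d
    all_w += w
print("weight-averaged absorption depth (mm):", np.average(all_d, weights=all_w))
```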


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1078
Author(s):  
Ruxandra Stoean ◽  
Catalin Stoean ◽  
Miguel Atencia ◽  
Roberto Rodríguez-Labrada ◽  
Gonzalo Joya

Uncertainty quantification in deep learning models is especially important for medical applications of this complex and successful type of neural architecture. One popular technique is Monte Carlo dropout, which yields a set of sampled outputs for each record that can be summarized statistically by the average probability and variance for each diagnostic class of the problem. The current paper puts forward a convolutional–long short-term memory network model with a Monte Carlo dropout layer for obtaining information on the model uncertainty for the saccadic records of all patients. These are then used to assess the uncertainty of the learning model at the higher level of sets of multiple records (i.e., registers) that are gathered for one patient case by the examining physician towards an accurate diagnosis. Means and standard deviations are additionally calculated for the Monte Carlo uncertainty estimates of groups of predictions. These serve as a new collection on which a random forest model can perform both classification and ranking of variable importance. The approach is validated on a real-world problem of classifying electrooculography time series for the early detection of spinocerebellar ataxia 2 and reaches an accuracy of 88.59% in distinguishing between the three classes of patients.
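Monte Carlo dropout itself amounts to keeping the dropout layers stochastic at inference time and summarizing repeated forward passes. The PyTorch sketch below uses a small stand-in fully connected classifier rather than the paper's convolutional-LSTM; shapes and parameters are illustrative.

```python
import torch
import torch.nn as nn

# Stand-in classifier with a dropout layer; only for illustration.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 3)
)

def mc_dropout_predict(model, x, n_samples=100):
    """Run repeated stochastic forward passes with dropout active and
    return the per-class mean probability and variance."""
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.var(dim=0)

x = torch.randn(1, 32)   # one record's feature vector (illustrative size)
mean_p, var_p = mc_dropout_predict(model, x)
print(mean_p, var_p)
```

A high variance for the predicted class flags a record whose prediction the downstream analysis (here, the per-register aggregation) should treat with caution.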


2006 ◽  
Vol 3 (2) ◽  
pp. 219-230 ◽  
Author(s):  
Basil T. Wong ◽  
M. Pinar Mengüç ◽  
R. Ryan Vallance

A methodology is presented for nanometer-size patterning of a workpiece using both an electron beam and a laser. A Monte Carlo/ray-tracing technique is used to model electron-beam propagation inside a thin gold film. This approach is identical to that of a typical Monte Carlo simulation in radiative transfer, except that appropriate electron scattering properties are employed. Laser propagation within the one-dimensional, non-scattering film on top of a quartz substrate is modeled using a ray-tracing approach, and reflections at the boundaries are accounted for with the Fresnel expressions. The temperature distribution inside the gold film is then predicted using the Fourier law of heat conduction, after evaluating the accuracy of the model for the range considered. A sequential nano-pattern is created using these coupled numerical simulations. The procedure presented here is the first to outline the sequential nano-machining process and is likely to guide experimental studies to success with fewer trial-and-error attempts.
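As a small illustration of the boundary treatment mentioned above, the sketch below evaluates the unpolarized Fresnel power reflectance at a planar interface; the refractive indices and the incidence angle are placeholder values, not those of the gold film and quartz substrate studied in the paper.

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i):
    """Unpolarized Fresnel power reflectance at a planar interface.

    n1, n2: refractive indices of the incident and transmitting media
    theta_i: incidence angle in radians
    """
    sin_t = n1 / n2 * np.sin(theta_i)
    if abs(sin_t) >= 1.0:        # total internal reflection
        return 1.0
    theta_t = np.arcsin(sin_t)
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n1 * np.cos(theta_t) - n2 * np.cos(theta_i)) / (n1 * np.cos(theta_t) + n2 * np.cos(theta_i))
    return 0.5 * (rs ** 2 + rp ** 2)

# Example: air-to-quartz interface at 30 degrees (illustrative indices).
print(fresnel_reflectance(1.0, 1.46, np.radians(30.0)))
```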


2021 ◽  
Author(s):  
Paulo Chagas ◽  
Luiz Souza ◽  
Izabelle Pontes ◽  
Rodrigo Calumby ◽  
Michele Angelo ◽  
...  

Membranous Nephropathy (MN) is one of the most common glomerular diseases that cause adult nephrotic syndrome. To assist pathologists in MN classification, we evaluated three deep-learning-based architectures, namely ResNet-18, DenseNet and Wide-ResNet. In addition, to obtain more reliable results, we applied Monte Carlo dropout for uncertainty estimation. We achieved average F1-scores above 92% for all models, with Wide-ResNet obtaining the highest average F1-score (93.2%). For uncertainty estimation on Wide-ResNet, the uncertainty scores correlated strongly with incorrect classifications, indicating that these uncertainty estimates can support pathologists in the analysis of model predictions.


Solid Earth ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 1663-1684 ◽  
Author(s):  
Evren Pakyuz-Charrier ◽  
Mark Jessell ◽  
Jérémie Giraud ◽  
Mark Lindsay ◽  
Vitaliy Ogarko

Abstract. This paper proposes and demonstrates improvements to the Monte Carlo simulation for uncertainty propagation (MCUP) method. MCUP is a type of Bayesian Monte Carlo method aimed at propagating input data uncertainty in implicit 3-D geological modeling. In the Monte Carlo process, a series of statistically plausible models is built from the input dataset whose uncertainty is to be propagated to a final probabilistic geological model or uncertainty index model. Significant differences in topology are observed in the plausible model suite that is generated as an intermediary step in MCUP. These differences are interpreted as analogous to population heterogeneity. The source of this heterogeneity is traced to the non-linear relationship between the variability of the plausible datasets and that of the plausible models. The non-linearity is shown to arise mainly from the effect of the geometrical rule set on model building, which transforms continuous lithological interfaces into piecewise discontinuous ones. Plausible model heterogeneity induces topological heterogeneity and challenges the underlying assumption of homogeneity on which global uncertainty estimates rely. To address this issue, a method for topological analysis applied to the plausible model suite in MCUP is introduced. Boolean topological signatures recording lithological unit adjacency are used as n-dimensional points to be considered individually or clustered using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. The proposed method is tested on two challenging synthetic examples with varying levels of confidence in the structural input data. Results indicate that topological signatures constitute a powerful discriminant to address plausible model heterogeneity. Basic topological signatures appear to be a reliable indicator of the structural behavior of the plausible models and provide useful geological insights. Moreover, ignoring heterogeneity was found to be detrimental to the accuracy and relevance of the probabilistic geological models and uncertainty index models. Highlights: (i) Monte Carlo uncertainty propagation (MCUP) methods often produce topologically distinct plausible models; (ii) plausible models can be differentiated using topological signatures; (iii) topologically similar probabilistic geological models may be obtained through topological signature clustering.
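To make the clustering step concrete, the sketch below applies DBSCAN with a Hamming metric to a handful of made-up Boolean adjacency signatures; the signature values, eps, and min_samples are illustrative and are not taken from the paper's synthetic examples.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative Boolean topological signatures: each row flags which pairs of
# lithological units are adjacent in one plausible model (values are made up).
signatures = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
])

# Hamming distance counts the fraction of differing adjacency flags between two models.
clustering = DBSCAN(eps=0.25, min_samples=2, metric="hamming").fit(signatures)
print(clustering.labels_)  # models sharing a topology class get the same label; -1 marks outliers
```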


2020 ◽  
Vol 35 (10) ◽  
pp. 1105-1112
Author(s):  
Darko Kajfez

A frequently used Q factor measurement procedure consists of determining the input reflection coefficient vs. frequency with a network analyzer and processing the measured values with a data-fitting procedure to evaluate the location and size of the corresponding Q-circle. That information is then used to compute the values of the loaded and unloaded Q factors and the coupling coefficient of the resonator being tested. This paper describes a novel method of post-processing the measured data, which also provides information on the uncertainty of the obtained results. Numerical examples show that this a posteriori procedure can not only provide uncertainty estimates but also improve the accuracy of the results, even in the presence of a significant level of random measurement noise.
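The paper's post-processing method is not reproduced here, but the data-fitting step it builds on can be illustrated with a basic algebraic (Kasa) least-squares circle fit to complex reflection coefficient samples; the synthetic data and noise level below are assumptions for demonstration only.

```python
import numpy as np

def fit_q_circle(gamma):
    """Algebraic least-squares (Kasa) fit of a circle to complex reflection
    coefficient samples; returns the circle centre and radius."""
    x, y = gamma.real, gamma.imag
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    c, residuals, *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + xc ** 2 + yc ** 2)
    return complex(xc, yc), r

# Synthetic noisy Q-circle data (illustrative, not measured values).
rng = np.random.default_rng(2)
phi = np.linspace(0, 2 * np.pi, 200)
true = 0.3 + 0.1j + 0.45 * np.exp(1j * phi)
noisy = true + 0.01 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
print(fit_q_circle(noisy))
```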


2020 ◽  
Author(s):  
Patrick Eriksson ◽  
Simon Pfreundschuh ◽  
Teo Norrestad ◽  
Christian Kummerow

A novel method for the estimation of surface precipitation using passive observations from the GPM constellation is proposed. The method, which makes use of quantile regression neural networks (QRNNs), is shown to provide a more accurate representation of retrieval uncertainties and high processing speed, and it simplifies the integration of ancillary data into the retrieval. With that, it overcomes limitations of traditionally used methods, such as Monte Carlo integration as well as standard usage of machine learning.

The bulk of precipitation estimates provided by the Global Precipitation Measurement mission (GPM) is based on passive microwave observations. These data are produced by the GPROF algorithm, which applies a Bayesian approach denoted as Monte Carlo integration (MCI). In this work, we investigate the potential of using QRNNs as an alternative to MCI by assessing the performance of both methods using identical input databases.

The methods agree well regarding point estimates, but QRNN provides better estimates of the retrieval uncertainty while reducing processing times by an order of magnitude. As QRNN gives more precise uncertainty estimates than MCI, it provides an improved basis for further processing of the data, such as identification of extreme precipitation and areal integration.

Results so far indicate that a single network can handle all data from a sensor, in contrast to MCI, where observations over oceans and different land types have to be treated separately. Moreover, the flexibility of the machine-learning approach opens up opportunities for further improvements of the retrieval: ancillary information can be easily incorporated, and QRNN can be applied to multiple footprints to make better use of spatial information. The effects of these improvements are investigated on independent validation data from ground-based precipitation radars.

QRNN is shown here to be a highly interesting alternative for GPROF, but being a general approach, it should be of equally high interest for other precipitation and cloud retrievals.

