A Semi-Analytical Model for Gravitational Microlensing

2021 ◽  
Author(s):  
◽  
Paul Robin Brian Chote

<p>This thesis describes the theory and implementation of a semi-analytical model for gravitational microlensing. Gravitational microlensing is observed when a distant background `source' star comes into close alignment with an intermediate `lens' star. The gravitational field of the lens deflects the paths of light emitted from the source, which causes an increase in its observed brightness. As the alignment of the two stars changes with time, the apparent magnification of the source follows a well-defined `lightcurve'. A companion body (such as a planet) orbiting the lens star can introduce large deviations from the standard lightcurve, which can be modelled to determine a mass ratio and separation for the companion(s). This provides a means to detect extrasolar planets orbiting the lens star. We show, from basic principles, the development of the standard model of a microlensing event, including the effect of multiple lens masses and orbital motion. We discuss the two distinctly different numerical approaches that are used to calculate theoretical lightcurves using this model. The `ray shooting' approaches are discussed with reference to the previously developed modelling code (MLENS), which implemented them. This is followed by a comprehensive description of the `semi-analytical' approaches used in the new software (mlens2) developed during this thesis programme; a key feature of these techniques is the determination of the source magnification from the roots of a high-order polynomial. We also discuss the process of finding the best-fit model for an observed microlensing event, with respect to the mlens2 software package. Finally, we demonstrate the capabilities of our semi-analytical model by generating theoretical lightcurves for the microlensing events OGLE-2005-BLG-390 and OGLE-2006-BLG-109 and comparing them to the observational data and published models.</p>
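The polynomial-root machinery described in the abstract is needed for multiple-lens models; for the single-lens case the magnification has a well-known closed form, A(u) = (u² + 2) / (u·√(u² + 4)), with u the lens-source separation in Einstein radii. A minimal sketch of the resulting standard (Paczyński) lightcurve, with illustrative parameter values of my own choosing:

```python
import numpy as np

def pspl_magnification(t, t0, tE, u0):
    """Point-source point-lens magnification at time t.

    t0 is the time of closest approach, tE the Einstein-radius
    crossing time, u0 the impact parameter (Einstein radii).
    Uses the standard single-lens closed form; multiple-lens
    models instead require solving a high-order polynomial.
    """
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

# Example lightcurve: magnification peaks at t = t0.
t = np.linspace(-30.0, 30.0, 601)   # days
A = pspl_magnification(t, t0=0.0, tE=10.0, u0=0.1)
```

For u0 = 0.1 the peak magnification is roughly 10, which is why close alignments produce such prominent brightening.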


2018 ◽  
Vol 84 (11) ◽  
pp. 74-87
Author(s):  
V. B. Bokov

A new statistical method for response steepest improvement is proposed. This method is based on an initial experiment performed on a two-level factorial design with a first-order statistical linear model in coded numerical factors and response variables. The factors for the runs of response steepest improvement are estimated from the data of the initial experiment and the determination of the conditional extremum. Confidence intervals are determined for those factors. The first-order polynomial response function fitted to the data of the initial experiment makes it possible to predict the response of the runs for response steepest improvement. The linear model of the response prediction, as well as the results of the estimation of the parameters of the linear model for the initial experiment and factors for the experiments of the steepest improvement of the response, are used when finding prediction response intervals in these experiments. Knowledge of the prediction response intervals in the runs of steepest improvement of the response makes it possible to detect results beyond their limits and to find the limiting values of the factors for which further runs of response steepest improvement become ineffective and a new initial experiment must be carried out.


2020 ◽  
Vol 16 ◽  
Author(s):  
Natasa P. Kalogiouri ◽  
Natalia Manousi ◽  
Erwin Rosenberg ◽  
George A. Zachariadis ◽  
Victoria F. Samanidou

Background: Nuts have been incorporated into guidelines for healthy eating since they contain considerable amounts of antioxidants, and their effects are related to health benefits since they contribute to the prevention of nutritional deficiencies. The micronutrient characterization is based mainly on the determination of phenolics, the most abundant class of bioactive compounds in nuts. Terpenes constitute another class of bioactive compounds that are present in nuts and show high volatility. The analyses of phenolic compounds and terpenes are very demanding tasks that require optimization of the chromatographic conditions to improve the separation of the components. Moreover, nuts are rich in unsaturated fatty acids and are therefore considered cardioprotective. Gas chromatography is the predominant instrumental analytical technique for the determination of derivatized fatty acids and terpenes in food matrices, while high-performance liquid chromatography is currently the most popular technique for the determination of phenolic compounds.

Objective: This review summarizes the recent advances in the optimization of the chromatographic conditions for the determination of phenolic compounds, fatty acids and terpenes in nuts.

Conclusion: The state of the art in the available technology is critically discussed, exploring new analytical approaches to reduce the time of analysis and improve the performance of the chromatographic systems in terms of precision, reproducibility, limits of detection and quantification, and the overall quality of the results.


2021 ◽  
Vol 11 (4) ◽  
pp. 1482
Author(s):  
Róbert Huňady ◽  
Pavol Lengvarský ◽  
Peter Pavelka ◽  
Adam Kaľavský ◽  
Jakub Mlotek

The paper deals with methods for establishing equivalent boundary conditions in finite element models, based on the finite element model updating technique. The proposed methods rest on determining the stiffness parameters in the section plane or region where the boundary condition or the removed part of the model is replaced by a bushing connector. Two methods for determining its elastic properties are described. In the first case, the stiffness coefficients are determined by a series of static finite element analyses that obtain the response of the removed part to the six basic types of loads. The second method is a combination of experimental and numerical approaches. The natural frequencies obtained by measurement are used in finite element (FE) optimization, in which the response of the model is tuned by changing the stiffness coefficients of the bushing. Both methods provide a good estimate of the stiffness at the region where the model is replaced by an equivalent boundary condition. This increases the accuracy of the numerical model and also saves computational time and capacity owing to the reduced element count.
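The first method reduces, per degree of freedom, to dividing each applied unit load by the displacement or rotation it produces at the section. A minimal sketch under the simplifying assumption of a diagonal (uncoupled) bushing stiffness; all numbers are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical responses of the removed part to six unit load
# cases (three forces in N, three moments in N*m) applied at
# the section plane, obtained from static FE analyses.
unit_loads = np.ones(6)
responses = np.array([2.0e-6, 2.5e-6, 1.0e-6,      # translations, m
                      4.0e-5, 3.0e-5, 5.0e-5])     # rotations, rad

# Diagonal bushing stiffness, one coefficient per DOF:
# k_i = load_i / response_i.
stiffness = unit_loads / responses
```

A full 6x6 stiffness matrix with coupling terms would instead require solving the linear system formed by all six load cases together; the diagonal form is the simplest usable approximation.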


2021 ◽  
Vol 22 (12) ◽  
pp. 6283
Author(s):  
Jérémy Lamarche ◽  
Luisa Ronga ◽  
Joanna Szpunar ◽  
Ryszard Lobinski

Selenoprotein P (SELENOP) is an emerging marker of the nutritional status of selenium and of various diseases, however, its chemical characteristics still need to be investigated and methods for its accurate quantitation improved. SELENOP is unique among selenoproteins, as it contains multiple genetically encoded SeCys residues, whereas all the other characterized selenoproteins contain just one. SELENOP occurs in the form of multiple isoforms, truncated species and post-translationally modified variants which are relatively poorly characterized. The accurate quantification of SELENOP is contingent on the availability of specific primary standards and reference methods. Before recombinant SELENOP becomes available to be used as a primary standard, careful investigation of the characteristics of the SELENOP measured by electrospray MS and strict control of the recoveries at the various steps of the analytical procedures are strongly recommended. This review critically discusses the state-of-the-art of analytical approaches to the characterization and quantification of SELENOP. While immunoassays remain the standard for the determination of human and animal health status, because of their speed and simplicity, mass spectrometry techniques offer many attractive and complementary features that are highlighted and critically evaluated.


2020 ◽  
pp. 136943322098170
Author(s):  
Michele Fabio Granata ◽  
Antonino Recupero

In concrete box girders, the amount and distribution of reinforcements in the webs have to be estimated considering the local effects due to eccentric external loads and cross-sectional distortion and not only the global effect due to the resultant forces of a longitudinal analysis: shear, torsion and bending. This work presents an analytical model that allows designers to take into account the interaction of all these effects, global and local, for the determination of the reinforcements. The model is based on the theory of stress fields and it has been compared to a 3D finite element analysis, in order to validate the interaction domains. The results show how the proposed analytical model allows an easy and reliable reinforcement evaluation, in agreement with a more refined 3D analysis but with a reduced computational burden.


1997 ◽  
Vol 12 (3) ◽  
pp. S14-S15 ◽  
Author(s):  
J. Dingwell ◽  
T. Ovaert ◽  
D. Lemmon ◽  
P.R. Cavanagh

1975 ◽  
Vol 8 (4) ◽  
pp. 451-506 ◽  
Author(s):  
F Conti ◽  
E. Wanke

The basic principles underlying fluctuation phenomena in thermodynamics have long been understood (for reviews see Kubo, 1957; Kubo, Matsuo & Kazuhiro, 1973; Lax, 1960). Classical examples of how fluctuation analysis can provide an insight into the corpuscular nature of matter are the determination of Avogadro's number according to Einstein's theory of Brownian motion (see, e.g. Uhlenbeck & Ornstein, 1930; Kac, 1947) and the evaluation of the electronic charge from the shot noise in vacuum tubes (see Van der Ziel, 1970).
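The shot-noise route to the electronic charge rests on Schottky's relation S_I = 2qI between the current-noise spectral density and the mean current. A minimal sketch of the inversion (the numbers are illustrative, not measurements from the review):

```python
# Schottky's relation S_I = 2*q*I: the one-sided current-noise
# spectral density of a Poissonian stream of charges q at mean
# current I. Measuring S_I and I therefore yields q.
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs, exact SI value

def shot_noise_psd(current_amps, q=ELEMENTARY_CHARGE):
    """One-sided current-noise spectral density, in A^2/Hz."""
    return 2.0 * q * current_amps

def charge_from_noise(psd, current_amps):
    """Invert S_I = 2*q*I to estimate the carrier charge."""
    return psd / (2.0 * current_amps)
```

At I = 1 mA the predicted density is about 3.2e-22 A²/Hz, which is why such measurements demand careful separation of shot noise from thermal and flicker noise.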


Author(s):  
Gino James Rouss ◽  
William S. Janna

The valve coefficient was measured for 1, 1-1/4, 1-1/2 and 2 nominal ball valves. A recently designed orifice insert was used with these valves to obtain smaller valve coefficients. Orifice inserts were threaded into the body of a ball valve just upstream of the ball itself. The valve coefficient was measured for every insert used with these valves, and an expression was determined to relate the orifice diameter to other pertinent flow parameters. Two dimensionless groups were chosen to correlate the collected data, and expressions were developed that can be used as aids in sizing the orifice insert needed to obtain the desired valve coefficient. The study has shown that both a second-order polynomial equation and a power-law equation can be used to predict the desired results. Knowing pipe size and schedule, the diameter of the orifice insert needed to obtain the required valve coefficient can be approximated with minimum error. An error analysis performed on the collected data shows that the results are highly accurate and that the experimental process is repeatable.
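Fitting both correlation forms mentioned in the abstract is a two-line exercise once the data are in dimensionless form. A sketch using invented (diameter ratio, valve coefficient) pairs, since the paper's measured values are not given here; the power law is fitted in log space:

```python
import numpy as np

# Hypothetical (orifice diameter ratio, valve coefficient) pairs
# standing in for the measured data.
beta = np.array([0.3, 0.4, 0.5, 0.6, 0.7])
cv = np.array([2.7, 6.4, 12.5, 21.6, 34.3])

# Second-order polynomial correlation: Cv ~ a*beta^2 + b*beta + c.
poly = np.polyfit(beta, cv, 2)

# Power-law correlation Cv ~ k*beta^n, linearised via logarithms.
n, log_k = np.polyfit(np.log(beta), np.log(cv), 1)
k = np.exp(log_k)
```

Inverting either fitted expression for beta then gives the orifice diameter needed to hit a target valve coefficient for a known pipe size.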

