Remarks on the probabilities of error in physical observations, and on the density of the Earth, considered, especially with regard to the reduction of experiments on the pendulum. In a letter to Capt. Henry Kater, F.R.S. By Thomas Young, M.D., For. Sec. R.S.

In the first section of this letter, Dr. Young proceeds to examine in what manner the apparent constancy of many general results, subject to numerous causes of diversity, may be best explained; and shows that the combination of many independent causes of error, each liable to incessant fluctuation, has a natural tendency, dependent on their multiplicity and independence, to diminish the aggregate variation of their joint effect; a position illustrated by the simple case of supposing equal large numbers of black and white balls to be thrown into a box, and 100 of them to be drawn out at once or in succession; when it is demonstrated that there is 1 chance in 12 1/2 that exactly 50 of each kind will be drawn, and an even chance that there will not be more than 53 of either; and that it is barely possible that 100 black, or 100 white, should be drawn in succession. From calculations contained in this paper, Dr. Young infers that the original conditions of the probability of different errors do not considerably modify the conclusions respecting the accuracy of the mean result, because their effect is comprehended in the magnitude of the mean error from which these conclusions are deduced. The author also shows that the error of the mean, on account of this limitation, is never likely to be greater than six sevenths of the mean of all the errors divided by the square root of the number of observations.
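Young's ball-drawing illustration can be checked directly against the binomial distribution. A minimal sketch (a modern numerical check, not Young's own computation) reproducing the quoted figures of 1 chance in 12 1/2, the even chance for 47–53, and the "barely possible" extreme:

```python
from math import comb

# 100 balls drawn, each equally likely black or white (p = 1/2).
def pmf(k, n=100, p=0.5):
    """Probability that exactly k of the n drawn balls are black."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_exactly_50 = pmf(50)                              # ~0.0796, about 1 chance in 12.5
p_within_53  = sum(pmf(k) for k in range(47, 54))   # ~0.516, roughly an even chance
p_all_black  = 0.5**100                             # ~7.9e-31, "barely possible"
```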

1869 ◽  
Vol 17 ◽  
pp. 344-346

The Tables of Jupiter and Saturn which have been used for some years past in the computations of the ‘Berliner Jahrbuch’ and ‘Nautical Almanac,’ differ more from observation than is consistent with the present requirements of astronomy; and, moreover, abundant means for the correction of Bouvard’s ‘Elements’ exist in the publication of the Greenwich Planetary Observations, 1750-1835, and the annual volumes issued from the Royal Observatory since 1836. The present work, which has been undertaken for this purpose, is based exclusively on the Greenwich Observations, 1750-1865. Each mean group of observations in the Greenwich Planetary Reductions &c. gives the mean error of the planet’s tabular geocentric place, with its equivalent in terms of the heliocentric errors of the earth and planet; but in the present investigation the places of Carlini’s Solar Tables, which have been used throughout the whole period (with the exception of 1864 and 1865), have been accepted without alteration; for Jupiter and Saturn the factors of the earth’s heliocentric errors are so small, that the difference of Carlini’s Solar Tables from the recent investigations of Leverrier may be neglected.


1871 ◽  
Vol 16 (4) ◽  
pp. 285-303
Author(s):  
C. Bremiker

Having thus, as I believe, demonstrated that life insurance calculations have nothing to do with probabilities, I come back to the idea of risk. This, as I pointed out at starting, must be taken from the theory of probabilities, or more precisely, from that part of it which has been cultivated since the beginning of this century, by Lagrange, Gauss, Laplace, and others, viz., the method of least squares. In that method is defined the idea of the “mean error,” which is considered as the measure of the danger to which we are exposed in a single case. This “mean error” is the square root of the sum of all the squares of the errors divided by their number; and the squares of the errors themselves are formed from the deviations of all the single cases from the average or most probable value. In insurances depending upon life and death, the value is also calculated according to the average, so that when all the assured are dead, if the mortality has followed the mean numbers given by the table of mortality, and the additions to the premiums for the expenses of management are disregarded, there will be neither surplus nor deficiency. This average value is the so-called net premium, which may be either a single premium or may be payable for a term of years agreed on beforehand. But we can calculate beforehand from the mortality table all the deviations, or the gains and losses, which can arise from the earlier or later death of the lives assured. Squaring all these deviations, and dividing the sum of the squares by their number, and taking the square root of this quotient, we get the value of the mean danger or the risk attaching to a single insurance. For further elucidation, some applications of this process will now be given.
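The "mean error" risk measure described above can be sketched numerically. The deviations below are hypothetical per-policy gains and losses (in arbitrary currency units), not figures from the text:

```python
import math

# The "mean error" as a risk measure: the square root of the mean of the
# squared deviations of the single cases from the average value.
deviations = [-120.0, 45.0, 80.0, -30.0, 25.0]  # hypothetical gains/losses

risk = math.sqrt(sum(d**2 for d in deviations) / len(deviations))
```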


2020 ◽  
Vol 42 ◽  
pp. e105
Author(s):  
Carlos Alexandre Santos Querino ◽  
Marcelo Sacardi Biudes ◽  
Nadja Gomes Machado ◽  
Juliane Kayse Albuquerque da Silva Querino ◽  
Marcos Antônio Lima Moura ◽  
...  

Measurements of atmospheric long-wave radiation are onerous, which creates the need for alternative estimation methods. The main aim of this paper was therefore to test and parameterize models from the literature for estimating atmospheric long-wave radiation. The data were collected at Fazenda São Nicolau (2002–2003), located in the northwest of Mato Grosso State. The data were averaged hourly, monthly, and by season (dry and wet), as well as for clear and partly cloudy days. The models of Swinbank, Idso-Jackson, Idso, Prata, and Duarte were applied. The performance of the models was assessed by the mean error, the root mean square error, the mean absolute error, Pearson's coefficient, and Willmott's coefficient. All models presented high errors and low Pearson's and Willmott's coefficients. After parameterization, all models reduced their errors and increased their Pearson's and Willmott's coefficients. No improvement in model performance was observed when the data were classified by cloudiness or seasonality. The Idso model presented the lowest errors among the models, whereas the Swinbank model presented the worst performance in every tested situation.
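The five statistics used to rank the models can be sketched from their standard definitions (Willmott's index here is the 1981 index of agreement). The obs/est series are illustrative values in W m-2, not the Fazenda São Nicolau data:

```python
import math

obs = [380.0, 395.0, 410.0, 402.0, 388.0]   # measured long-wave, illustrative
est = [372.0, 400.0, 404.0, 410.0, 380.0]   # model estimates, illustrative

n = len(obs)
mean_obs = sum(obs) / n
mean_est = sum(est) / n

me   = sum(e - o for e, o in zip(est, obs)) / n                 # mean error (bias)
rmse = math.sqrt(sum((e - o)**2 for e, o in zip(est, obs)) / n) # root mean square error
mae  = sum(abs(e - o) for e, o in zip(est, obs)) / n            # mean absolute error

# Pearson's correlation coefficient
cov = sum((o - mean_obs) * (e - mean_est) for o, e in zip(obs, est))
r = cov / math.sqrt(sum((o - mean_obs)**2 for o in obs)
                    * sum((e - mean_est)**2 for e in est))

# Willmott's index of agreement
d = 1 - (sum((e - o)**2 for e, o in zip(est, obs))
         / sum((abs(e - mean_obs) + abs(o - mean_obs))**2
               for e, o in zip(est, obs)))
```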


1966 ◽  
Vol 25 ◽  
pp. 373
Author(s):  
Y. Kozai

The motion of an artificial satellite around the Moon is much more complicated than that around the Earth, since the shape of the Moon is a triaxial ellipsoid and the effect of the Earth on the motion is very important even for a very close satellite. The differential equations of motion of the satellite are written in canonical form with three degrees of freedom and a time-dependent Hamiltonian. By eliminating short-periodic terms depending on the mean longitude of the satellite, and by assuming that the Earth moves on the lunar equator, however, the equations are reduced to those of two degrees of freedom with an energy integral. Since the mean motion of the Earth around the Moon is more rapid than the secular motion of the argument of pericentre of the satellite by one order of magnitude, the terms depending on the longitude of the Earth can be eliminated, and the number of degrees of freedom is reduced to one. The motion can then be discussed by drawing equi-energy curves in two-dimensional space. According to these figures, satellites with high inclination have a large probability of falling onto the lunar surface even if the initial eccentricities are very small. The principal properties of the motion are not changed even if plausible values of J3 and J4 of the Moon are included. This paper has been published in Publ. Astr. Soc. Japan 15, 301, 1963.


1979 ◽  
Vol 44 (2) ◽  
pp. 295-306 ◽  
Author(s):  
Ivan Cibulka ◽  
Vladimír Hynek ◽  
Robert Holub ◽  
Jiří Pick

A digital vibrating-tube densimeter was constructed for measuring the density of liquids at several temperatures. The underlying principle of the apparatus is the measurement of the period of the eigen-vibrations of a V-shaped tube; the square of the period of the vibrations is proportional to the density of the liquid in the tube. The temperature of the measuring system is controlled by an electronic regulator. The mean error in the density measurement is approximately ±1 × 10⁻⁵ g cm⁻³ at 25 °C and ±2 × 10⁻⁵ g cm⁻³ at 40 °C. The apparatus was used for an indirect measurement of the excess volume; it was tested with the benzene–cyclohexane system and further used for determining the excess volumes of the benzene–methanol, benzene–acetonitrile, and methanol–acetonitrile systems at 25 and 40 °C.
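The working equation of such an instrument can be sketched as follows: the squared period is linear in density, tau² = A·rho + B, with A and B fixed by two reference fluids. All numbers below (periods, reference densities) are illustrative assumptions, not the authors' calibration:

```python
# Two-point calibration of the linear relation tau^2 = A*rho + B.
def calibrate(tau1, rho1, tau2, rho2):
    """Return (A, B) from two reference measurements (period, density)."""
    A = (tau1**2 - tau2**2) / (rho1 - rho2)
    B = tau1**2 - A * rho1
    return A, B

def density(tau, A, B):
    """Density of an unknown liquid from its measured vibration period."""
    return (tau**2 - B) / A

A, B = calibrate(tau1=2.650e-3, rho1=0.99705,   # water at 25 °C (g cm-3), illustrative
                 tau2=2.350e-3, rho2=0.00118)   # air, illustrative
rho_sample = density(2.567e-3, A, B)            # unknown liquid, ~0.71 g cm-3
```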


Aerospace ◽  
2021 ◽  
Vol 8 (7) ◽  
pp. 183
Author(s):  
Yongjie Liu ◽  
Yu Jiang ◽  
Hengnian Li ◽  
Hui Zhang

This paper intends to show some special types of orbits around Jupiter based on the mean element theory, including stationary orbits, sun-synchronous orbits, orbits at the critical inclination, and repeating ground track orbits. A gravity model concerning only the perturbations of the J2 and J4 terms is used here. Compared with special orbits around the Earth, the orbit dynamics differ greatly: (1) Since only the J2 and J4 terms are taken into account in the gravity model, stationary orbits exhibit no longitude drift due to non-spherical gravity. All points on stationary orbits are degenerate equilibrium points. Moreover, the satellite will oscillate in the radial and North-South directions after a sufficiently small perturbation of a stationary orbit. (2) The inclinations of sun-synchronous orbits are always bigger than 90 degrees, but smaller than those for satellites around the Earth. (3) The critical inclinations are no longer independent of the semi-major axis and eccentricity of the orbits. The results show that if the eccentricity is small, the critical inclinations will decrease as the altitudes of orbits increase; if the eccentricity is larger, the critical inclinations will increase as the altitudes of orbits increase. (4) The inclinations of repeating ground track orbits increase rapidly and monotonically with the altitudes of orbits.
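The sun-synchronous condition in (2) can be sketched under a pure-J2 secular model (the paper's model also carries J4): the secular node rate −1.5·n·J2·(R/a)²·cos(i)/(1−e²)² must equal Jupiter's heliocentric mean motion. The Jovian constants and the sample semi-major axis below are rough textbook values assumed for illustration, not taken from the paper:

```python
import math

MU  = 1.26687e8       # km^3/s^2, Jupiter's GM (approximate)
R_J = 71492.0         # km, Jupiter's equatorial radius (approximate)
J2  = 0.014736        # Jupiter's J2 (approximate)
N_SUN = 2 * math.pi / (11.862 * 365.25 * 86400)  # Jupiter's mean motion about the Sun, rad/s

def sun_sync_inclination(a_km, e=0.0):
    """Inclination (deg) whose J2 node precession matches Jupiter's mean motion."""
    n = math.sqrt(MU / a_km**3)   # satellite mean motion, rad/s
    cos_i = -N_SUN * (1 - e**2)**2 / (1.5 * n * J2 * (R_J / a_km)**2)
    return math.degrees(math.acos(cos_i))

i_deg = sun_sync_inclination(80000.0)   # slightly above 90 degrees
```

As the abstract states, the resulting inclination exceeds 90 degrees but stays far below the ~98 degrees typical of sun-synchronous orbits around the Earth.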


Energies ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 2525
Author(s):  
Kamil Krasuski ◽  
Damian Wierzbicki

In the field of air navigation, there is a constant pursuit of new solutions for the precise GNSS (Global Navigation Satellite System) positioning of aircraft. This study presents the results of research on a new method for improving the performance of PPP (Precise Point Positioning) in the GPS (Global Positioning System) and GLONASS (Globalnaja Nawigacionnaja Sputnikovaya Sistema) systems for air navigation. The method is based on a linear combination of the individual position solutions from the GPS and GLONASS systems. The paper shows a computational scheme based on this linear combination for the geocentric XYZ coordinates of an aircraft. The algorithm uses the weighted mean to determine the resultant aircraft position. The method was tested on GPS and GLONASS kinematic data from an airborne experiment carried out with a Piper Seneca PA34-200T aircraft at the Mielec airport. A dual-frequency, dual-system GPS/GLONASS receiver was placed on board the plane, which made it possible to record GNSS observations that were then used to calculate the aircraft's position in the CSRS-PPP software. The calculated XYZ position coordinates from the CSRS-PPP software were then used in the optimization algorithm developed for the weighted mean model. The measurement weights are a function of the number of GPS and GLONASS satellites and of the inverse of the squared mean error. The coordinates of the aircraft obtained from the research model were verified against the RTK-OTF solution. As a result, the presented solution's accuracy is better by 11–87% for the model with a weighting scheme based on the inverse of the squared mean error. Moreover, using the XYZ position from the RTKLIB program, the method's accuracy increases from 45% to 82% for the same weighting scheme.
The developed method demonstrates high efficiency in improving the performance of GPS and GLONASS solutions for the PPP measurement technique in air navigation.
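The weighted-mean combination can be sketched as follows. Here each system's weight is taken simply as the inverse of its squared mean error (the paper's scheme also involves the satellite count); the coordinates and mean errors are made-up illustrative values in metres:

```python
# Weighted mean of the GPS-only and GLONASS-only PPP solutions, component-wise.
def combine(xyz_gps, m_gps, xyz_glo, m_glo):
    """m_gps, m_glo: mean errors of each solution; weights = 1 / m^2."""
    w_gps, w_glo = 1.0 / m_gps**2, 1.0 / m_glo**2
    return tuple((w_gps * g + w_glo * r) / (w_gps + w_glo)
                 for g, r in zip(xyz_gps, xyz_glo))

xyz = combine(xyz_gps=(3655100.12, 1403200.45, 5018400.78), m_gps=0.05,
              xyz_glo=(3655100.30, 1403200.30, 5018400.60), m_glo=0.10)
```

With these weights (400 vs. 100) the combined position lies four times closer to the GPS solution than to the GLONASS one.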


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Hussein Soffar ◽  
Mohamed F. Alsawy

Abstract Background Neuronavigation is a very beneficial tool in modern neurosurgical practice. However, neuronavigation is not available in most hospitals in our country, raising the question of its importance in localizing calvarial extra-axial lesions and of the extent to which it is safe to operate without it. Methods We studied twenty patients with calvarial extra-axial lesions who underwent surgical interventions. All lesions were preoperatively located with both neuronavigation and conventional linear measurements. The two methods were compared regarding the time consumed to localize the tumor and the accuracy of each method in anticipating the actual center of the tumor. Results The mean error of the distance between the planned center of the tumor and the actual center was 6.50 ± 1.762 mm in the conventional method, whereas the error was 3.85 ± 1.309 mm in the IGS method. Much more time was consumed by the neuronavigation method, including booting, registration, and positioning. A statistically significant difference was found between the mean time taken by the conventional method and the IGS method (2.05 ± 0.826 and 24.90 ± 1.334, respectively), P-value < 0.001. Conclusion In the setting of limited resources, the linear measurement localization method seems to have acceptable accuracy in the localization of calvarial extra-axial lesions, and it saves more time than the neuronavigation method.


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1631
Author(s):  
Bruno Guilherme Martini ◽  
Gilson Augusto Helfer ◽  
Jorge Luis Victória Barbosa ◽  
Regina Célia Espinosa Modolo ◽  
Marcio Rosa da Silva ◽  
...  

The application of ubiquitous computing has increased in recent years, especially due to the development of technologies such as mobile computing, more accurate sensors, and specific protocols for the Internet of Things (IoT). One of the trends in this area of research is the use of context awareness. In agriculture, the context involves the environment, for example, the conditions found inside a greenhouse. Recently, a series of studies have proposed the use of sensors to monitor production and/or the use of cameras to obtain information about cultivation, providing data, reminders, and alerts to farmers. This article proposes a computational model for indoor agriculture called IndoorPlant. The model uses the analysis of context histories to provide intelligent generic services, such as predicting productivity, indicating problems that a cultivation may suffer, and giving suggestions for improvements in greenhouse parameters. IndoorPlant was tested in three scenarios of the daily life of farmers with hydroponic production data obtained during seven months of cultivation of radicchio, lettuce, and arugula. Finally, the article presents the results obtained through the intelligent services that use context histories. The scenarios used services to recommend improvements in cultivation, to recommend profiles, and, finally, to predict the cultivation time of radicchio, lettuce, and arugula using the partial least squares (PLS) regression technique. The prediction results were relevant, since the following values were obtained: 0.96 (R2, coefficient of determination), 1.06 (RMSEC, root mean square error of calibration), and 1.94 (RMSECV, root mean square error of cross-validation) for radicchio; 0.95 (R2), 1.37 (RMSEC), and 3.31 (RMSECV) for lettuce; 0.93 (R2), 1.10 (RMSEC), and 1.89 (RMSECV) for arugula. Eight farmers with different functions on the farm filled out a survey based on the technology acceptance model (TAM). The results showed 92% acceptance regarding utility and 98% acceptance regarding ease of use.
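The RMSEC and RMSECV figures of merit quoted above can be sketched directly from their definitions: the same root-mean-square statistic, computed once on fitted calibration values and once on cross-validated predictions. The days-to-harvest values below are made up for illustration, not the paper's data:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between reference and predicted values."""
    return math.sqrt(sum((t - p)**2 for t, p in zip(y_true, y_pred)) / len(y_true))

y_true = [30.0, 32.0, 35.0, 28.0, 31.0]   # reference cultivation times, illustrative
y_cal  = [30.5, 31.2, 34.6, 28.9, 31.3]   # model fitted on all samples
y_cv   = [31.1, 30.4, 33.8, 29.8, 31.9]   # each sample predicted with itself left out

rmsec  = rmse(y_true, y_cal)
rmsecv = rmse(y_true, y_cv)               # typically larger than RMSEC
```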


2020 ◽  
Vol 47 (No. 1) ◽  
pp. 13-20
Author(s):  
Jitka Blažková ◽  
František Paprštein ◽  
Lubor Zelený ◽  
Adéla Skřivanová ◽  
Pavol Suran

The cropping of six sweet cherry cultivars that originated in the Research and Breeding Institute of Pomology at Holovousy, and a standard one, ‘Burlat’, was evaluated on three rootstocks in the period of 2007–2017. Trees planted at a spacing of 1.5 m × 5.0 m were trained as tall spindle axes, utilising their natural tendency to develop a central leader. On the standard rootstock, P-TU-2, ‘Tim’ was the most productive, with a mean total harvest of 47.6 kg per tree. ‘Sandra’ yielded the most on the PHLC rootstock, with 56.2 kg per tree, and ‘Helga’ yielded the most on Gisela 5, with a mean total harvest of 55.9 kg per tree. The mean impact of the rootstock on tree vigour, measured by the trunk cross-section area, ranged from 148.4 cm2 on the standard rootstock P-TU-2 to 114.1 cm2 on the PHLC and 125.2 cm2 on Gisela 5. On the standard rootstock P-TU-2, the most vigorous cultivar according to this criterion was ‘Jacinta’ (178.0 cm2), whereas ‘Justyna’ (109.7 cm2) was the least vigorous. On the PHLC, the most vigorous was ‘Sandra’ (147.2 cm2) and the least was ‘Amid’ (94.0 cm2). The other tree characteristics were mainly dependent on the cultivar and minimally, or not at all, influenced by the rootstock vigour.

