final estimate
Recently Published Documents

TOTAL DOCUMENTS: 50 (five years: 11)
H-INDEX: 10 (five years: 1)

Author(s): Anna V. Anisimova

For this work, skinfold measurements were taken from Russian children and adolescents of both sexes aged 7-17 years, with a total of 1103 pupils. The mean values of the average skinfolds were compared. Results and discussion. In the investigated group, significant differences in average skinfold thickness were revealed between the initial set of Matiegka and the modification of Lutovinova et al. These differences significantly influenced the final estimates of body fat mass. However, the estimates obtained turned out to be highly correlated and in close agreement, on the basis of which conversion formulas between them were proposed. Conclusion. When using Matiegka's formulas, it is necessary to give a detailed description of the method for measuring skinfolds, taking into account the influence of the choice of skinfolds on the final estimate of fat mass.
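The conversion formulas themselves are not given in the abstract; a minimal sketch of how such a linear conversion between two correlated fat-mass estimates could be fitted (hypothetical numbers throughout):

```python
import numpy as np

# Hypothetical paired fat-mass estimates (kg) from the two skinfold sets;
# the study's actual data and conversion formulas are not reproduced here.
fat_matiegka = np.array([8.2, 10.5, 12.1, 9.7, 14.3, 11.0])
fat_lutovinova = np.array([7.5, 9.8, 11.0, 9.1, 13.2, 10.2])

# Ordinary least-squares fit of a linear conversion between the two estimates.
slope, intercept = np.polyfit(fat_matiegka, fat_lutovinova, deg=1)
r = np.corrcoef(fat_matiegka, fat_lutovinova)[0, 1]
print(f"conversion: fat_L = {slope:.3f} * fat_M + {intercept:.3f} (r = {r:.3f})")
```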


Author(s): V Ripepi, G Catanzaro, R Molinaro, M Gatto, G De Somma, ...

Classical Cepheids (DCEPs) are the most important primary indicators of the extragalactic distance scale. Establishing the metallicity dependence of their period–luminosity and period–Wesenheit (PLZ/PWZ) relations has deep consequences for the calibration of the secondary distance indicators that lead to the final estimate of the Hubble constant (H0). We collected high-resolution spectroscopy for 47 DCEPs plus one BL Her variable with HARPS-N@TNG and derived accurate atmospheric parameters, radial velocities, and metal abundances. We measured spectral lines for 29 species and characterized their chemical abundances, finding very good agreement with previous results. We re-determined the ephemerides for the program stars and measured their intensity-averaged magnitudes in the V, I, J, H, and Ks bands. We complemented our sample with literature data and used the Gaia Early Data Release 3 (EDR3) to investigate the PLZ/PWZ relations for Galactic DCEPs in a variety of filter combinations. We find that a solution without any metallicity term is ruled out at more than the 5 σ level. Our best estimates for the metallicity dependence of the intercepts of the three-parameter PLKs, PWJKs, PWVKs and PWHVI relations are −0.456 ±0.099, −0.465 ±0.071, −0.459 ±0.107 and −0.366 ±0.089 mag/dex, respectively. These values are significantly larger than those in the recent literature. The present data are still inconclusive as to whether the slopes of the relevant relations also depend on metallicity. Applying a correction to the standard zero-point offset of the Gaia parallaxes reduces the size of the metallicity dependence of the intercept of the PLZ/PWZ relations by ∼22%.
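For orientation, a three-parameter PLZ relation of the kind fitted here is conventionally written with the metallicity term acting on the intercept (a standard form, not quoted from the paper):

```latex
% Conventional three-parameter period-luminosity-metallicity relation;
% gamma is the metallicity dependence of the intercept quoted above
% (about -0.46 mag/dex for PL_Ks, PW_JKs, and PW_VKs).
\[
  M = \alpha + \beta \,\log_{10} P + \gamma\,[\mathrm{Fe/H}]
\]
```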


2021
Author(s): Fangfang Hong, Stephanie Badde, Michael S. Landy

To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying reliability. Visual spatial reliability was either lower than, comparable to, or greater than that of the auditory stimuli. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During audiovisual recalibration, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the recalibration phase. We compared participants' behavior to the predictions of three models of recalibration: (a) reliability-based: each modality is recalibrated based on its relative reliability; less reliable cues are recalibrated more; (b) fixed-ratio: the degree of recalibration for each modality is fixed; (c) causal-inference: recalibration is directly determined by the discrepancy between a cue and its final estimate, which in turn depends on the reliability of both cues and on the inference about how likely it is that the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, first increased and then decreased, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or the fixed-ratio model. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Author summary. Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent both modalities should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this aim, we conducted a classical recalibration task in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the recalibration task. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study, and this model is able to replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a cue and its final estimate. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
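As a concrete reading of models (a) and (b) (our own illustration, not the authors' code), the predicted splits of a given audiovisual discrepancy differ as follows:

```python
import numpy as np

def reliability_based_shift(discrepancy, sigma_a, sigma_v):
    """Model (a): each cue is shifted in proportion to its relative
    unreliability (variance); less reliable cues are recalibrated more."""
    w_a = sigma_a**2 / (sigma_a**2 + sigma_v**2)  # share of shift taken by audition
    return w_a * discrepancy, (1.0 - w_a) * discrepancy  # (auditory, visual) shifts

def fixed_ratio_shift(discrepancy, ratio_a=0.9):
    """Model (b): the split of recalibration between modalities is fixed,
    independent of cue reliability (ratio_a is a hypothetical value)."""
    return ratio_a * discrepancy, (1.0 - ratio_a) * discrepancy

# Example: 10 deg audiovisual discrepancy, vision twice as reliable as audition.
print(reliability_based_shift(10.0, sigma_a=4.0, sigma_v=2.0))
print(fixed_ratio_shift(10.0))
```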


2020
Vol 35 (20)
pp. 2050103
Author(s): Maurizio Consoli, Leonardo Cosmai

In the first version of the theory, with a classical scalar potential, the sector inducing spontaneous symmetry breaking (SSB) was distinct from the Higgs field interactions induced through its gauge and Yukawa couplings. We have adopted a similar perspective but, following the most recent lattice simulations, described SSB in λΦ⁴ theory as a weak first-order phase transition. In this case, the resulting effective potential has two mass scales: (i) a lower mass m_h, defined by its quadratic shape at the minima, and (ii) a larger mass M_H, defined by the zero-point energy. These refer to different momentum scales in the propagator and are related by M_H² ∼ m_h² ln(Λ_s/M_H), where Λ_s is the ultraviolet cutoff of the scalar sector. We have checked this two-scale structure with lattice simulations of the propagator and of the susceptibility in the 4D Ising limit of the theory. These indicate that, in a cutoff theory where both m_h and M_H are finite, by increasing the energy there could be a transition from a relatively low value, e.g. m_h = 125 GeV, to a much larger M_H. The same lattice data give a final estimate M_H ≈ 700 GeV, which prompts a reconsideration of the experimental situation at the Large Hadron Collider (LHC); in particular, an independent analysis of the ATLAS + CMS data indicates an excess in the 4-lepton channel, as if there were a new scalar resonance around 700 GeV. Finally, the presence of two vastly different mass scales, requiring an interpolating form for the Higgs field propagator also in loop corrections, could reduce the discrepancy with those precise measurements which still favor large values of the Higgs particle mass.
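For a rough numerical feel (our own illustration, not a computation from the paper), the quoted two-scale relation ties the cutoff to the ratio of the masses:

```latex
% Assuming M_H^2 ~ m_h^2 ln(Lambda_s/M_H), with m_h = 125 GeV and M_H ~ 700 GeV:
\[
  \ln\frac{\Lambda_s}{M_H} \sim \frac{M_H^{2}}{m_h^{2}}
  = \left(\frac{700}{125}\right)^{2} \approx 31,
  \qquad
  \Lambda_s \sim 700\ \mathrm{GeV}\times e^{31} \approx 2\times 10^{16}\ \mathrm{GeV}.
\]
```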


Sensors
2020
Vol 20 (11)
pp. 3244
Author(s): Alessandro Emanuele, Francesco Gasparotto, Giacomo Guerra, Mattia Zorzi

We propose a distributed Kalman filter for a sensor network under model uncertainty. The distributed scheme is characterized by two communication stages in each time step: in the first stage, the local units exchange their observations and then compute their local estimates; in the final stage, the local units exchange their local estimates and compute the final estimate using a diffusion scheme. Each local estimate is computed so as to be optimal according to the least favorable model belonging to a prescribed local ambiguity set. The latter is a ball, in the Kullback–Leibler topology, about the corresponding nominal local model. We propose a strategy to compute the radius, called the local tolerance, of each local ambiguity set in the sensor network, rather than keeping it constant across the network. Finally, some numerical examples show the effectiveness of the proposed scheme.
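A minimal sketch of the two-stage scheme described above, assuming a hypothetical three-node network with scalar states and standard (non-robust) Kalman updates rather than the authors' least-favorable-model filter:

```python
import numpy as np

# Hypothetical 3-node network, scalar state; neighbors[i] includes i itself.
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
diffusion_w = {0: [0.5, 0.5], 1: [0.3, 0.4, 0.3], 2: [0.5, 0.5]}

def local_update(x_prior, P_prior, shared_obs, R=1.0):
    """Stage 1: fuse the observations exchanged with neighbors
    via sequential scalar Kalman measurement updates."""
    x, P = x_prior, P_prior
    for y in shared_obs:
        K = P / (P + R)                       # Kalman gain
        x, P = x + K * (y - x), (1.0 - K) * P
    return x, P

def diffuse(local_estimates, i):
    """Stage 2: combine the neighbors' local estimates with convex
    diffusion weights to obtain the final estimate at node i."""
    return sum(w * local_estimates[j]
               for w, j in zip(diffusion_w[i], neighbors[i]))

rng = np.random.default_rng(0)
truth = 1.0
obs = {i: truth + rng.normal(scale=1.0) for i in range(3)}
local_est = {}
for i in range(3):
    local_est[i], _ = local_update(0.0, 10.0, [obs[j] for j in neighbors[i]])
print({i: diffuse(local_est, i) for i in range(3)})
```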


2020
Vol 59
pp. 168-174
Author(s): L. M. Khmelnychyi, V. V. Vechorka

Breeding practice testifies that phenotypic and genetic specificity, and an appropriate level of consolidation of the leading economically useful traits, are important characteristics and binding conditions for the testing and subsequent genetic progress of breeds and their structural breeding units. Therefore, for the estimation of the conformation type of the brown breeds in the Sumy region (Lebedyn, Ukrainian brown dairy, and brown Swiss), the level of the phenotypic consolidation coefficients of first-calf cows, evaluated by the method of linear classification, was studied. Five farms of the Sumy region served as the basis of the experiments: PJSC "Plemzavod "Mykhailivka" of Lebedynsky district, PAF "Kolos" and SE "Pobeda" of Bilopilsky district, and the pedigree reproducers AJSCCT "Zorya" of Okhtyrsky and JSC "Mayak" of Trostyanets districts. The coefficients of phenotypic consolidation (K1 and K2) of breeding groups of animals for linear conformation traits were determined by the formulas proposed by Yu. P. Polupan (2005). Given the importance of estimating dairy cattle breeds created through interbreed combinations, with respect to genetic progress and the desired level of phenotypic consolidation, determining the phenotypic consolidation coefficients of the brown-breed cows of the Sumy region for the linear traits that characterize the conformation type of animals is a motivated and relevant line of research. For the group traits of the 100-point linear classification system, it was revealed that the most consolidated by type were animals of the brown Swiss breed, both by all group traits (K1 = 0.274–0.362; K2 = 0.262–0.369) and by the final type assessment (K1 = 0.304; K2 = 0.322). The negative values of the phenotypic consolidation coefficients indicated that the least consolidated by type were animals of the Lebedyn breed, especially for the group traits that characterize the dairy type (K1 = -0.012; K2 = -0.021), the udder (K1 = -0.212; K2 = -0.231), and the final score (K1 = -0.028; K2 = -0.023). Animals of the Ukrainian brown dairy breed were closer to their brown Swiss peers both by group traits (K1 = 0.202; K2 = 0.268) and by the final score (K1 = 0.219; K2 = 0.279). The consolidation coefficients of brown Swiss cows by group traits are K1 = 0.274 and K2 = 0.362, and by the final estimate K1 = 0.304 and K2 = 0.322. A comparative analysis of the level of the phenotypic consolidation coefficients of the descriptive type traits determined that, among the evaluated breeds, animals of the brown Swiss breed have a significant advantage in the phenotypic consolidation of these traits. First-calf cows of this breed were the most consolidated for the important descriptive traits of angularity (K1 = 0.362; K2 = 0.375), rear width (K1 = 0.293; K2 = 0.306), attachment of the front (K1 = 0.289; K2 = 0.309) and rear (K1 = 0.225; K2 = 0.229) udder parts, central ligament expression (K1 = 0.333; K2 = 0.371), udder depth (K1 = 0.296; K2 = 0.312), teat placement (K1 = 0.286; K2 = 0.303), teat length (K1 = 0.321; K2 = 0.313), and locomotion (K1 = 0.304; K2 = 0.333). The determined hereditary influence of breed on the degree of phenotypic consolidation of the majority of linear traits testifies to the possibility of effective breeding of dairy cattle by type through the intensive use of purebred brown Swiss sires with high scores for the linear classification of type among their daughters.
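The Polupan (2005) formulas are cited but not reproduced in the abstract. A sketch under the commonly used definitions (an assumption on our part: K1 compares standard deviations and K2 coefficients of variation of the group against the whole evaluated population):

```python
import numpy as np

def consolidation_coefficients(group, population):
    """Phenotypic consolidation coefficients in the spirit of
    Polupan (2005) -- assumed definitions, not quoted from the paper:
        K1 = 1 - sigma_group / sigma_population,
        K2 = 1 - Cv_group / Cv_population.
    Positive values mean the group is more consolidated (less variable)
    than the population; negative values mean less consolidated."""
    group, population = np.asarray(group, float), np.asarray(population, float)
    k1 = 1.0 - group.std(ddof=1) / population.std(ddof=1)
    cv_group = group.std(ddof=1) / group.mean()
    cv_population = population.std(ddof=1) / population.mean()
    k2 = 1.0 - cv_group / cv_population
    return k1, k2

# Hypothetical final-type scores of one breed group vs. all evaluated cows.
print(consolidation_coefficients([82, 84, 83, 85], [78, 84, 80, 88, 83, 76]))
```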


CJEM
2020
Vol 22 (S1)
pp. S103-S103
Author(s): J. Estrada-Codecido, J. Lee, M. Chignell, C. Whyne

Introduction: Mobility is an evidence-based non-pharmacologic strategy shown to reduce delirium and functional decline among older patients in the acute care setting. Activity trackers have been used in previous studies to objectively measure mobility in older hospitalized patients. This study aims to compare the feasibility and validate the accuracy of three accelerometer-based activity trackers (Fitbit Zip, Fitbit Charge HR, and StepWatch). This is the first step in a program of research to objectively measure mobility as a potential marker of delirium risk. Methods: This is a prospective study of patients 65 years of age and older during their ED visit. We excluded those with critical illness, those unable to communicate or provide consent, and those with ambulatory impediments. Consenting participants wore the trackers for up to 8 hours and completed a 6-meter walk test while a research assistant manually counted their steps. Our primary feasibility measure was the proportion of eligible patients for whom we were able to recover the tracker and record their steps. The primary validation endpoint was the concordance between steps recorded by the tracker and a gold-standard manual step count over a fixed distance. Sample size was based on the desired precision of the final estimate of feasibility. The intraclass correlation coefficient (ICC) was calculated to assess agreement between the devices and the manual count. We report proportions with exact binomial 95% confidence intervals (CI) for the feasibility and validity endpoints. Results: 41 participants were enrolled in this study. Mean age was 74.6 years (+/- 5.76) and 59% were female. The numbers of subjects who wore the Fitbit Zip, Fitbit Charge HR, and StepWatch during study participation were 40/41 (97.5%, CI 0.87–0.99), 33/34 (97%, CI 0.84–0.99), and 31/32 (96.8%, CI 0.83–0.99), respectively. The numbers of subjects with complete data extracted from the Fitbit Zip, Fitbit Charge HR, and StepWatch were 38/41 (92.6%, CI 0.80–0.98), 34/34 (100%, CI 0.89–1.00), and 32/32 (100%, CI 0.89–1.00), respectively. All devices were recovered after use (100%, 95% CI 0.91–1.00). Conclusion: Our results suggest that: 1) the use of gait-tracking devices in the ED is feasible; 2) consumer- and research-grade devices showed good validity against the gold standard; and 3) small, inexpensive, consumer-grade trackers can be used to objectively measure mobility of older adults in the ED.
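The exact binomial 95% CIs quoted above can be reproduced with a standard Clopper–Pearson computation; a minimal sketch for the Fitbit Zip wear proportion:

```python
from scipy.stats import binomtest

# Exact (Clopper-Pearson) 95% CI for 40 of 41 participants successfully
# wearing the Fitbit Zip, as quoted in the abstract.
result = binomtest(k=40, n=41)
ci = result.proportion_ci(confidence_level=0.95)  # method='exact' by default
print(f"{40/41:.3f} (95% CI {ci.low:.2f}-{ci.high:.2f})")
```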


Author(s): V. V. Grebenyuk

The article considers the problem of assessing video quality in the absence of a reference for comparison. In the literature, such image-quality assessment methods are called no-reference (NR) methods. First of all, the article examines image compression artifacts. This approach is relevant because data is compressed to save space when material is transmitted over the Internet. The method is based on criteria that characterize the degree of change in the brightness of video frames. By themselves, these criteria do not allow a comparative analysis of image quality in all cases. To assess quality, this article proposes criteria based on statistical methods, which reflect the degree of change in brightness in the aggregate. These criteria are completely new in the field of quality assessment for both streaming video and images in general. The proposed method takes into account all possible changes in image characteristics as quality deteriorates. The experiment demonstrated the feasibility of using these methods for ranking material by the level of compression artifacts. It was shown experimentally that none of the studied no-reference methods of image quality assessment is universal, and the calculated assessment cannot be converted into a quality scale without taking into account the factors that distort image quality. The method forms the final estimate as the arithmetic mean of the estimates over the rows and columns of the image; in the case of local distortions, the proposed methods may therefore not give entirely accurate results. For the experiment, the program code was implemented in the MATLAB environment, using the Image Processing Toolbox library for computer image processing.
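The paper's MATLAB code is not reproduced here, but the described aggregation (the final estimate as the arithmetic mean of row and column estimates) can be sketched as follows, with a simple mean-absolute-difference brightness statistic standing in for the paper's criteria:

```python
import numpy as np

def no_reference_score(frame):
    """Aggregate a simple brightness-change statistic over the rows and
    columns of a grayscale frame; the final estimate is the arithmetic
    mean of the row and column estimates, as described in the abstract.
    The per-line statistic (mean absolute neighbor difference) is our
    stand-in for the paper's actual criteria."""
    frame = np.asarray(frame, dtype=float)
    row_est = np.mean(np.abs(np.diff(frame, axis=1)), axis=1)  # one value per row
    col_est = np.mean(np.abs(np.diff(frame, axis=0)), axis=0)  # one value per column
    return 0.5 * (row_est.mean() + col_est.mean())

rng = np.random.default_rng(1)
print(no_reference_score(rng.integers(0, 256, size=(240, 320))))
```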


2019
Vol 3 (1)
pp. 1-7
Author(s): Gregory D. Bothun

The initial estimate of the flow rate of the crude oil liberated by the explosion and sinking of the Deepwater Horizon oil platform turned out to be a factor of 50 lower than the physical reality. This initial estimate, provided by the corporate owner of the oil platform, British Petroleum (BP), was a leak rate of 1,000 barrels per day (bpd). This number was not based on any scientific approach and was never put into context for the media or the public of whether it was a big or small number (e.g., how many bpd is equivalent to filling a bathtub continuously for 24 h); it was simply accepted as the physical reality. As a consequence, the initial response to the disaster planned for a scope that was much smaller than what ultimately unfolded. Furthermore, since 1,000 bpd turns out to be a small number, the initial strategy was based on the belief that the leak could be patched and a fix was therefore manageable. Here we show that (a) simple physical reasoning at the time of the occurrence would have led to initial estimates close to the final estimate (determined 2 months after the initial incident) of about 50,000 bpd; (b) there was an unnecessarily slow time evolution in involving the scientific community to gather the relevant data that would vastly improve the estimate; and (c) this slow evolution in unmasking the physical reality of the situation prevented a more robust governmental response to the problem. Even though the government, through the National Oceanic and Atmospheric Administration (NOAA), revised the leak rate to 5,000 bpd one week after the disaster, another month would elapse before it was officially recognized that the leak rate was essentially 10 times higher.
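The "simple physical reasoning" in point (a) is an order-of-magnitude pipe-flow estimate; for illustration (our assumed numbers: a riser of roughly half-meter diameter and an outflow velocity of about 0.5 m/s):

```latex
% Order-of-magnitude flow estimate Q = A v, with 1 barrel ~ 0.159 m^3:
\[
  Q \approx \underbrace{\pi (0.25\,\mathrm{m})^{2}}_{A \,\approx\, 0.2\,\mathrm{m^2}}
  \times \underbrace{0.5\,\mathrm{m/s}}_{v}
  \approx 0.1\ \mathrm{m^{3}/s}
  \;\approx\; \frac{0.1}{0.159}\times 86400
  \;\approx\; 5\times 10^{4}\ \mathrm{bpd},
\]
% i.e. close to the final estimate of about 50,000 bpd.
```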


ACTA IMEKO
2019
Vol 8 (4)
pp. 41
Author(s): Stanislaw Goll, Julia Maximova

The main goal of this research is to increase the measurement resolution of ultrasonic rangefinders to meet the needs of noncontact registration of vital signs based on chest movements. A two-phase method is proposed that makes distance estimates by sending probe pulse trains, calculating the phase spectrum of the echo signal's envelope, and tracking its relevant components. During the first phase, rough Time-of-Flight (ToF)-based estimates are made. During the second phase, this estimate is corrected based on the phase spectrum of the echo signal's envelope, the phase ambiguity is removed, and the relevant components are determined. The final estimate of the human chest displacement is calculated from these relevant components. The output data rate is the same as for ToF-based measurements, but the measurement resolution is increased to one hundredth of the ultrasonic wavelength. Experimental results are provided for both modeled and real human chest displacements caused by respiration and heartbeat.
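A minimal sketch of the two-phase combination (our illustration under simplified assumptions: a known wavelength, a coarse ToF distance, and the phase of one tracked spectral component):

```python
import numpy as np

def refine_distance(d_coarse, phase, wavelength):
    """Two-phase distance estimate: use the coarse ToF estimate to
    resolve the integer-wavelength ambiguity of the phase reading,
    then form the fine estimate from the phase (resolution of a small
    fraction of the wavelength)."""
    frac = (phase / (2.0 * np.pi)) * wavelength   # sub-wavelength part
    n = np.round((d_coarse - frac) / wavelength)  # ambiguity integer
    return n * wavelength + frac

# Example: ~8.6 mm wavelength (40 kHz ultrasound in air), coarse ToF
# estimate 0.5012 m, phase reading 1.9 rad.
print(refine_distance(0.5012, 1.9, 0.00858))
```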

