Dihedral angles between alveolar septa

1988 ◽  
Vol 64 (1) ◽  
pp. 299-307 ◽  
Author(s):  
E. H. Oldmixon ◽  
J. P. Butler ◽  
F. G. Hoppin

To determine the dihedral angle, alpha, at the characteristic three-way septal junctions of lung parenchyma, we examined photomicrographs of sections. The three angles, A, formed where three septal traces meet on section, were measured and found to range between approximately 50 and 170 degrees. Theoretical considerations predicted that the dispersion of alpha is much narrower than that of A. The mean of A and alpha is identically 120 degrees. The standard deviation of alpha was inferred from the cumulative distribution function of A. In lungs inflated to 30 cmH2O (VL30), the standard deviation of alpha was very small (approximately 2 degrees) and increased to approximately 6 degrees in lungs inflated to 0.4 VL30. These findings imply that at VL30 tensions exerted by septa are locally homogeneous (2% variation) and at lower lung volumes become less so (6% variation). At high distending pressures, tissue forces are thought to dominate interfacial forces, and therefore the local uniformity of tensions suggests a stress-responsive mechanism for forming or remodeling the connective tissues. The source of the local nonuniformity at lower volumes is unclear but could relate to differences in mechanical properties of alveolar duct and alveoli. Finally, local uniformity does not imply global uniformity.
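The closure constraint behind the identity mean(A) = 120 degrees can be illustrated with a toy simulation (a hedged sketch using synthetic angles, not the paper's data or its inference from the cumulative distribution of A): at every three-way junction the three section angles sum to 360 degrees, so their mean is forced to exactly 120 degrees regardless of how dispersed the individual angles are.

```python
import random
import statistics

def junction_angles(sigma, rng):
    """Draw three angles at a septal junction that sum to 360 degrees.

    Two angles are perturbed about 120 degrees with spread `sigma`
    (a stand-in for sectioning obliquity and tension nonuniformity);
    the third is fixed by the closure condition. Synthetic only.
    """
    a = rng.gauss(120.0, sigma)
    b = rng.gauss(120.0, sigma)
    return [a, b, 360.0 - a - b]

rng = random.Random(0)
angles = [x for _ in range(10000) for x in junction_angles(15.0, rng)]
mean_a = statistics.fmean(angles)
# Closure at each junction pins the grand mean at 120 degrees even
# though individual angles scatter widely.
print(round(mean_a, 1))
```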

2021 ◽  
Author(s):  
Wenting Wang ◽  
Shuiqing Yin ◽  
Bofu Yu ◽  
Shaodong Wang

Abstract. The stochastic weather generator CLIGEN can simulate long-term weather sequences as input to WEPP for erosion prediction. Its use, however, has been somewhat restricted by limited observations at high spatial and temporal resolution. Long-term daily temperature, daily and hourly precipitation data from 2405 stations, and daily solar radiation from 130 stations distributed across mainland China were collected to develop the most critical set of site-specific parameter values for CLIGEN. Universal kriging (UK) with the auxiliary covariables longitude, latitude, elevation, and mean annual rainfall was used to interpolate parameter values onto a 10 km × 10 km grid, and parameter accuracy was evaluated by leave-one-out cross-validation. The results demonstrated that Nash-Sutcliffe efficiency coefficients (NSEs) between UK-interpolated and observed parameters were greater than 0.85 for all parameters apart from the standard deviation of solar radiation, the skewness coefficient of daily precipitation, and the cumulative distribution of relative time to peak intensity, which had relatively lower interpolation accuracy (NSE > 0.66). In addition, daily weather sequences simulated by CLIGEN from UK-interpolated and from observed inputs showed consistent statistics and frequency distributions. The mean absolute discrepancy between the two sequences in the average and standard deviation of temperature was less than 0.51 °C, and the mean absolute relative discrepancy for the same statistics of solar radiation, precipitation amount, duration, and maximum 30-min intensity was less than 5 %. CLIGEN parameters at the 10 km resolution would meet the minimum WEPP climate requirements throughout mainland China. The dataset is available at http://clicia.bnu.edu.cn/data/cligen.html and http://doi.org/10.12275/bnu.clicia.CLIGEN.CN.gridinput.001 (Wang et al., 2020).
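The Nash-Sutcliffe efficiency used above to score interpolated against observed parameter values can be sketched in a few lines (a minimal illustration on toy numbers; the function name and data are assumptions, not drawn from the dataset):

```python
def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the mean.

    NSE = 1 means perfect agreement; NSE <= 0 means the prediction is
    no better than predicting the observed mean everywhere.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Toy check: predictions close to the observations give NSE near 1.
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 1.9, 3.0, 4.1, 4.9]
print(round(nse(obs, pred), 3))
```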


2019 ◽  
Vol 3 ◽  
pp. 1-10
Author(s):  
Jyotirmoy Sarkar ◽  
Mamunur Rashid

Background: Sarkar and Rashid (2016a) introduced a geometric way to visualize the mean based on either the empirical cumulative distribution function of raw data or the cumulative histogram of tabular data. Objective: Here, we extend the geometric method to visualize measures of spread such as the mean deviation, the root mean squared deviation, and the standard deviation of similar data. Materials and Methods: We utilized elementary high-school geometry and the graph of a quadratic transformation. Results: We obtained concrete depictions of various measures of spread. Conclusion: We anticipate that such visualizations will help readers understand, distinguish, and remember these concepts.
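The three measures of spread being visualized can be computed side by side for comparison (a sketch on toy data, assuming the conventional textbook definitions: divisor n for the mean deviation and root mean squared deviation, divisor n - 1 for the sample standard deviation):

```python
import math

def spread_measures(data):
    """Mean deviation, root mean squared deviation (about the mean),
    and sample standard deviation of a list of numbers."""
    n = len(data)
    mean = sum(data) / n
    mean_dev = sum(abs(x - mean) for x in data) / n
    rmsd = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    # mean_dev <= rmsd always, and rmsd < sd for n > 1.
    return mean_dev, rmsd, sd

print(spread_measures([2, 4, 4, 4, 5, 5, 7, 9]))
```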


2018 ◽  
Author(s):  
David M. Hyman ◽  
Andrea Bevilacqua ◽  
Marcus I. Bursik

Abstract. The study of volcanic mass flow hazards in a probabilistic framework centers around systematic experimental numerical modelling of the hazardous phenomenon and the subsequent generation and interpretation of a probabilistic hazard map (PHM). For a given volcanic flow (e.g., lava flow, lahar, pyroclastic flow, etc.), the PHM is typically interpreted as the point-wise probability of flow material inundation. In the current work, we present new methods for calculating spatial representations of the mean, standard deviation, median, and modal locations of the hazard's boundary as ensembles of many deterministic runs of a physical model. By formalizing its generation and properties, we show that a PHM may be used to construct these statistical measures of the hazard boundary which have been unrecognized in previous probabilistic hazard analyses. Our formalism shows that a typical PHM for a volcanic mass flow not only gives the point-wise inundation probability, but also represents a set of cumulative distribution functions for the location of the inundation boundary with a corresponding set of probability density functions. These distributions run over curves of steepest ascent on the PHM. Consequently, 2D space curves can be constructed on the map which represent the mean, median and modal locations of the likely inundation boundary. These curves give well-defined answers to the question of the likely boundary location of the area impacted by the hazard. Additionally, methods of calculation for higher moments including the standard deviation are presented which take the form of map regions surrounding the mean boundary location. These measures of central tendency and variance add significant value to spatial probabilistic hazard analyses, giving a new statistical description of the probability distributions underlying PHMs. 
The theory presented here may be used to construct improved hazard maps, which could prove useful for planning and emergency management purposes. This formalism also allows for application to simplified processes describable by analytic solutions. In that context, the connection between the PHM, its moments, and the underlying parameter variation is explicit, allowing for better source parameter estimation from natural data, yielding insights about natural controls on those parameters.
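The PHM construction can be sketched in one dimension (a minimal toy illustration with hypothetical function names; the paper works with 2-D maps and curves of steepest ascent): each ensemble member contributes a boolean inundation mask, the PHM is the point-wise inundation frequency, and the p = 0.5 level of the PHM approximates the median location of the inundation boundary.

```python
def probabilistic_hazard_map(runs):
    """Point-wise inundation probability: at each grid cell, the
    fraction of ensemble members (boolean masks) that inundated it."""
    n = len(runs)
    return [sum(run[i] for run in runs) / n for i in range(len(runs[0]))]

def median_boundary(phm, level=0.5):
    """Index of the first cell whose inundation probability drops
    below `level`; with level = 0.5 this approximates the median
    location of the inundation boundary along this transect."""
    for i, p in enumerate(phm):
        if p < level:
            return i
    return len(phm)

# Toy 1-D transect: each deterministic run inundates the first w cells.
runs = [[1 if i < w else 0 for i in range(8)] for w in (3, 4, 5, 6)]
phm = probabilistic_hazard_map(runs)
print(phm)
print(median_boundary(phm))
```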


2019 ◽  
Vol 19 (7) ◽  
pp. 1347-1363 ◽  
Author(s):  
David M. Hyman ◽  
Andrea Bevilacqua ◽  
Marcus I. Bursik

Abstract. The study of volcanic flow hazards in a probabilistic framework centers around systematic experimental numerical modeling of the hazardous phenomenon and the subsequent generation and interpretation of a probabilistic hazard map (PHM). For a given volcanic flow (e.g., lava flow, lahar, pyroclastic flow, ash cloud), the PHM is typically interpreted as the point-wise probability of inundation by flow material. In the current work, we present new methods for calculating spatial representations of the mean, standard deviation, median, and modal locations of the hazard's boundary as ensembles of many deterministic runs of a physical model. By formalizing its generation and properties, we show that a PHM may be used to construct these statistical measures of the hazard boundary which have been unrecognized in previous probabilistic hazard analyses. Our formalism shows that a typical PHM for a volcanic flow not only gives the point-wise inundation probability, but also represents a set of cumulative distribution functions for the location of the inundation boundary with a corresponding set of probability density functions. These distributions run over curves of steepest probability gradient ascent on the PHM. Consequently, 2-D space curves can be constructed on the map which represent the mean, median, and modal locations of the likely inundation boundary. These curves give well-defined answers to the question of the likely boundary location of the area impacted by the hazard. Additionally, methods of calculation for higher moments including the standard deviation are presented, which take the form of map regions surrounding the mean boundary location. These measures of central tendency and variance add significant value to spatial probabilistic hazard analyses, giving a new statistical description of the probability distributions underlying PHMs. 
The theory presented here may be used to aid construction of improved hazard maps, which could prove useful for planning and emergency management purposes. This formalism also allows for application to simplified processes describable by analytic solutions. In that context, the connection between the PHM, its moments, and the underlying parameter variation is explicit, allowing for better source parameter estimation from natural data, yielding insights about natural controls on those parameters.


2013 ◽  
Vol 2013 ◽  
pp. 1-4
Author(s):  
Louis M. Houston

Using two measurements, we produce an estimate of the mean and the sample standard deviation. We construct a confidence interval with these parameters and compute the probability of the confidence interval by using the cumulative distribution function and averaging over the parameters. The probability is in the form of an integral that we compare to a computer simulation.
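The two-measurement confidence interval can be checked against simulation along the lines the abstract describes (a hedged sketch: the critical value is the standard two-sided 95% t quantile with one degree of freedom, and the population is an assumed standard normal, neither taken from the paper):

```python
import math
import random

T975_DF1 = 12.706  # two-sided 95% t quantile, 1 degree of freedom

def two_point_ci(x1, x2, tcrit=T975_DF1):
    """95% t confidence interval for the mean from two measurements."""
    mean = (x1 + x2) / 2.0
    s = abs(x1 - x2) / math.sqrt(2.0)   # sample standard deviation, n = 2
    half = tcrit * s / math.sqrt(2.0)   # t * s / sqrt(n)
    return mean - half, mean + half

# Monte Carlo check of the coverage probability for a N(0, 1) population.
rng = random.Random(42)
trials, hits = 20000, 0
for _ in range(trials):
    lo, hi = two_point_ci(rng.gauss(0, 1), rng.gauss(0, 1))
    if lo <= 0.0 <= hi:
        hits += 1
print(hits / trials)   # should be close to the nominal 0.95
```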


1969 ◽  
Vol 14 (9) ◽  
pp. 470-471
Author(s):  
M. DAVID MERRILL
1972 ◽  
Vol 28 (03) ◽  
pp. 447-456 ◽  
Author(s):  
E. A Murphy ◽  
M. E Francis ◽  
J. F Mustard

Summary: The characteristics of experimental error in measurement of platelet radioactivity have been explored by blind replicate determinations on specimens taken on several days from each of three Walker hounds. Analysis suggests that it is not unreasonable to suppose that the error for each sample is normally distributed; and while there is evidence that the variance is heterogeneous, no systematic relationship has been discovered between the mean and the standard deviation of the determinations on individual samples. Thus, since it would be impracticable for investigators to do replicate determinations as a routine, no improvement over simple unweighted least-squares estimation on untransformed data suggests itself.
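One simple diagnostic for a mean-sd relationship of the kind the authors looked for is the correlation between per-sample means and per-sample standard deviations across replicate sets (a sketch on synthetic replicates deliberately constructed so that the spread does scale with the level; the paper found no such relationship in its data):

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic replicate sets in which the spread grows with the level,
# i.e. a mean-sd relationship is deliberately built in.
replicate_sets = [[10.1, 9.9, 10.0], [20.4, 19.6, 20.0], [30.9, 29.1, 30.0]]
means = [statistics.fmean(r) for r in replicate_sets]
sds = [statistics.stdev(r) for r in replicate_sets]
r = pearson(means, sds)
print(round(r, 2))   # strongly positive here; a value near zero would
                     # argue against any systematic mean-sd relationship
```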


2020 ◽  
Vol 1 (2) ◽  
pp. 56-66
Author(s):  
Irma Linda

Background: Early marriages are at high risk of marital failure, poor family quality, young pregnancies with a risk of maternal death, and mental unreadiness to sustain a marriage and parent responsibly. Objective: To determine the effect of reproductive health education delivered through peer groups on adolescents' knowledge and perceptions of marriage-age maturity. Method: This research used a quasi-experimental method with a one-group pre- and post-test design, conducted from May to September 2018. The statistical analysis used was a paired t-test with a 95% confidence level (α = 0.05). Results: The mean difference in adolescent knowledge scores between the first and second measurements was 0.50 with a standard deviation of 1.922; the mean difference in adolescent perception scores between the first and second measurements was 4.42 with a standard deviation of 9.611. Conclusion: There is a significant difference between adolescent knowledge on the pretest and posttest measurements (p = 0.002), and a significant difference between adolescent perceptions on the pretest and posttest measurements (p = 0.001). Expanding the facilities for reproductive health education delivered by peer groups in schools should be carried out on an ongoing basis, in collaboration with local health workers, to help prevent risky pregnancies.
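The paired t statistic used in the analysis can be computed directly (a minimal sketch on hypothetical pre/post scores, not the study's data):

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Hypothetical pre/post knowledge scores for eight adolescents.
pre = [6, 7, 5, 8, 6, 7, 5, 6]
post = [7, 8, 6, 8, 7, 8, 6, 7]
t, df = paired_t(pre, post)
print(round(t, 2), df)
```

The resulting t value is then compared against the t distribution with n - 1 degrees of freedom to obtain the p value at the α = 0.05 level.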


1988 ◽  
Vol 60 (1) ◽  
pp. 1-29 ◽  
Author(s):  
E. D. Young ◽  
J. M. Robert ◽  
W. P. Shofner

1. The responses of neurons in the ventral cochlear nucleus (VCN) of decerebrate cats are described with regard to their regularity of discharge and latency. Regularity is measured by estimating the mean and standard deviation of interspike intervals as a function of time during responses to short tone bursts (25 ms). This method extends the usual interspike-interval analysis based on interval histograms by allowing the study of temporal changes in regularity during transient responses. The coefficient of variation (CV), equal to the ratio of standard deviation to mean interspike interval, is used as a measure of irregularity. Latency is measured as the mean and standard deviation of the latency of the first spike in response to short tone bursts, with 1.6-ms rise times. 2. The regularity and latency properties of the usual PST histogram response types are shown. Five major PST response type classes are used: chopper, primary-like, onset, onset-C, and unusual. The presence of a prepotential in a unit's action potentials is also noted; a prepotential implies that the unit is recorded from a bushy cell. 3. Units with chopper PST histograms give the most regular discharge. Three varieties of choppers are found. Chop-S units (regular choppers) have CVs less than 0.35 that are approximately constant during the response; chop-S units show no adaptation of instantaneous rate, as measured by the inverse of the mean interspike interval. Chop-T units have CVs greater than 0.35, show an increase in irregularity during the response and show substantial rate adaptation. Chop-U units have CVs greater than 0.35, show a decrease in irregularity during the response, and show a variety of rate adaptation behaviors, including negative adaptation (an increase in rate during a short-tone response). Irregular choppers (chop-T and chop-U units) rarely have CVs greater than 0.5. 
Choppers have the longest latencies of VCN units; all three groups have mean latencies at least 1 ms longer than the shortest auditory nerve (AN) fiber mean latencies. 4. Chopper units are recorded from stellate cells in VCN (35, 42). Our results for chopper units suggest a model for stellate cells in which a regularly firing action potential generator is driven by the summation of the AN inputs to the cell, where the summation is low-pass filtered by the membrane capacitance of the cell.(ABSTRACT TRUNCATED AT 400 WORDS)
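The coefficient of variation and the CV = 0.35 regularity criterion can be sketched directly (hypothetical interspike intervals; a simplification, since the full classification also uses how CV changes over the time course of the response):

```python
import statistics

def coefficient_of_variation(intervals):
    """CV = standard deviation / mean of interspike intervals (ms)."""
    return statistics.stdev(intervals) / statistics.fmean(intervals)

def classify_chopper(cv):
    """Regular (chop-S-like) if CV < 0.35, else irregular, following
    the paper's criterion. Sketch only; the full scheme also tracks
    whether CV rises (chop-T) or falls (chop-U) during the response."""
    return "regular" if cv < 0.35 else "irregular"

# Hypothetical interspike intervals (ms) for a regularly firing unit.
isis = [4.0, 4.2, 3.9, 4.1, 4.0, 3.8, 4.1]
cv = coefficient_of_variation(isis)
print(round(cv, 3), classify_chopper(cv))
```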

