Determining Girard Form Class in Central Hardwoods

1997 ◽  
Vol 14 (4) ◽  
pp. 202-206 ◽  
Author(s):  
John C. Rennie ◽  
Jack D. Leake

Girard form class is widely used to describe tree form. Tree volume estimates change by about 3% per unit change in Girard form class (Mesavage and Girard 1946). Hardwoods growing in close proximity have been observed to span a wide range of Girard form classes, so accurate determination of Girard form class can be important for obtaining accurate estimates of hardwood timber volume. However, the cost of estimating Girard form class for every tree measured in a stand would be prohibitive; estimation of the average Girard form class for a stand is therefore considered here. Three instruments used to estimate Girard form class (a Wheeler pentaprism optical caliper, a wedge prism, and a Spiegel relaskop) were compared with direct measurement. The number of sample trees needed to achieve confidence-interval half-widths of ±1 and ±1.5 units of Girard form class was calculated for each method. Direct measurement requires the fewest trees to achieve the desired results, but considerably more time per tree than any of the instruments tested. The Wheeler pentaprism requires only a few more trees than direct measurement, and considerably fewer than either the wedge prism or the Spiegel relaskop. All three instruments are hindered when understory vegetation obscures the top of the first log. North. J. Appl. For. 14(4):202-206.
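The sample-size comparison described above rests on the standard relation between the half-width of a t-based confidence interval, E = t·s/√n, and n. A minimal Python sketch of that calculation, assuming a placeholder standard deviation rather than the study's data:

```python
import math
from scipy import stats

def trees_needed(sd, half_width, confidence=0.95):
    """Smallest n whose t-based confidence interval for the mean
    has half-width t(n-1) * sd / sqrt(n) <= half_width."""
    n = 2
    while True:
        t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        if t * sd / math.sqrt(n) <= half_width:
            return n
        n += 1

sd = 4.0  # placeholder SD of Girard form class, NOT a value from the study
for hw in (1.0, 1.5):
    print(f"half-width ±{hw}: {trees_needed(sd, hw)} trees")
```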

It is well known that the disintegration electrons from a radioactive body are distributed over a wide range of velocities, and that a characteristic feature of this distribution is an upper energy limit above which no electrons are emitted. The accurate determination of these upper limits, and of the corresponding maximum energy of the β-rays, has become important in connection with special theories advanced to explain the nature of β-ray disintegration. Numerous experiments have been performed to find these upper limits, or end points. Most of the existing data are based on range measurements, in which the energy of the fastest β-rays from a source is deduced from their range. The chief advantage of the method is that it can be carried out with weak or rapidly decaying sources; the great disadvantage is that, owing to the scattering suffered by the β-particles, the experimentally determined range is an indefinite quantity with no simple relation to the maximum energy of the β-particles. Because of this, range methods, while frequently giving results in general agreement with magnetic analysis, are not capable of yielding accurate values for the upper limits. There remain two other methods of analysis: the magnetic spectrograph and the expansion chamber. The latter, as used by Terroux and Alexander, gives a higher upper limit than other methods. For radium E, Terroux reported a tail extending to 3,000,000 volts, while other methods give an end point at about 1,070,000 volts. Champion, in repeating Terroux's experiments, emphasized the precautions that must be taken in interpreting the experimental material. To eliminate the effect of scattering, he was forced to adopt the criterion that a particle be counted only if it had an undisturbed track greater than a certain length. The cloud-chamber method, while applicable to very weak sources, must be used with great care, and cannot give an accurate value of the end point without a very large number of photographs.


Author(s):  
A. Bondarenko ◽  
M. Moiseienko ◽  
V. Gordienko ◽  
O. Dutchenko

The purpose of this article is to examine the essence of enterprise solvency and to identify approaches to its assessment and analysis. Since the assessment of a borrower's solvency is key to the successful functioning not only of financial and credit institutions but also of the enterprise itself, lenders operating under developing market relations need an accurate picture of a borrower's solvency. The relevance of the research topic is explained by the fact that enterprise solvency today requires thorough and comprehensive study, including the development of a scientifically justified common algorithm that borrowers can use to evaluate their credit obligations. At present there is no single algorithm for determining a borrower's solvency: each banking institution uses its own methodology, which, in its view, is the most effective and takes into account a wide range of financial indicators. Under the valuation rules for legal entities regulated by the NBU, determining a borrower's solvency involves analysing its financial and economic characteristics. The Requirements of the Regulation on the determination of credit risk by Ukrainian banks establish the calculation of a credit risk indicator, which provides for the definition of an integral indicator, the calculation of the borrower's financial class, and the probability of default. Within this research, a comprehensive assessment of the solvency of Technologia JSC was carried out; the quantitative indicators obtained from the integral model fall within the range of values corresponding to the second borrower class. Calculation of the overall qualitative indicator confirmed the high solvency of the studied enterprise, with a minimal probability of default. To improve the quality of borrower solvency assessment, we propose that further studies consider the competitiveness of the enterprise as a factor for more accurate determination of its financial condition and solvency. Keywords: solvency, borrower, financial standing, financial factors.
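As a rough illustration of the kind of integral-indicator calculation referred to above, the sketch below combines weighted financial ratios into a score and maps it to a borrower class. All ratios, weights and class cut-offs are hypothetical placeholders; they are not the NBU's coefficients and not Technologia JSC's figures.

```python
# Hypothetical integral solvency score: a weighted sum of financial
# ratios mapped to a borrower class. Every number here is a placeholder.
RATIO_WEIGHTS = {
    "current_liquidity": 0.30,
    "equity_to_assets":  0.25,
    "return_on_assets":  0.25,
    "debt_coverage":     0.20,
}

CLASS_CUTOFFS = [(0.80, 1), (0.60, 2), (0.40, 3), (0.20, 4)]  # score -> class

def integral_score(ratios):
    """Weighted sum of normalised financial ratios (each in [0, 1])."""
    return sum(RATIO_WEIGHTS[name] * value for name, value in ratios.items())

def borrower_class(score):
    """Map the integral score to a financial class; 5 is the weakest."""
    for cutoff, cls in CLASS_CUTOFFS:
        if score >= cutoff:
            return cls
    return 5

ratios = {"current_liquidity": 0.7, "equity_to_assets": 0.8,
          "return_on_assets": 0.6, "debt_coverage": 0.5}
score = integral_score(ratios)
print(score, borrower_class(score))  # 0.66 -> class 2
```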


1999 ◽  
Vol 277 (5) ◽  
pp. H1745-H1753 ◽  
Author(s):  
Gilles Faury ◽  
Gail M. Maher ◽  
Dean Y. Li ◽  
Mark T. Keating ◽  
Robert P. Mecham ◽  
...  

Resistance in blood vessels is determined principally by the inner (luminal) diameter (ID). However, ID can be difficult to measure during physiological experiments because of poor transillumination of thick-walled or tightly constricted vessels. We investigated whether the wall cross-sectional area (WCSA) in cannulated arteries is nearly constant, allowing IDs to be calculated from outer diameters (OD) using a single determination of WCSA. Using image analysis, OD and ID were measured directly with either transillumination or a fluorescent marker in the lumen. IDs from a variety of vessel types were calculated from WCSA at several reference pressures. Calculated IDs at all reference WCSAs were within 5% (mean <1%) of the corresponding measured IDs in all vessel types studied, including vessels from heterozygous elastin-knockout animals. This held over a wide range of transmural pressures, during treatment with agonists, and before and after treatment with KCN. In conclusion, WCSA remains virtually constant in cannulated vessels, allowing accurate determination of ID from OD measurement under a variety of experimental conditions.
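The calculation rests on treating the vessel wall as an annulus of constant cross-sectional area, WCSA = (π/4)(OD² − ID²), which can be inverted to recover ID from OD once WCSA is known from a single reference measurement. A minimal sketch of that geometry; the diameters are illustrative, in arbitrary but consistent units:

```python
import math

def wall_cross_sectional_area(od, id_):
    """Annulus area: WCSA = pi/4 * (OD^2 - ID^2)."""
    return math.pi / 4.0 * (od**2 - id_**2)

def inner_diameter(od, wcsa):
    """Invert the annulus relation: ID = sqrt(OD^2 - 4*WCSA/pi)."""
    return math.sqrt(od**2 - 4.0 * wcsa / math.pi)

# One reference measurement where both diameters are visible...
wcsa_ref = wall_cross_sectional_area(od=200.0, id_=150.0)
# ...then ID computed from OD alone at another pressure.
print(f"{inner_diameter(od=180.0, wcsa=wcsa_ref):.1f}")  # ~122.1
```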


1980 ◽  
Vol 192 (2) ◽  
pp. 719-723 ◽  
Author(s):  
P J Garlick ◽  
M A McNurlan ◽  
V R Preedy

A rapid procedure for measuring the specific radioactivity of phenylalanine in tissues was developed. This facilitates the accurate determination of rates of protein synthesis in a wide range of tissues by injection of 150 μmol of L-[4-³H]phenylalanine/100 g body wt. The large dose of amino acid results in a rapid rise in specific radioactivity of free phenylalanine in tissues to values close to that in plasma, followed by a slow but linear fall. This enables the rate of protein synthesis to be calculated from measurements of the specific radioactivity of free and protein-bound phenylalanine in tissues during a 10 min period after injection of radioisotope.
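In one common form of the flooding-dose calculation, the fractional protein synthesis rate follows from the ratio of the protein-bound to free phenylalanine specific radioactivities over the labelling interval. A simplified sketch, assuming the free-pool value stays near-constant over the 10 min window; the numbers are illustrative, not the authors' data:

```python
def fractional_synthesis_rate(sb, sa, t_days):
    """ks (%/day) = (Sb / Sa) * 100 / t, where Sb is the specific
    radioactivity of protein-bound phenylalanine, Sa that of the free
    pool (treated as constant), and t the labelling time in days.
    A simplified form; the full treatment corrects for the slow
    linear fall in Sa after the flooding dose."""
    return (sb / sa) * 100.0 / t_days

# Illustrative values (dpm/nmol); 10 min labelling = 10/1440 day.
print(fractional_synthesis_rate(sb=0.07, sa=100.0, t_days=10 / 1440))  # ~10 %/day
```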


2012 ◽  
Vol 108 (07) ◽  
pp. 191-198 ◽  
Author(s):  
Gabriele Rohde ◽  
Gertrud Stratmann ◽  
Christian Hesse ◽  
Natalie Herth ◽  
Stephan Schwers ◽  
...  

Summary: Rivaroxaban is a direct factor Xa inhibitor that can be monitored by anti-factor Xa chromogenic assays. This ex vivo study evaluated different assays for accurate determination of rivaroxaban levels. Eighty plasma samples from patients receiving rivaroxaban (Xarelto®) 10 mg once daily and 20 plasma samples from healthy volunteers were investigated using one anti-factor Xa assay with the addition of exogenous antithrombin and two assays without added antithrombin. Two lyophilised rivaroxaban calibration sets were used for each assay (low-concentration set: 0, 14.5, 59.6 and 97.1 ng/ml; high-concentration set: 0, 48.3, 101.3, 194.2 and 433.3 ng/ml). Using a blinded study design, the rivaroxaban concentrations determined by the assays were compared with concentrations measured by HPLC-MS/MS. All assays showed a linear relationship between the rivaroxaban concentrations measured by HPLC-MS/MS and the optical density of the anti-FXa assays. However, the assay with added exogenous antithrombin detected falsely high concentrations of rivaroxaban even in plasma samples from controls who had not taken rivaroxaban (intercept values using the high and low calibrator sets: +26.49 ng/ml and +13.71 ng/ml, respectively). Plasma samples initially measured with the high calibrator set and containing rivaroxaban concentrations <25 ng/ml had to be re-run with the low calibrator set for precise measurement. In conclusion, anti-factor Xa chromogenic assays that use rivaroxaban calibrators at different concentration levels can accurately measure a wide range of rivaroxaban concentrations ex vivo. Assays including exogenous antithrombin are unsuitable for measuring rivaroxaban.
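The workflow the study describes (fit a linear calibration per calibrator set, then re-run low samples on the low-concentration set) can be sketched as follows. The calibrator concentrations come from the abstract; the optical-density readings, and the assumption that OD falls as drug concentration rises, are hypothetical stand-ins for real assay output.

```python
import numpy as np

def fit_calibration(conc_ng_ml, optical_density):
    """Least-squares line OD = slope * conc + intercept."""
    slope, intercept = np.polyfit(conc_ng_ml, optical_density, 1)
    return slope, intercept

def conc_from_od(od, slope, intercept):
    """Invert the calibration line to get a concentration from an OD reading."""
    return (od - intercept) / slope

high_set = [0.0, 48.3, 101.3, 194.2, 433.3]  # ng/ml, from the abstract
low_set = [0.0, 14.5, 59.6, 97.1]            # ng/ml, from the abstract
high_od = [0.95, 0.85, 0.74, 0.55, 0.06]     # hypothetical readings
low_od = [0.95, 0.92, 0.83, 0.75]            # hypothetical readings

s_hi, b_hi = fit_calibration(high_set, high_od)
sample_od = 0.93
conc = conc_from_od(sample_od, s_hi, b_hi)
if conc < 25.0:  # below this level the study re-ran samples on the low set
    s_lo, b_lo = fit_calibration(low_set, low_od)
    conc = conc_from_od(sample_od, s_lo, b_lo)
print(f"rivaroxaban ~ {conc:.1f} ng/ml")
```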


Proceedings ◽  
2020 ◽  
Vol 60 (1) ◽  
pp. 55
Author(s):  
Mariagrazia Lettieri ◽  
Pasquale Palladino ◽  
Simona Scarano ◽  
Maria Minunni

The outstanding properties of metal nanoclusters, stabilized with different scaffolds, i.e., proteins, nucleic acids, polymers and dendrimers, enable their application in a wide range of fields. Recent advances in the fabrication and synthesis of nanoclusters have revolutionized the design of biosensors, leading to significant improvements in the selective and sensitive determination of several targets. In recent years, copper nanoclusters (CuNCs) in particular have attracted attention, mainly for their unique fluorescent properties, as well as their large Stokes shifts, low toxicity, and high biocompatibility. The strong photoluminescence of CuNCs facilitates highly sensitive target detection even in complex biological matrices. For these reasons, in this work we exploited the specific, template-directed growth of CuNCs for the sensitive and accurate determination of human serum albumin (HSA) in urine and human serum. HSA is the most abundant protein in plasma, acting as a carrier for many key biological molecules such as hormones, fatty acids and steroids, and it contributes to the maintenance of oncotic blood pressure. The concentration of HSA in body fluids greatly influences the patient's state of health. Given these considerations, the quantitative detection of human serum albumin plays a key role in the early diagnosis of serious pathological conditions such as albuminuria and albuminemia. Here, we present a CuNC-based assay in which copper nanoclusters serve as fluorescent signal indicators to detect serum albumin in a complex biological matrix.
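Quantification in a fluorescence assay of this kind typically proceeds by reading unknowns off a standard curve. A minimal sketch under that assumption; the standards below are made up, not data from the study:

```python
import numpy as np

std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])       # HSA standards, mg/L (hypothetical)
std_fluor = np.array([12.0, 55.0, 98.0, 180.0, 342.0])  # fluorescence, a.u. (hypothetical)

def hsa_from_fluorescence(f):
    """Piecewise-linear interpolation of an unknown sample on the standard curve."""
    return float(np.interp(f, std_fluor, std_conc))

print(f"HSA ~ {hsa_from_fluorescence(130.0):.1f} mg/L")
```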


2017 ◽  
Vol 19 (2) ◽  
pp. 275-291 ◽  
Author(s):  
M. Rauliajtys-Grzybek ◽  
W. Baran ◽  
M. Macuda

The aim of this article is to assess the cost accounting solutions currently used in Polish hospitals. The evaluation covered three main areas that require cost data: management, external reporting and the pricing of health services (performed by a regulatory body). The study concerned the costing model that was obligatory for Polish public hospitals in 1998–2011 (a top-down micro-costing model) and has not yet been replaced by another solution. Different research methods were used for each of the areas under research, inter alia surveys, direct interviews and case studies. Empirical results indicate the limited usefulness of the researched costing model in all of the evaluated areas: management, financial reporting and pricing. First, the costing model is used only moderately, even for the most important management areas, such as planning, control and operational decision making. Second, it does not reflect the specifics of diagnosis-related groups (DRGs), the basic pricing object, and therefore does not allow accurate determination of their costs. Third, due to the lack of a comprehensive costing methodology, the model is used in an unstructured and incoherent manner.


1986 ◽  
Vol 17 (3) ◽  
pp. 169-173
Author(s):  
R. J. Snelgar

Most organizations regard the accurate determination of prevailing labour market rates as of primary importance to decisions on setting competitive wage and salary levels. The techniques for establishing these rates are fraught with problems, mainly revolving around the difficulty of obtaining comparability. Justification has been provided for organizations using tailor-made survey approaches in preference to professional or 'commercial' surveys, as this minimises comparability problems such as those associated with job-description responsibilities and compensation mix. This study reveals the extent to which a single pay structure received differing adjustments depending on whether the data came from a tailor-made survey or from a 'commercial' survey. Results indicate significant differences in adjustments over a three-year survey period, attributable essentially to the wide range of comparability difficulties associated with the use of 'commercial' survey data.

