arbitrary assumption
Recently Published Documents

TOTAL DOCUMENTS: 11 (FIVE YEARS: 1)
H-INDEX: 4 (FIVE YEARS: 0)

Author(s):  
Ruixuan Zhao ◽  
Daxin Wu ◽  
Jiao Wen ◽  
Qi Zhang ◽  
Guganglei Zhang ◽  
...  

To achieve the goal of efficiently analyzing the transient absorbance spectrum without arbitrary assumptions, and to overcome the limitations of conventional methods in fitting ability and in highly noised backgrounds, it...


2003 ◽  
Vol 125 (4) ◽  
pp. 972-978 ◽  
Author(s):  
A. Traverso ◽  
A. F. Massardo ◽  
M. Santarelli ◽  
M. Cali

An instrument for promoting CO2 emission reductions, taking the Kyoto Protocol goal into account, could be the assignment to energy conversion plants of a monetary charge linked to their specific emission intensity, usually called a carbon tax. Two main problems are closely connected with this approach: the estimation of the charge (which must be related to the “external” cost associated with CO2 emission) and the choice of the strategy for determining the amount of the imposed charge. In this paper an analytical procedure proposed by the authors, called the carbon exergy tax (CET), for the evaluation of CO2 emission externalities is presented. It is based on the thermoeconomic analysis of energy systems, which allows second-law losses to be quantified in monetary terms: the resulting cost represents the taxation to be applied to the energy system under examination, calculated without any arbitrary assumption. Since the complete CET evaluation procedure is too complex to become a feasible instrument of energy policy, the procedure is first applied to several conventional and advanced gas-, oil-, and coal-fueled power plants, and a new generalized approach based on those results is then proposed. The generalized CET evaluation requires much less information about the energy system and thus yields a simple and effective energy-policy rule for managing global warming.
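The core pricing idea can be sketched in a few lines (my simplification, with hypothetical function names and illustrative numbers; the authors' full CET procedure rests on a complete thermoeconomic analysis of each plant):

```python
# A schematic of the CET idea (my simplification, not the authors' full
# procedure): price the plant's second-law (exergy) losses at a unit
# exergy cost, so the charge follows from thermodynamics rather than from
# an arbitrarily assumed externality value.
def carbon_exergy_tax(fuel_exergy_MW, product_exergy_MW, unit_exergy_cost):
    """Charge ($/h) = exergy destroyed (MW) x unit exergy cost ($/MWh)."""
    exergy_destroyed = fuel_exergy_MW - product_exergy_MW
    return exergy_destroyed * unit_exergy_cost

# Illustrative numbers only: 250 MW of fuel exergy in, 100 MW of product out
print(carbon_exergy_tax(250.0, 100.0, 30.0))  # -> 4500.0 ($/h)
```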



Science News ◽  
1983 ◽  
Vol 123 (1) ◽  
pp. 3
Author(s):  
Valentin D. Fikovsky

Perception ◽  
1981 ◽  
Vol 10 (4) ◽  
pp. 431-434 ◽  
Author(s):  
David J Weiss

The idea that there is a single psychophysical function which describes how the human responds to stimulus intensity is rejected. The form of any empirical function depends upon the buried yet arbitrary assumption about how the stimuli are to be measured. Because psychophysical functions have this arbitrary basis, there can be no universal law, and further, no psychophysical function can reveal a general truth about the nervous system. The power law has been inappropriately reified; the descriptive usefulness of the power function has been incorrectly extended, perhaps because simplicity is appealing.
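Weiss's point that the fitted function depends on an arbitrary choice of stimulus measure can be made concrete (a sketch of mine, not from the paper): the same response data yield different power-law exponents under equally legitimate re-measurements of the stimulus.

```python
import numpy as np

# If responses follow R = k * S**a under one stimulus measure S, then under
# the equally legitimate measure S' = S**2 they follow R = k * S'**(a/2):
# same data, different "psychophysical function".
S = np.linspace(1.0, 100.0, 200)   # stimulus on one arbitrary scale
R = 3.0 * S**0.6                   # responses obeying a power law exactly

def fitted_exponent(stimulus, response):
    """Slope of the log-log regression line = fitted power-law exponent."""
    slope, _intercept = np.polyfit(np.log(stimulus), np.log(response), 1)
    return slope

a1 = fitted_exponent(S, R)         # exponent under measure S
a2 = fitted_exponent(S**2, R)      # exponent under measure S' = S**2

print(round(a1, 3), round(a2, 3))  # -> 0.6 0.3
```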


1974 ◽  
Vol 35 (2) ◽  
pp. 955-962
Author(s):  
Tom Ciborowski

Three different groups of college-age Ss received biconditional rule-learning problems that were altered in such a way as to permit a direct test of an unpublished model of Ss' behavior proposed by C. K. Sawyer and P. Johnson and substantially extended by Salatas and Bourne (1972). The present experiment obtained strong support for the model, as well as evidence for the widely reported suggestion that the principal difficulty with a biconditional rule is that S must learn to classify together two groups of stimuli that share no elements in common. The major outcome of the experiment was strong empirical support for a useful but arbitrary assumption by Salatas and Bourne concerning a metric for evaluating biconditional rule difficulty.
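For readers unfamiliar with the paradigm, a biconditional rule is logical equivalence (XNOR) over two relevant attributes, which is why its two positive subclasses have nothing in common (a generic illustration with made-up attribute names, not the authors' stimuli):

```python
# Biconditional (XNOR) rule: a stimulus is positive when the two relevant
# attributes are both present or both absent. The two positive subclasses
# (both-present and both-absent) share no attributes -- the source of the
# rule's difficulty noted in the abstract.
def biconditional(red, square):
    return red == square

cases = [(True, True), (True, False), (False, True), (False, False)]
print([biconditional(r, s) for r, s in cases])  # -> [True, False, False, True]
```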


1967 ◽  
Vol 89 (4) ◽  
pp. 732-736 ◽  
Author(s):  
S. Katz

Separating and mixing steady flows are analyzed at a 90 deg equal area branch (TEE junction). An arbitrary assumption is made linking secondary flow forces to the turning momentum through an experimental constant. This constant is evaluated from the data of Vogel on TEE junctions and the data of Barton on pipe laterals. Expressions are derived for the mechanical potential drops at the branch. These expressions are then used to solve a simple branch flow problem.
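The derived expressions themselves are not reproduced in the abstract; for orientation, branch losses are conventionally written in the standard minor-loss form, with the experimental constant playing the role of the loss coefficient (a generic sketch, not Katz's expressions):

```python
# Standard minor-loss convention (not the paper's derived expressions):
# mechanical potential drop across a branch, dp = K * rho * v**2 / 2,
# where K is an empirical loss coefficient analogous to the experimental
# constant fitted from the Vogel and Barton data.
def branch_pressure_drop(K, rho, v):
    """Pressure drop (Pa) for loss coefficient K, density rho (kg/m^3), velocity v (m/s)."""
    return K * rho * v ** 2 / 2.0

print(branch_pressure_drop(1.0, 1000.0, 2.0))  # -> 2000.0 (Pa, water at 2 m/s)
```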


1954 ◽  
Vol 37 (6) ◽  
pp. 717-727 ◽  
Author(s):  
David I. Hitchcock

Measurements were made of electromotive force in the Donnan equilibrium of systems containing dilute solutions of protein and acid. Removal of the membrane produced a decrease of no more than 2 to 4 mV in electromotive force, while the membrane potentials, as estimated by the usual arbitrary assumption, were of the order of 12 to 34 mV. Ion ratios, as calculated from analyses for total chloride, were definitely greater than those calculated from the electromotive force of cells with salt bridges, as if there had been combination of some of the chloride ion with protein.
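As a rough consistency check (my own back-of-envelope, assuming the membrane potential for a monovalent ion follows the Nernst relation E = (RT/F) ln r at 25 °C), potentials of 12 to 34 mV correspond to chloride-ion concentration ratios of roughly 1.6 to 3.8:

```python
import math

# Back-of-envelope (my assumption, not from the paper): invert the Nernst
# relation E = (RT/F) * ln(r) to get the concentration ratio implied by a
# given membrane potential for a monovalent ion at 25 degC.
R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # 25 degC in kelvin
F = 96485.0    # Faraday constant, C/mol

def ion_ratio(E_mV):
    """Concentration ratio implied by a Nernst potential of E_mV millivolts."""
    return math.exp((E_mV / 1000.0) * F / (R * T))

print(round(ion_ratio(12), 2), round(ion_ratio(34), 2))  # -> 1.6 3.76
```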


1947 ◽  
Vol 45 (4) ◽  
pp. 397-406 ◽  
Author(s):  
D. J. Finney

When individual responses in a biological assay show considerable variation associated with the values of a concomitant variate, covariance analysis may be used in order to adjust the mean responses and to improve the precision of the assay. Usually this is preferable to the choice of an adjustment which involves an arbitrary assumption about the effect of variations in the concomitant variate on the measured response. Published accounts of the process are open to certain theoretical objections, though they may be sufficiently exact for most practical purposes.

The present paper describes a method of calculating the relative potency, and its precision, which may be a little more laborious, but which is in full accord with standard statistical practice. The computations are illustrated on data from a prolactin assay by the pigeon crop-gland technique, in which the final crop-gland weight showed a positive correlation with the body weight at the start of the assay. The results are compared with those obtained either from the unadjusted crop-gland weights or from these weights expressed as proportions of body weights. The covariance method leads to a more precise estimate of the potency of the test preparation than do either of the others; there is evidence, however, that the increase in precision will not necessarily be large unless the correlation between the response and the concomitant variate is very close.

In a final section, the full statistical tests of assay validity in the covariance analysis are described; these are lengthy, and fortunately are required only when the validity is in considerable doubt.

The methods of adjustment have been described in this paper with respect to an assay depending upon parallel regression lines of responses on the logarithms of doses. They may be adapted for use with ‘slope-ratio’ assays (Bliss, 1946; Finney, 1945; 1948; Wood & Finney, 1946), in which the regression of response on dose itself is linear. So far the need for adjusting for concomitant variation in these assays seems not to have arisen, and discussion of computational details may be postponed until the need is felt.
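The covariance adjustment Finney builds on can be sketched with synthetic numbers (hypothetical data and effect size, not the prolactin assay): estimate a pooled within-group slope of response on the concomitant variate, then compare group means after sliding each to the overall mean of the variate.

```python
import numpy as np

# Minimal ANCOVA-style adjustment sketch (synthetic data, not Finney's):
# remove the common regression of response y on concomitant variate x
# before comparing the two preparations.
rng = np.random.default_rng(0)
x = rng.uniform(200.0, 400.0, size=40)            # e.g. initial body weight
group = np.repeat([0, 1], 20)                     # standard vs test preparation
y = 50 + 0.2 * x + 10 * group + rng.normal(0, 2, 40)  # response, true effect = 10

# Pooled within-group slope b = sum(Sxy within) / sum(Sxx within)
b = sum(np.cov(x[group == g], y[group == g])[0, 1] for g in (0, 1)) / \
    sum(np.var(x[group == g], ddof=1) for g in (0, 1))

xbar = x.mean()
adjusted = [y[group == g].mean() - b * (x[group == g].mean() - xbar)
            for g in (0, 1)]
print(round(adjusted[1] - adjusted[0], 1))        # estimated treatment effect, ~10
```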


The Raman effect in crystals is treated in this paper with the help of Placzek’s approximation. It consists of contributions of different orders with respect to the amplitudes of the vibrations; the first-order effect is a line spectrum depending only on the vibrations of infinite wavelength, while the second-order effect is a continuous spectrum depending on combination frequencies of all pairs of branches of the lattice vibrations, each pair taken for the same wave vector. In highly symmetrical crystals like rock-salt the first-order effect is zero. The second-order effect can be calculated for rock-salt with the help of the tables of lattice frequencies published by Kellermann. It consists of thirty-six peaks, each belonging to a combination frequency. The superposition of these allows us to determine, without any arbitrary assumption about the coupling constants, the frequencies of the observable maxima, in fair agreement with Krishnan’s measurements. By adapting three coupling constants one can also determine the relative intensities of the most prominent peaks and obtain a curve which in its main features agrees with the observed one. The results show that lattice dynamics can account quantitatively for the Raman effect in crystals and that Raman’s attacks against the theory are unfounded.
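The count of thirty-six peaks is consistent with one common accounting for six phonon branches: 21 unordered summation bands plus 15 difference bands (a counting sketch with made-up frequencies, not Kellermann's tables):

```python
import itertools

# Counting sketch (illustrative frequencies, not Kellermann's tables):
# with six phonon branches, the unordered sums w_i + w_j give C(7,2) = 21
# summation bands and the differences |w_i - w_j|, i != j, give 15 more,
# for 36 combination frequencies in total.
branch_freqs = [2.0, 2.5, 3.1, 4.0, 4.6, 5.2]   # THz, made up for illustration

sums = [wi + wj for wi, wj in
        itertools.combinations_with_replacement(branch_freqs, 2)]
diffs = [abs(wi - wj) for wi, wj in itertools.combinations(branch_freqs, 2)]

print(len(sums), len(diffs), len(sums) + len(diffs))  # -> 21 15 36
```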

