THE NUMERIC DATA

1974 ◽  
Vol 29 (7) ◽  
pp. 1112-1116 ◽  
Author(s):  
G.K. Hartmann ◽  
Raimund Ulbrich

A corrected approximation formula is proposed for calculating the thermal expansion of non-associated liquids in terms of molar volume, molar refraction, and the "effective electron numbers". The thermal expansion of 26 solvents is calculated. The usefulness of the "effective electron numbers" for deriving numeric data for the dispersion-force potentials of polyatomic molecules is confirmed once more.


2016 ◽  
Vol 21 (1) ◽  
pp. 102-115 ◽  
Author(s):  
Stephen Gorard

This paper reminds readers of the absurdity of statistical significance testing, despite its continued widespread use as a supposed method for analysing numeric data. There have been complaints about the poor quality of research employing significance tests for a hundred years, and repeated calls for researchers to stop using and reporting them. There have even been attempted bans. Many thousands of papers have now been written, in all areas of research, explaining why significance tests do not work. There are too many for all to be cited here. This paper summarises the logical problems as described in over 100 of these prior pieces. It then presents a series of demonstrations showing that significance tests do not work in practice. In fact, they are more likely to produce the wrong answer than the right one. The confused use of significance testing has practical and damaging consequences for people's lives. Ending the use of significance tests is a pressing ethical issue for research. Anyone who knows the problems, as described over the past one hundred years, and continues to teach, use or publish significance tests is acting unethically, and knowingly risking the damage that ensues.
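The claim that significant results can be more often wrong than right can be illustrated with a short Monte Carlo sketch. This is not taken from the paper itself; the base rate of true effects (10%), the sample size (n = 20 per group), the effect size (d = 0.3) and the alpha level (0.05) are illustrative assumptions, but under such plausible conditions the majority of "significant" results are indeed false positives.

```python
# Hedged illustration (assumed parameters, not the paper's own data):
# simulate many two-group studies, of which only 10% test a real effect,
# and count what fraction of "significant" results are false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def study_is_significant(effect_is_real, n=20, d=0.3, alpha=0.05):
    """Run one two-sample study and apply a t-test at the given alpha."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d if effect_is_real else 0.0, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < alpha

trials = 20_000
real = rng.random(trials) < 0.10                      # 10% of effects are real
significant = np.array([study_is_significant(r) for r in real])

false_pos = int(np.sum(significant & ~real))
true_pos = int(np.sum(significant & real))
print("significant results:", false_pos + true_pos)
print("fraction that are false positives:",
      round(false_pos / (false_pos + true_pos), 2))
```

With low power (small effect, small samples) and a low base rate of true effects, a significant result here is wrong roughly three times out of four, which is the kind of demonstration the abstract describes.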


2014 ◽  
Vol 2 ◽  
pp. 1-5
Author(s):  
A. Deshpande

In everyday life and in the field, people mostly deal with concepts involving factors that defy classification into crisp sets. The decisions people usually make are perceptions, reached without rigorous analysis of numeric data. As in other fields of study, there may be imprecision both in the air quality parametric data collected and in the perceptions of air quality experts when describing these parameters in linguistic terms such as very good, good, or poor. This is why, over the past few decades, soft computing tools such as fuzzy logic based methods, neural networks, and genetic algorithms have had a significant and growing impact on handling aleatory as well as epistemic uncertainty in air quality related issues. This paper highlights mathematical preliminaries of air pollution studies, including similarity measures (the Cosine Amplitude Method), fuzzy-to-crisp conversion (the alpha-cut method), fuzzy c-means clustering, the Zadeh-Deshpande (ZD) approach, and the linguistic description of air quality. It also reviews applications of fuzzy similarity measures and fuzzy c-means clustering with defined possibility (α-cut) levels in air pollution studies for Delhi, India. Although the use of fuzzy logic in pollution studies is not yet common practice, a comprehensive approach combining air pollution exposure surveys, toxicological data, and epidemiological studies with fuzzy modeling will go a long way toward resolving some of the divisiveness and controversy in the current regulatory paradigm.
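Two of the preliminaries the abstract names, alpha-cut fuzzy-to-crisp conversion and the cosine amplitude similarity measure, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the site names, membership values, pollutant profiles, and the 0.6 alpha level are all assumed for demonstration.

```python
# Hedged sketch of two fuzzy-logic preliminaries named in the abstract.
# All data below is illustrative, not from the Delhi study.
import numpy as np

def alpha_cut(memberships, alpha):
    """Fuzzy-to-crisp conversion: keep elements with membership >= alpha."""
    return {x for x, mu in memberships.items() if mu >= alpha}

def cosine_amplitude(x, y):
    """Cosine amplitude similarity between two nonzero feature vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.abs(x @ y) / np.sqrt((x @ x) * (y @ y)))

# Fuzzy set "poor air quality" over monitoring sites (assumed memberships).
poor = {"site_A": 0.9, "site_B": 0.4, "site_C": 0.7}
crisp = alpha_cut(poor, 0.6)           # crisp set at the 0.6 level
print(crisp)                           # -> {'site_A', 'site_C'}

# Similarity of two sites' normalized pollutant profiles (e.g. PM10, SO2, NOx).
sim = cosine_amplitude([0.8, 0.3, 0.5], [0.7, 0.4, 0.6])
print(round(sim, 3))
```

The alpha-cut turns a graded (fuzzy) description into an ordinary set at a chosen possibility level, while the cosine amplitude measure scores how alike two sites' pollutant profiles are on a 0-to-1 scale.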

