A Few Data Sets and Research Questions

Author(s):  
Pierre Lafaye de Micheaux ◽  
Rémy Drouilhet ◽  
Benoit Liquet
1988 ◽  
Vol 34 (117) ◽  
pp. 200-207 ◽  
Author(s):  
R. J. Braithwaite ◽  
Ole B. Olesen

Run-off data for two basins in south Greenland, one of which contains glaciers, are compared with precipitation at a nearby weather station and with ablation measured in the glacier basin. Seasonal variations of run-off for the two basins are broadly similar, while run-off from the glacier basin has smaller year-to-year variations. A simple statistical model shows that this is the result of a negative correlation between ablation and precipitation, which has the effect of reducing run-off variations in basins with a moderate amount of glacier cover, although run-off variations may become large again for highly glacierized basins. The model also predicts an increasing correlation of run-off with ablation, and a decreasing correlation of run-off with precipitation, as the amount of glacier cover increases. Although there are still too few data sets from other parts of Greenland for final conclusions, there are indications that the present findings may be applicable to other Greenland basins.
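
As an illustration of the kind of simple statistical model described above, the following sketch (Python) treats run-off as R = f*A + (1 - f)*P for glacier fraction f; the variances, the correlation of -0.5, and this mixing form are assumptions for illustration, not the paper's fitted values. It shows the run-off variance dipping at moderate glacier cover and rising again for highly glacierized basins.

```python
import numpy as np

# Illustrative sketch (not the paper's actual model or values):
# specific run-off R is a mix of glacier ablation A and precipitation P
# weighted by the glacierized fraction f,
#   R = f * A + (1 - f) * P,
# with A and P negatively correlated.
sigma_a, sigma_p, rho = 1.0, 1.0, -0.5   # assumed standard deviations and correlation

def runoff_variance(f):
    """Var(R) for glacier fraction f under the simple mixing model."""
    return (f**2 * sigma_a**2
            + (1 - f)**2 * sigma_p**2
            + 2 * f * (1 - f) * rho * sigma_a * sigma_p)

for f in np.linspace(0, 1, 6):
    print(f"glacier fraction {f:.1f}: Var(R) = {runoff_variance(f):.2f}")
# With rho < 0 the variance dips for moderate glacier cover and grows
# again as f approaches 1, as described in the abstract.
```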


Author(s):  
Peter Temin

This chapter discusses how there is little of what economists call data on markets in Roman times, despite lots of information about prices and transactions. Data, as economists consider them, consist of sets of uniform prices that can be compared with each other. According to scholars, extensive markets existed in the late Roman Republic and early Roman Empire. Even though data in this sense are lacking, there are enough observations of the price of wheat, the most extensively traded commodity, to perform a test. The problem is that, by modern standards, there is very little data. Consequently, the chapter explains why statistics are useful in interpreting small data sets and how one deals with the various problems that arise when there are only a few data points.
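
As a purely generic illustration of the small-sample reasoning the chapter appeals to (the price values below are invented placeholders, not the chapter's wheat-price observations), a t-based confidence interval shows how inference from only a handful of data points remains possible but carries wide, explicitly quantified uncertainty:

```python
import numpy as np
from scipy import stats

# Hypothetical price observations (placeholder numbers, NOT the chapter's
# data), used only to show a standard small-sample technique.
prices = np.array([3.0, 4.5, 2.5, 5.0, 3.5, 6.0])   # invented values

n = prices.size
mean = prices.mean()
sem = prices.std(ddof=1) / np.sqrt(n)              # standard error of the mean
ci = stats.t.interval(0.95, n - 1, loc=mean, scale=sem)

print(f"n = {n}, mean = {mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# With only a few points the interval is wide, which is exactly why
# explicit statistical reasoning matters for small data sets.
```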


Author(s):  
Kathryn L. Ness

“A Changing Spanish Identity” outlines the research questions and data sets discussed in Setting the Table by introducing the notion of early modern Spanish cultural identity and the changes it encountered in the eighteenth century. It explains the author’s use of Don Quixote as a guide through the study and why this quintessential Spanish novel is appropriate for exploring themes of cultural change and identity. The chapter argues that, despite the major role the Spanish Empire played in early modern history, it has been largely underrepresented in studies of the Atlantic world. The majority of the chapter contains a brief introduction to the three sites addressed in the study as well as the methodology used to investigate these sites. The chapter concludes with an outline of subsequent chapters.


Author(s):  
Brent Wolff ◽  
Frank Mahoney ◽  
Anna Leena Lohiniva ◽  
Melissa Corkum

Qualitative research provides an adaptable, open-ended, rigorous method to explore local perceptions of an issue. Qualitative approaches are effective at revealing the subjective logic motivating behavior. They are particularly appropriate for research questions that are exploratory in nature or involve issues of meaning rather than magnitude or frequency. Key advantages of qualitative approaches include speed, flexibility, and high internal validity resulting from an emphasis on rapport building and the ability to probe beneath the surface of initial responses. Given the time-intensive nature of qualitative analysis, samples tend to be small and purposively selected to ensure that every interview counts. Qualitative studies can be done independently or embedded in mixed-method designs. Qualitative data analysis depends on rigorous reading and rereading of texts, ideally with more than one analyst to confirm interpretations. Computer software is useful for analyzing large data sets, but manual coding is often sufficient for rapid assessments in field settings.
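
A minimal sketch of the software-assisted side of this workflow, tallying analyst-assigned theme codes across interview excerpts (the excerpt identifiers and code labels below are invented for illustration; the substantive work remains the reading and interpretation itself):

```python
from collections import Counter

# Toy illustration of tallying analyst-assigned theme codes across
# interview excerpts. Codes and excerpt IDs are invented; real
# qualitative analysis rests on the texts, not on the tally alone.
coded_excerpts = {
    "interview_01": ["trust_in_clinic", "cost_barrier"],
    "interview_02": ["cost_barrier", "family_influence"],
    "interview_03": ["trust_in_clinic", "family_influence", "cost_barrier"],
}

code_counts = Counter(code for codes in coded_excerpts.values() for code in codes)
for code, count in code_counts.most_common():
    print(f"{code}: {count} excerpt(s)")
```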


Author(s):  
Thomas Bäck

In section 1.1.3 it was clarified that a variety of different, more or less drastic changes of the genome are summarized under the term mutation by geneticists and evolutionary biologists. Several mutation events are within the bounds of possibility, ranging from single base-pair changes to genomic mutations. The phenotypic effect of genotypic mutations, however, can hardly be predicted from knowledge about the genotypic change. In general, advantageous mutations have a relatively small effect on the phenotype, i.e., their expression does not deviate very much (in phenotype space) from the expression of the unmutated genotype ([Fut90], p. 85). More drastic phenotypic changes are usually lethal or become extinct due to a reduced capability of reproduction. The discussion about the extent to which evolution based on phenotypic macro-mutations in the sense of "hopeful monsters" is important for facilitating speciation is still ongoing (such macro-mutations have been observed and classified for the fruit fly Drosophila melanogaster; see [Got89], p. 286). Actually, only a few data sets are available to fully assess the phylogenetic significance of macro-mutations, but small phenotypic effects of mutation are clearly observed to be predominant.

This is the main argument justifying the use of normally distributed mutations with expectation zero in Evolutionary Programming and Evolution Strategies. It reflects the emphasis of both algorithms on modeling phenotypic rather than genotypic change. The model of mutation is quite different in Genetic Algorithms, where bit-reversal events (see section 2.3.2) corresponding to single base-pair mutations in biological reality implement a model of evolution on the basis of genotypic changes. As observed in nature, the mutation rate used in Genetic Algorithms is very small (cf. section 2.3.2). In contrast to the biological model, it is neither varied by external influences nor controlled (at least partially) by the genotype itself (cf. section 1.1.3). Holland defined the role of mutation in Genetic Algorithms to be a secondary one, of little importance in comparison to crossover (see [Hol75], p. 111): "... Summing up: Mutation is a 'background' operator, assuring that the crossover operator has a full range of alleles so that the adaptive plan is not trapped on local optima. ..."
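
The contrast drawn above between phenotypic and genotypic models of mutation can be made concrete with a small sketch. The step size, the mutation rate, and the individual sizes below are illustrative choices, not the parameter settings discussed in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mutation(x, sigma=0.1):
    """ES/EP-style mutation: add zero-mean normal noise to each real-valued
    object variable, so small phenotypic changes dominate (sigma is illustrative)."""
    return x + rng.normal(loc=0.0, scale=sigma, size=x.shape)

def bitflip_mutation(bits, p_m=0.001):
    """GA-style mutation: reverse each bit independently with a small rate p_m,
    analogous to single base-pair mutations (rate is illustrative)."""
    flips = rng.random(bits.shape) < p_m
    return np.where(flips, 1 - bits, bits)

x = np.zeros(5)                     # real-valued individual (ES/EP view)
b = rng.integers(0, 2, size=100)    # bit-string individual (GA view)
print(gaussian_mutation(x))
print(int(np.sum(bitflip_mutation(b) != b)), "bit(s) flipped")
```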


2017 ◽  
Vol 66 (1) ◽  
pp. 43-62 ◽  
Author(s):  
Joost Berkhout ◽  
Jan Beyers ◽  
Caelesta Braun ◽  
Marcel Hanegraaff ◽  
David Lowery

Scholars of mobilisation and policy influence employ two quite different approaches to mapping interest group systems. Those interested in research questions on mobilisation typically rely on a bottom-up mapping strategy in order to characterise the total size and composition of interest group communities. Researchers with an interest in policy influence usually rely on a top-down strategy, in which the mapping of politically active organisations depends on samples of specific policies. Some scholars, however, use top-down data to address research questions on mobilisation (and vice versa), and it is currently unclear how valid such large-N data are for the different types of research questions. We illustrate our argument by addressing these questions using unique data sets drawn from the INTEREURO project on lobbying in the European Union and the European Union’s Transparency Register. Our findings suggest that top-down and bottom-up mapping strategies lead to profoundly different maps of interest group communities.


2021 ◽  
Vol 13 (6) ◽  
pp. 0-0

Gait is a behavioural biometric that can change with disease, yet it remains a strong identification metric widely used in forensic work, state biometric repositories, and medical laboratories. Gait analysis can also help reveal a person's present mental state, which in turn can inform physiological therapy aimed at improving the biological system. There are various forms of gait measurement, which extend the research area from crime detection to medical enhancement. Much research has been done on gait recognition: many researchers extract gait features from skeleton images of people, others work with stride length, and various sensors have been used to capture gait under different lighting conditions. This paper is a brief survey of work on gait recognition collected from various sources in the science and technology literature. We discuss a few efficient models that have performed best, as well as a few of the available data sets.
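
As a toy illustration of one gait feature mentioned above (stride length), the sketch below estimates stride lengths from per-frame ankle positions; the trajectory is synthetic and the heel-strike frames are assumed to be known rather than detected, so this is not a reproduction of any surveyed method:

```python
import numpy as np

# Synthetic forward ankle position over 60 frames (metres); real systems
# would obtain this from pose estimation on video or from wearable sensors.
frames = np.arange(60)
ankle_x = 0.8 * frames / 30 + 0.05 * np.sin(2 * np.pi * frames / 30)

heel_strike_frames = [0, 30, 59]                  # assumed heel strikes of the same foot
strides = np.diff(ankle_x[heel_strike_frames])    # distance covered per gait cycle

print("stride lengths (m):", np.round(strides, 3))
print("mean stride length (m):", round(float(strides.mean()), 3))
```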


2021 ◽  
Vol 67 (1) ◽  
pp. 1-59
Author(s):  
Christophe Chesneau

Engineers, economists, hydrologists, social scientists, and behavioural scientists often deal with data belonging to the unit interval. One of the most common approaches for modeling purposes is the use of unit distributions, beginning with the classical power distribution. A simple way to improve its applicability is offered by the transmuted scheme. We propose an alternative in this article by slightly modifying this scheme with a logarithmic weighted function, thus creating the log-weighted power distribution. It can also be thought of as a variant of the log-Lindley distribution and of some other derived unit distributions. We investigate its statistical and functional capabilities, and discuss how it is distinguished from the power and transmuted power distributions. Among the functions derived from the log-weighted power distribution are the cumulative distribution, probability density, hazard rate, and quantile functions. Where appropriate, a shape analysis of these functions is performed to increase the flexibility of the proposed modelling. Various properties are investigated, including stochastic ordering (first order), generalized logarithmic moments, incomplete moments, Rényi entropy, order statistics, and reliability measures, and a list of new distributions derived from the main one is offered. Subsequently, the estimation of the model parameters is discussed through the maximum likelihood procedure. The proposed distribution is then tested on a few data sets to show in which concrete statistical scenarios it may outperform the transmuted power distribution.
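
The article's log-weighted power distribution itself is not reproduced here, but the two baselines it builds on and is compared against can be sketched directly: the classical power distribution F(x) = x^alpha on (0, 1), its transmuted version F(x) = (1 + lambda) x^alpha - lambda x^(2 alpha) with -1 <= lambda <= 1, and the closed-form maximum-likelihood estimate of alpha for the power model (the sample below is simulated for illustration):

```python
import numpy as np

def power_cdf(x, alpha):
    """Classical power distribution on (0, 1): F(x) = x**alpha, alpha > 0."""
    return x**alpha

def transmuted_power_cdf(x, alpha, lam):
    """Transmuted scheme applied to the power distribution:
    F(x) = (1 + lam) * x**alpha - lam * x**(2 * alpha), -1 <= lam <= 1."""
    return (1 + lam) * x**alpha - lam * x**(2 * alpha)

def power_mle(sample):
    """Closed-form MLE of alpha for the power model: alpha_hat = -n / sum(log x_i)."""
    sample = np.asarray(sample)
    return -sample.size / np.log(sample).sum()

rng = np.random.default_rng(1)
alpha_true = 2.5
sample = rng.random(500) ** (1.0 / alpha_true)   # inverse-CDF sampling from the power model
print("alpha_hat =", round(power_mle(sample), 3))
```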


1979 ◽  
Author(s):  
R. E. York ◽  
L. D. Hylton ◽  
R. G. Fox ◽  
J. C. Simonich

Full application of 2D boundary layer calculations for heat transfer predictions during turbine vane design still awaits verification against relevant data. Although there are a few data sets in the literature, there is a definite need for basic vane heat transfer data under conditions that fully simulate the aerodynamic and thermal conditions of a modern turbine. Accordingly, an experiment was performed to obtain the local heat transfer distribution on a typical engine vane in an aerothermodynamic cascade facility. Heat transfer data were obtained for a range of Mach and Reynolds numbers. The cascade was closely coupled behind the facility burner so that the test included the effects of high free-stream turbulence. Turbulence data were obtained by LDV and are included.
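
To make the non-dimensional groups involved concrete, the sketch below computes a chord Reynolds number and a standard turbulent flat-plate Nusselt correlation, the kind of simple baseline a 2D boundary-layer prediction might be compared against; the chord length and property values are generic placeholders, not the cascade conditions reported in the paper:

```python
# Placeholder illustration of the non-dimensional groups involved
# (generic air-like properties, NOT the cascade test conditions).
chord = 0.05          # vane chord length, m (assumed)
velocity = 150.0      # exit velocity, m/s (assumed)
rho, mu, k, pr = 1.0, 2.0e-5, 0.03, 0.7   # rough gas properties (assumed)

re_c = rho * velocity * chord / mu                 # chord Reynolds number
nu_turb = 0.037 * re_c**0.8 * pr**(1.0 / 3.0)      # turbulent flat-plate average Nusselt number
h_avg = nu_turb * k / chord                        # average heat transfer coefficient, W/(m^2 K)

print(f"Re_c = {re_c:.2e}, Nu = {nu_turb:.0f}, h = {h_avg:.0f} W/(m^2 K)")
```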

