Measuring User Behavior

Author(s):  
Yu Wang

Measurement plays a fundamental role in the modern world, and measurement theory uses statistical tools to quantify and analyze data. In this chapter, we examine several statistical techniques for measuring user behavior. We first discuss the fundamental characteristics of user behavior, and then describe scoring and profiling approaches for measuring it. The fundamental idea of measurement theory is that measurements are not the same as the outcome being measured; if we want to draw conclusions about the outcome, we must take into account the nature of the correspondence between the outcome and the measurements. Our goal in measuring user behavior is to understand behavior patterns so that we can profile users or groups correctly. Readers interested in basic measurement theory should refer to Krantz, Luce, Suppes & Tversky (1971), Suppes, Krantz, Luce & Tversky (1989), Luce, Krantz, Suppes & Tversky (1991), Hand (2004), and Shultz & Whitney (2005).

Any measurement can involve two types of error: systematic errors and random errors. A systematic error keeps the same direction throughout a set of measurements, taking consistently positive or consistently negative values, and is generally difficult to identify and account for. Systematic errors originate in one of two ways: (1) errors of calibration and (2) errors of use. An error of calibration occurs, for example, if network data is collected incorrectly: if the allowable range for a variable should be 1 to 1000 but we incorrectly cap it at 100, then all collected traffic data for that variable is affected in the same way, giving rise to a systematic error. An error of use occurs, for example, if the data is collected correctly but transferred incorrectly: if we store a variable whose values can exceed 255 in a "byte" data type, every observation above that limit will be recorded incorrectly. A random error, in contrast, varies from process to process and is equally likely to be positive or negative. Random errors arise from uncontrolled variables or from specimen variation. The aim is to control all variables that can influence the result of the measurement, and to control them closely enough that the resulting random errors are no longer objectionable. Random errors can be addressed with statistical methods, and in most measurements only random errors contribute to estimates of probable error.

A common source of random error in measuring user behavior is variance. A robust profiling measurement must take into account variances in profiling patterns on (1) the network system side, such as variances across network groups or domains, traffic volume, and operating systems, and (2) the user side, such as job responsibilities, working schedules, department categorization, security privileges, and computer skills. The profiling measurement must be able to separate the variance contributed by the system side from that contributed by the user side, so that changes to the network infrastructure or in employment have less impact on the overall profiling system. Recently, the hierarchical generalized linear model has been increasingly used to address such variances; we discuss this technique later in this chapter.
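
As a minimal illustration of the "error of use" above (the traffic values and variable names here are invented, not taken from the chapter), storing observations in a data type that is too small distorts every out-of-range value in the same direction, whereas random noise scatters symmetrically:

```python
import numpy as np

# Illustrative sketch (not from the chapter; values are made up): the "error of use"
# described above, contrasted with random error.
true_bytes_sent = np.array([120, 300, 90, 1024, 255, 700])   # hypothetical true traffic values

# Storing them in an unsigned 8-bit field wraps every value above 255 (modulo 256),
# so all affected observations are distorted in the same way: a systematic error.
stored = true_bytes_sent.astype(np.uint8)                    # 300 -> 44, 1024 -> 0, 700 -> 188

# A random error, by contrast, scatters symmetrically around the true values.
rng = np.random.default_rng(0)
measured = true_bytes_sent + rng.normal(0.0, 5.0, size=true_bytes_sent.size)

print(stored)                       # systematically wrong wherever the true value exceeds 255
print(measured - true_bytes_sent)   # positive and negative deviations with mean near zero
```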

2019 ◽  
Vol 13 (1) ◽  
pp. 14
Author(s):  
Hendro Supratikno ◽  
David Premana

Parking is the state in which a vehicle is temporarily stationary because it has been left by its driver. The definition of parking includes every vehicle that stops at a certain place, whether designated by traffic signs or not, and not solely for the purpose of picking up and/or dropping off people and/or goods. Campus 3 of the Lumajang State Community Academy has facilities and infrastructure prepared by the Lumajang Regency government. However, the parking lots provided cannot accommodate vehicles optimally, because the ratio of the number of vehicles to the area of the parking lot is not appropriate. One reason is that the measured area of the parking lot has not been analyzed for measurement error. Every measurement is assumed to contain errors, whether systematic errors, random errors, or blunders, so the measurement of the parking lot certainly contains errors as well. The authors therefore conducted this research to determine how systematic errors propagate and how large the systematic error in the area of the Campus 3 parking lot of the Lumajang State Community Academy is. The methods used in this study include preparing materials and tools, making land sketches and decomposing them, determining distances with a theodolite, deriving the land-area equation, and computing the propagation of systematic error. The final goal of this study is thus to determine the magnitude of the systematic error in the area of the parking lot of Campus 3 of the Lumajang State Community Academy.
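
The abstract does not reproduce the paper's land-area equation, but for a rectangular plot the propagation of error takes a simple generic form; the sketch below assumes the area is computed as A = lw from a measured length l and width w with systematic errors δl, δw and random standard deviations σ_l, σ_w:

```latex
A = l\,w, \qquad
\delta A \;\approx\; \frac{\partial A}{\partial l}\,\delta l + \frac{\partial A}{\partial w}\,\delta w
      \;=\; w\,\delta l + l\,\delta w ,
\qquad
\sigma_A = \sqrt{(w\,\sigma_l)^2 + (l\,\sigma_w)^2}.
```

Systematic errors carry their signs and add linearly (first expression), whereas independent random errors combine in quadrature (second expression).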


Author(s):  
Patrick Suppes

A conceptual analysis of measurement can properly begin by formulating the two fundamental problems of any measurement procedure. The first problem is that of representation, justifying the assignment of numbers to objects or phenomena. We cannot literally take a number in our hands and 'apply' it to a physical object. What we can show is that the structure of a set of phenomena under certain empirical operations and relations is the same as the structure of some set of numbers under corresponding arithmetical operations and relations. Solution of the representation problem for a theory of measurement does not completely lay bare the structure of the theory, for there is often a formal difference between the kinds of assignment of numbers arising from different procedures of measurement. This is the second fundamental problem, determining the scale type of a given procedure. Counting is an example of an absolute scale. The number of members of a given collection of objects is determined uniquely. In contrast, the measurement of mass or weight is an example of a ratio scale. An empirical procedure for measuring mass does not determine the unit of mass. The measurement of temperature is an example of an interval scale. The empirical procedure of measuring temperature by use of a thermometer determines neither a unit nor an origin. In this sort of measurement the ratio of any two intervals is independent of the unit and zero point of measurement. Still another type of scale is one which is arbitrary except for order. Mohs' hardness scale, according to which minerals are ranked in regard to hardness as determined by a scratch test, and the Beaufort wind scale, whereby the strength of a wind is classified as calm, light air, light breeze, and so on, are examples of ordinal scales. A distinction is made between those scales of measurement which are fundamental and those which are derived. A derived scale presupposes and uses the numerical results of at least one other scale. In contrast, a fundamental scale does not depend on others. Another common distinction is that between extensive and intensive quantities or scales. For extensive quantities like mass or distance an empirical operation of combination can be given which has the structural properties of the numerical operation of addition. Intensive quantities do not have such an operation; typical examples are temperature and cardinal utility. A widespread complaint about this classical foundation of measurement is that it takes too little account of the analysis of variability in the quantity measured. One important source is systematic variability in the empirical properties of the object being measured. Another source lies not in the object but in the procedures of measurement being used. There are also random errors which can arise from variability in the object, the procedures or the conditions surrounding the observations.
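
As a worked illustration of the interval-scale claim (not part of the original text), an admissible transformation of an interval scale is affine, t' = at + b with a > 0, and the ratio of any two intervals is left unchanged:

```latex
\frac{t'_1 - t'_2}{t'_3 - t'_4}
  = \frac{(a t_1 + b) - (a t_2 + b)}{(a t_3 + b) - (a t_4 + b)}
  = \frac{t_1 - t_2}{t_3 - t_4}.
```

This is why Celsius and Fahrenheit readings disagree on individual values yet agree on the ratio of any two temperature differences.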


2011 ◽  
Vol 4 (4) ◽  
pp. 5147-5182
Author(s):  
V. A. Velazco ◽  
M. Buchwitz ◽  
H. Bovensmann ◽  
M. Reuter ◽  
O. Schneising ◽  
...  

Abstract. Carbon dioxide (CO2) is the most important man-made greenhouse gas (GHG) causing global warming. With electricity generation from fossil-fuel power plants now the economic sector constituting the largest source of CO2, monitoring power plant emissions has become more important than ever in the fight against global warming. In a previous study by Bovensmann et al. (2010), random and systematic errors of power plant CO2 emissions were quantified using a single overpass of a proposed CarbonSat instrument. In this study, we quantify errors of power plant annual emission estimates from a hypothetical CarbonSat and from constellations of several CarbonSats, taking into account that power plant CO2 emissions are time-dependent. Our focus is on estimating systematic errors arising from the sparse temporal sampling as well as random errors that are primarily dependent on wind speeds. We used hourly emissions data from the US Environmental Protection Agency (EPA) combined with assimilated and re-analyzed meteorological fields from the National Centers for Environmental Prediction (NCEP). CarbonSat was simulated as a sun-synchronous low-Earth-orbiting satellite (LEO) with an 828-km orbit height and a local time of ascending node (LTAN) of 13:30 (01:30 p.m.), achieving global coverage after 5 days. We show that, despite the variability of the power plant emissions and the limited satellite overpasses, one CarbonSat can verify reported US annual CO2 emissions from large power plants (≥5 Mt CO2 yr−1) with a systematic error of less than ~4.9% for 50% of all the power plants. For 90% of all the power plants, the systematic error was less than ~12.4%. We additionally investigated two different satellite configurations using a combination of 5 CarbonSats. One achieves global coverage every day but samples the targets only at fixed local times. The other samples the targets five times at two-hour intervals approximately every 6th day but achieves global coverage only after 5 days. From the statistical analyses we found, as expected, that the random errors improve by approximately a factor of two if 5 satellites are used. On the other hand, more satellites do not result in a large reduction of the systematic error. The systematic error is somewhat smaller for the constellation configuration achieving global coverage every day. Finally, we recommend the CarbonSat constellation configuration that achieves daily global coverage.
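
The statistical point about constellation size can be illustrated with a simple simulation. The sketch below is hypothetical (the diurnal emission profile, revisit interval, and per-overpass noise level are assumptions, not the paper's data) and shows why averaging more independent overpasses shrinks the random error roughly as 1/√N while leaving the fixed-local-time sampling bias essentially unchanged:

```python
import numpy as np

rng = np.random.default_rng(42)

hours = np.arange(365 * 24)
# Hypothetical hourly emissions (t CO2 / h) with a diurnal cycle peaking near midday.
true_emis = 1000.0 + 150.0 * np.sin(2 * np.pi * (hours % 24 - 6) / 24)
annual_true = true_emis.sum()

def annual_estimate(n_sats, overpass_hour=13, sigma_single=0.2):
    """Scale sparse overpass samples up to a year; each overpass carries a
    multiplicative random error sigma_single (e.g. from wind-speed uncertainty)."""
    # Each satellite revisits the target every 5 days at the same local time.
    idx = np.arange(overpass_hour, len(hours), 5 * 24)
    samples = np.concatenate([true_emis[idx] for _ in range(n_sats)])
    noisy = samples * (1 + rng.normal(0, sigma_single, samples.size))
    return noisy.mean() * len(hours)   # mean overpass rate scaled to the full year

for n in (1, 5):
    est = np.array([annual_estimate(n) for _ in range(2000)])
    bias = (est.mean() - annual_true) / annual_true      # sampling (systematic) error
    spread = est.std() / annual_true                     # random error
    print(f"{n} satellite(s): systematic error ~{bias:+.1%}, random error ~{spread:.1%}")
```

Under these assumptions the random error drops by about √5 ≈ 2.2 when going from one to five satellites, while the midday sampling bias stays put.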


1970 ◽  
Vol 53 (3) ◽  
pp. 568-571
Author(s):  
Grayson R Rogers

Abstract An ion exchange-colorimetric method for determining betaine in orange juice was studied by 11 collaborators on 4 orange juice samples and 2 synthetic water solutions consisting of sucrose, dextrose, and various amino acids found in orange juice. Average recoveries in the collaborative study were 96.7 and 95.9%. Results show that the precision standard deviation among laboratories is generally acceptable. The spread of the actual data is greater than normally expected, but random errors appear to be responsible, since no significant systematic error can be detected in the data. The method is recommended for adoption as official first action.


Author(s):  
Katrine Okholm Kryger ◽  
Séan Mitchell ◽  
Steph Forrester

The aim of this study was to measure the level of agreement of four portable football velocity and spin rate measurement systems (Jugs speed radar gun, 2-D high-speed video, TrackMan and adidas miCoach football) against a Vicon motion analysis system. One skilled male university football player performed 70 shots covering a wide range of ball velocities (12–30 m s−1) and spin rates (94–743 r/min). A Bland–Altman analysis was used to assess the level of agreement. For ball velocity, the 2-D high-speed video had the smallest systematic error, followed by the radar gun, TrackMan and miCoach football at 0.2, 0.4, 0.5 and 4.8 m s−1, respectively. A similar ranking was also observed for the random errors (95% confidence intervals: ±0.4, ±1.5, ±1.9 and ±6.0 m s−1). The first three systems all tracked ball velocity in >90% of shots, while the miCoach football tracked slightly fewer shots (79%). For spin rate, the miCoach football had a much smaller systematic error (4 vs 38 r/min) and random error (95% confidence intervals: ±24 vs ±355 r/min) compared to TrackMan. The miCoach also successfully tracked spin rate in more shots than the TrackMan (79% vs 44%). These results indicate that 2-D high-speed video would be the preferred option for the field assessment of ball velocity; however, radar gun and TrackMan may also be appropriate. A minimum of 10 frames of 2-D high-speed video, captured close to the ball starting position, was demonstrated to be sufficient in providing a reliable measure of ball velocity. The miCoach ball is the preferred option for field assessment of ball spin rate.
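
The Bland–Altman quantities reported above can be computed in a few lines; the sketch below is generic (the device readings are invented for illustration, not taken from the study):

```python
import numpy as np

def bland_altman(device, reference):
    """Bland-Altman agreement: bias (systematic error) and 95% limits of agreement
    (random error) of a device against a reference measurement."""
    diff = np.asarray(device) - np.asarray(reference)
    bias = diff.mean()                      # systematic error
    half_width = 1.96 * diff.std(ddof=1)    # half-width of the 95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

# Hypothetical ball-velocity readings (m/s): a radar-style device vs. a Vicon-style reference.
reference = np.array([12.1, 15.4, 18.9, 22.3, 25.7, 29.8])
device    = np.array([12.6, 15.6, 19.5, 22.5, 26.4, 30.1])

bias, limits = bland_altman(device, reference)
print(f"bias = {bias:.2f} m/s, 95% limits of agreement = ({limits[0]:.2f}, {limits[1]:.2f}) m/s")
```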


1996 ◽  
pp. 201-238 ◽  
Author(s):  
Thomas D. Hall

This paper makes six arguments. First, socio-cultural evolution must be studied from a "world-system" or intersocietal interaction perspective. A focus on change in individual "societies" or "groups" fails to attend adequately to the effects of intersocietal interaction on social and cultural change. Second, in order to be useful, theories of the modern world-system must be modified extensively to deal with non-capitalist settings. In particular, changes in system boundaries marked by exchange networks (for information, luxury or prestige goods, political/military interactions, and bulk goods) seldom coincide, and follow different patterns of change. Third, all such systems tend to pulsate, that is, expand and contract, or at least expand rapidly and less rapidly. Fourth, once hierarchical forms of social organization develop, such systems typically have cycles of rise and fall in the relative positions of constituent polities. Fifth, expansion of world-systems forms and transforms social relations in newly incorporated areas. While complex in the modern world-system, these changes are even more complex in precapitalist settings. Sixth, these two cycles combine with demographic and epidemiological processes to shape long-term socio-cultural evolution.


2013 ◽  
Vol 333-335 ◽  
pp. 254-258 ◽  
Author(s):  
Hui Huang ◽  
Xin Meng Liu ◽  
Xin Lv

This paper presents a method for improving the accuracy of evaluating the S-parameters (scattering parameters) of an MCP (microwave coplanar probe). The method may be termed a one-port, two-tier Multi-TRL (Thru-Reflect-Line) calibration. It measures two-port devices at only one port of a VNA (vector network analyzer), which decreases the random errors caused by cable movement and repeated connections. The method is implemented with a coaxial OSL (Open-Short-Load) calibration kit and an on-wafer TRL calibration kit. It directly calculates and removes the residual errors caused by imperfections of the coaxial OSL calibration kit, and it significantly reduces systematic errors by using the on-wafer TRL calibration kit. To verify the effectiveness of the proposed method, measured S-parameters up to 50 GHz of an MCP configured as GSG-100 are given and discussed.
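
As background for the coaxial first-tier step, the sketch below implements the standard three-term one-port error model used with OSL standards; it is a generic illustration under ideal standard definitions (assumed values throughout), not the paper's one-port two-tier Multi-TRL procedure:

```python
import numpy as np

# Standard one-port error model: the raw reflection reading Gm relates to the actual
# reflection Ga through directivity Ed, source match Es and reflection tracking Er:
#     Gm = Ed + Er*Ga / (1 - Es*Ga)

def solve_error_terms(gm, ga=(1.0, -1.0, 0.0)):
    """Solve (Ed, Es, Er) from raw readings of open, short and load standards,
    assuming ideal standard definitions ga (real kits supply polynomial models)."""
    gm = np.asarray(gm, dtype=complex)
    ga = np.asarray(ga, dtype=complex)
    # Rearranged model: Gm = Ed + (Ga*Gm)*Es + Ga*(Er - Ed*Es), linear in 3 unknowns.
    A = np.column_stack([np.ones(3), ga * gm, ga])
    ed, es, c = np.linalg.solve(A, gm)
    er = c + ed * es
    return ed, es, er

def correct(gm, ed, es, er):
    """Apply the error terms to a raw reading to recover the actual reflection."""
    return (gm - ed) / (er + es * (gm - ed))

# Hypothetical raw readings of the three coaxial standards at one frequency point.
raw_osl = [0.93 + 0.05j, -0.96 + 0.02j, 0.015 - 0.010j]
ed, es, er = solve_error_terms(raw_osl)
print(correct(0.4 + 0.1j, ed, es, er))   # corrected reflection coefficient of a DUT
```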


1975 ◽  
Vol 6 (4) ◽  
pp. 202-220 ◽  
Author(s):  
L. S. Cox

In a two-year study, frequencies and descriptions of systematic errors in four algorithms in arithmetic were studied in upper-middle income regular and special education classrooms involving 744 children. Children were screened for adequate knowledge of basic facts and for receiving prior instruction on the computational processes. Systematic errors contained a recurring incorrect computational process and were differentiated from careless errors and random errors. Errors were studied within levels of computational skill for each algorithm. Results showed that 5-6% of the children made systematic errors in the addition, multiplication, and division algorithms. The figure was 13% for the subtraction algorithm. One year later 23% of the children were making either the identical systematic error or another systematic error.
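
The abstract does not name the specific error patterns observed; as a purely hypothetical illustration of what a recurring incorrect computational process looks like, the sketch below implements the well-known "smaller-from-larger" subtraction bug, which produces the same kind of wrong answer every time it is applied:

```python
# Hypothetical illustration (not from the study): a recurring incorrect computational
# process in column subtraction that always subtracts the smaller digit from the
# larger instead of borrowing.
def buggy_subtract(a: int, b: int) -> int:
    """Column subtraction with the smaller-from-larger bug (assumes a >= b >= 0)."""
    da, db = str(a), str(b).zfill(len(str(a)))
    return int("".join(str(abs(int(x) - int(y))) for x, y in zip(da, db)))

# The same wrong procedure is applied consistently, so the error recurs predictably:
print(buggy_subtract(52, 17))    # -> 45 instead of 35
print(buggy_subtract(403, 127))  # -> 324 instead of 276
```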


1988 ◽  
Vol 67 (1) ◽  
pp. 255-262
Author(s):  
Maria Del Carmen García-López

Systematic observation of individuals or groups focuses on visible behavior in relation to visible features of the environment. The researcher may find it useful to record properties of both environment and behavior in such events. The building block of any systematic observation system is a clear set of selection rules that human observers can apply; the next step is to decide which properties will be measured. Every property is a variable, and each variable comprises a set of values. Measurement theory demands that the variables be defined in such a way that any event receives one and only one value for each variable; the values have to be mutually exclusive and exhaustive. Systematic observation generally does not permit interpretation of events in the particular environment, so it is better to keep observations and their interpretations separate and link them through a cross-reference index rather than merging them. On the other hand, it is absolutely essential that continuous analysis of the notes and of the relationships among them be carried out while the work of natural observation is still taking place. Although natural systematic observation does not afford a high degree of control, it can be complemented by highly structured observation and by accurate quantification using techniques of qualitative enumeration.
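
A minimal sketch of the coding requirement described above (the variable and its values are hypothetical, not from the article): every event must receive exactly one value per variable, and an event the scheme cannot code reveals that the value set is not exhaustive.

```python
from enum import Enum

class Activity(Enum):            # hypothetical coding scheme for one behavioral variable
    SOLITARY = "solitary"
    PARALLEL = "parallel"
    COOPERATIVE = "cooperative"

def code_event(event: dict, variables: dict) -> dict:
    """Assign one and only one value per variable; reject events the scheme cannot code."""
    coded = {}
    for name, scheme in variables.items():
        value = event.get(name)
        if value not in {member.value for member in scheme}:
            raise ValueError(f"'{value}' is not an admissible value for '{name}':"
                             " the value set is not exhaustive for this event")
        coded[name] = scheme(value)   # mutually exclusive: exactly one member matches
    return coded

print(code_event({"activity": "parallel"}, {"activity": Activity}))
```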

