arbitrary choice
Recently Published Documents


TOTAL DOCUMENTS: 48 (FIVE YEARS: 14)

H-INDEX: 8 (FIVE YEARS: 1)


2021 ◽  
Author(s):  
Michael Ostrowski

Abstract The inspiration for this model came from the human ear's ability to distinguish the frequencies of sounds and from the diffraction grating. Detection takes place after at most 15 wavelengths (an arbitrary choice). The frequency range used for the tests was 800-3200 Hz, with detection every 5 Hz in the range 800-1600 Hz and every 10 Hz in the range 1600-3200 Hz (an arbitrary choice). The model can explain the residual hearing effect (a missing tone f is heard when the harmonic tones 2f, 3f and 4f are played). The algorithm can be used as an alternative to the FFT. The model uses only memory for the delay line and for the results, plus addition operations, so it should be fast and cheap, and it can work online in real time. The test program was written in Perl.
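As a rough illustration of the kind of detector described above, here is a minimal sketch in Python (not the author's Perl test program): each candidate frequency is assigned a delay line one period long, the delayed signal is added to the original, and the energy of the sum peaks when the delay matches the tone's period, evaluated over at most 15 wavelengths. The sampling rate, window length and test tone are illustrative assumptions.

```python
import numpy as np

def delay_line_detect(x, fs, freqs, n_periods=15):
    """Score each candidate frequency by adding the signal to a copy of itself
    delayed by one period of that frequency; constructive interference makes
    the summed energy peak when the delay matches the true period."""
    scores = []
    for f in freqs:
        lag = int(round(fs / f))                  # delay-line length in samples
        n = min(len(x), int(n_periods * fs / f))  # analyse at most n_periods wavelengths
        seg = x[:n]
        summed = seg[lag:] + seg[:-lag]           # signal + delayed signal (addition only)
        scores.append(np.mean(summed ** 2))       # energy of the comb-filtered output
    return np.array(scores)

# Candidate grid from the abstract: every 5 Hz in 800-1600 Hz, every 10 Hz in 1600-3200 Hz.
fs = 48000
freqs = np.concatenate([np.arange(800, 1600, 5), np.arange(1600, 3201, 10)])
t = np.arange(int(0.05 * fs)) / fs
tone = np.sin(2 * np.pi * 1000.0 * t)             # a 1000 Hz test tone
best = freqs[np.argmax(delay_line_detect(tone, fs, freqs))]
print(best)  # close to 1000 Hz (the delay is quantized to whole samples, so a few Hz off is possible)
```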


Author(s):  
Andres Yi Chang

Social scientists frequently rely on the cardinal comparability of test scores to assess achievement gaps between population subgroups and their evolution over time. This approach has been criticized because of the ordinal nature of test scores and the sensitivity of results to order-preserving transformations that are theoretically plausible. Bond and Lang (2013, Review of Economics and Statistics 95: 1468–1479) document the sensitivity of measured ability to scaling choices and develop a method to assess the robustness of changes in ability over time to scaling choices. In this article, I present the scale_transformation command, which expands the Bond and Lang (2013) method to more general cases and optimizes their algorithm to work with large datasets. The command assesses the robustness of an achievement gap between two subgroups to any arbitrary choice of scale by finding bounds for the original gap estimate. Additionally, it finds both highly plausible and highly implausible scale transformations against which the obtained results can be benchmarked. Finally, it also allows the user to measure how much gap-growth coefficients change when controls are included in the specification.
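scale_transformation itself is a Stata command; the following Python sketch only illustrates why the choice of scale matters. It uses hypothetical scores, a handful of arbitrary order-preserving transformations and a simple standardized gap; it is not the Bond and Lang (2013) bounding algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical test scores for two subgroups (illustrative only, not real data).
group_a = rng.normal(loc=55.0, scale=10.0, size=5000)
group_b = rng.normal(loc=50.0, scale=12.0, size=5000)

def std_gap(a, b):
    """Gap in pooled-standard-deviation units, the usual cardinal comparison."""
    pooled = np.concatenate([a, b])
    return (a.mean() - b.mean()) / pooled.std()

# A few order-preserving (monotone) transformations applied to the same scores.
transforms = {
    "raw":    lambda s: s,
    "cube":   lambda s: s ** 3,
    "exp":    lambda s: np.exp(s / 20.0),
    "arctan": lambda s: np.arctan(s / 20.0),
}
for name, f in transforms.items():
    print(f"{name:>6}: gap = {std_gap(f(group_a), f(group_b)):.3f}")
# Every transformation preserves the ranking of test takers, yet the measured gap
# (and hence any estimate of its growth) moves with the chosen scale.
```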


2021 ◽  
Author(s):  
Dario R. Crisci

This paper studies the explicit calculation of the set of superhedging (and underhedging) portfolios where one asset is used to superhedge another in a discrete-time setting. A general operational framework is proposed, and trajectory models are defined based on a class of investors characterized by how they operate on financial data leading to potential portfolio rebalances. Trajectory market models are specified by a trajectory set and a set of portfolios. Beginning with charts observed in an operationally prescribed manner, our trajectory sets are constructed by moving forward recursively, while our superhedging portfolios are computed through a backward recursion involving a convex hull algorithm. The models proposed here allow for an arbitrary number of stocks and an arbitrary choice of numeraire. Although the price bounds $\underline{V}_0(X_0, X_2, \mathcal{M}) \leq \overline{V}_0(X_0, X_2, \mathcal{M})$ will never yield a market misprice, our models allow an investor to determine the amount of risk associated with an initial investment $v$.
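The backward step in such superhedging calculations can be sketched as follows. This is only a toy two-period, single-stock illustration, not the trajectory-set construction of the paper: each backward step solves the min-max hedging problem as a small linear program (in the paper, a convex hull algorithm plays the analogous role). The price moves, strike and payoff are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def node_value(s, succ):
    """One backward step: find the cheapest portfolio value t and hedge h such that
    t + h * (s' - s) dominates the continuation value v at every successor price s'.
    succ is a list of (s_prime, continuation_value) pairs."""
    c = [1.0, 0.0]                                   # minimize t over variables (t, h)
    A = [[-1.0, -(sp - s)] for sp, _ in succ]        # -t - h*(s'-s) <= -v  for each successor
    b = [-v for _, v in succ]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None), (None, None)])
    return res.x[0], res.x[1]                        # (node value, hedge ratio)

# Toy two-period trajectory set: at each step the price moves to s*u or s*d.
u, d, s0, strike = 1.2, 0.85, 100.0, 100.0
payoff = lambda s: max(s - strike, 0.0)              # superhedge a call-style payoff
v_up, _ = node_value(s0 * u, [(s0 * u * u, payoff(s0 * u * u)), (s0 * u * d, payoff(s0 * u * d))])
v_dn, _ = node_value(s0 * d, [(s0 * d * u, payoff(s0 * d * u)), (s0 * d * d, payoff(s0 * d * d))])
v0, h0 = node_value(s0, [(s0 * u, v_up), (s0 * d, v_dn)])
print(f"upper price bound: {v0:.4f}, initial hedge: {h0:.4f}")
```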


2021 ◽  
Vol 25 (5) ◽  
pp. 2649-2662
Author(s):  
Shusen Wang ◽  
Junhua Li ◽  
Hazen A. J. Russell

Abstract. Streamflow hydrograph analysis has long been used to separate streamflow into baseflow and surface-runoff components, providing critical information for studies in hydrology, climate and water resources. Issues with established methods include the lack of a physical basis and the arbitrary choice of separation parameters, problems in identifying snowmelt runoff, and limitations on watershed size and hydrogeological conditions. In this study, a Gravity Recovery and Climate Experiment (GRACE)-based model was developed to address these weaknesses and improve hydrograph separation. The model is physically based and requires no arbitrary choice of parameters. The new model was compared with six hydrograph separation methods provided with the U.S. Geological Survey Groundwater Toolbox. The results demonstrated improved estimates by the new model, particularly in filtering out the bias of snowmelt runoff in the baseflow estimate. The new model is particularly suitable for applications over large watersheds, which makes it complementary to traditional methods that are limited by watershed size. The model output also includes estimates of watershed hydraulic conductivity and drainable water storage, which are useful parameters for evaluating aquifer properties, calibrating and validating hydrological and climate models, and assessing regional water resources.
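For contrast, the traditional separation methods referred to above typically boil down to a one-parameter recursive digital filter of the kind sketched below; the filter parameter alpha is exactly the sort of arbitrary separation choice the GRACE-based model is designed to avoid. The Lyne-Hollick-type filter form, the alpha value and the synthetic hydrograph are illustrative assumptions, not part of the paper.

```python
import numpy as np

def digital_filter_baseflow(q, alpha=0.925):
    """One forward pass of a Lyne-Hollick-type recursive digital filter.
    q is the streamflow series; alpha is the arbitrary filter parameter
    that traditional separation methods depend on."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])   # constrain quickflow to [0, q]
    return q - quick                               # baseflow = streamflow - quickflow

# Synthetic daily hydrograph: a slow recession plus two storm peaks (illustrative only).
days = np.arange(120)
q = (5.0 + 4.0 * np.exp(-days / 40.0)
     + 20.0 * np.exp(-0.5 * ((days - 30) / 3.0) ** 2)
     + 12.0 * np.exp(-0.5 * ((days - 80) / 4.0) ** 2))
baseflow = digital_filter_baseflow(q)
print(f"baseflow index (baseflow / total flow): {baseflow.sum() / q.sum():.2f}")
```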


2021 ◽  
Vol 5 (3) ◽  
pp. 76
Author(s):  
Ho Sung Kim ◽  
Saijie Huang

S-N curve characterisation and the prediction of remaining fatigue life are studied using polyethylene terephthalate glycol-modified (PETG). A new, simple method for finding a data point at the lowest number of cycles for the Kim and Zhang S-N curve model is proposed, avoiding the arbitrary choice of loading rate for tensile testing. It is demonstrated that an arbitrary choice of loading rate can lead to an erroneous characterisation for the prediction of remaining fatigue life. The previously proposed theoretical method for predicting the remaining fatigue life of composite materials, involving the damage function, was verified at a stress ratio of 0.4 for the first time. Both high-to-low and low-to-high loading sequences were conducted for predicting remaining fatigue lives, and good agreement between predictions and experimental results was found. The fatigue damage, consisting of cracks and whitening, is described.
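As a minimal numerical illustration of remaining-life prediction under two-block loading, the sketch below uses the linear Palmgren-Miner rule as a simple stand-in; it is not the damage-function method or the Kim and Zhang S-N model used in the paper, and the cycle numbers are made up. Unlike the damage-function approach, Miner's rule predicts the same remaining fraction regardless of whether the high or the low stress level is applied first.

```python
def remaining_cycles(n1, N1, N2):
    """Cycles still available at stress level 2 after n1 cycles at level 1,
    under the linear Palmgren-Miner rule: damage fractions sum to 1 at failure."""
    damage_used = n1 / N1
    return (1.0 - damage_used) * N2

# Illustrative numbers only: 30,000 cycles spent at a level with a 100,000-cycle life,
# then the loading is switched to a level with a 250,000-cycle life.
print(remaining_cycles(n1=30_000, N1=100_000, N2=250_000))  # 175000.0
```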


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 5
Author(s):  
Matteo Fraschini ◽  
Simone Maurizio La Cava ◽  
Luca Didaci ◽  
Luigi Barberini

The idea of estimating the statistical interdependence among (interacting) brain regions has motivated numerous researchers to investigate how the resulting connectivity patterns and networks may organize themselves under any conceivable scenario. Even though this idea has developed beyond its initial stages, its practical application is still far from widespread. One contributing cause may be the proliferation of different approaches that aim to capture the underlying statistical interdependence among the (interacting) units. This issue has probably contributed to hindering comparisons among different studies. Not only do all these approaches go under the same name (functional connectivity), but they have often been tested and validated using different methods, making it difficult to understand to what extent they are similar. In this study, we compare a set of different approaches commonly used to estimate functional connectivity on a public EEG dataset representing a possible realistic scenario. As expected, our results show that source-level EEG connectivity estimates and the derived network measures, even though they point in the same direction, may depend substantially on the (often arbitrary) choice of connectivity metric and thresholding approach. In our opinion, the observed variability reflects the ambiguity and concern that should always be discussed when reporting findings based on any connectivity metric.
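A small sketch of the kind of sensitivity reported here: two commonly used connectivity metrics (Pearson correlation and the phase locking value, which may or may not coincide with the exact set of metrics compared in the study) are applied to the same multichannel signal and then binarized with a proportional threshold, and the resulting networks differ. The random data, channel count and threshold densities are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
data = rng.standard_normal((8, 2000))      # 8 hypothetical channels, 2000 samples

# Metric 1: absolute Pearson correlation between channels.
corr = np.abs(np.corrcoef(data))

# Metric 2: phase locking value (PLV) from the analytic signal's instantaneous phase.
phase = np.angle(hilbert(data, axis=1))
plv = np.abs(np.exp(1j * (phase[:, None, :] - phase[None, :, :])).mean(axis=-1))

def proportional_threshold(w, density):
    """Keep the strongest `density` fraction of off-diagonal connections."""
    w = w.copy()
    np.fill_diagonal(w, 0.0)
    cutoff = np.quantile(w[np.triu_indices_from(w, k=1)], 1.0 - density)
    return (w >= cutoff).astype(int)

# The binary networks (and any graph measure computed on them) depend on which
# metric and which threshold density were chosen.
for density in (0.2, 0.4):
    a, b = proportional_threshold(corr, density), proportional_threshold(plv, density)
    print(density, a.sum() // 2, b.sum() // 2, (a != b).sum() // 2)
```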


2020 ◽  
Vol 27 (2) ◽  
pp. 195-212
Author(s):  
Valeria Koroliova ◽  
Iryna Popova

The aim of the article is to characterize the mechanisms of pragmatic distraction in the communication of active participants of modern Ukrainian plays with features of the theatre of the absurd. Structural and contextual mechanisms of the depragmatization of dialogic speech are singled out on the basis of the factual material. In dramatic dialogue, absurdity is interpreted as a purposeful device for conveying the illogicality and chaotic nature of reality and the aimlessness of human existence. The main methods of the study are descriptive, context-interpreting and presuppositional. Study results. One of the mechanisms by which absurdity arises is depragmatization, the purposeful, non-normative use of the pragmatic resources of language. We identify structural and contextual violations within depragmatization. Structural violations are characteristic of absurdist drama, in which characters' cues lack illocutionary and thematic coherence. Another type of structural violation is the conscious violation of the formal structure of linguistic units. Role exchange, in which an active participant takes over someone else's communicative role, is an example of contextual depragmatization. Within contextual violations we also identify a group of cognitive violations based on the non-observance of cause-and-effect and logical connections. Anomalies based on an arbitrary choice of stylistic language means, uncoordinated with the general principles of the stylistic design of the text, are considered a contextual variety of depragmatization. Conclusions. Structural and contextual communicative violations are used by playwrights to heighten the sense of the situational absurdity depicted in a work. Active participants in drama of the absurd communicate without a communicative purpose and without taking situational needs into account, which results in the actualization of the pragmatic potential of the linguistic units used and the falsification of meaningful speech.

