seminal paper
Recently Published Documents

TOTAL DOCUMENTS: 227 (FIVE YEARS: 75)
H-INDEX: 22 (FIVE YEARS: 3)

2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yuanyuan Xu ◽  
Jian Li ◽  
Linjie Wang ◽  
Chongguang Li

Purpose: This paper presents the first empirical liquidity measurement of China's agricultural futures markets and studies time-varying liquidity dependence across markets.

Design/methodology/approach: Based on both high- and low-frequency trading data for soybean and corn, this paper evaluates short-term liquidity adjustment in the Chinese agricultural futures market, measured by a liquidity benchmark, and long-term liquidity development, measured by liquidity proxies.

Findings: By constructing comparisons, the authors identify the proxy from the seminal paper of Fong, Holden and Trzcinka (2017) as the best low-frequency liquidity proxy in China's agricultural futures market and capture similar historical liquidity patterns in the soybean and corn markets. The authors further employ Copula-generalized autoregressive conditional heteroskedasticity (GARCH) models to investigate liquidity dependence between the soybean and corn futures markets. Results show that cross-market liquidity dependence tends to be dynamic and asymmetric (in the upper versus lower tails). Liquidity dependence becomes stronger when these markets experience negative shocks than positive shocks, raising concern about contagion of liquidity risk under adverse financial conditions.

Originality/value: The findings provide useful information on the dynamic evolution of liquidity patterns and the cross-market dependence of the fastest-growing agricultural futures in the largest emerging economy.
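
The FHT proxy singled out in the abstract has a simple closed form. Below is a minimal sketch of the measure as defined by Fong, Holden and Trzcinka (2017), assuming daily returns as the input; the authors' exact data filters are not described in the abstract:

```python
import numpy as np
from scipy.stats import norm

def fht_spread(daily_returns):
    """FHT low-frequency spread proxy: 2 * sigma * inv_Phi((1 + z) / 2),
    where z is the fraction of zero-return days and sigma is the
    standard deviation of daily returns."""
    r = np.asarray(daily_returns, dtype=float)
    z = np.mean(r == 0.0)          # proportion of zero-return days
    sigma = np.std(r, ddof=1)      # return volatility
    return 2.0 * sigma * norm.ppf((1.0 + z) / 2.0)

# Illustrative use on synthetic returns with some no-trade (zero-return) days
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 250)
returns[rng.random(250) < 0.2] = 0.0   # ~20% zero-return days
print(f"FHT spread proxy: {fht_spread(returns):.4f}")
```

The intuition behind the proxy: the more often a contract posts a zero return, the wider the spread must be to deter trades, so the zero-return frequency and the return volatility together pin down an implied spread.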


Author(s):  
Rebecca J. Lewis

Thorleif Schjelderup-Ebbe's seminal paper on the ‘pecking’ order of chickens inspired numerous ethologists to research and debate the phenomenon of dominance. The expansion of dominance to the broader concept of power facilitated disentangling aggression, strength, rank and power. Aggression is only one means of coercing other individuals, and can sometimes highlight a lack of power. The fitness advantages of aggression may only outweigh the costs during periods of uncertainty. Effective instruments of power also include incentives and refusals to act. Moreover, the stability of the power relationship might vary with the instruments used if different means of power vary in the number and types of outcomes achieved, as well as the speed of accomplishing those outcomes. In well-established relationships, actions or physiological responses in the subordinate individual may even be the only indicator of a power differential. A focus on strength, aggression and fighting provides an incomplete understanding of the power landscape that individuals actually experience. Multiple methods for constructing hierarchies exist, but greater attention to the implications of the types of data used in these constructions is needed. Many shifts in our understanding of power were foreshadowed in Schjelderup-Ebbe's discussion about deviations from the linear hierarchy in chickens. This article is part of the theme issue ‘The centennial of the pecking order: current state and future prospects for the study of dominance hierarchies’.
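
The abstract notes that multiple methods for constructing hierarchies exist. Purely as an illustration (the paper does not endorse any particular method), here is a minimal Elo-rating sketch, one common choice in animal-behaviour studies, that ranks individuals from a sequence of observed wins and losses; the function name and toy observations are hypothetical:

```python
def elo_hierarchy(interactions, k=16.0, start=1000.0):
    """Build a dominance hierarchy from (winner, loser) records using a
    simple Elo update; higher final scores indicate higher rank."""
    scores = {}
    for winner, loser in interactions:
        rw = scores.setdefault(winner, start)
        rl = scores.setdefault(loser, start)
        expected_w = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
        scores[winner] = rw + k * (1.0 - expected_w)  # winner gains rating
        scores[loser] = rl - k * (1.0 - expected_w)   # loser loses rating
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical pecking-order observations among four hens
obs = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D"), ("A", "D")]
print(elo_hierarchy(obs))  # e.g. ['A', 'B', 'C', 'D']
```

Note that a rating method like this uses only the sequence of agonistic outcomes; as the abstract argues, such data may miss power expressed through incentives, refusals to act, or subordinate responses.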


Author(s):  
Deniz Appelbaum ◽  
Eric Cohen ◽  
Ethan Kinory ◽  
Sean Stein Smith

Satoshi Nakamoto (2008) published a seminal paper on a promising digital currency application and proposed a distributed ledger technology (DLT) to support it. Shortly thereafter, in 2009, bitcoin and the customized DLT that supports it were established. Although the DLT described by Nakamoto (2008), which packages data into blocks that are then cryptographically chained together (i.e., "block chain", or "blockchain"), possesses features that are desirable for some business applications and/or their auditors, more than a dozen years later blockchain has yet to see widescale adoption in business operations. This paper explores functionality, data and process integrity, and regulatory concerns as potential explanations for the lag in mainstream business and accounting adoption. We also contextualize some of the concerns that are likely to have delayed blockchain implementation by providing a framework of questions directed at both researchers and practitioners.
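
To make the "cryptographically chained" structure concrete, here is a minimal illustrative sketch of hash chaining; real blockchains such as Bitcoin's add Merkle trees, timestamps, and proof-of-work on top of this basic idea:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash commits to both its data and its
    predecessor's hash, so the blocks form a tamper-evident chain."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

# Chain three blocks; altering any earlier block changes every later
# hash, which is what makes tampering evident to verifiers.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("tx: Alice pays Bob 5", genesis["hash"])
b2 = make_block("tx: Bob pays Carol 2", b1["hash"])
print(b2["prev"] == b1["hash"])  # True: blocks are linked by hashes
```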


2022 ◽  
Vol 47 (1) ◽  
Author(s):  
Enric Pérez ◽  
Joana Ibáñez

Abstract: In this paper, we deal with the historical origins of Fermi–Dirac statistics, focusing on the contribution by Enrico Fermi of 1926. We argue that this statistics, as opposed to that of Bose–Einstein, has been somewhat overlooked in the usual accounts of the old quantum theory. Our main objective is to offer a critical analysis of Fermi’s seminal paper and its immediate impact. Secondly, we are also interested in assessing the status of the particle concept in the years 1926–1927, especially regarding the germ of quantum indistinguishability. We will see, for example, that the first applications of the Fermi–Dirac statistics to the study of metals or stellar matter had a technical nature, and that their main instigators barely touched upon interpretative matters. Finally, we will discuss the reflections and remarks made in these respects in two famous events in physics of 1927, the Como conference and the fifth Solvay congress.
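
For reference, the mean occupation number that Fermi–Dirac statistics assigns to a single-particle state of energy ε at thermal equilibrium, in its standard modern form (textbook material, not a formula quoted from the paper under discussion):

```latex
\bar{n}(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_{B}T} + 1}
```

where μ is the chemical potential. The +1 in the denominator (versus −1 for Bose–Einstein) caps the occupation at one particle per state, encoding the exclusion principle.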


2022 ◽  
pp. 209-232
Author(s):  
Carlos N. Bouza-Herrera

The authors develop the estimation of the difference of the means of a pair of variables X and Y in the presence of missing observations. A seminal paper in this line is due to Bouza and Prabhu-Ajgaonkar, who treated the case where the sample and the subsamples are selected using simple random sampling. In this chapter, the authors consider the use of ranked set sampling for estimating the difference when dealing with a stratified population. The sampling error is derived. Numerical comparisons with the classic stratified model are developed using simulated and real data.
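
As an illustration of the sampling design named in the abstract, here is a minimal ranked-set-sampling sketch; it assumes perfect ranking (by the measured values themselves) and ignores the stratification and missing-data machinery that are the chapter's actual subject:

```python
import numpy as np

def ranked_set_sample(population, m, cycles, rng):
    """Draw a ranked set sample: in each cycle, form m random sets of m
    units, rank each set, and measure the i-th ranked unit of set i."""
    measured = []
    for _ in range(cycles):
        for i in range(m):
            candidates = rng.choice(population, size=m, replace=False)
            measured.append(np.sort(candidates)[i])  # i-th order statistic
    return np.array(measured)

# Illustrative RSS estimate of a difference of means E[X] - E[Y]
rng = np.random.default_rng(1)
X = rng.normal(10, 2, 10_000)
Y = rng.normal(7, 3, 10_000)
diff_hat = ranked_set_sample(X, m=3, cycles=10, rng=rng).mean() \
         - ranked_set_sample(Y, m=3, cycles=10, rng=rng).mean()
print(f"Estimated difference of means: {diff_hat:.2f}")  # close to 3
```

The appeal of the design is that ranking is often cheap (visual inspection, an auxiliary variable) even when measurement is expensive, and the resulting mean estimator is typically more efficient than one based on simple random sampling of the same size.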


2021 ◽  
Vol 8 (4) ◽  
pp. 1-25
Author(s):  
Laurent Feuilloley ◽  
Pierre Fraigniaud

We continue investigating the line of research that questions the power of randomization in the design of distributed algorithms. In their seminal paper, Naor and Stockmeyer [STOC 1993] established that, in the context of network computing in which all nodes execute the same algorithm in parallel, any construction task that can be solved locally by a randomized Monte-Carlo algorithm can also be solved locally by a deterministic algorithm. This result, however, holds only for distributed tasks whose solutions can be locally checked by a deterministic algorithm. In this article, we extend the result of Naor and Stockmeyer to a wider class of tasks. Specifically, we prove that the same derandomization result holds for every task whose solutions can be locally checked using a 2-sided-error randomized Monte-Carlo algorithm.


2021 ◽  
Vol 23 (1) ◽  
pp. 427
Author(s):  
Ajay Matta ◽  
William Mark Erwin

Numerous publications over the past 22 years, beginning with a seminal paper by Aguiar et al., have demonstrated the ability of notochordal cell-secreted factors to confer anabolic effects upon intervertebral disc (IVD) cells. Since that seminal paper, other scientific publications have demonstrated that notochordal cells secrete soluble factors that can induce anti-inflammatory, pro-anabolic and anti-cell-death effects upon IVD nucleus pulposus (NP) cells in vitro and in vivo, direct human bone marrow-derived mesenchymal stem cells toward an IVD NP-like phenotype, and repel neurite ingrowth. More recently, these factors have been characterized, identified, and used therapeutically to induce repair of injured IVDs in small and large preclinical animal models. Further, notochordal cell-rich IVD NPs maintain a stable, healthy extracellular matrix, whereas notochordal cell-deficient IVDs exhibit a biomechanically and extracellular-matrix-defective phenotype. Collectively, this accumulating body of evidence indicates that the notochordal cell, the cellular originator of the intervertebral disc, holds vital instructional cues to establish, maintain and possibly regenerate the intervertebral disc.


2021 ◽  
Author(s):  
David Kellen

Regenwetter, Robinson, and Wang (in press) argue that research on decision making is plagued by conjunction fallacies or “Linda Effects”. As a case study, they provide a critical analysis of Kahneman and Tversky’s seminal paper on Prospect Theory and its 1992 sequel. This commentary evaluates their criticisms and ultimately finds them to be predicated on a number of misconceptions. As argued below, a reliance on stylized effects at the aggregate level is perfectly legitimate when dismissing a received view and first proposing a new account that organizes said effects in theoretically meaningful ways.
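
For context, the value function from the 1992 sequel (cumulative prospect theory), in the standard textbook form; the median parameter estimates shown come from Tversky and Kahneman (1992), not from the commentary itself:

```latex
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda(-x)^{\beta}, & x < 0,
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25.
```

The loss-aversion coefficient λ > 1 and the concave/convex split around the reference point are exactly the kind of aggregate-level stylized effects whose evidential status is at issue in the exchange.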


2021 ◽  
Vol 7 (4) ◽  
pp. 230-242
Author(s):  
T. Ciano ◽  
P. Fotia ◽  
B. A. Pansera ◽  
M. Ferrara

Abstract: Patent data is a key source of information for innovation economists. Its significant diffusion and success in recent decades owe much to the digitization of archives and to authorities' greater openness regarding the patent-granting procedure. Furthermore, the use of this information has not been limited to simple statistics on patents and their classification but has extended to the analysis of applicants, inventors, citations, and much more. In this seminal paper, we analyze patent data for a selection of Balkan countries chosen from among the most dynamic in the innovation process and in the production of patents: Croatia, Serbia, and Bosnia and Herzegovina. As explained in the paper, this selection was not accidental: the aim was to represent the evolution of these countries, in terms of patent internationalization, as a function of their "link" with the European Union, since not all Western Balkan countries are members. Croatia, an official EU member since 2013, was chosen as the state representative of European influence. Some interesting results were obtained with a novel approach based on social network analysis techniques.
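
The abstract does not specify which network measures were used. Purely as an illustration, here is a minimal co-inventor network built with the networkx library, with degree centrality as one common patent-network statistic; the patent records and inventor names below are hypothetical:

```python
import networkx as nx

# Hypothetical patent records: (patent_id, list of inventors)
patents = [
    ("HR-001", ["Ivana", "Marko"]),
    ("RS-002", ["Marko", "Jelena", "Stefan"]),
    ("BA-003", ["Jelena", "Amir"]),
]

# Build the co-inventor network: inventors are nodes; an edge links
# every pair of inventors who appear on the same patent.
G = nx.Graph()
for pid, inventors in patents:
    for i, a in enumerate(inventors):
        for b in inventors[i + 1:]:
            G.add_edge(a, b, patent=pid)

# Degree centrality highlights the most connected inventors.
for name, c in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {c:.2f}")
```

Cross-border edges in such a network (e.g., a Croatian and a Serbian co-inventor on one patent) are one natural way to operationalize the "patent internationalization" the abstract describes.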


Stats ◽  
2021 ◽  
Vol 4 (4) ◽  
pp. 931-942
Author(s):  
Diane Hindmarsh ◽  
David Steel

Small area estimation (SAE) methods can provide information that conventional direct survey estimation methods cannot. The use of small area estimates based on linear and generalized linear mixed models is still very limited, possibly because of the perceived complexity of estimating the root mean square errors (RMSEs) of the estimates. This paper outlines a study used to determine the conditions under which the estimated RMSEs produced as part of standard statistical output (‘plug-in’ estimates of RMSEs) can be considered appropriate for a practical application of SAE methods where one of the main requirements was to use SAS software. We first show that the estimated RMSEs created using an EBLUP model in SAS and those obtained using a parametric bootstrap are similar to the published estimated RMSEs for the corn data in the seminal paper by Battese, Harter and Fuller. We then compare plug-in estimates of RMSEs from the SAS procedures used to create EBLUP and EBP estimators against estimates of RMSEs obtained from a parametric bootstrap. For this comparison we created estimates of current smoking among males for 153 local government areas (LGAs) using data from the NSW Population Health Survey in Australia. Demographic variables from the survey data were included as covariates, with LGA-level population proportions, obtained mainly from the Australian Census, used for prediction. For the EBLUP, the plug-in estimates of RMSEs can be used provided the sample size for the small area is more than seven. For the EBP, the plug-in estimates of RMSEs are suitable for all in-sample areas; out-of-sample areas need estimated RMSEs based on the parametric bootstrap.
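
The parametric bootstrap the authors benchmark against can be sketched compactly. The toy model below is a zero-mean, area-level (Fay–Herriot-style) model with known variance components; it conveys the simulate/estimate/compare loop but omits the covariates and the per-replicate model re-fitting of a full bootstrap, and it does not reproduce the SAS procedures used in the paper:

```python
import numpy as np

def eblup(y_direct, psi, sigma2_v):
    """EBLUP under a toy zero-mean area-level model: shrink each direct
    estimate toward the synthetic (zero) mean by gamma = s2v/(s2v+psi)."""
    gamma = sigma2_v / (sigma2_v + psi)
    return gamma * y_direct

def bootstrap_rmse(psi, sigma2_v, B=2000, rng=None):
    """Parametric bootstrap: simulate area effects and direct estimates
    from the fitted model, re-apply the estimator, and average squared
    estimation errors across replicates to get per-area RMSEs."""
    rng = rng or np.random.default_rng(0)
    errs = np.empty((B, len(psi)))
    for b in range(B):
        v = rng.normal(0.0, np.sqrt(sigma2_v), len(psi))  # true area effects
        y_star = v + rng.normal(0.0, np.sqrt(psi))        # direct estimates
        errs[b] = eblup(y_star, psi, sigma2_v) - v        # estimation error
    return np.sqrt((errs ** 2).mean(axis=0))              # per-area RMSE

psi = np.array([0.5, 1.0, 2.0])   # known design-based sampling variances
print(bootstrap_rmse(psi, sigma2_v=1.0))
```

Plug-in RMSE estimates come essentially for free from the fitted model, whereas a bootstrap like this multiplies the computation by B; the paper's contribution is mapping out when the cheap plug-in numbers are good enough.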

