influential paper
Recently Published Documents

TOTAL DOCUMENTS: 86 (five years: 30)
H-INDEX: 6 (five years: 1)

2021 · Vol 9 (1)
Author(s): Tamar Johnson, Kexin Gao, Kenny Smith, Hugh Rabagliati, Jennifer Culbertson

Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman & Malouf (2013) argue that there is one dimension on which languages do not differ widely: predictive structure. Predictive structure, the extent to which forms in a paradigm predict one another, is quantified as i-complexity. Ackerman & Malouf (2013) show that although languages differ according to a measure of surface paradigm complexity, called e-complexity, they tend to have low i-complexity. They conclude that morphological paradigms have evolved under a pressure for low i-complexity, such that even paradigms with very high e-complexity are relatively easy to learn so long as they have low i-complexity. While this would potentially explain why languages are able to maintain large paradigms, recent work by Johnson et al. (submitted) suggests that both neural networks and human learners may actually be more sensitive to e-complexity than i-complexity. Here we build on this work, reporting a series of experiments under more realistic learning conditions which confirm that, across a range of paradigms varying in either e- or i-complexity, neural networks (LSTMs) are sensitive to both but show a larger effect of e-complexity (and of other measures associated with the size and diversity of forms). In human learners, we fail to find any effect of i-complexity at all. Further, analysis of a large number of randomly generated paradigms shows that e- and i-complexity are negatively correlated: paradigms with high e-complexity necessarily show low i-complexity. These findings suggest that the pattern Ackerman & Malouf (2013) observe in natural language paradigms may stem from the nature of these measures rather than from learning pressures specially attuned to i-complexity.
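For readers unfamiliar with the measure, the sketch below shows one common way to operationalize i-complexity, following Ackerman & Malouf's definition of average conditional entropy between paradigm cells. The toy paradigm, its exponents, the equiprobable-class assumption, and the e-complexity proxy are all invented for illustration and are not data or code from any of the papers above.

    import itertools
    from collections import Counter
    from math import log2

    # Toy paradigm: rows are inflection classes, columns are paradigm cells,
    # entries are the exponents (suffixes) each class uses. All invented.
    paradigm = {
        "class1": {"NOM.SG": "-a", "GEN.SG": "-y", "NOM.PL": "-y"},
        "class2": {"NOM.SG": "-o", "GEN.SG": "-a", "NOM.PL": "-a"},
        "class3": {"NOM.SG": "-",  "GEN.SG": "-a", "NOM.PL": "-y"},
        "class4": {"NOM.SG": "-",  "GEN.SG": "-i", "NOM.PL": "-i"},
    }
    classes = list(paradigm)
    cells = list(next(iter(paradigm.values())))

    def cond_entropy(known, unknown):
        """H(unknown cell | known cell), with equiprobable inflection classes."""
        joint = Counter((paradigm[c][known], paradigm[c][unknown]) for c in classes)
        marginal = Counter(paradigm[c][known] for c in classes)
        n = len(classes)
        return -sum((cnt / n) * log2(cnt / marginal[k])
                    for (k, _), cnt in joint.items())

    # i-complexity: average conditional entropy over all ordered cell pairs.
    pairs = list(itertools.permutations(cells, 2))
    i_complexity = sum(cond_entropy(a, b) for a, b in pairs) / len(pairs)

    # A crude e-complexity proxy: mean number of distinct exponents per cell.
    e_complexity = sum(len({paradigm[c][cell] for c in classes})
                       for cell in cells) / len(cells)

    print(f"i-complexity = {i_complexity:.3f} bits, e-complexity proxy = {e_complexity:.2f}")

Low i-complexity here means that knowing one form of a lexeme leaves little uncertainty about its other forms, even when the paradigm has many distinct surface exponents.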


Author(s): Jorge Padilla, Salvatore Piccolo, Pekka Sääskilahti

Abstract: In a recent influential paper, Coate et al. (2021) criticize the sequential product-level approach to market definition in merger review. They argue that a simultaneous market-level approach to critical loss is more appropriate than a product-level critical loss analysis because, under certain plausible demand scenarios (nonlinear demand functions), the latter can yield the wrong answer on market definition, i.e., excessively broad or narrow markets. We extend their analysis by showing that a sequential product-level approach actually leads to an excessively narrow market definition when the typical nonlinear demand functions used in merger analysis are employed.
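For context, both the product-level and the market-level approaches build on the standard break-even critical-loss formula, which gives the fraction of sales a hypothetical monopolist can lose before a price increase of X becomes unprofitable at gross margin M: CL = X / (X + M). A minimal sketch follows; the 5% SSNIP and 60% margin are illustrative numbers, not figures from the paper, and whether the test is applied product by product or to the candidate market as a whole is precisely the point in dispute.

    def critical_loss(x, m):
        """Break-even critical loss for a price increase x at gross margin m
        (both expressed as fractions of the pre-increase price): x / (x + m)."""
        return x / (x + m)

    # Illustrative numbers only: a 5% SSNIP at a 60% gross margin.
    # If the actual loss would exceed ~7.7% of sales, the increase is
    # unprofitable and the candidate market should be widened.
    print(f"critical loss = {critical_loss(0.05, 0.60):.1%}")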


Author(s): Jonathan Maletic, Michael L. Collard, Michael Decker

2021 · Vol 46 (3) · pp. 21-23
Author(s): Martin Pinzger, Emanuel Giger, Harald C. Gall

More than two decades ago, researchers started to mine the data stored in software repositories to help software developers make informed decisions when developing and testing software systems. Bug prediction was one of the most promising and popular research directions, using the data stored in software repositories to predict the bug-proneness of, or the number of bugs in, source files. On that topic, and as part of Emanuel's PhD studies, we submitted a paper titled Comparing fine-grained source code changes and code churn for bug prediction [8] to the 8th Working Conference on Mining Software Repositories (MSR), held in 2011 in beautiful Honolulu, Hawaii. Ten years later, it was selected as one of the finalists for the MSR 2021 Most Influential Paper Award. In the following, we provide a retrospective on our work, describing the road to publishing this paper, its impact on the field of bug prediction, and the road ahead.
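To make the comparison concrete, the sketch below contrasts the two feature families the paper evaluated: coarse code churn (lines added and deleted per file) versus fine-grained change counts extracted from the syntax tree (e.g., statement insertions and condition changes). The per-file numbers, the tiny dataset, and the choice of logistic regression are purely illustrative; the paper mined real version histories and used its own models and evaluation setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented per-file numbers for illustration only.
    # Churn model: [lines_added, lines_deleted] per file.
    churn = np.array([[120, 30], [5, 2], [300, 150], [10, 0], [80, 60]])
    # Fine-grained model: [statement_inserts, statement_deletes, condition_changes].
    fine = np.array([[40, 10, 6], [1, 0, 0], [90, 40, 15], [2, 1, 0], [25, 20, 4]])
    buggy = np.array([1, 0, 1, 0, 1])  # did the file have a post-release bug?

    for name, X in [("code churn", churn), ("fine-grained changes", fine)]:
        model = LogisticRegression().fit(X, buggy)
        print(f"{name}: training accuracy = {model.score(X, buggy):.2f}")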


AMS Review · 2021
Author(s): Torik Holmes, Josi Fernandes, Teea Palo

Abstract: Socio-material conceptualisations of markets suggest that they are spatial formations. Yet the everyday practical and spatial dimensions of market making have received little explicit attention. We therefore introduce the concept of spatio-market practices, drawing on key ideas in market studies and spatial theory. We argue that examining spatio-market practices (and thus the spatial dimensions of markets) promises fresh insight into what it takes to realise markets, their uneven distribution, and what and whom markets are (and are not) designed to serve. To demonstrate what the concept calls for, supports, and promises, we take Humphreys' (2010) influential paper as a starting point and draw on other secondary sources to articulate an alternative, spatially-oriented account of the growth and legitimacy of the American casino gambling market. This paper, in turn, contributes a subtle yet incisive shift in thinking, which supports a more explicit means of exploring markets as spatial formations.


2021
Author(s): Cameron Brick, Adrien Alejandro Fillon, Siu Kit Yeung, WANG Meiying, Hongye Lyu, ...

Self-interest is a central driver of attitudes and behaviors, but people also act against their immediate self-interest through prosocial behaviors, voting against their own financial interests, or punishing others at personal cost. How much people believe that self-interest causes attitudes and behaviors matters, because this belief may shape regulation, shared narratives, and institutional structures. An influential paper claimed that people overestimate the power of self-interest on others' attitudes and behavioral intentions (Miller & Ratner, 1998). We present two registered, close, and successful replications (U.S. MTurk, N = 800; U.K. Prolific, N = 799) that compared actual to estimated intentions, with open data and code. Consistent with the original article, participants overestimated the impact of payment on blood donation in Study 1, ds = 0.59 [0.51, 0.66] and 0.57 [0.49, 0.64], and overestimated the importance of smoking status for smoking policy preferences in Study 4, ds = 0.75 [0.59, 0.90] and 0.84 [0.73, 0.96]. These replications included two extensions: 1) communal orientation as a moderator of overestimation, and 2) a more detailed measure of self-interest in Study 4 (ordinal smoking status). Communal orientation did not predict overestimation, and the ordinal smoking measure yielded results similar to those of the main study. Verifying the overestimation error informs behavioral theories across several fields and has practical implications for institutions that require trust and cooperation. All materials, data, and code are available at osf.io/57mdc/.
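For reference, the reported ds are standardized mean differences between estimated and actual intentions. A minimal sketch of the computation using Cohen's d with a pooled standard deviation; the simulated scale values and group sizes are invented and do not reproduce the study's data:

    import numpy as np

    def cohens_d(a, b):
        """Cohen's d for two independent groups, using the pooled SD."""
        na, nb = len(a), len(b)
        pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                          (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
        return (np.mean(a) - np.mean(b)) / pooled

    # Invented 7-point scale responses: actual willingness to donate blood
    # without payment vs. others' willingness as estimated by participants.
    rng = np.random.default_rng(0)
    actual = rng.normal(4.1, 1.2, 400)
    estimated = rng.normal(3.4, 1.2, 400)
    print(f"d = {cohens_d(actual, estimated):.2f}")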


2021 · pp. 1-32
Author(s): Björn Bartling, Ernst Fehr, Yagiz Özdemir

The widespread use of markets leads to unprecedented material well-being in many societies. We study whether market interaction, as a side effect, erodes moral values. In an influential paper, Falk and Szech (2013) provide experimental data that seem to suggest that “market interaction erodes moral values.” Although we replicate their main treatment effect, we show that additional treatments are necessary to corroborate their conclusion. These treatments reveal that playing repeatedly, and not market interaction, causes the erosion of moral values. Our paper thus shows that neither Falk and Szech's data nor our data support the claim that markets erode morals.

