Svennilson's publication on pallidotomy for parkinsonism in 1960: a most influential paper in the field

Author(s): Joachim K. Krauss, Filipe Wolff Fernandes

2019
Author(s): Matthew McBee, Rebecca Brand, Wallace E. Dixon

In 2004, Christakis and colleagues published an influential paper claiming that early childhood television exposure causes later attention problems (Christakis, Zimmerman, DiGiuseppe, & McCarty, 2004), a claim that continues to be frequently promoted by the popular media. Using the same NLSY-79 dataset (n = 2,108), we conducted two multiverse analyses to examine whether the finding reported by Christakis et al. was robust to different analytic choices. We evaluated 848 models, including logistic regression as in the original paper, as well as linear regression and two forms of propensity score analysis. Only 166 models (19.6%) yielded a statistically significant relationship between early TV exposure and later attention problems, and most of these employed problematic analytic choices. We conclude that these data do not provide compelling evidence of a harmful effect of TV on attention. All material necessary to reproduce our analysis is available online via GitHub (https://github.com/mcbeem/TVAttention) and as a Docker container (https://hub.docker.com/repository/docker/mmcbee/rstudio_tvattention).
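To make the multiverse approach concrete, a minimal sketch follows (illustrative only: the column names here are hypothetical placeholders, and the actual 848 specifications are defined in the linked repository):

```python
# Minimal multiverse-analysis sketch: every analytic choice becomes one axis
# of a specification grid, each cell is fitted, and the share of significant
# results is reported. Column names below are hypothetical placeholders.
import itertools
import statsmodels.formula.api as smf

outcomes = ["attention_problem"]                 # hypothetical binary outcome
exposures = ["tv_hours_age1", "tv_hours_age3"]   # hypothetical exposure columns
covariate_sets = [
    "",                                  # unadjusted
    " + mom_age + income",               # partial adjustment (hypothetical)
    " + mom_age + income + home_score",  # fuller adjustment (hypothetical)
]

def run_multiverse(df):
    """Fit one logistic regression per specification and count significant
    exposure coefficients at p < .05."""
    results = []
    for y, x, covs in itertools.product(outcomes, exposures, covariate_sets):
        fit = smf.logit(f"{y} ~ {x}{covs}", data=df).fit(disp=0)
        results.append({"outcome": y, "exposure": x, "covs": covs,
                        "p": fit.pvalues[x]})
    n_sig = sum(r["p"] < 0.05 for r in results)
    print(f"{n_sig}/{len(results)} specifications significant at p < .05")
    return results
```

(The authors' own materials, linked above, reproduce the actual analysis.)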


1988
Vol 15 (4)
pp. 313-318
Author(s): Anthony Stevens

During the last twenty years, the most enthusiastic advocates of the use of animal models in the study of human psychiatric dysfunction have been Harlow and Suomi. In an influential paper, “Induced Depression in Monkeys” (1974), they argued that more extensive use of non-human primates “would have great potential utility since many manipulations and measurements presently prohibited in human study by ethical and practical considerations could be readily performed on non-human primate subjects in well-controlled experimental environments.” Harlow & Suomi concluded this paper with the following statement: “The results obtained to date on induced depression in monkeys show that proper and profound depressions can be produced relatively easily by a variety of techniques. These induced depressions either bear a close resemblance to human depression or have such similarity as to suggest that closely correlated human and animal depressive patterns may be achieved with refined techniques. The results to date also provide adequate data for the conduct of meaningful researches on the effects of pharmacological agents which either enhance, inhibit or preclude the experimental production of depression. Further, the existence of firm and fast monkey depression syndromes offers vast opportunities for testing a wide range of therapeutic techniques, either behavioural or biochemical.”


2016
Vol 33 (5)
pp. 1046-1080
Author(s): Donald W.K. Andrews, Patrik Guggenberger

An influential paper by Kleibergen (2005, Econometrica 73, 1103–1123) introduces Lagrange multiplier (LM) and conditional likelihood ratio-like (CLR) tests for nonlinear moment condition models. These procedures aim to have good size performance even when the parameters are unidentified or poorly identified. However, the asymptotic size and similarity (in a uniform sense) of these procedures have not been determined in the literature. This paper does so. It shows that the LM test has correct asymptotic size and is asymptotically similar for a suitably chosen parameter space of null distributions. It shows that the CLR tests also have these properties when the dimension p of the unknown parameter θ equals 1. When p ≥ 2, however, the asymptotic size properties are found to depend on how the conditioning statistic, upon which the CLR tests depend, is weighted. Two weighting methods have been suggested in the literature. The paper shows that the CLR tests are guaranteed to have correct asymptotic size for p ≥ 2 when the weighting is based on an estimator of the variance of the sample moments, i.e., moment-variance weighting, combined with the Robin and Smith (2000, Econometric Theory 16, 151–175) rank statistic. The paper also determines a formula for the asymptotic size of the CLR test when the weighting is based on an estimator of the variance of the sample Jacobian. However, the results of the paper do not guarantee correct asymptotic size for p ≥ 2 with Jacobian-variance weighting combined with the Robin and Smith rank statistic, because two key sample quantities are not necessarily asymptotically independent under some identification scenarios. Analogous results for confidence sets are provided. Even for the special case of a linear instrumental variable regression model with two or more right-hand side endogenous variables, the results of the paper are new to the literature.
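For orientation, the LM statistic at issue can be sketched as follows (notation ours, not taken verbatim from the paper; see Kleibergen 2005 for the exact construction):

```latex
% Schematic of Kleibergen's LM (K) statistic for moment condition models.
% Notation (ours): \hat{g}_n(\theta) is the vector of sample moments,
% \hat{\Omega}_n(\theta) an estimator of its variance, and \hat{D}_n(\theta)
% a Jacobian estimator constructed to be asymptotically independent of
% \hat{g}_n(\theta). P_A = A(A'A)^{-1}A' denotes orthogonal projection onto
% the column space of A.
LM_n(\theta) \;=\; n \, \hat{g}_n(\theta)' \, \hat{\Omega}_n(\theta)^{-1/2} \,
  P_{\hat{\Omega}_n(\theta)^{-1/2} \hat{D}_n(\theta)} \,
  \hat{\Omega}_n(\theta)^{-1/2} \, \hat{g}_n(\theta)
```

Under the null hypothesis this statistic is asymptotically χ²_p whatever the identification strength; the CLR-type tests additionally condition on a rank statistic of the Jacobian estimator, which is where the two weighting choices discussed above enter.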


Author(s): Subrata Dasgupta

When Caxton Foster of the University of Massachusetts published his book Computer Architecture in 1970, this term was only just being recognized, reluctantly, by the computing community. This was despite an influential paper published in 1964 by a group of IBM engineers on the “Architecture of the IBM System/360.” For instance, ACM’s “Curriculum 68” made no mention of the term in its elaborate description of the entire scope of computing as an academic discipline. Rather, in the late 1960s and well into the ’70s, terms such as computer organization, computer structures, logical organization, computer systems organization, or, most blandly, computer design were preferred to describe computers in an abstract sort of way, independent of the physical (hardware) details. Thus a widely referenced paper by Michael Flynn of Stanford University, published in 1974, was titled “Trends and Problems in Computer Organization.” And Maurice Wilkes, even in the third edition of his Time-Sharing Computer Systems (1975), declined to use the term computer architecture. Yet computer architecture, as both an abstract way of looking at, understanding, and designing computers and as a field of computer science, emerged in the first years of the ’70s. The Institute of Electrical and Electronics Engineers (IEEE) founded a Technical Committee on Computer Architecture (TCCA) in 1970 to join the ranks of other specialist IEEE TCs. The Association for Computing Machinery (ACM) followed suit in 1971 by establishing, alongside other special-interest groups, the Special Interest Group on Computer Architecture (SIGARCH). And in 1974, the first of what came to be the annual International Symposium on Computer Architecture (ISCA) was held in Gainesville, Florida. By the end of the decade a series of significant textbooks and articles bearing the term computer architecture(s) had appeared. The reason for naming an aspect of the computer its “architecture,” and for naming an academic and research discipline “computer architecture,” can be traced back to the mid-1940s and the paradigm-shaping unpublished reports by John von Neumann of the Institute for Advanced Study, Princeton, and his collaborators, Arthur Burks and Herman Goldstine.


2019
Vol 6 (4)
pp. 181575
Author(s): Hans IJzerman, Jaap J. A. Denissen

We report a replication and extension of a finding from Studies 1 and 2 of Van Lange et al.'s influential paper (Van Lange et al. 1997 J. Pers. Soc. Psychol. 73, 733–746. (doi:10.1037/0022-3514.73.4.733)), which showed an association between Social Value Orientation (SVO) and attachment security. We report a close replication, but with measures of attachment considered superior to those used by Van Lange et al. owing to subsequent psychometric improvements. Psychometric analyses indeed showed that our attachment measures were reliable and valid, demonstrating theoretically predicted associations with other outcomes. With a sample (N = 879) sufficiently large to detect d = 0.19 (and larger than the original N = 573), we failed to replicate the effect. Based on the available evidence, we interpret this as there being no evidence for a link between attachment security and Social Value Orientation, but further replication research using solid measures and large samples can provide more definitive conclusions about the association between attachment and SVO.
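As a quick, purely illustrative check on the stated sensitivity (the authors' actual power analysis may have differed), a two-sample t-test framing recovers roughly the reported figure:

```python
# Illustrative sensitivity check for the smallest detectable effect (d = 0.19).
# Assumes a two-group comparison at alpha = .05 (two-sided) and 80% power;
# this framing is our assumption, not necessarily the authors' exact design.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.19,  # Cohen's d targeted by the study
    alpha=0.05,        # two-sided significance level
    power=0.80,        # conventional target power
    ratio=1.0,         # equal group sizes
)
print(f"required N: {n_per_group:.0f} per group, {2 * n_per_group:.0f} total")
# -> about 436 per group, ~872 total, consistent with the reported N = 879
```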


Stochastic processes are systems that evolve in time probabilistically; their study is the ‘dynamics’ of probability theory as contrasted with rather more traditional ‘static’ problems. The analysis of stochastic processes has as one of its main origins late 19th-century statistical physics, leading in particular to studies of random walk and Brownian motion (Rayleigh 1880; Einstein 1906) and via them to the very influential paper of Chandrasekhar (1943). Other strands emerge from the work of Erlang (1909) on congestion in telephone traffic and from the investigations of the early mathematical epidemiologists and actuarial scientists. There is by now a massive general theory and a wide range of special processes arising from applications in many fields of study, including those mentioned above. A relatively small part of this work concerns techniques for the analysis of empirical data arising from such systems.
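As a purely illustrative aside (ours, not the authors'), the two canonical processes mentioned above take only a few lines to simulate:

```python
# Minimal simulations of a symmetric random walk and a discretized Brownian
# motion -- illustrative only, to make "evolves in time probabilistically"
# concrete.
import random

def random_walk(n_steps: int) -> list[int]:
    """Symmetric random walk: each step is +1 or -1 with equal probability."""
    position, path = 0, [0]
    for _ in range(n_steps):
        position += random.choice((-1, 1))
        path.append(position)
    return path

def brownian_path(n_steps: int, dt: float = 0.01) -> list[float]:
    """Discretized standard Brownian motion: independent Gaussian increments
    with mean 0 and variance dt."""
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += random.gauss(0.0, dt ** 0.5)
        path.append(w)
    return path

print(random_walk(10))      # e.g. [0, 1, 0, -1, ...]
print(brownian_path(10))    # e.g. [0.0, 0.08, 0.03, ...]
```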


2015
Vol 24 (4)
pp. 391-406
Author(s): Hope R. Ferdowsian, John P. Gluck

Abstract: In 1966, Henry K. Beecher published an article entitled “Ethics and Clinical Research” in the New England Journal of Medicine, which cited examples of ethically problematic human research. His influential paper drew attention to common moral problems such as inadequate attention to informed consent and risks, and insufficient efforts to provide ethical justification. Beecher’s paper provoked significant advancements in human research policies and practices. In this paper, we use an approach modeled after Beecher’s 1966 paper to show that the moral problems with animal research are similar to those Beecher described for human research. We describe cases that illustrate ethical deficiencies in the conduct of animal research, including inattention to the issue of consent or assent, incomplete surveys of the harms caused by specific protocols, inequitable burdens on research subjects in the absence of benefits to them, and insufficient efforts to provide ethical justification. We provide a set of recommendations to begin to address these deficits.


Philosophy
2012
Vol 87 (4)
pp. 583-593
Author(s): Craig Taylor

Abstract: In his influential paper ‘The Conscience of Huckleberry Finn’, Jonathan Bennett suggests that Huck's failure to turn in the runaway slave Jim, as his conscience – a conscience distorted by racism – tells him he ought to, is not merely right but also praiseworthy. James Montmarquet, however, argues against what he sees as Bennett's ‘anti-intellectualism’ in moral psychology: insofar as Huck lacks, and so fails to act on, the moral belief that he should help Jim, his action is not praiseworthy. In this paper I suggest that we should reject Montmarquet's claim; the case of Huck Finn indicates rather that many of our everyday moral responses to others do not, and need not, depend on any particular moral beliefs we hold about them or their situation.


Utilitas
2014
Vol 26 (1)
pp. 105-119
Author(s): Joanna M. Burch-Brown

In an influential paper, James Lenman argues that consequentialism can provide no basis for ethical guidance, because we are irredeemably ignorant of most of the consequences of our actions. If our ignorance of distant consequences is great, he says, we can have little reason to recommend one action over another on consequentialist grounds. In this article, I show that for reasons to do with statistical theory, the cluelessness objection is too pessimistic. We have good reason to believe that certain patterns of action will tend to have better consequences, and we have good reason to recommend acting in accordance with strategies based on those advantageous patterns. I close by saying something about the strategies that this argument should lead us to favour.
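To make the statistical point vivid (our sketch, not Burch-Brown's own example): even when each act's distant consequences are dominated by unpredictable noise, a pattern of action with a small systematic advantage reliably comes out ahead once outcomes aggregate over many acts:

```python
# Illustrative simulation (our construction, with arbitrary parameters):
# each act's outcome is a tiny systematic effect swamped by noise 50x larger,
# yet over many acts the strategy with the positive expected effect usually
# produces the better total outcome.
import random

def total_outcome(n_acts: int, mean_effect: float, noise_sd: float) -> float:
    """Sum of per-act outcomes: small systematic effect plus large noise."""
    return sum(random.gauss(mean_effect, noise_sd) for _ in range(n_acts))

random.seed(0)
trials = 1_000
wins = sum(
    total_outcome(10_000, mean_effect=0.1, noise_sd=5.0)
    > total_outcome(10_000, mean_effect=0.0, noise_sd=5.0)
    for _ in range(trials)
)
print(f"better-in-expectation pattern wins {wins / trials:.0%} of trials")
# -> roughly 9 times in 10 under these (arbitrary) parameters
```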

