Timescales of motor memory formation in dual-adaptation

2019 ◽  
Author(s):  
Marion Forano ◽  
David W. Franklin

Abstract
The timescales of adaptation to novel dynamics are well explained by a dual-rate model with slow and fast states. This model can predict interference, savings and spontaneous recovery, but cannot account for adaptation to multiple tasks, as each new task drives unlearning of the previously learned task. Nevertheless, in the presence of appropriate contextual cues, humans are able to adapt simultaneously to opposing dynamics. Consequently, this model was expanded, suggesting that dual-adaptation occurs through a single fast process and multiple slow processes. However, such a model does not predict spontaneous recovery within dual-adaptation. Here we assess the existence of multiple fast processes by examining the presence of spontaneous recovery in two experimental variations of an adaptation-de-adaptation-error-clamp paradigm within dual-task adaptation in humans. In both experiments, evidence for spontaneous recovery towards the initially learned dynamics (A) was found in the error-clamp phase, invalidating the one-fast-two-slow dual-rate model. However, as adaptation is not constrained to only two timescales, we fit twelve multi-rate models to the experimental data. BIC model comparison again supported the existence of two fast processes, but extended the timescales to include a third rate: the ultraslow process. Even within our single-day experiment, we found little evidence for decay of the learned memory over several hundred error-clamp trials. Overall, we show that dual-adaptation can be best explained by a two-fast-triple-rate model over the timescales of adaptation studied here. Longer-term learning may require even slower timescales, explaining why we never forget how to ride a bicycle.

Author Summary
Retaining motor skills is crucial to perform basic daily life tasks. However, we still have limited understanding of the computational structure of these motor memories, an understanding that is critical for designing rehabilitation. Here we demonstrate that learning any task involves adaptation of independent fast, slow and ultraslow processes to build a motor memory. The selection of the appropriate motor memory is gated through a contextual cue. Together this work extends our understanding of the architecture of motor memories by merging disparate computational theories to propose a new model.
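The dual-rate model at the heart of this abstract has a compact mathematical form. Below is a minimal Python sketch of the two-state model of Smith et al. (2006) under an adaptation-de-adaptation-error-clamp schedule; the retention and learning parameters are illustrative assumptions, not the fits reported in the paper, but they reproduce the spontaneous-recovery rebound the authors test for. The paper's preferred two-fast-triple-rate model adds a second fast state, a contextual gate and an ultraslow state to this same scaffold.

```python
import numpy as np

# Minimal dual-rate model sketch (Smith et al., 2006). Parameter values are
# illustrative assumptions, not the fits reported in the paper.
A_f, B_f = 0.92, 0.30   # fast process: retains poorly, learns quickly
A_s, B_s = 0.996, 0.05  # slow process: retains well, learns slowly

def simulate(perturbation, clamp):
    """Simulate net adaptation over a trial schedule.

    perturbation : array of perturbation values (+1 for field A, -1 for B)
    clamp        : boolean array; True marks error-clamp trials (error forced to 0)
    """
    x_f = x_s = 0.0
    out = []
    for p, c in zip(perturbation, clamp):
        x = x_f + x_s                      # net motor output
        e = 0.0 if c else (p - x)          # error-clamp trials null the error
        x_f = A_f * x_f + B_f * e
        x_s = A_s * x_s + B_s * e
        out.append(x)
    return np.array(out)

# Adaptation (A), de-adaptation (B), then error clamp: the fast state decays
# quickly, unmasking the slow state -> spontaneous recovery towards A.
n_adapt, n_deadapt, n_clamp = 300, 40, 200
p = np.r_[np.ones(n_adapt), -np.ones(n_deadapt), np.zeros(n_clamp)]
c = np.r_[np.zeros(n_adapt + n_deadapt, bool), np.ones(n_clamp, bool)]
output = simulate(p, c)
print("end of de-adaptation:", output[n_adapt + n_deadapt - 1].round(3))
print("peak of error-clamp rebound:", output[n_adapt + n_deadapt:].max().round(3))
```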

Author(s):  
M. A. Artyukhova ◽  

Evaluation of reliability indicators is a necessary procedure in the design of a technical system. The article considers two failure rate models for integrated circuits and presents a number of conclusions derived from comparing the models with operating experience.
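As a hedged illustration of the kind of comparison the abstract describes (it does not name the specific models), the Python sketch below contrasts a constant-failure-rate prediction with a temperature-accelerated, Arrhenius-type prediction against hypothetical operating experience; every model and number here is invented for illustration.

```python
import math

# Hypothetical comparison of two IC failure-rate models against field data.
# Both models and all numbers are illustrative assumptions.

def lambda_constant(base_rate):
    """Model 1: constant failure rate (failures per 1e6 hours)."""
    return base_rate

def lambda_arrhenius(base_rate, t_celsius, e_a=0.7, t_ref=25.0):
    """Model 2: base rate scaled by an Arrhenius temperature factor."""
    k = 8.617e-5  # Boltzmann constant, eV/K
    t, tr = t_celsius + 273.15, t_ref + 273.15
    return base_rate * math.exp(e_a / k * (1.0 / tr - 1.0 / t))

# Hypothetical operating experience: device-hours and observed failures.
device_hours = 4.0e6
observed_failures = 9
observed_rate = observed_failures / device_hours * 1e6  # per 1e6 hours

for name, lam in [("constant", lambda_constant(1.0)),
                  ("Arrhenius @ 55 C", lambda_arrhenius(1.0, 55.0))]:
    expected = lam * device_hours / 1e6
    print(f"{name}: predicted {lam:.2f}/Mh, expected {expected:.1f} failures "
          f"vs {observed_failures} observed (field rate {observed_rate:.2f}/Mh)")
```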


1975 ◽  
Vol 26 ◽  
pp. 395-407
Author(s):  
S. Henriksen

The first question to be answered, in seeking coordinate systems for geodynamics, is: what is geodynamics? The answer is, of course, that geodynamics is that part of geophysics which is concerned with movements of the Earth, as opposed to geostatics, which is the physics of the stationary Earth. But as far as we know, there is no stationary Earth – eppur si muove. So geodynamics is actually coextensive with geophysics, and coordinate systems suitable for the one should be suitable for the other. At the present time there are not many coordinate systems, if any, that can be identified with a static Earth. Certainly the only coordinate of aeronomic (atmospheric) interest is the height, and this is usually expressed either as geodynamic height or as pressure. In oceanology, the most important coordinate is depth, and this, like heights in the atmosphere, is expressed as metric depth from mean sea level, as geodynamic depth, or as pressure. Only for the Earth itself do we find "static" systems in use, and even here there is a real question as to whether the systems are dynamic or static. So it would seem that the answer to the question of what kind of coordinate systems we are seeking must be that we are looking for the same systems as are used in geophysics, and these systems are dynamic in nature already – that is, their definition involves time.
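Since the abstract notes that atmospheric height is routinely expressed either as a height or as a pressure, a short sketch of that conversion may be useful; the isothermal-atmosphere assumption and all numbers below are illustrative and not from the paper.

```python
import math

# Illustrative pressure -> height conversion under an isothermal-atmosphere
# assumption: z = (R*T/g) * ln(p0 / p). Not from the paper.
R = 287.05   # specific gas constant for dry air, J/(kg K)
g = 9.80665  # standard gravity, m/s^2

def pressure_to_height(p_hpa, t_kelvin=255.0, p0_hpa=1013.25):
    """Height (m) of the pressure level p_hpa, assuming a mean temperature."""
    return R * t_kelvin / g * math.log(p0_hpa / p_hpa)

for p in (1000.0, 500.0, 250.0):
    print(f"{p:7.1f} hPa  ->  {pressure_to_height(p):8.0f} m")
```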


2019 ◽  
Vol 37 (1) ◽  
pp. 89-110
Author(s):  
Rachel Fensham

The Viennese modern choreographer Gertrud Bodenwieser's black coat leads to an analysis of her choreography in four main phases – the early European career; the rise of Nazism; war's brutality; and postwar attempts at reconciliation. Utilising archival and embodied research, the article focuses on a selection of Bodenwieser costumes that survived her journey from Vienna, or were remade in Australia, and their role in the dramaturgy of works such as Swinging Bells (1926), The Masks of Lucifer (1936, 1944), Cain and Abel (1940) and The One and the Many (1946). In addition to dance history, costume studies provides a distinctive way to engage with the question of what remains of performance, and what survives of the historical conditions and experience of modern dance-drama. Throughout, Hannah Arendt's book The Human Condition (1958) provides a critical guide to the acts of reconstruction undertaken by Bodenwieser as an émigré choreographer in the practice of her craft, and its ‘materializing reification’ of creative thought. As a study in affective memory, information regarding Bodenwieser's personal life becomes interwoven with the author's response to the material evidence of costumes, oral histories and documents located in various Australian archives. By resurrecting the ‘dead letters’ of this choreography, the article therefore considers how dance costumes offer the trace of an artistic resistance to totalitarianism.


Kybernetes ◽  
2019 ◽  
Vol 49 (4) ◽  
pp. 1083-1102
Author(s):  
Georgios N. Aretoulis ◽  
Jason Papathanasiou ◽  
Fani Antoniou

Purpose
This paper aims to rank and identify the most efficient project managers (PMs) based on personality traits, using the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE).

Design/methodology/approach
The proposed methodology relies on the five personality traits, which served as the selection criteria. A questionnaire survey among 82 experienced engineers was used to estimate the required weight of each personality trait. A second, two-part questionnaire survey recorded each PM's profile and assessed the performance of each personality trait per PM. The PMs with the most years of experience were selected to be ranked through Visual PROMETHEE.

Findings
The findings suggest that a competent PM is one who scores low on the "Neuroticism" trait and high, especially, on the "Conscientiousness" trait.

Research limitations/implications
The research applied a psychometric test specifically designed for Greek people. Furthermore, the proposed methodology ranks PMs on personality characteristics alone: neither technical skills nor the type of project are considered.

Practical implications
The findings could contribute to the selection of the PM best placed to maximize the project team's performance.

Social implications
Improved project team communication and collaboration lead to improved project performance. This is a benefit for society, especially in the delivery of public infrastructure projects: many such projects deviate substantially in cost and schedule, an additional burden on the public. Proper project management through efficient PMs would save people's money and time.

Originality/value
The identification of the best PM is based on a combination of multi-criteria decision making and psychometric tests focused on personality traits.
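For readers unfamiliar with PROMETHEE II, a minimal sketch of the net-flow ranking it performs is given below. The weights, trait scores and the "usual" preference function are illustrative assumptions; the paper derives its weights from the survey of 82 engineers and performs the ranking in Visual PROMETHEE.

```python
import numpy as np

# Minimal PROMETHEE II sketch for ranking PMs on the Big Five traits.
traits = ["Openness", "Conscientiousness", "Extraversion",
          "Agreeableness", "Neuroticism"]
maximize = np.array([True, True, True, True, False])  # low Neuroticism is better
weights = np.array([0.15, 0.30, 0.15, 0.15, 0.25])    # hypothetical, sum to 1

# Hypothetical trait scores (rows: PMs, columns: traits), e.g. on a 1-5 scale.
pms = ["PM1", "PM2", "PM3", "PM4"]
scores = np.array([
    [3.5, 4.5, 3.0, 4.0, 2.0],
    [4.0, 3.0, 4.5, 3.5, 3.5],
    [3.0, 4.0, 3.5, 4.5, 2.5],
    [4.5, 3.5, 4.0, 3.0, 4.0],
])

n = len(pms)
signed = np.where(maximize, scores, -scores)  # flip minimization criteria

# Aggregated preference pi(a, b) with the "usual" preference function:
# P_j(a, b) = 1 if a strictly beats b on criterion j, else 0.
pi = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a != b:
            pi[a, b] = weights[signed[a] > signed[b]].sum()

phi_plus = pi.sum(axis=1) / (n - 1)   # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)  # negative outranking flow
phi = phi_plus - phi_minus            # net flow: higher is better

for i in np.argsort(-phi):
    print(f"{pms[i]}: net flow {phi[i]:+.3f}")
```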


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Colin Griesbach ◽  
Benjamin Säfken ◽  
Elisabeth Waldmann

Abstract
Gradient boosting from the field of statistical learning is widely known as a powerful framework for estimation and selection of predictor effects in various regression models, adapting concepts from classification theory. Current boosting approaches also offer methods accounting for random effects and thus enable prediction of mixed models for longitudinal and clustered data. However, these approaches have several flaws: unbalanced effect selection with falsely induced shrinkage and a low convergence rate on the one hand, and biased estimates of the random effects on the other. We therefore propose a new boosting algorithm which explicitly accounts for the random structure by excluding it from the selection procedure, properly correcting the random-effects estimates and, in addition, providing likelihood-based estimation of the random-effects variance structure. The new algorithm offers an organic and unbiased fitting approach, which is shown via simulations and data examples.
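A minimal sketch of the core idea, componentwise L2-boosting in which the random structure is updated outside the selection step, is given below. This is an illustrative re-implementation on simulated random-intercept data, not the authors' algorithm; the simple ridge-type intercept update stands in for their likelihood-based correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated longitudinal data: n_subj subjects, n_obs observations each.
n_subj, n_obs, p = 30, 10, 5
subj = np.repeat(np.arange(n_subj), n_obs)
X = rng.normal(size=(n_subj * n_obs, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])   # sparse fixed effects
b_true = rng.normal(0.0, 1.0, n_subj)               # random intercepts
y = X @ beta_true + b_true[subj] + rng.normal(0.0, 0.5, len(subj))

nu, n_steps = 0.1, 300      # learning rate and number of boosting steps
beta = np.zeros(p)
b = np.zeros(n_subj)

for _ in range(n_steps):
    resid = y - X @ beta - b[subj]

    # Selection step over fixed effects only: pick the covariate whose
    # univariate least-squares fit to the residuals reduces SSE the most.
    fits = (X * resid[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
    sse = ((resid[:, None] - X * fits[None, :]) ** 2).sum(axis=0)
    j = int(np.argmin(sse))
    beta[j] += nu * fits[j]

    # Random-effects step outside the selection: shrunken per-subject means
    # of the updated residuals (a ridge-type stand-in for the paper's
    # likelihood-based correction).
    resid = y - X @ beta - b[subj]
    counts = np.bincount(subj, minlength=n_subj)
    sums = np.bincount(subj, weights=resid, minlength=n_subj)
    b += nu * sums / (counts + 1.0)

print("estimated fixed effects:", beta.round(2))
print("true fixed effects:     ", beta_true)
```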


1992 ◽  
Vol 75 (3_suppl) ◽  
pp. 1124-1126
Author(s):  
John F. Walsh

A statistical test is developed based on the comparison of sums of squared errors associated with two competing models. A model based on cell means is compared to a representation that specifies the means for the treatment conditions. Comparing models is more general than testing the traditional H0 in analysis of variance, wherein all the cell means are assumed to be equal. The test statistic, Proportional Increase in Error, is computed using the SAS statistical system.
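A sketch of the underlying computation is given below, following the standard compact-versus-augmented model comparison; the exact PIE formulation and the SAS implementation in the paper may differ in detail, and the data are hypothetical.

```python
import numpy as np

# Hypothetical data: three treatment conditions, three observations each.
y = np.array([4.1, 5.0, 4.6, 6.2, 6.8, 6.5, 5.1, 5.4, 4.9])
group = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
n, k = len(y), 3

# Compact model: one grand mean for all cells (the traditional H0).
sse_compact = ((y - y.mean()) ** 2).sum()

# Augmented model: a separate mean per treatment condition.
cell_means = np.array([y[group == g].mean() for g in range(k)])
sse_augmented = ((y - cell_means[group]) ** 2).sum()

pie = (sse_compact - sse_augmented) / sse_augmented  # proportional increase in error
df_extra, df_aug = k - 1, n - k
F = (sse_compact - sse_augmented) / df_extra / (sse_augmented / df_aug)

print(f"SSE compact={sse_compact:.3f}, augmented={sse_augmented:.3f}")
print(f"PIE={pie:.3f}, F({df_extra},{df_aug})={F:.2f}")
```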


1969 ◽  
Vol 13 (2) ◽  
pp. 117-126 ◽  
Author(s):  
Derek J. Pike

Robertson (1960) used probability transition matrices to estimate changes in gene frequency when sampling and selection are applied to a finite population. Curnow & Baker (1968) used Kojima's (1961) approximate formulae for the mean and variance of the change in gene frequency from a single cycle of selection applied to a finite population to develop an iterative procedure for studying the effects of repeated cycles of selection and regeneration. To do this they assumed a beta distribution for the unfixed gene frequencies at each generation. These two methods are discussed and a result used in Kojima's paper is proved. A number of sets of calculations are carried out using both methods, and the results are compared to assess the accuracy of Curnow & Baker's method in relation to Robertson's approach. It is found that the one real fault in the Curnow-Baker method is its tendency to fix too high a proportion of the genes, particularly when the initial gene frequency is near to a fixation point. This fault is largely overcome when more individuals are selected. For selection of eight or more individuals, the Curnow-Baker method is very accurate and appreciably faster than the transition matrix method.
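A minimal Python sketch of the transition-matrix approach is given below: a Wright-Fisher population with genic selection, iterated for a fixed number of generations. Population size, selection coefficient and initial frequency are illustrative assumptions, not values from the paper.

```python
import numpy as np
from math import comb

N = 10            # diploid population size -> 2N = 20 gene copies
s = 0.05          # selection coefficient favouring the allele
M = 2 * N

def next_freq(p):
    """Expected allele frequency after genic selection, before sampling."""
    return p * (1 + s) / (1 + s * p)

# Transition matrix T[i, j]: probability of moving from i to j gene copies,
# via binomial sampling of 2N copies at the post-selection frequency.
T = np.zeros((M + 1, M + 1))
for i in range(M + 1):
    p = next_freq(i / M)
    for j in range(M + 1):
        T[i, j] = comb(M, j) * p**j * (1 - p)**(M - j)

# Iterate the copy-number distribution from an initial frequency of 0.2.
dist = np.zeros(M + 1)
dist[int(0.2 * M)] = 1.0
for _ in range(50):
    dist = dist @ T

print("P(lost)  after 50 generations:", dist[0].round(3))
print("P(fixed) after 50 generations:", dist[M].round(3))
```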


PEDIATRICS ◽  
1962 ◽  
Vol 30 (2) ◽  
pp. 287-296
Author(s):  
W. F. Dodge ◽  
C. W. Daeschner ◽  
J. C. Brennan ◽  
H. S. Rosenberg ◽  
L. B. Travis ◽  
...  

Since 1951, when the percutaneous renal biopsy was introduced as an adjunctive method for study of patients with renal disease, reports of some 4,000 kidney biopsies have appeared in the literature. Only about 250 of these, however, have been performed in children. A biopsy specimen containing 5 to 10 glomeruli has been reported to be adequate for interpretation and to be representative of the total renal parenchyma in 84% of cases with diffuse renal disease. Using a biopsy technique similar to that described by Kark, we have obtained an adequate specimen in 92% of 205 kidney biopsies performed in 168 children with diffuse renal diseases. Seven deaths have been previously reported in the literature. The circumstances surrounding the death of these seven patients and of the one death that occurred in our series are described. Perirenal hematoma has had a reported incidence of 0.4%. It has been our experience, as well as that of other investigators, that if blood loss is replaced, the patient has an otherwise uneventful course and the mass subsequently disappears. Gross hematuria has had a reported incidence of 5.2%. Microscopic hematuria, lasting for 6 to 12 hours after biopsy, has been found to be the rule rather than the exception. The complications which have occurred have been associated with bleeding; therefore, a careful history concerning bleeding tendency and a study of the clotting mechanism are essential if the risk of needle renal biopsy is to be minimized. In addition to a bleeding tendency or defect in the clotting mechanism, most investigators agree that the presence of only one kidney or an uncooperative patient are absolute contraindications to renal biopsy. The renal biopsy is at present primarily an additional and most useful investigative tool in the elucidation of the pathogenesis, natural history (by serial studies) and effectiveness of specific therapy upon the various renal diseases. It is of practical clinical importance in the selection of those patients with the nephrotic syndrome in whom glucocorticoid therapy is likely to be beneficial, or the patient with anuria whose renal lesion is probably reversible with time, and as a guide to the effectiveness of therapy in patients with pyelonephritis or lupus nephritis. It is not a technique that can be recommended for general or casual use. A classification of the pathohistologic findings of diffuse glomerulonephritis, patterned after Ellis, is presented and discussed. This classification will be used in the description and discussion of various renal diseases and systemic diseases with associated nephritis in the three subsequent papers.


We present various techniques for the asymptotic expansions of generalized functions. We show that the moment asymptotic expansions hold for a very wide variety of kernels such as generalized functions of rapid decay and rapid oscillations. We do not use Mellin transform techniques as done by previous authors in the field. Instead, we introduce a direct approach that not only solves the one-dimensional problems but also applies to various multidimensional integrals and oscillatory kernels. This approach also helps in the development of various asymptotic series arising in diverse fields of mathematics and physics. We find that the asymptotic expansions of generalized functions depend on the selection of suitable spaces of test functions. Accordingly, we have exercised special care in classifying the spaces and the distributions defined on them. Furthermore, we use the theory of topological tensor products to obtain the expansions of vector-valued distributions. We present several examples to illustrate that many classical results follow in a simple manner. For instance, we derive from our results the asymptotic expansions of certain series considered by Ramanujan.
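The prototype of the results described here is the one-dimensional moment asymptotic expansion, stated below in its standard form for orientation (the abstract itself displays no formulas); for a suitable generalized function f with moments mu_n = <f(x), x^n>:

```latex
% Standard one-dimensional moment asymptotic expansion (illustrative; the
% abstract does not display formulas). For a suitable generalized function f
% with moments \mu_n = \langle f(x), x^n \rangle:
\[
  f(\lambda x) \sim \sum_{n=0}^{\infty}
    \frac{(-1)^{n}\,\mu_{n}\,\delta^{(n)}(x)}{n!\,\lambda^{n+1}}
  \quad \text{as } \lambda \to \infty .
\]
```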


2016 ◽  
Vol 4 (1) ◽  
pp. 67-91 ◽  
Author(s):  
Steffen Dalsgaard

This article refers to carbon valuation as the practice of ascribing value to, and assessing the value of, actions and objects in terms of carbon emissions. Due to the pervasiveness of carbon emissions in the actions and objects of the everyday lives of human beings, the making of carbon offsets and credits offers almost unlimited repertoires of alternatives to be included in contemporary carbon valuation schemes. Consequently, the article unpacks how discussions of carbon valuation are interpreted through different registers of alternatives: as the commensuration and substitution of variants on the one hand, and the confrontational comparison of radical difference on the other. Through a reading of a wide selection of the social science literature on carbon markets and trading, the article argues that the value of carbon emissions itself depends on the construction of alternative, hypothetical scenarios, and that emissions have become both a moral and a virtual measure pitting diverse forms of actualised actions or objects against each other or against corresponding non-actions and non-objects as alternatives.

