German and English Bodies: No Evidence for Cross-Linguistic Differences in Preferred Orthographic Grain Size

2017 · Vol 3 (1)
Author(s): Xenia Schmalz, Serje Robidoux, Anne Castles, Max Coltheart, Eva Marinus

Previous studies have found that words and nonwords with many body neighbours (i.e., words sharing the same orthographic body, e.g., cat, brat, at) are read faster than items with fewer body neighbours. This body-N effect has been explored in the context of cross-linguistic differences in reading, where it has been reported that the size of the effect differs as a function of orthographic depth: readers of English, a deep orthography, show stronger facilitation than readers of German, a shallow orthography. Such findings support the psycholinguistic grain size theory, which proposes that readers of English rely on large orthographic units to reduce the ambiguity of print-to-speech correspondences in their orthography. Here we re-examine the evidence for this pattern and find no reliable evidence for such a cross-linguistic difference. Re-analysis of a key study (Ziegler et al., 2001), analysis of data from the English Lexicon Project (Balota et al., 2007), and a large-scale analysis of nine new experiments all support this conclusion. Using Bayesian analysis techniques, we find little evidence of the body-N effect in most tasks and conditions. Where we do find evidence for a body-N effect (lexical decision for nonwords), we find evidence against an interaction with language.
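
As an illustration of the kind of Bayesian model comparison described above, the sketch below uses the BIC approximation to a Bayes factor to weigh a regression model with a body-N predictor against one without it on simulated reaction-time data. The variable names, the simulated data, and the use of statsmodels are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): approximate a Bayes factor for a
# body-N effect on reaction times via the BIC approximation BF01 ~ exp((BIC1 - BIC0) / 2).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "body_n": rng.integers(0, 15, size=n),   # hypothetical body-neighbour counts
    "length": rng.integers(3, 8, size=n),    # word length as a nuisance covariate
})
# Simulated RTs with no true body-N effect (the null is true in this toy data).
df["rt"] = 600 - 10 * df["length"] + rng.normal(0, 50, size=n)

m0 = smf.ols("rt ~ length", data=df).fit()           # model without body-N
m1 = smf.ols("rt ~ length + body_n", data=df).fit()  # model with body-N

bf01 = np.exp((m1.bic - m0.bic) / 2)  # evidence for the null over the body-N effect
print(f"BF01 (null over body-N effect) ~ {bf01:.2f}")
```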

2018 · Vol 39 (12) · pp. 1457-1462
Author(s): Jan A. Roth, Manuel Battegay, Fabrice Juchler, Julia E. Vogt, Andreas F. Widmer

Abstract: To exploit the full potential of big routine data in healthcare and to efficiently communicate and collaborate with information technology specialists and data analysts, healthcare epidemiologists should have some knowledge of large-scale analysis techniques, particularly machine learning. This review focuses on the broad area of machine learning and its first applications in the emerging field of digital healthcare epidemiology.


2005 · Vol 94 (11) · pp. 916-925
Author(s): Marcus Dittrich, Ingvild Birschmann, Christiane Stuhlfelder, Albert Sickmann, Sabine Herterich, ...

Summary: New large-scale analysis techniques such as bioinformatics, mass spectrometry and SAGE data analysis are opening a new framework for understanding platelets. This review discusses important applications and tasks for these tools and sketches the refined picture of the platelet that they are beginning to reveal. Looking at the platelet-specific building blocks of the genome, the (active) transcriptome and the proteome (notably the secretome and phospho-proteome), we summarize current bioinformatic and biochemical approaches and tasks, as well as their limitations. Understanding the surprisingly complex platelet, its compartmentalization, key cascades, and pathways, together with the clinical implications, will remain an exciting and hopefully fruitful challenge for the future.


2009 · Vol 33 (3) · pp. 341-356
Author(s): Janet Padiak

Outmoded terminology, inconsistent usage of terms, and lack of specificity are routinely encountered in death records, making the integration of past causes of death difficult. This article summarizes problems encountered during large-scale analysis of nineteenth-century causes of morbidity and mortality. Tuberculosis is probably the most problematic cause of death routinely encountered: its manifestations differ depending on which part of the body it infects, so it can present quite diverse pathologies, each accorded a separate term. Using this terminology, changes in tuberculosis among soldiers in the British army from 1830 to 1913 are investigated. Morbidity records show a large contribution of scrofula to total tubercular disease from 1830 to 1870. Phthisis, the pulmonary form of tuberculosis, dominates mortality.
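
The harmonization problem described above is, at its core, a mapping from historical terms to a modern category. The fragment below is a minimal sketch of such a lookup; the term list and category labels are chosen for illustration and are not taken from the article's actual coding scheme.

```python
# Minimal sketch (illustrative term list, not the article's coding scheme):
# normalize nineteenth-century cause-of-death terms to a modern category.
HISTORICAL_TO_MODERN = {
    "phthisis": "tuberculosis (pulmonary)",
    "consumption": "tuberculosis (pulmonary)",
    "scrofula": "tuberculosis (lymphatic)",
    "tabes mesenterica": "tuberculosis (abdominal)",
    "lupus vulgaris": "tuberculosis (cutaneous)",
}

def normalize_cause(raw: str) -> str:
    """Map a raw historical cause-of-death string to a modern category."""
    term = raw.strip().lower()
    return HISTORICAL_TO_MODERN.get(term, "unclassified")

if __name__ == "__main__":
    for cause in ["Phthisis", "Scrofula", "dropsy"]:
        print(f"{cause!r} -> {normalize_cause(cause)}")
```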


2018
Author(s): Pavel Pokhilko, Evgeny Epifanovsky, Anna I. Krylov

Using single-precision floating-point representation reduces the size of data and the computation time by a factor of two relative to the double precision conventionally used in electronic-structure programs. For large-scale calculations, such as those encountered in many-body theories, the reduced memory footprint alleviates memory and input/output bottlenecks. The reduced data size can lead to additional gains through improved parallel performance on CPUs and various accelerators. However, using single precision can potentially reduce the accuracy of computed observables. Here we report an implementation of coupled-cluster and equation-of-motion coupled-cluster methods with single and double excitations in single precision. We consider both the standard implementation and one using Cholesky decomposition or resolution-of-the-identity representations of the electron-repulsion integrals. Numerical tests illustrate that when single precision is used in correlated calculations, the loss of accuracy is insignificant, and a pure single-precision implementation can be used for computing energies, analytic gradients, excited states, and molecular properties. In addition to pure single-precision calculations, our implementation allows one to follow a single-precision calculation with clean-up iterations, fully recovering double-precision results while retaining significant savings.
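
The clean-up idea, doing the bulk of the work in single precision and then refining toward double-precision accuracy, is analogous to classical mixed-precision iterative refinement. The sketch below demonstrates that pattern on a plain linear system rather than on coupled-cluster equations, so the problem, sizes, and iteration count are illustrative assumptions only, not the authors' implementation.

```python
# Minimal sketch of mixed-precision iterative refinement (a linear system stands in
# for the coupled-cluster equations; not the authors' implementation).
import numpy as np

rng = np.random.default_rng(1)
n = 500
A64 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned double-precision system
b64 = rng.standard_normal(n)

# Bulk of the work in single precision: half the memory footprint per array.
A32, b32 = A64.astype(np.float32), b64.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)

# "Clean-up" iterations: residuals in double precision, corrections in single precision.
for _ in range(3):
    r64 = b64 - A64 @ x                                # double-precision residual
    dx = np.linalg.solve(A32, r64.astype(np.float32))  # cheap single-precision correction
    x += dx.astype(np.float64)

x_ref = np.linalg.solve(A64, b64)
print("max error after refinement:", np.max(np.abs(x - x_ref)))
```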

