Automated analysis of lexical features in Frontotemporal Degeneration

Author(s):  
Sunghye Cho ◽  
Naomi Nevler ◽  
Sharon Ash ◽  
Sanjana Shellikeri ◽  
David J. Irwin ◽  
...  

Abstract We implemented an automated analysis of lexical aspects of semi-structured speech produced by healthy elderly controls (n=37) and three patient groups with frontotemporal degeneration (FTD): behavioral variant FTD (bvFTD, n=74), semantic variant primary progressive aphasia (svPPA, n=42), and nonfluent/agrammatic PPA (naPPA, n=22). Based on previous findings, we hypothesized that the three patient groups and controls would differ in the counts of part-of-speech (POS) categories and several lexical measures. With a natural language processing program, we automatically tagged POS categories of all words produced during a picture description task. We further counted the number of wh-words, and we rated nouns for abstractness, ambiguity, frequency, familiarity, and age of acquisition. We also computed the cross-entropy estimation, which is a measure of word predictability, and lexical diversity for each description. We validated a subset of the POS data that were automatically tagged with the Google Universal POS scheme using gold-standard POS data tagged by a linguist, and we found that the POS categories from our automated methods were more than 90% accurate. For svPPA patients, we found fewer unique nouns than in naPPA and more pronouns and wh-words than in the other groups. We also found high abstractness, ambiguity, frequency, and familiarity for nouns and the lowest cross-entropy estimation among all groups. These measures were associated with cortical thinning in the left temporal lobe. In naPPA patients, we found increased speech errors and partial words compared to controls, and these impairments were associated with cortical thinning in the left middle frontal gyrus. bvFTD patients’ adjective production was decreased compared to controls and was correlated with their apathy scores. Their adjective production was associated with cortical thinning in the dorsolateral frontal and orbitofrontal gyri.
Our results demonstrate distinct language profiles in subgroups of FTD patients and validate our automated method of analyzing FTD patients’ speech.
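Two of the per-description measures named above, lexical diversity and cross-entropy estimation, can be sketched in a few lines. This is a minimal illustration assuming a type-token ratio for diversity and an add-one-smoothed unigram model for predictability; the paper's actual language model is not specified here, and the reference sentence is invented.

```python
import math
from collections import Counter

def lexical_diversity(tokens):
    # Type-token ratio: unique words / total words.
    return len(set(tokens)) / len(tokens)

def cross_entropy(tokens, reference_counts):
    # Average negative log2 probability of each token under a unigram
    # model estimated from a reference corpus (add-one smoothed).
    total = sum(reference_counts.values())
    vocab = len(reference_counts)
    h = 0.0
    for t in tokens:
        p = (reference_counts.get(t, 0) + 1) / (total + vocab)
        h -= math.log2(p)
    return h / len(tokens)

reference = Counter("the boy takes a cookie from the jar".split())
sample = "the boy takes the cookie".split()
print(round(lexical_diversity(sample), 2))  # 0.8
```

A lower cross-entropy value means the description's words were more predictable under the reference model, the pattern the abstract reports for svPPA.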

1967 ◽  
Vol 13 (6) ◽  
pp. 515-520 ◽  
Author(s):  
Genevieve Farese ◽  
Janice L Schmidt ◽  
Milton Mager

Abstract A completely automated analysis is described for the determination of serum calcium with glyoxal bis (2-hydroxyanil) solution (GBHA). The method is simple and precise, and the data obtained are in good agreement with results obtained by the manual GBHA procedure.


Author(s):  
Zahra Mousavi ◽  
Heshaam Faili

Nowadays, wordnets are extensively used as a major resource in natural language processing and information retrieval tasks. Therefore, the accuracy of wordnets has a direct influence on the performance of the involved applications. This paper presents a fully automated method for extending a previously developed Persian wordnet with more comprehensive and accurate verbal entries. First, some Persian verbs are linked to Princeton WordNet (PWN) synsets using a bilingual dictionary. A feature set describing the semantic behavior of compound verbs, which constitute the majority of Persian verbs, is proposed. This feature set is employed in a supervised classification system to select the proper links for inclusion in the wordnet. We also benefit from a pre-existing Persian wordnet, FarsNet, and a similarity-based method to produce a training set. The result is the largest automatically developed Persian wordnet, with more than 27,000 words, 28,000 PWN synsets, and 67,000 word-sense pairs, substantially outperforming the previous Persian wordnet of about 16,000 words, 22,000 PWN synsets, and 38,000 word-sense pairs.
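The similarity-based step for producing a training set can be sketched as follows: candidate (verb, synset) links are scored by cosine similarity between feature vectors, and high-scoring links are kept as positive examples. The verb names, vectors, and threshold below are purely illustrative, not taken from FarsNet or the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_links(candidates, threshold=0.7):
    # Keep candidate (verb, synset) links whose similarity clears the threshold.
    return [(verb, synset)
            for verb, synset, verb_vec, synset_vec in candidates
            if cosine(verb_vec, synset_vec) >= threshold]

candidates = [
    ("khordan", "eat.v.01", [1.0, 0.2, 0.9], [0.9, 0.1, 1.0]),
    ("raftan", "go.v.01", [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]),
]
print(select_links(candidates))  # [('khordan', 'eat.v.01')]
```

In the paper the selected links then serve as training data for the supervised classifier; here only the selection step is shown.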


2011 ◽  
Vol 17 (4) ◽  
pp. 674-681 ◽  
Author(s):  
Sietske A.M. Sikkes ◽  
Dirk L. Knol ◽  
Mark T. van den Berg ◽  
Elly S.M. de Lange-de Klerk ◽  
Philip Scheltens ◽  
...  

Abstract A decline in everyday cognitive functioning is important for diagnosing dementia. Informant questionnaires, such as the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE), are used to measure this. Previous studies reported conflicting results on the IQCODE's ability to discriminate between Alzheimer's disease (AD), mild cognitive impairment (MCI), and cognitively healthy elderly. We aimed to investigate whether specific groups of items are more useful than others in discriminating between these patient groups. Informants of 180 AD patients, 59 MCI patients, and 89 patients with subjective memory complaints (SMC) completed the IQCODE. To investigate the grouping of questionnaire items, we used a two-dimensional graded response model (GRM). The association between IQCODE, age, gender, education, and diagnosis was modeled using structural equation modeling. The GRM with two groups of items fitted better than the unidimensional model. However, the high correlation between the dimensions (r=.90) suggested unidimensionality. The structural model showed that the IQCODE was able to differentiate between all patient groups. The IQCODE can be considered unidimensional and a useful addition to diagnostic screening in a memory clinic setting, as it was able to distinguish between AD, MCI, and SMC and was not influenced by gender or education. (JINS, 2011, 17, 674–681)
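The unidimensionality argument above rests on the two latent dimensions correlating at r=.90. A toy version of that check, computing a Pearson correlation between scores on two item groups, might look like this; the scores and the .85 cut-off below are hypothetical illustrations, not study data or the paper's criterion.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two score lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean item-group scores for five informants.
group_a = [3.1, 3.8, 4.2, 4.9, 3.5]
group_b = [3.0, 3.9, 4.1, 5.0, 3.4]
r = pearson_r(group_a, group_b)
print(r > 0.85)  # a very high correlation suggests one underlying dimension
```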


2020 ◽  
Vol 9 (8) ◽  
pp. 2663
Author(s):  
Seung Joo Kim ◽  
Dong Kyun Lee ◽  
Young Kyoung Jang ◽  
Hyemin Jang ◽  
Si Eun Kim ◽  
...  

White matter hyperintensity (WMH) has been recognised as a surrogate marker of small vessel disease and is associated with cognitive impairment. We investigated the dynamic change in WMH in patients with severe WMH at baseline, and the effects of longitudinal change of WMH volume on cognitive decline and cortical thinning. Eighty-seven patients with subcortical vascular mild cognitive impairment were prospectively recruited from a single referral centre. All of the patients were followed up with annual neuropsychological tests and 3T brain magnetic resonance imaging. The WMH volume was quantified using an automated method and the cortical thickness was measured using surface-based methods. Participants were classified into WMH progression and WMH regression groups based on the delta WMH volume between the baseline and the last follow-up. To investigate the effects of longitudinal change in WMH volume on cognitive decline and cortical thinning, a linear mixed effects model was used. Seventy patients showed WMH progression and 17 showed WMH regression over a three-year period. The WMH progression group showed more rapid cortical thinning in widespread regions compared with the WMH regression group. However, the rate of cognitive decline in language, visuospatial function, memory and executive function, and general cognitive function was not different between the two groups. The results of this study indicated that WMH volume changes are dynamic and WMH progression is associated with more rapid cortical thinning.
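The grouping step described above, classifying patients by the sign of the delta WMH volume between baseline and last follow-up, can be sketched directly. Patient IDs and volumes below are hypothetical, in millilitres.

```python
def classify_wmh(baseline_ml, followup_ml):
    # Sign of the delta WMH volume assigns the group label used in the study.
    delta = followup_ml - baseline_ml
    return "progression" if delta > 0 else "regression"

# Hypothetical (baseline, last follow-up) WMH volumes per patient.
cohort = {"p01": (12.4, 15.1), "p02": (20.3, 18.9), "p03": (8.0, 9.6)}
groups = {pid: classify_wmh(b, f) for pid, (b, f) in cohort.items()}
print(groups)
# {'p01': 'progression', 'p02': 'regression', 'p03': 'progression'}
```

The study then compares the two groups' trajectories with a linear mixed effects model, which is beyond this sketch.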


2021 ◽  
pp. 1-39 ◽  
Author(s):  
Ziqi Zhang ◽  
Winnie Tam ◽  
Andrew Cox

Previous studies of research methods in LIS lack consensus in how to define or classify research methods, and there have been no studies on automated recognition of research methods in the scientific literature of this field. This work begins to fill these gaps by studying how the scope of ‘research methods’ in LIS has evolved, and the challenges in automatically identifying the usage of research methods in LIS literature. 2,599 research articles are collected from three LIS journals. Using a combination of content analysis and text mining methods, a sample of this collection is coded into 29 different concepts of research methods and is then used to test a rule-based automated method for identifying research methods reported in scientific literature. We show that the LIS field is characterised by the use of an increasingly diverse range of methods, many of which originate outside the conventional boundaries of LIS. This implies increasing complexity in research methodology and suggests the need for a new approach towards classifying LIS research methods to capture the complex structure and relationships between different aspects of methods. Our automated method is the first of its kind in LIS, and sets an important reference for future research.
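A rule-based identifier of the kind tested above can be sketched as pattern matching against method concepts. The three concepts and indicator phrases below are hypothetical examples; the paper's 29 concepts and its actual rules are not reproduced here.

```python
import re

# Hypothetical rules: each research-method concept is matched by a small
# set of indicator phrases.
METHOD_RULES = {
    "survey": [r"\bquestionnaires?\b", r"\bsurveys?\b"],
    "content analysis": [r"\bcontent analysis\b", r"\bcoding scheme\b"],
    "interview": [r"\binterviews?\b"],
}

def identify_methods(text):
    # Return every concept whose indicator phrases appear in the text.
    text = text.lower()
    return sorted(m for m, patterns in METHOD_RULES.items()
                  if any(re.search(p, text) for p in patterns))

print(identify_methods("We distributed a survey and coded responses "
                       "using content analysis."))
# ['content analysis', 'survey']
```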


2021 ◽  
Author(s):  
Huseyin Denli ◽  
Hassan A Chughtai ◽  
Brian Hughes ◽  
Robert Gistri ◽  
Peng Xu

Abstract Deep learning has recently been providing step-change capabilities, particularly through transformer models, for natural language processing applications such as question answering, query-based summarization, and language translation in general-purpose contexts. We have developed a geoscience-specific language processing solution using such models to enable geoscientists to perform rapid, fully quantitative and automated analysis of large corpora of data and gain insights. One of the key transformer-based models is BERT (Bidirectional Encoder Representations from Transformers), which is trained with a large amount of general-purpose text (e.g., Common Crawl). Using such a model for geoscience applications faces a number of challenges. One is the sparse presence of geoscience-specific vocabulary in general-purpose text (e.g., everyday language); another is geoscience jargon (domain-specific meanings of words). For example, salt is more likely to be associated with table salt in daily language, but it denotes a subsurface entity within the geosciences. To alleviate these challenges, we retrained a pre-trained BERT model with our 20M internal geoscientific records. We refer to the retrained model as GeoBERT. We fine-tuned the GeoBERT model for a number of tasks, including geoscience question answering and query-based summarization. BERT models are very large; for example, BERT-Large has 340M trained parameters. Geoscience language processing with these models, including GeoBERT, could incur substantial latency if the entire database were processed at every call of the model. To address this challenge, we developed a retriever-reader engine consisting of an embedding-based similarity search as a context-retrieval step, which narrows the context for a given query before the context is processed with GeoBERT. We built a solution integrating the context-retrieval and GeoBERT models.
Benchmarks show that it is effective to help geologists to identify answers and context for given questions. The prototype will also produce a summary to different granularity for a given set of documents. We have also demonstrated that domain-specific GeoBERT outperforms general-purpose BERT for geoscience applications.
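The retriever half of the retriever-reader engine can be sketched as a nearest-neighbour search in a shared embedding space: only the top-k most similar passages are handed to the reader model. The document names and toy vectors below are illustrative assumptions; real systems would use learned dense embeddings of query and passages.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query_vec, doc_vecs, k=2):
    # Rank documents by similarity to the query; pass only the top k
    # to the reader, narrowing its context and cutting latency.
    ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]),
                    reverse=True)
    return ranked[:k]

docs = {
    "well_report_1": [0.9, 0.1, 0.0],
    "seismic_survey": [0.1, 0.9, 0.1],
    "salt_dome_memo": [0.8, 0.2, 0.1],
}
print(retrieve([1.0, 0.0, 0.0], docs, k=2))
# ['well_report_1', 'salt_dome_memo']
```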


1987 ◽  
Vol 33 (6) ◽  
pp. 835-837 ◽  
Author(s):  
H O Goodman ◽  
Z K Shihabi

Abstract We have developed an automated method of analysis for taurine, based on incorporating an ion-exchange chromatography column into the continuous-flow AutoAnalyzer (Technicon). After removal of proteins and peptides by dialysis, taurine is selectively eluted from an ion-exchange column and reacted with o-phthaldialdehyde to yield a fluorescent compound. The advantages of this method are: full automation with no need for sample deproteinization or cleanup; sensitivity, detecting as little as 5 μmol/L; speed (20 samples per hour); and flexibility. It can be used for assaying taurine in urine, plasma, cerebrospinal fluid, and tissue homogenates. This method can be adapted for assays of other metabolites.


2016 ◽  
Vol 52 ◽  
pp. 790-811 ◽  
Author(s):  
Delfin S. Go ◽  
Hans Lofgren ◽  
Fabian Mendez Ramos ◽  
Sherman Robinson

1989 ◽  
Vol 11 (4) ◽  
pp. 245-259 ◽  
Author(s):  
G. A. Mohr ◽  
Zvi Vered ◽  
Benico Barzilai ◽  
Julio E. Perez ◽  
Burton E. Sobel ◽  
...  

An algorithm for quantitative description of cardiac cycle dependent variation of integrated backscatter (cyclic variation) has been developed and is shown to be suitable for analysis of nonsinusoidal data typical of ultrasonic tissue characterization measurements from myocardium in vivo. The algorithm produces estimates of the magnitude of variation and of the time delay relative to the electrocardiographically recorded QRS-complex. To validate the algorithm, 246 integrated backscatter measurements were analyzed both manually and by the automated method. The magnitude and time delay estimates from the two methods correlated closely. With a separate set of data, the algorithm produced reasonable descriptions of the cyclic variation for 89 of 101 integrated backscatter measurements. Only modest computational power is required for effective implementation of this algorithm, facilitating inclusion of online automated analysis capabilities in quantitative ultrasonic tissue characterization systems.
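A naive stand-in for the two estimates the algorithm produces, magnitude of variation and time delay relative to the QRS complex, is a peak-to-nadir excursion and the nadir time with t = 0 at the QRS. The paper's algorithm handles nonsinusoidal data far more robustly; the sample values and sampling interval below are illustrative only.

```python
def cyclic_variation(samples, dt):
    # Magnitude: peak-to-nadir excursion of the backscatter trace.
    magnitude = max(samples) - min(samples)
    # Delay: time of the nadir relative to the QRS complex at t = 0.
    delay = samples.index(min(samples)) * dt
    return magnitude, delay

backscatter = [5.0, 4.0, 2.0, 3.0, 4.5]  # dB, sampled every 0.1 s
print(cyclic_variation(backscatter, 0.1))  # (3.0, 0.2)
```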


2016 ◽  
Author(s):  
Monika Scholz ◽  
Dylan J. Lynch ◽  
Kyung Suk Lee ◽  
Erel Levine ◽  
David Biron

We describe a scalable automated method for measuring the pharyngeal pumping of Caenorhabditis elegans in controlled environments. Our approach enables unbiased measurements for prolonged periods, a high throughput, and measurements in controlled yet dynamically changing feeding environments. The automated analysis compares well with scoring pumping by visual inspection, a common practice in the field. In addition, we observed overall low rates of pharyngeal pumping and long correlation times when food availability was oscillated.
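A minimal sketch of automated pump counting: treat each upward crossing of a threshold in a pumping-activity trace as one pharyngeal pump. The trace and threshold below are invented; the paper scores pumps from video of the pharynx rather than from a one-dimensional signal.

```python
def count_pumps(signal, threshold=0.5):
    # Count upward threshold crossings as pump events.
    pumps, above = 0, False
    for x in signal:
        if x > threshold and not above:
            pumps, above = pumps + 1, True
        elif x <= threshold:
            above = False
    return pumps

trace = [0.1, 0.9, 0.2, 0.8, 0.7, 0.1, 0.95]
print(count_pumps(trace))  # 3
```

Tracking the latched `above` state prevents a single sustained contraction from being counted more than once.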

