Automation of the Northwestern Narrative Language Analysis System

2020 ◽  
Vol 63 (6) ◽  
pp. 1835-1844 ◽  
Author(s):  
Davida Fromm ◽  
Brian MacWhinney ◽  
Cynthia K. Thompson

Purpose Analysis of spontaneous speech samples is important for determining patterns of language production in people with aphasia. To accomplish this, researchers and clinicians can use either hand coding or computer-automated methods. In a comparison of the two methods using the hand-coding NNLA (Northwestern Narrative Language Analysis) and automatic transcript analysis by CLAN (Computerized Language Analysis), Hsu and Thompson (2018) found good agreement for 32 of 51 linguistic variables. The comparison showed little difference between the two methods for coding most general (i.e., utterance length, rate of speech production), lexical, and morphological measures. However, the NNLA system coded grammatical measures (i.e., sentence and verb argument structure) that CLAN did not. Because of the importance of quantifying these aspects of language, the current study sought to implement a new, single, composite CLAN command for the full set of 51 NNLA codes and to evaluate its reliability for coding aphasic language samples. Method Eighteen manually coded NNLA transcripts from eight people with aphasia and 10 controls were converted into CHAT (Codes for the Human Analysis of Transcripts) files for compatibility with CLAN commands. Rules from the NNLA manual were translated into programmed rules for CLAN computation of lexical, morphological, utterance-level, sentence-level, and verb argument structure measures. Results The new C-NNLA (CLAN command to compute the full set of NNLA measures) program automatically computes 50 of the 51 NNLA measures and generates the results in a summary spreadsheet. The only measure it does not compute is the number of verb particles. Statistical tests revealed no significant difference between C-NNLA results and those generated by manual coding for 44 of the 50 measures. C-NNLA results were not comparable to manual coding for the six verb argument measures. Conclusion Clinicians and researchers can use the automatic C-NNLA to analyze important variables required for quantification of grammatical deficits in aphasia in a way that is fast, replicable, and accessible without extensive linguistic knowledge and training.
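For readers unfamiliar with how such transcript-based measures are computed, the minimal Python sketch below derives two of the simpler NNLA-style measures (mean length of utterance in words and verb token/type counts) from CHAT-style utterance tiers. It is purely illustrative and is not the C-NNLA program itself: the real CLAN command applies the full 50-measure rule set and relies on CLAN's morphological tagging, whereas the VERB_LEXICON here is a hypothetical stand-in.

# Illustrative only: a toy computation of two NNLA-style measures
# (MLU in words, verb tokens/types) from CHAT-style utterance tiers.
# The real C-NNLA command in CLAN uses morphological tagging;
# the VERB_LEXICON below is a hypothetical stand-in.

VERB_LEXICON = {"eat", "eats", "ate", "run", "runs", "ran", "see", "sees", "saw"}

def nnla_summary(utterances):
    """Compute MLU (in words) and verb token/type counts for a list of utterances."""
    word_count, verb_tokens, verb_types = 0, 0, set()
    for utt in utterances:
        words = [w.strip(".!?") for w in utt.lower().split() if w.strip(".!?")]
        word_count += len(words)
        for w in words:
            if w in VERB_LEXICON:
                verb_tokens += 1
                verb_types.add(w)
    mlu = word_count / len(utterances) if utterances else 0.0
    return {"MLU_words": mlu, "verb_tokens": verb_tokens, "verb_types": len(verb_types)}

print(nnla_summary(["the girl eats the apple .", "she ran ."]))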

2018 ◽  
Vol 61 (2) ◽  
pp. 373-385 ◽  
Author(s):  
Chien-Ju Hsu ◽  
Cynthia K. Thompson

Purpose The purpose of this study is to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, and the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals with aphasia, (a) for reliability purposes, to ascertain whether they yield similar results, and (b) to evaluate CLAN for its ability to automatically identify language variables important for detailing agrammatic production patterns. Method The same set of Cinderella narrative samples from 8 participants with a clinical diagnosis of agrammatic aphasia and 10 cognitively healthy control participants was transcribed and coded using NNLA and CLAN. Both coding systems were used to quantify and characterize speech production patterns across several microsyntactic levels: utterance, sentence, lexical, morphological, and verb argument structure. Agreement between the 2 coding systems was computed for variables coded by both. Results Comparison of the 2 systems revealed high agreement for most, but not all, lexical-level and morphological-level variables. However, NNLA elucidated utterance-level, sentence-level, and verb argument structure–level impairments, important for assessment and treatment of agrammatism, which are not automatically coded by CLAN. Conclusions CLAN automatically and reliably codes most lexical and morphological variables but does not automatically quantify variables important for detailing production deficits in agrammatic aphasia, although conventions for manually coding some of these variables in Codes for the Human Analysis of Transcripts are possible. Suggestions for combining automated programs and manual coding to capture these variables, or revising CLAN to automate coding of these variables, are discussed.
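As a concrete illustration of how agreement between two coding systems can be quantified, the short sketch below correlates per-sample values of a single variable as coded manually (NNLA) and automatically (CLAN). The numbers are invented and the analysis is a generic one, not necessarily the agreement statistic used in the study.

# Illustrative agreement check between two coding systems for one variable
# (e.g., noun tokens per sample). Values are made up.
from scipy.stats import pearsonr

nnla_values = [42, 35, 58, 61, 29, 47, 50, 38]   # hypothetical manual NNLA counts
clan_values = [41, 36, 57, 60, 31, 47, 49, 40]   # hypothetical automated CLAN counts

r, p = pearsonr(nnla_values, clan_values)
mean_abs_diff = sum(abs(a - b) for a, b in zip(nnla_values, clan_values)) / len(nnla_values)
print(f"Pearson r = {r:.3f} (p = {p:.4f}); mean absolute difference = {mean_abs_diff:.2f}")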


2019 ◽  
Author(s):  
Steven M. Frankland ◽  
Joshua D. Greene

Natural language is notable amongst representational systems for the rich internal structure of phrase- and sentence-level expressions. Here, we provide evidence from two fMRI studies that a region of the left Middle Temporal Gyrus (MTG) exhibits a surprising representational asymmetry: verbs and patients (to whom was it done?) are bound to form a representation, but verbs and agents (who did it?) are not. Within MTG, BOLD signal to novel combinations of familiar components can be modeled by combining learned verb-patient conjunctive representations with more general agent representations, but not by the converse (verb-agent + patient). This asymmetry is not predicted by an abstract propositional representation of the event (e.g., chased(dog, cat)), nor by a theory which derives conjunctions from the experienced statistical co-occurrences between verbs and nouns. However, this asymmetry is predicted by various linguistic accounts of the internal structure of event descriptions (e.g., Williams, 1981; Marantz, 1984; Grimshaw, 1990; Kratzer, 1996). These results provide evidence for the time-varying instantiation of re-usable representations of structure in MTG, consistent with the principle of compositionality, as well as with accounts of verb-argument structure.
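The claimed asymmetry is, at bottom, a comparison between two encoding models: one whose features are verb-patient conjunctions plus agent identity, and one whose features are verb-agent conjunctions plus patient identity. The Python sketch below illustrates that general logic on synthetic data with ordinary least squares; it is a conceptual toy, not the authors' fMRI analysis, and all variable names and data are invented.

# Conceptual toy of the encoding-model comparison: does a
# (verb-patient conjunction + agent) feature set fit the signal better
# than the converse (verb-agent conjunction + patient)? All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_verbs, n_nouns = 200, 4, 4

verbs = rng.integers(0, n_verbs, n_trials)
agents = rng.integers(0, n_nouns, n_trials)
patients = rng.integers(0, n_nouns, n_trials)

def one_hot(idx, k):
    m = np.zeros((len(idx), k))
    m[np.arange(len(idx)), idx] = 1
    return m

# Feature sets for the two competing models
vp_conj = one_hot(verbs * n_nouns + patients, n_verbs * n_nouns)   # verb-patient conjunctions
va_conj = one_hot(verbs * n_nouns + agents, n_verbs * n_nouns)     # verb-agent conjunctions
X_model_a = np.hstack([vp_conj, one_hot(agents, n_nouns)])          # verb-patient + agent
X_model_b = np.hstack([va_conj, one_hot(patients, n_nouns)])        # verb-agent + patient

# Simulate a signal that genuinely encodes verb-patient conjunctions plus agents
true_w = rng.normal(size=X_model_a.shape[1])
y = X_model_a @ true_w + rng.normal(scale=0.5, size=n_trials)

for name, X in [("verb-patient + agent", X_model_a), ("verb-agent + patient", X_model_b)]:
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - np.sum((y - X @ w) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{name}: R^2 = {r2:.3f}")

In practice such comparisons use cross-validated prediction accuracy rather than in-sample R^2, but the structure of the comparison is the same.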


2015 ◽  
Vol 62 (1) ◽  
Author(s):  
Hillary K. Sang

Background: The spontaneous and narrative language of Kiswahili agrammatic aphasic and non-brain-damaged speakers was analysed. The bilingual participants were also tested in English to enable comparisons of verb production in the two languages. The aim of this study was to characterise bilingual Kiswahili-English spontaneous agrammatic output. This was done by describing Kiswahili-English bilingual output data with a specific focus on the production of verbs. The description involves comparison of verb and argument production in Kiswahili and English. Methods and procedures: The participants recruited for this study were drawn from two groups (six non-fluent aphasic/agrammatic speakers and six non-brain-damaged speakers). From each participant, a sample of spontaneous output was tape-recorded in English and Kiswahili based on the description and narration of the 'Flood rescue picture' and the 'Cookie theft picture'. The data elicited were compared for each subject and between the participants, and relevant verb parameters were analysed. The variables studied included mean length of utterance (MLU), inflectional errors, verb tokens and types, copulas and auxiliaries. Further, all verbs produced were classified according to their argument structure. Results: The results from the English data supported previous findings on agrammatic output. The agrammatic participants produced utterances with shorter MLU and simpler sentence structure. However, the Kiswahili data surprisingly showed reversed results, with agrammatic speakers producing longer utterances than non-brain-damaged (NBD) controls. The results also revealed selective impairment in some agrammatic speakers who made inflectional errors. The verb argument structure showed contrasting results, with agrammatic speakers preferring transitive verbs whilst the NBD speakers used more intransitive verbs. Conclusions: The study attempts for the first time to characterise English-Kiswahili bilingual spontaneous and narrative output. A quantitative analysis of verb and argument production is conducted. The results of the English data are consistent with those in the literature: agrammatic speakers produce utterances with shorter MLU and simpler sentence structure. However, the Kiswahili data reveal a surprisingly reversed pattern, most notably with respect to MLU, with agrammatic speakers producing longer utterances than NBD controls. Argument structure analysis revealed that the agrammatic speakers used more transitive verbs than intransitives.


Open Mind ◽  
2017 ◽  
Vol 2 (1) ◽  
pp. 1-13
Author(s):  
Melissa Kline ◽  
Laura Schulz ◽  
Edward Gibson

How do we decide what to say to ensure our meanings will be understood? The Rational Speech Act model (RSA; Frank & Goodman, 2012) asserts that speakers plan what to say by comparing the informativity of words in a particular context. We present the first example of an RSA model of sentence-level (who-did-what-to-whom) meanings. In these contexts, the set of possible messages must be abstracted from entities in common ground (people and objects) to possible events (Jane eats the apple, Marco peels the banana), with each word contributing unique semantic content. How do speakers accomplish the transformation from context to compositional, informative messages? In a communication game, participants described transitive events (e.g., Jane pets the dog), with only two words, in contexts where two words either were or were not enough to uniquely identify an event. Adults chose utterances matching the predictions of the RSA even when there was no possible fully “successful” utterance. Thus we show that adults’ communicative behavior can be described by a model that accommodates informativity in context, beyond the set of possible entities in common ground. This study provides the first evidence that adults’ language production is affected, at the level of argument structure, by the graded informativity of possible utterances in context, and suggests that full-blown natural speech may result from speakers who model and adapt to the listener’s needs.
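For orientation, the core RSA computation can be stated in a few lines: a literal listener distributes belief uniformly over the events an utterance is true of, and the pragmatic speaker prefers utterances in proportion to how sharply they pick out the intended event. The Python sketch below follows the general Frank and Goodman (2012) formulation on a toy event space; the events, utterances, and the softmax parameter alpha are invented for illustration and are much simpler than the stimuli used in the experiment.

# Minimal RSA speaker over a toy event space. Follows the general
# Frank & Goodman (2012) formulation; events and utterances are made up.
import numpy as np

events = ["Jane-pets-dog", "Jane-pets-cat", "Marco-pets-dog", "Marco-pets-cat"]
utterances = ["Jane", "Marco", "dog", "cat", "Jane dog", "Jane cat", "Marco dog", "Marco cat"]

def true_of(utterance, event):
    """Literal semantics: every word in the utterance must name a participant in the event."""
    parts = set(event.split("-"))
    return all(word in parts for word in utterance.split())

# Literal listener L0(event | utterance): uniform over events the utterance is true of
truth = np.array([[true_of(u, e) for e in events] for u in utterances], dtype=float)
L0 = truth / truth.sum(axis=1, keepdims=True)

# Pragmatic speaker S1(utterance | event) proportional to exp(alpha * log L0(event | utterance))
alpha = 2.0
with np.errstate(divide="ignore"):
    S1 = np.exp(alpha * np.log(L0))
S1 = S1 / S1.sum(axis=0, keepdims=True)

for j, e in enumerate(events):
    best = utterances[int(np.argmax(S1[:, j]))]
    print(f"Best utterance for '{e}': {best}")

With this setup, a two-word utterance wins whenever each single word is ambiguous between events, which mirrors the intuition behind the experimental contexts.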


The research was conducted on the process of making sohun noodles in Klaten, Central Java. The manufacturing process was carried out by five workers at four work stations with 18 activities. The purpose of this study was to observe, evaluate, and analyze worker postures using the Ovako Working Posture Analysis System (OWAS) and the Workplace Ergonomic Risk Assessment (WERA) method. The steps of the study using the OWAS method were: taking a picture of the work posture, identifying the weight of the load, assessing the work posture, and categorizing the risk. The steps of the study using the WERA method were: taking pictures of the work postures; identifying the postures of the neck, shoulders, back, wrists, and legs; identifying weight loads, duration of work, vibration, and contact stress; identifying risk factors; assessing the work postures; and categorizing the risks. The next step was statistical processing, namely a normality test, a comparative test, and a correlation test using the Statistical Package for the Social Sciences (SPSS) Version 21.0 for the shoulders/arms, back, and legs postures and the weight/strength scores. The results of the OWAS method show two very risky activities that need immediate improvement: inserting zinc into the press machine and putting zinc containing sohun noodles into the first drying. The results of the WERA method indicate that all activities fall into the medium action level, so further investigation and changes are needed. The results of the statistical tests using SPSS Version 21.0 are as follows: the comparative test shows a significant difference for the shoulders/arms and back and no significant difference for the legs posture and weight/strength, whereas the correlation test shows a significant correlation between the OWAS and WERA methods for the shoulders/arms, back, and weight/strength.
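The statistical sequence described (normality testing followed by comparative and correlation tests on the OWAS and WERA scores) does not depend on SPSS; the sketch below reproduces an equivalent workflow in Python with scipy on hypothetical per-activity scores for one body region, choosing a paired t-test or a Wilcoxon signed-rank test depending on the normality result.

# Equivalent of the reported SPSS workflow on hypothetical per-activity
# OWAS and WERA scores for one body region (18 activities). Values are made up.
from scipy.stats import shapiro, ttest_rel, wilcoxon, spearmanr

owas = [2, 3, 2, 4, 3, 2, 2, 3, 4, 2, 3, 3, 2, 2, 3, 4, 2, 3]
wera = [3, 4, 3, 5, 3, 3, 2, 4, 5, 3, 4, 3, 3, 2, 4, 5, 3, 4]

# 1. Normality test on the paired differences
diffs = [o - w for o, w in zip(owas, wera)]
stat, p_norm = shapiro(diffs)

# 2. Comparative test: paired t-test if differences look normal, Wilcoxon otherwise
if p_norm > 0.05:
    stat, p_cmp = ttest_rel(owas, wera)
    test = "paired t-test"
else:
    stat, p_cmp = wilcoxon(owas, wera)
    test = "Wilcoxon signed-rank"

# 3. Correlation between the two methods
rho, p_corr = spearmanr(owas, wera)

print(f"normality p = {p_norm:.3f}; {test} p = {p_cmp:.3f}; Spearman rho = {rho:.3f} (p = {p_corr:.3f})")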


2018 ◽  
Vol 27 (3) ◽  
pp. 1066-1072
Author(s):  
Shelley L. Bredin-Oja ◽  
Heather Fielding ◽  
Kandace K. Fleming ◽  
Steven F. Warren

Purpose The purpose of this study was to investigate the reliability of an automated language analysis system, the Language Environment Analysis (LENA), compared with a human transcriber to determine the rate of child vocalizations during recording sessions that were significantly shorter than recommended for the automated device. Method Participants were 6 nonverbal male children between the ages of 28 and 46 months. Two children had autism diagnoses, 2 had Down syndrome, 1 had a chromosomal deletion, and 1 had developmental delay. Participants were recorded by the LENA digital language processor during 14 play-based interactions with a responsive adult. Rate of child vocalizations during each of the 84 recordings was determined by both a human transcriber and the LENA software. Results A statistically significant difference between the 2 methods was observed for 4 of the 6 participants. Effect sizes were moderate to large. Variation in syllable structure did not explain the difference between the 2 methods. Vocalization rates from the 2 methods were highly correlated for 5 of the 6 participants. Conclusions Estimates of vocalization rates from nonverbal children produced by the LENA system differed from human transcription during sessions that were substantially shorter than the recommended recording length. These results confirm the recommendation of the LENA Foundation to record sessions of at least 1 hr.
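The core analysis is a paired comparison of two rate estimates over the same recordings, together with their correlation. The Python sketch below illustrates that analysis for one hypothetical participant's 14 recordings; the rates are invented, and the study's exact statistical procedure may differ.

# Illustrative per-participant comparison of two vocalization-rate estimates
# (automated vs. human transcriber) across 14 recordings. Rates are invented.
from scipy.stats import ttest_rel, pearsonr
from statistics import mean, stdev

lena_rates  = [3.1, 2.4, 4.0, 3.6, 2.9, 3.3, 2.8, 3.9, 3.0, 2.6, 3.4, 3.7, 2.5, 3.2]
human_rates = [3.8, 3.0, 4.6, 4.1, 3.5, 4.0, 3.3, 4.5, 3.7, 3.1, 4.0, 4.4, 3.0, 3.9]

t, p = ttest_rel(lena_rates, human_rates)
r, _ = pearsonr(lena_rates, human_rates)

# Cohen's d for paired data: mean difference divided by SD of the differences
diffs = [a - b for a, b in zip(lena_rates, human_rates)]
d = mean(diffs) / stdev(diffs)

print(f"paired t = {t:.2f}, p = {p:.4f}; r = {r:.2f}; Cohen's d = {d:.2f}")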


Author(s):  
Petra van Alphen ◽  
Susanne Brouwer ◽  
Nina Davids ◽  
Emma Dijkstra ◽  
Paula Fikkert

Purpose This study compares online word recognition and prediction in preschoolers with (a suspicion of) a developmental language disorder (DLD) and typically developing (TD) controls. Furthermore, it investigates correlations between these measures and the link between online and off-line language scores in the DLD group. Method Using the visual world paradigm, Dutch children aged 3;6 (years;months) with (a suspicion of) DLD (n = 51) and TD peers (n = 31) listened to utterances such as “Kijk, een hoed!” (Look, a hat!) in a word recognition task, and sentences such as “Hé, hij leest gewoon een boek” (literal translation: Hey, he reads just a book) in a word prediction task, while watching a target and a distractor picture. Results Both groups demonstrated a significant word recognition effect that looked similar directly after target onset. However, the DLD group looked longer at the target than the TD group and shifted more slowly from the distractor to the target picture. Within the DLD group, word recognition was linked to off-line expressive language scores. For word prediction, the DLD group showed a smaller effect and slower shifts from verb onset compared to the TD group. Interestingly, within the DLD group, prediction behavior varied considerably and was linked to receptive and expressive language scores. Finally, slower shifts in word recognition were related to smaller prediction effects. Conclusions While the groups' word recognition abilities looked similar, and only differed in processing speed and dwell time, the DLD group showed atypical verb-based prediction behavior. This may be due to limitations in their processing capacity and/or their linguistic knowledge, in particular of verb argument structure.
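Visual world analyses of this kind typically reduce the eye-tracking record to the proportion of looks to the target picture in successive time bins after word (or verb) onset. The Python sketch below shows that aggregation step on a handful of invented fixation samples; the bin size and sample format are arbitrary choices for illustration.

# Simplified visual-world aggregation: proportion of samples on the target
# picture per 100 ms bin after target-word onset. Fixation data are invented.
from collections import defaultdict

# Each sample: (time in ms relative to target onset, region looked at)
samples = [(20, "distractor"), (80, "distractor"), (140, "target"), (210, "target"),
           (260, "target"), (330, "distractor"), (380, "target"), (450, "target")]

bins = defaultdict(lambda: [0, 0])   # bin start -> [target looks, total looks]
for t, region in samples:
    b = (t // 100) * 100
    bins[b][1] += 1
    if region == "target":
        bins[b][0] += 1

for b in sorted(bins):
    target, total = bins[b]
    print(f"{b}-{b + 99} ms: proportion of looks to target = {target / total:.2f}")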

