A New View of Multi-modal Language Analysis: Audio and Video Features as Text “Styles”

Author(s):  
Zhongkai Sun ◽  
Prathusha K Sarma ◽  
Yingyu Liang ◽  
William Sethares
1982 ◽  
Vol 13 (1) ◽  
pp. 37-41
Author(s):  
Larry J. Mattes

Elicited imitation tasks are frequently used as a diagnostic tool in evaluating children with communication handicaps. This article presents a scoring procedure that can be used to obtain an in-depth descriptive analysis of responses produced on elicited imitation tasks. The Elicited Language Analysis Procedure makes it possible to systematically evaluate responses in terms of both their syntactic and semantic relationships to the stimulus sentences presented by the examiner. Response quality measures are also included in the analysis procedure.


1984 ◽  
Vol 15 (3) ◽  
pp. 154-168 ◽  
Author(s):  
Mary Ann Lively

Developmental Sentence Scoring (DSS) is a useful procedure for quantifying the grammatical structure of children's expressive language. Like most language analysis techniques, however, DSS requires considerable study and practice to use correctly and efficiently. Clinicians learning DSS tend to make many scoring errors at first, and they often display similar confusions and mistakes. This article identifies some of these common "problem" areas and provides scoring examples to assist clinicians in learning the DSS procedure.


2001 ◽  
Vol 10 (2) ◽  
pp. 180-188 ◽  
Author(s):  
Steven H. Long ◽  
Ron W. Channell

Most software for language analysis has relied on an interaction between the metalinguistic skills of a human coder and the calculating ability of the machine to produce reliable results. However, probabilistic parsing algorithms are now capable of highly accurate and completely automatic identification of grammatical word classes. The program Computerized Profiling combines a probabilistic parser with modules customized to produce four clinical grammatical analyses: MLU, LARSP, IPSyn, and DSS. The accuracy of these analyses was assessed on 69 language samples from typically developing, speech-impaired, and language-impaired children, 2 years 6 months to 7 years 10 months. Values obtained with human coding and by the software alone were compared. Results for all four analyses produced automatically were comparable to published data on the manual interrater reliability of these procedures. Clinical decisions based on cutoff scores and productivity data were little affected by the use of automatic rather than human-generated analyses. These findings bode well for future clinical and research use of automatic language analysis software.


2020 ◽  
Vol 51 (2) ◽  
pp. 479-493
Author(s):  
Jenny A. Roberts ◽  
Evelyn P. Altenberg ◽  
Madison Hunter

Purpose: The results of automatic machine scoring of the Index of Productive Syntax (IPSyn) from the Computerized Language ANalysis (CLAN) tools of the Child Language Data Exchange System of TalkBank (MacWhinney, 2000) were compared to manual scoring to determine the accuracy of the machine-scored method.

Method: Twenty transcripts of 10 children from archival data of the Weismer Corpus from the Child Language Data Exchange System, at 30 and 42 months, were examined. Measures of absolute point difference and point-to-point accuracy were compared, as well as points erroneously given and missed. Two new measures for evaluating automatic scoring of the IPSyn were introduced: Machine Item Accuracy (MIA) and Cascade Failure Rate; these measures further analyze points erroneously given and missed. Differences in total scores, subscale scores, and individual structures were also reported.

Results: Mean absolute point difference between machine and hand scoring was 3.65, point-to-point agreement was 72.6%, and MIA was 74.9%. There were large differences among subscales, with the Noun Phrase and Verb Phrase subscales generally providing greater accuracy and agreement than the Question/Negation and Sentence Structures subscales. There were significantly more erroneous than missed items in machine scoring, attributed to mistagging of elements, imprecise search patterns, and other errors. Cascade failure resulted in an average of 4.65 points lost per transcript.

Conclusions: The CLAN program showed relatively inaccurate outcomes in comparison to manual scoring on both traditional and new measures of accuracy. Recommendations for improving the program include accounting for second-exemplar violations and applying cascaded credit, among other suggestions. The authors propose that research on machine-scored syntax routinely report accuracy measures detailing erroneous and missed scores, including MIA, so that researchers and clinicians are aware of the limitations of a machine-scoring program. Supplemental Material: https://doi.org/10.23641/asha.11984364
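The two traditional accuracy measures named in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the study's code: the function names are hypothetical, and the representation of item-level scores as parallel lists is an assumption.

```python
# Illustrative sketch of the two traditional accuracy measures:
# mean absolute point difference (on per-transcript totals) and
# point-to-point agreement (on item-level scores).

def mean_absolute_point_difference(machine_totals, manual_totals):
    """Average absolute difference between machine and hand total scores."""
    pairs = list(zip(machine_totals, manual_totals))
    return sum(abs(m - h) for m, h in pairs) / len(pairs)

def point_to_point_agreement(machine_items, manual_items):
    """Fraction of individual item scores on which the two methods agree."""
    pairs = list(zip(machine_items, manual_items))
    return sum(m == h for m, h in pairs) / len(pairs)

# Toy data: total scores for three transcripts, and item-level
# scores (points awarded per IPSyn item) for one transcript.
machine_totals, manual_totals = [40, 52, 47], [43, 50, 49]
machine_items = [1, 0, 1, 1, 0, 1]
manual_items  = [1, 1, 1, 0, 0, 1]

print(mean_absolute_point_difference(machine_totals, manual_totals))  # 7/3 ≈ 2.33
print(point_to_point_agreement(machine_items, manual_items))          # 4/6 ≈ 0.67
```

MIA, as described in the abstract, further partitions disagreements into points erroneously given versus points missed by the machine; computing it would require keeping those two error types separate rather than pooling them as simple mismatches.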


1995 ◽  
Vol 40 (4) ◽  
pp. 384-385
Author(s):  
Terri Gullickson

1991 ◽  
Vol 36 (10) ◽  
pp. 839-840
Author(s):  
William A. Yost

2012 ◽  
Author(s):  
Frederick David Abraham ◽  
Stanley Krippner ◽  
Ruth Richards
