child speech
Recently Published Documents


TOTAL DOCUMENTS: 236 (FIVE YEARS: 74)

H-INDEX: 25 (FIVE YEARS: 2)

2022 · Vol 0 (0)
Author(s): Morgane Jourdain

Abstract Constructions marking information structure in French have been widely documented within the constructionist framework. C’est ‘it is’ clefts have been shown to express the focus of the sentence. Nevertheless, it remains unclear how children acquire clefts and how they develop information structure (IS) categories. The aim of this study is to investigate the acquisition of clefts in French within the usage-based framework, in order to understand (i) whether IS categories emerge gradually, like other linguistic categories, and (ii) how children build IS categories. To this end, I analysed 256 c’est-clefts produced by three children between the ages of 2 and 3. I show that most early clefts are produced with the chunk c’est moi, associated with the concrete function of requesting to perform an action oneself. This chunk then becomes a frame with a slot, extending to other human referents and discourse participants, with the function of requesting that adults perform an action. Another large portion of early clefts appears to belong to a frame with a slot, c’est X, whose function is to identify the agent who carried out an action. These findings suggest that the information structure category of focus emerges gradually.


2021
Author(s): Nathan Chi, Peter Washington, Aaron Kline, Arman Husic, Cathy Hou, ...

BACKGROUND Autism spectrum disorder (ASD) is a neurodevelopmental disorder that results in altered behavior, social development, and communication patterns. In recent years, autism prevalence has tripled, with 1 in 54 children now affected. Given that traditional diagnosis is a lengthy, labor-intensive process requiring trained physicians, significant attention has been given to developing systems that automatically screen for and diagnose autism. OBJECTIVE Prosody abnormalities are among the clearest signs of autism, with affected children displaying speech idiosyncrasies including echolalia, monotonous intonation, atypical pitch, and irregular linguistic stress patterns. In this work, we present a suite of machine learning approaches to detect autism in self-recorded speech audio captured from autistic and neurotypical (NT) children in home environments. METHODS We consider three methods to detect autism in child speech: first, Random Forests trained on extracted audio features, including Mel-frequency cepstral coefficients (MFCCs); second, convolutional neural networks (CNNs) trained on spectrograms; and third, fine-tuned wav2vec 2.0, a state-of-the-art Transformer-based speech recognition model. We train our classifiers on a novel dataset of cellphone-recorded child speech audio curated from Stanford’s Guess What? mobile game, an app designed to crowdsource videos of autistic and neurotypical children in a natural home environment. RESULTS Using five-fold cross-validation, the Random Forest classifier achieves 70% accuracy, the fine-tuned wav2vec 2.0 model achieves 77% accuracy, and the CNN achieves 79% accuracy when classifying children’s audio as either ASD or NT. CONCLUSIONS Our models were able to predict autism status when trained on a varied selection of home audio clips with inconsistent recording quality, which may generalize better to real-world conditions. The results demonstrate that machine learning methods offer promise for detecting autism automatically from speech without specialized equipment.
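For concreteness, the sketch below illustrates the first of the three pipelines described in this abstract: summary MFCC features fed to a Random Forest, scored with five-fold cross-validation. It is a minimal sketch assuming librosa and scikit-learn; the clip paths and labels are hypothetical placeholders, and it is not the authors' implementation or the Guess What? dataset.

```python
# Minimal sketch: MFCC features + Random Forest, evaluated with five-fold cross-validation.
# Paths and labels below are placeholders, not the authors' data or pipeline.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def mfcc_features(path, n_mfcc=13, sr=16000):
    """Load a clip and summarize its MFCCs as a fixed-length vector (mean and std per coefficient)."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical list of (wav_path, label) pairs: 1 = ASD, 0 = NT.
clips = [("clip_001.wav", 1), ("clip_002.wav", 0)]  # ... more clips in practice

X = np.stack([mfcc_features(path) for path, _ in clips])
y = np.array([label for _, label in clips])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # five-fold cross-validation accuracy
print(f"Mean CV accuracy: {scores.mean():.2f}")
```

The CNN and wav2vec 2.0 approaches mentioned in the abstract would replace the fixed-length MFCC summary with spectrogram images or learned speech representations, respectively, while keeping the same cross-validated ASD/NT classification setup.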


2021 · Vol 12
Author(s): Fei Ting Woon, Eshwaaree C. Yogarrajah, Seraphina Fong, Nur Sakinah Mohd Salleh, Shamala Sundaray, ...

With lockdowns and social distancing measures in place, research teams looking to collect naturalistic parent-child speech interactions have had to develop alternatives to in-lab recordings and observational studies with long-stretch recordings. We designed a novel micro-longitudinal study, the Talk Together Study, which allowed us to create a rich corpus of parent-child speech interactions in a fully online environment (N participants = 142, N recordings = 410). In this paper, we discuss the methods we used and the lessons learned while adapting and running the study. These lessons cover nine domains of research design, monitoring, and feedback: Recruitment strategies, Surveys and Questionnaires, Video-call scheduling, Speech elicitation tools, Video-call protocols, Participant remuneration strategies, Project monitoring, Participant retention, and Data Quality. They may be used as a primer for teams planning to conduct remote studies in the future.


2021
Author(s): Adam Hair, Guanlong Zhao, Beena Ahmed, Kirrie J. Ballard, Ricardo Gutierrez-Osuna

2021
Author(s): Lucile Gelin, Thomas Pellegrini, Julien Pinquier, Morgane Daniel

2021
Author(s): Fei Ting Woon, Eshwaaree C Yogarrajah, Seraphina Fong, Nur Sakinah Mohd Salleh, Shamala Sundaray, ...

With lockdowns and social distancing measures in place, research teams looking to collect naturalistic parent-child speech interactions have had to look for alternatives to in-lab recordings and observational studies with long-stretch recordings. We designed a novel micro-longitudinal study, the Talk Together Study, which allowed us to create a rich corpus of parent-child speech interactions in a fully online environment (N participants = 142, N recordings = 414). In this paper, we discuss the novel methods we used and the lessons learned while adapting and running the study. These lessons cover 10 domains of research design, monitoring, and feedback: Recruitment strategies; Surveys and Questionnaires; Video-call scheduling; Speech elicitation tools; Video-call protocols; Participant remuneration strategies; Project monitoring; Participant retention; Parental feedback; and Research team feedback. They may be used as recommendations for teams planning to conduct remote studies in the future.

