SAYCam: A large, longitudinal audiovisual dataset recorded from the infant’s perspective
We introduce a new resource: the SAYCam corpus. Infants aged 6–32 months wore a head-mounted camera for approximately 2 hours per week over a period of roughly two and a half years. The result is a large, naturalistic, longitudinal dataset of infant- and child-perspective videos. Over 200,000 words of naturalistic speech have already been transcribed, and the dataset is searchable by a number of criteria (e.g., age of participant, location, setting, objects present). This dataset will be of broad use to psychologists, linguists, and computer scientists.