Injecting Descriptive Meta-Information into Pre-Trained Language Models with Hypernetworks

Author(s): Wenying Duan, Xiaoxi He, Zimu Zhou, Hong Rao, Lothar Thiele
2015, Vol 73, pp. 64-80
Author(s): Yangyang Shi, Martha Larson, Joris Pelemans, Catholijn M. Jonker, Patrick Wambacq, ...

2007
Author(s): Jonathan Pfautz, Emilie Roth, Ann Bisantz, Cullen Jackson, Gina Thomas, ...

2019
Author(s): Amanda Goodwin, Yaacov Petscher, Jamie Tock

Various models have highlighted the complexity of language. Building on foundational ideas regarding three key aspects of language, our study contributes to the literature by 1) exploring broader conceptions of morphology, vocabulary, and syntax, 2) operationalizing this theoretical model into a gamified, standardized, computer-adaptive assessment of language for fifth- to eighth-grade students entitled Monster, PI, and 3) uncovering further evidence regarding the relationship between language and standardized reading comprehension via this assessment. Multiple-group item response theory (IRT) analyses across grades show that morphology was best fit by a bifactor model of task-specific factors along with a global factor related to each skill. Vocabulary was best fit by a bifactor model that identifies performance overall and on specific words. Syntax, though, was best fit by a unidimensional model. Next, Monster, PI produced reliable scores, suggesting language can be assessed efficiently and precisely for students via this model. Lastly, performance on Monster, PI explained more than 50% of the variance in standardized reading, suggesting that operationalizing language via Monster, PI can provide a meaningful understanding of the relationship between language and reading comprehension. Specifically, considering just a subset of a construct, like identification of units of meaning, explained significantly less variance in reading comprehension. This highlights the importance of considering these broader constructs. Implications indicate that future work should consider a model of language where component areas are considered broadly and contributions to reading comprehension are explored via general performance on components as well as skill-level performance.
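To make the bifactor structure concrete, the sketch below simulates dichotomous item responses in which each item loads on a global language factor plus one task-specific factor, then regresses a simulated reading score on the resulting total score. This is a minimal illustration of the model family the abstract describes, not the authors' actual analysis pipeline; all sample sizes, loadings, and the use of numpy/scikit-learn here are assumptions for demonstration.

```python
# Minimal sketch (assumed setup, not the study's real data or software):
# bifactor 2PL-style response simulation plus a variance-explained regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_students, n_tasks, items_per_task = 2000, 3, 10  # hypothetical sizes

# Latent abilities: one global factor plus orthogonal task-specific factors.
theta_global = rng.normal(size=n_students)
theta_specific = rng.normal(size=(n_students, n_tasks))

# Hypothetical discriminations (loadings) and difficulties for each item.
a_global = rng.uniform(0.8, 1.5, size=(n_tasks, items_per_task))
a_specific = rng.uniform(0.3, 0.8, size=(n_tasks, items_per_task))
b = rng.normal(size=(n_tasks, items_per_task))

# Bifactor logits: each response reflects global + task-specific ability.
logits = (a_global[None] * theta_global[:, None, None]
          + a_specific[None] * theta_specific[:, :, None]
          - b[None])
responses = rng.uniform(size=logits.shape) < 1 / (1 + np.exp(-logits))

# Total score across all items as a crude observed language measure.
total_score = responses.reshape(n_students, -1).sum(axis=1)

# Simulated reading comprehension driven largely by the global factor.
reading = 0.75 * theta_global + rng.normal(scale=0.6, size=n_students)

# R^2 from a simple regression, analogous to "variance explained".
X = total_score[:, None]
r2 = LinearRegression().fit(X, reading).score(X, reading)
print(f"Variance in reading explained by total language score: {r2:.2f}")
```

Because the global factor drives both the item responses and the simulated reading score, the regression recovers a substantial R², mirroring the paper's point that a broad composite explains more reading-comprehension variance than any single sub-skill would.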


Author(s): Xiaoyu Shen, Youssef Oualil, Clayton Greenberg, Mittul Singh, Dietrich Klakow

Author(s): Vitaly Kuznetsov, Hank Liao, Mehryar Mohri, Michael Riley, Brian Roark

2020
Author(s): Grant P. Strimel, Ariya Rastrow, Gautam Tiwari, Adrien Piérard, Jon Webb
