Deep Reinforcement Exemplar Learning for Annotation Refinement

2021, pp. 487-496
Author(s): Yuexiang Li, Nanjun He, Sixiang Peng, Kai Ma, Yefeng Zheng
2020, Vol 12 (3), pp. 526-563
Author(s): Sirkku Lesonen, Rasmus Steinkrauss, Minna Suni, Marjolijn Verspoor

Abstract: It is assumed from a usage-based perspective that learner language constructions emerge from natural language use in social interaction through exemplar learning. In L1, young learners have been shown to develop their constructions from lexically specific, formulaic expressions into more productive, abstract schemas. A similar developmental path has been shown for L2 development, with some exceptions. The aim of the current study is to explore to what extent this default assumption holds for L2 learning. The development of two constructions was traced in four adults learning L2 Finnish. Free-response data, collected weekly over a period of 9 months, were used to investigate the productivity of the constructions. The results show that, contrary to the traditional assumption, L2 learners do not start off with only lexically specific expressions; rather, both lexically specific and more productive constructions are used from the beginning. Our results therefore suggest that, for educated adult L2 learners, schema formation can happen rather quickly and even without the repetition of a specific lexical sequence.


2015, Vol 20 (2), pp. 441-452
Author(s): Javier García, Habib M. Fardoun, Daniyal M. Alghazzawi, José-Ramón Cano, Salvador García

2020, Vol 117 (20), pp. 11167-11177
Author(s): Elliot Collins, Marlene Behrmann

Irrespective of whether one has substantial perceptual expertise for a class of stimuli, an observer invariably encounters novel exemplars from this class. To understand how novel exemplars are represented, we examined the extent to which previous experience with a category constrains the acquisition and nature of representation of subsequent exemplars from that category. Participants completed a perceptual training paradigm with either novel other-race faces (category of experience) or novel computer-generated objects (YUFOs) that included pairwise similarity ratings at the beginning, middle, and end of training, and a 20-day visual search training task on a subset of category exemplars. Analyses of pairwise similarity ratings revealed multiple dissociations between the representational spaces for those learning faces and those learning YUFOs. First, representational distance changes were more selective for faces than YUFOs; trained faces exhibited greater magnitude in representational distance change relative to untrained faces, whereas this trained–untrained distance change was much smaller for YUFOs. Second, there was a difference in where the representational distance changes were observed; for faces, representations that were closer together before training exhibited a greater distance change relative to those that were farther apart before training. For YUFOs, however, the distance changes occurred more uniformly across representational space. Finally, there was a decrease in dimensionality of the representational space after training on YUFOs, but not after training on faces. Together, these findings demonstrate how previous category experience governs representational patterns of exemplar learning as well as the underlying dimensionality of the representational space.
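To make the representational-space analyses described in this abstract more concrete, the sketch below illustrates one plausible way such quantities could be computed. It is not the authors' analysis pipeline: the 1–7 rating scale, the 95% variance criterion, and the random placeholder data are all assumptions chosen for illustration. It shows how pairwise similarity ratings can be converted into a representational-distance matrix and how the dimensionality of that space can be estimated with a principal-component criterion.

```python
# Minimal sketch (hypothetical, not the published pipeline):
# similarity ratings -> representational distances -> effective dimensionality.
import numpy as np

def distance_matrix(similarity: np.ndarray, max_rating: float = 7.0) -> np.ndarray:
    """Convert an items-by-items similarity-rating matrix into distances."""
    dist = max_rating - similarity          # higher similarity -> smaller distance
    np.fill_diagonal(dist, 0.0)             # an item is at distance 0 from itself
    return (dist + dist.T) / 2.0            # enforce symmetry across rating order

def effective_dimensionality(dist: np.ndarray, variance: float = 0.95) -> int:
    """Number of principal components needed to explain `variance` of the space."""
    centered = dist - dist.mean(axis=0, keepdims=True)
    eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
    explained = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(explained, variance) + 1)

# Example with random placeholder ratings for 20 items, before vs. after training.
rng = np.random.default_rng(0)
pre = rng.uniform(1, 7, size=(20, 20))
post = rng.uniform(1, 7, size=(20, 20))
print(effective_dimensionality(distance_matrix(pre)),
      effective_dimensionality(distance_matrix(post)))
```

With real data, a drop in the second number relative to the first would correspond to the kind of post-training reduction in dimensionality the abstract reports for YUFOs but not for faces.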


2018, Vol 38 (2), pp. 0233001
Author(s): Cui Shuai (崔帅), Zhang Jun (张骏), Gao Jun (高隽)
