The Implementation of an English Word Learning System Feedback System and Smartphone App

2020 · Vol 35 (3) · pp. 207-214
Author(s): Ye Zhang

2016 · Vol 75 (21) · pp. 13179-13192
Author(s): Bong-Hyun Kim, Ki-Chan Kim, Sang-Young Oh, Sung-Eon Hong

2007 · Vol 8 (1) · pp. 53-81
Author(s): Luís Seabra Lopes, Aneesh Chauhan

This paper addresses word learning for human–robot interaction. The focus is on making a robotic agent aware of its surroundings by having it learn the names of the objects it can find. The human user, acting as instructor, can help the robotic agent ground the words used to refer to those objects. A lifelong learning system based on one-class learning (OCLL) was developed. The system is incremental and evolves as new words are presented, each word acting as a class for the robot, and relies on instructor feedback. A novel experimental evaluation methodology that takes into account the open-ended nature of word learning is proposed and applied. This methodology is based on the realization that a robot's vocabulary will be limited by its discriminatory capacity, which in turn depends on its sensors and perceptual capabilities. The results indicate that the robot's representations are capable of evolving incrementally by correcting class descriptions based on instructor feedback on classification results. In successive experiments, the robot was able to learn between 6 and 12 names of real-world office objects. Although these results are comparable to those obtained by other authors, there is a need to scale up. The limitations of the method are discussed and potential directions for improvement are pointed out.
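The abstract describes an incremental, instructor-guided one-class learning loop. The sketch below illustrates one plausible shape of such a loop; the prototype-plus-radius class model, feature vectors, and object names are assumptions made for illustration, not the authors' actual OCLL implementation.

```python
# Hypothetical sketch of an open-ended, instructor-guided one-class learning loop,
# loosely inspired by the OCLL idea described above. The prototype-plus-radius
# model and the toy features are illustrative assumptions only.
import numpy as np

class OpenEndedLearner:
    def __init__(self):
        self.classes = {}  # word -> {"samples": [...], "prototype": ..., "radius": ...}

    def teach(self, word, features):
        """Instructor names an object; add or update that word's one-class description."""
        c = self.classes.setdefault(word, {"samples": []})
        c["samples"].append(np.asarray(features, dtype=float))
        samples = np.stack(c["samples"])
        c["prototype"] = samples.mean(axis=0)
        # Radius covers the training samples plus a small margin.
        dists = np.linalg.norm(samples - c["prototype"], axis=1)
        c["radius"] = float(dists.max() * 1.1 + 1e-6)

    def classify(self, features):
        """Return the best matching word, or None if no class accepts the object."""
        x = np.asarray(features, dtype=float)
        best_word, best_dist = None, np.inf
        for word, c in self.classes.items():
            d = np.linalg.norm(x - c["prototype"])
            if d <= c["radius"] and d < best_dist:
                best_word, best_dist = word, d
        return best_word

# Interaction loop: predictions are checked by the instructor, and corrections are
# fed back as new training samples, so the vocabulary can grow open-endedly.
learner = OpenEndedLearner()
learner.teach("mug", [0.9, 0.1])
learner.teach("stapler", [0.1, 0.8])
prediction = learner.classify([0.85, 0.15])
if prediction != "mug":          # instructor feedback corrects the class description
    learner.teach("mug", [0.85, 0.15])
```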


2018 · Vol 15 (4) · pp. 314-331
Author(s): Atsushi Shimada, Shin'ichi Konomi, Hiroaki Ogata

Purpose: The purpose of this study is to propose a real-time lecture supporting system. The target of this study is on-site classrooms where teachers give lectures and many students listen to the teachers' explanations, conduct exercises, etc.

Design/methodology/approach: The proposed system uses an e-learning system and an e-book system to collect teaching and learning activities from a teacher and students in real time. The collected data are immediately analyzed to provide feedback to the teacher just before the lecture starts and during the lecture. For example, the teacher can check which pages were well previewed and which were not previewed by students using the preview achievement graph. During the lecture, real-time analytics graphs are shown on the teacher's PC. The teacher can easily grasp students' status and whether or not students are following the teacher's explanation.

Findings: Through the case study, the authors first confirmed the effectiveness of each tool developed in this study. Then, the authors conducted a large-scale experiment using a real-time analytics graph and investigated whether the proposed system could improve teaching and learning in on-site classrooms. The results indicated that teachers could adjust the speed of their lectures based on the real-time feedback system, which also encouraged students to place bookmarks and highlights on keywords and sentences.

Originality/value: Real-time learning analytics enables teachers and students to enhance their teaching and learning during lectures. Teachers should start considering this new strategy to improve their lectures immediately.
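As one illustration of the kind of aggregation behind a "preview achievement graph", the sketch below computes the fraction of students who previewed each page from e-book reading logs. The log format, field names, and the 50% threshold are assumptions for illustration, not the system's actual data model.

```python
# Illustrative sketch: per-page preview achievement from hypothetical e-book logs.
from collections import defaultdict

def preview_achievement(logs, n_students, n_pages):
    """logs: iterable of (student_id, page_number) events recorded before the lecture.
    Returns, for each page, the fraction of students who previewed it."""
    readers = defaultdict(set)
    for student_id, page in logs:
        readers[page].add(student_id)
    return {page: len(readers[page]) / n_students for page in range(1, n_pages + 1)}

logs = [("s1", 1), ("s1", 2), ("s2", 1), ("s3", 1), ("s3", 3)]
rates = preview_achievement(logs, n_students=3, n_pages=3)
well_previewed = [p for p, r in rates.items() if r >= 0.5]   # pages most students saw
not_previewed = [p for p, r in rates.items() if r < 0.5]     # pages to explain more slowly
print(rates, well_previewed, not_previewed)
```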


Author(s): Iske Bakker-Marshall, Atsuko Takashima, Carla B. Fernandez, Gabriele Janzen, James M. McQueen, ...

This study investigated how bilingual experience alters the neural mechanisms supporting novel word learning. We hypothesised that novel words elicit increased semantic activation in the larger bilingual lexicon, potentially stimulating stronger memory integration than in monolinguals. English monolinguals and Spanish–English bilinguals were trained on two sets of written Swahili–English word pairs, one set on each of two consecutive days, and performed a recognition task in the MRI scanner. Lexical integration was measured through visual primed lexical decision. Surprisingly, no group difference emerged in explicit word memory, and priming occurred only in the monolingual group. This difference in lexical integration may indicate an increased need for slow neocortical interleaving of old and new information in the denser bilingual lexicon. The fMRI data were consistent with increased use of cognitive control networks in monolinguals and of articulatory motor processes in bilinguals, providing further evidence for experience-induced neural changes: monolinguals and bilinguals reached largely comparable behavioural performance levels in novel word learning, but did so by recruiting partially overlapping but non-identical neural systems.
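The study measures lexical integration through primed lexical decision. A minimal sketch of how such a priming effect could be quantified from response times follows; the numbers are made up for illustration and the calculation is a generic related-vs-unrelated contrast, not the authors' exact analysis.

```python
# Minimal sketch: a priming effect as the mean RT difference between targets
# preceded by unrelated primes and targets preceded by related (newly learned)
# primes; a positive value means related primes speeded recognition.
def priming_effect_ms(rt_unrelated, rt_related):
    """Mean RT difference (unrelated minus related primes), in milliseconds."""
    return sum(rt_unrelated) / len(rt_unrelated) - sum(rt_related) / len(rt_related)

rt_related = [642, 655, 630, 661]     # target preceded by a trained prime (illustrative)
rt_unrelated = [668, 690, 672, 655]   # target preceded by an unrelated prime (illustrative)
print(f"priming effect: {priming_effect_ms(rt_unrelated, rt_related):.1f} ms")
```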


2019 · Vol 42 (2) · pp. 327-357
Author(s): Bronson Hui

I investigated the trajectory of processing variability, as measured by the coefficient of variation (CV), using an intentional word learning experiment and a reanalysis of published eye-tracking data from an incidental word learning study (Elgort et al., 2018). In the word learning experiment, native English speakers (N = 35) studied Swahili–English word pairs (k = 16) before performing 10 blocks of animacy judgment tasks. Results replicated the initial CV increase reported in Solovyeva and DeKeyser (2018) and, importantly, captured a roughly inverted U-shaped development in CV. In the reanalysis of the eye-tracking data, I computed CVs based on reading times on the target and control words. Results did not reveal a similar inverted U-shaped development over time but suggested more stable processing of the high-frequency control words. Taken together, these results uncovered a fuller trajectory of CV development, differences in processing demands for different aspects of word knowledge, and the potential usefulness of CV in eye-tracking research.
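The coefficient of variation used here is the standard deviation of response times divided by their mean, computed per block so its trajectory across blocks can be inspected. A minimal sketch follows; the block sizes and response times are invented purely for illustration.

```python
# Minimal sketch of per-block CV (standard deviation / mean of response times).
import statistics

def coefficient_of_variation(rts):
    """CV of a list of response times (same unit throughout, e.g. ms)."""
    return statistics.stdev(rts) / statistics.mean(rts)

# Response times for one learner across three successive task blocks (ms, illustrative).
blocks = [
    [812, 950, 1104, 876, 990],   # early block
    [730, 905, 640, 980, 710],    # middle block: more variable processing
    [612, 640, 655, 628, 630],    # late block: faster and more stable
]
cv_trajectory = [coefficient_of_variation(b) for b in blocks]
print([round(cv, 3) for cv in cv_trajectory])
```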


Author(s): Melanie Grudinschi, Kyle Norland, Sang Won Lee, Sol Lim

People with visual impairments may experience difficulties in learning new physical exercises due to a lack of visual feedback. Learning and practicing yoga is especially challenging for this population as yoga requires imitation-oriented learning. A typical yoga class requires students to observe and copy poses and movements as the instructor presents them, while maintaining postural balance during the practice. Without additional, nonvisual feedback, it can be difficult for students with visual impairments to understand whether they have accurately copied a pose – and if they have not, how to fix an inaccurate pose. Therefore, there is a need for an intelligent learning system that can capture a person’s physical posture and provide additional, nonvisual feedback to guide them into a correct pose. This study is a preliminary step toward the development of a wearable inertial sensor-based virtual learning system for people who are blind or have low vision. Using hierarchical task analysis, we developed a step-by-step conceptual model of yoga poses, which can be used in constructing an effective nonvisual feedback system. We also ranked sensor locations according to their importance by analyzing postural deviations in each pose compared to the reference starting pose.
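The abstract mentions ranking sensor locations by analyzing postural deviations from the reference starting pose. The sketch below shows one plausible way to express such a ranking; the body-segment names, joint-angle data, and the mean-absolute-deviation criterion are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch: rank candidate IMU sensor locations by how much each body
# segment deviates from the reference starting pose in a given yoga pose.
import numpy as np

def rank_sensor_locations(reference, pose):
    """reference, pose: dict mapping body segment -> joint-angle vector (degrees).
    Returns segments sorted by mean absolute deviation, largest first."""
    deviations = {
        segment: float(np.mean(np.abs(np.asarray(pose[segment]) - np.asarray(angles))))
        for segment, angles in reference.items()
    }
    return sorted(deviations.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical joint angles for a reference starting pose and a target pose.
reference = {"trunk": [0, 0, 0], "thigh_r": [0, 0, 0], "upper_arm_r": [0, 0, 0]}
target_pose = {"trunk": [5, 2, 1], "thigh_r": [85, 10, 4], "upper_arm_r": [88, 3, 2]}
for segment, dev in rank_sensor_locations(reference, target_pose):
    print(f"{segment}: {dev:.1f} deg")
```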

