TextMix: using NLP and APIs to generate chunked sentence scramble tasks

2021, pp. 6-11
Author(s): Brendon Albertson

A Computer-Assisted Language Learning (CALL) application, TextMix, was developed as a proof of concept for applying Natural Language Processing (NLP) sentence chunking techniques to the creation of ‘sentence scramble’ learning tasks. TextMix addresses limitations of existing applications for creating sentence scrambles by using NLP to parse and scramble syntactic components of sentences, while connecting with Application Programming Interfaces (APIs) to provide repeated exposure to authentic sentences in the context of texts such as Wikipedia articles. In addition to identifying a novel application of NLP and APIs in CALL, this project highlights the need for teacher-friendly interfaces that prioritize pedagogically useful ways of chunking text.
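The abstract names the core technique, NLP chunking followed by scrambling, without showing it. Below is a minimal Python sketch of the idea, assuming spaCy's dependency parse as the chunker; TextMix's actual chunking rules and interface are not documented here, so treat this as illustrative only.

# Sketch: split a sentence into syntactic chunks via a dependency
# parse, then shuffle the chunks into a sentence-scramble item.
# Assumes the small English model is installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import random
import spacy

nlp = spacy.load("en_core_web_sm")

def chunk_sentence(sentence: str) -> list[str]:
    """Split a sentence into chunks: the root verb plus the full
    subtree of each of its direct dependents."""
    doc = nlp(sentence)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    units = sorted([root, *root.children], key=lambda t: t.i)
    chunks = []
    for tok in units:
        if tok is not root and tok.is_punct:
            continue  # drop stray punctuation tiles
        if tok is root:
            chunks.append(tok.text)
        else:
            chunks.append(" ".join(t.text for t in tok.subtree))
    return chunks

def scramble(sentence: str) -> list[str]:
    """Shuffle the chunks to produce one scramble task."""
    chunks = chunk_sentence(sentence)
    random.shuffle(chunks)
    return chunks

print(scramble("The quick brown fox jumps over the lazy dog."))
# e.g. ['over the lazy dog', 'The quick brown fox', 'jumps']

Scrambling whole subtrees rather than individual words is what keeps the task at the phrase ("chunk") level the abstract emphasises.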

2003, Vol 17 (5)
Author(s): Anne Vandeventer Faltin

This paper illustrates the usefulness of natural language processing (NLP) tools for computer assisted language learning (CALL) through the presentation of three NLP tools integrated within CALL software for French. These tools are (i) a sentence structure viewer; (ii) an error diagnosis system; and (iii) a conjugation tool. The sentence structure viewer helps language learners grasp the structure of a sentence by providing lexical and grammatical information derived from a deep syntactic analysis. Two different output formats are presented. The error diagnosis system is composed of a spell checker, a grammar checker, and a coherence checker. The spell checker makes use of alpha-codes, phonological reinterpretation, and some ad hoc rules to generate correction proposals. The grammar checker employs constraint relaxation and phonological reinterpretation as diagnosis techniques. The coherence checker compares the underlying "semantic" structures of a stored answer and of the learner's input to detect semantic discrepancies. The conjugation tool is a resource whose capabilities are enhanced in electronic form, enabling searches from inflected and ambiguous verb forms.
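Constraint relaxation, the diagnosis technique named for the grammar checker, can be illustrated with a toy determiner-noun agreement check: when strict agreement fails, relaxing one constraint at a time reveals which one the learner violated. The mini-lexicon below is an invented stand-in, not the actual French resources of the system described.

# Toy constraint-relaxation diagnosis for French det-noun agreement.
LEXICON = {
    "le":      {"pos": "det",  "gender": "m",  "number": "sg"},
    "la":      {"pos": "det",  "gender": "f",  "number": "sg"},
    "les":     {"pos": "det",  "gender": None, "number": "pl"},
    "maison":  {"pos": "noun", "gender": "f",  "number": "sg"},
    "maisons": {"pos": "noun", "gender": "f",  "number": "pl"},
}

AGREEMENT_FEATURES = ["gender", "number"]

def diagnose(det: str, noun: str) -> list[str]:
    """Check det-noun agreement; each feature whose relaxation would
    let the parse succeed is reported as a diagnosed error."""
    d, n = LEXICON[det], LEXICON[noun]
    errors = []
    for feature in AGREEMENT_FEATURES:
        if None not in (d[feature], n[feature]) and d[feature] != n[feature]:
            errors.append(f"{feature} agreement: '{det}' is {d[feature]}, "
                          f"'{noun}' is {n[feature]}")
    return errors or ["parse OK: all agreement constraints satisfied"]

print(diagnose("le", "maison"))   # *le maison -> gender error diagnosed
print(diagnose("la", "maison"))   # la maison  -> parse OK

The pedagogical point is that the relaxed parse still succeeds, so the system can tell the learner which specific constraint failed rather than simply rejecting the input.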


Author(s): Monica Ward

Intelligent Computer-Assisted Language Learning (ICALL) involves using tools and techniques from computational linguistics and Natural Language Processing (NLP) in the language learning process. It is an inherently complex endeavour, multi-, inter-, and trans-disciplinary in nature. Often these tools and techniques are designed for tasks and purposes other than language learning, which makes their adaptation and use in the CALL domain difficult. For Less-Resourced Languages (LRLs), it can be even more challenging for CALL researchers to adapt or incorporate NLP into CALL artefacts. This paper reports on how two existing NLP resources for Irish, a morphological analyser and a parser, were used to develop an app for Irish. The app, Irish Word Bricks (IWB), was adapted from an existing CALL app – Word Bricks (Mozgovoy & Efimov, 2013). Without this ‘joining the blocks together’ approach, the development of the IWB app would certainly have taken longer, might not have been as efficient or effective, and might not have been accomplished at all.
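A sketch of the reuse pattern described here: output from an existing morphological analyser is mapped onto POS-labelled ‘bricks’ for the app's interface. The analyse() function below is a hypothetical stand-in for the real Irish analyser's interface, with a hard-coded demo lexicon; only the wrapping pattern is the point.

from dataclasses import dataclass

def analyse(form: str) -> str:
    """Hypothetical stand-in for the Irish morphological analyser:
    returns a lemma+POS+features string for a surface form."""
    demo = {
        "bhris":     "bris+Verb+PastInd",
        "sé":        "sé+Pron+Pers+3P+Sg+Masc",
        "an":        "an+Art+Sg+Def",
        "chathaoir": "cathaoir+Noun+Fem+Sg+Len",
    }
    return demo.get(form, form + "+Unknown")

@dataclass
class Brick:
    form: str   # surface form shown on the brick
    lemma: str  # dictionary form for look-up
    pos: str    # drives the brick's colour/shape in the UI

def to_brick(form: str) -> Brick:
    lemma, pos, *_ = analyse(form).split("+")
    return Brick(form=form, lemma=lemma, pos=pos)

# "Bhris sé an chathaoir" -- 'He broke the chair'
for word in ["bhris", "sé", "an", "chathaoir"]:
    print(to_brick(word))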


Author(s): John Nerbonne

This article examines the application of natural language processing to computer-assisted language learning (CALL), including the history of work in this field over the last thirty-five years, and focuses on current developments and opportunities. Throughout, CALL refers to programs designed to help people learn foreign languages. CALL is a large field — much larger than computational linguistics. This article outlines the areas of CALL to which computational linguistics (CL) can be applied. CL programs process natural languages such as English and Spanish, and the techniques are therefore often referred to as natural language processing (NLP). NLP is enlisted in CALL in several ways: to provide lemmatized access to corpora for advanced learners seeking subtleties unavailable in grammars and dictionaries; to provide morphological analysis and subsequent dictionary access for words unknown to readers; and to parse user input and diagnose morphological and syntactic errors.
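The first use listed, lemmatized corpus access, is straightforward to make concrete: index a corpus by lemma so that a learner's query for 'go' also retrieves 'went', 'gone', and 'goes'. A minimal sketch assuming spaCy follows; real CALL concordancers are considerably richer.

# Lemma-indexed concordance: query a lemma, retrieve all inflected
# occurrences. Assumes spaCy's small English model is installed.
from collections import defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")

corpus = [
    "She went to the market early.",
    "They have gone home already.",
    "He goes running every morning.",
]

# lemma -> list of (sentence, matched surface form)
index: dict[str, list[tuple[str, str]]] = defaultdict(list)
for sentence in corpus:
    for tok in nlp(sentence):
        index[tok.lemma_.lower()].append((sentence, tok.text))

for sentence, form in index["go"]:
    print(f"{form!r:>8} in: {sentence}")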


ReCALL, 1991, Vol 3 (4), pp. 2-4
Author(s): David Shaw

After attending the 1989 Exeter CALL Conference, David Shaw and John Partridge, two teachers from the University of Kent, recommended to the School of European and Modern Language Studies that the School should establish its own Computer Assisted Language Learning Laboratory. Several of us had been ‘keeping an eye’ on CALL for quite a few years, from the days when BBC micros were innovative marvels. The Applied Languages Board had acquired a BBC and some software and had gained some experience with it in postgraduate courses. David Shaw had been supervising practical programming projects for MSc students in Computing in the area of CALL and natural language processing. Our recommendation was that, with a new generation of microcomputers supplanting the trusty but limited BBC micro, a point had been reached where it would be realistic for the School to establish a CALL teaching laboratory on a more ambitious scale.


2020
Author(s): Fridah Katushemererwe, Andrew Caines, Paula Buttery

This paper describes an endeavour to build natural language processing (NLP) tools for Runyakitara, a group of four closely related Bantu languages spoken in western Uganda. In contrast with major world languages such as English, for which corpora are comparatively abundant and NLP tools are well developed, computational linguistic resources for Runyakitara are in short supply. First, therefore, we need to collect corpora for these languages before we can proceed to the design of a spell-checker, grammar-checker, and applications for computer-assisted language learning (CALL). We explain how we are collecting primary data for a new Runya Corpus of speech and writing, outline the design of a morphological analyser, and discuss how we can use these new resources to build NLP tools. We are initially working with Runyankore–Rukiga, a closely related pair of Runyakitara languages, and we frame our project in the context of NLP for low-resource languages, as well as CALL for the preservation of endangered languages. We put our project forward as a test case for the revitalization of endangered languages through education and technology.
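As a sketch of how such a corpus can feed a spell-checker for a low-resource language: derive a wordlist from corpus tokens, flag unattested forms, and propose near matches by string similarity. The example tokens below are invented placeholders, not data from the Runya Corpus.

# Corpus-driven spell-checking: attested forms pass, unattested
# forms get nearest-match correction proposals (Python stdlib only).
import difflib
from collections import Counter

corpus_tokens = ["omuntu", "abantu", "okukora", "omukazi", "abakazi"]
wordlist = Counter(corpus_tokens)  # token frequencies from the corpus

def check(word: str, n: int = 3) -> list[str]:
    """Return [] if the form is attested in the corpus; otherwise
    propose up to n near matches as correction candidates."""
    if word in wordlist:
        return []
    return difflib.get_close_matches(word, list(wordlist), n=n)

print(check("omuntu"))    # attested   -> []
print(check("omuntwu"))   # unattested -> proposals, best first ('omuntu')

A frequency-weighted ranking or an edit-distance model tuned to the language's morphology would be natural next steps once the corpus is large enough.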


2017, Vol 13 (1)
Author(s): Ewa Rudnicka, Francis Bond, Łukasz Grabowski, Maciej Piasecki, Tadeusz Piotrowski

The paper focuses on the issue of creating equivalence links in the domain of bilingual computational lexicography. The existing interlingual links between plWordNet and Princeton WordNet synsets (sets of synonymous lexical units – lemma and sense pairs) are re-analysed from the perspective of equivalence types as defined in traditional lexicography and translation. Special attention is paid to cognitive and translational equivalents. A proposal for mapping lexical units is presented. Three types of links are defined: super-strong equivalence, strong equivalence, and weak implied equivalence. Super-strong and strong equivalence share a common set of formal, semantic, and usage features, with some feature values slightly loosened for strong equivalence. These links will be introduced manually by trained lexicographers. The sense-mapping will partly draw on the results of the existing synset mapping: the lexicographers will analyse lists of pairs of synsets linked by interlingual relations such as synonymy, partial synonymy, hyponymy, and hypernymy. They will also consult bilingual dictionaries and check translation probabilities in a parallel corpus. The results of the proposed mapping have great application potential in the areas of natural language processing, translation, and language learning.
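The three link types can be pictured as a feature-agreement decision over candidate lexical-unit pairs. The feature names and thresholds below are invented for illustration only; the paper itself defines the actual formal, semantic, and usage criteria applied by the lexicographers.

# Toy classifier for equivalence-link types between a plWordNet and
# a Princeton WordNet lexical unit, by counting agreeing features.
FEATURES = ["part_of_speech", "register", "semantic_field", "countability"]

def equivalence_type(src: dict, tgt: dict) -> str:
    """Classify a candidate cross-lingual lexical-unit pair."""
    matches = sum(src[f] == tgt[f] for f in FEATURES)
    if matches == len(FEATURES):
        return "super-strong equivalence"
    if matches >= len(FEATURES) - 1:   # one loosened feature value
        return "strong equivalence"
    return "weak implied equivalence"

# Polish 'pies' vs English 'dog' with identical toy feature values.
pies = {"part_of_speech": "noun", "register": "neutral",
        "semantic_field": "animal", "countability": "count"}
dog  = {"part_of_speech": "noun", "register": "neutral",
        "semantic_field": "animal", "countability": "count"}
print(equivalence_type(pies, dog))   # -> super-strong equivalence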

