Classroom-oriented research: Processing Instruction (findings and implications)

2017 ◽  
Vol 52 (3) ◽  
pp. 343-359 ◽  
Author(s):  
Alessandro Benati

This paper first presents and examines the pedagogical intervention called Processing Instruction (PI). Second, it reviews and discusses the main findings of the empirical research conducted to measure the relative effects of PI, and outlines current trends within the PI research framework. Experimental research investigating the effects of this pedagogical intervention in language teaching, and in grammar instruction in particular, has primarily used listening and reading measures (so-called ‘off-line’ measures) to elicit how learners comprehend and process sentences. On-line measures, such as eye tracking and self-paced reading, have now been incorporated into PI research to measure language processing more directly. Finally, the paper provides specific guidelines and procedures for teachers on when and how to use PI.

2020 ◽  
Vol 4 (2) ◽  
Author(s):  
Paul A. Malovrh ◽  
James F. Lee ◽  
Stephen Doherty ◽  
Alecia Nichols

The present study measured the effects of guided-inductive (GI) versus deductive computer-delivered instruction on the processing and retention of the Spanish true passive using a self-paced reading design. Fifty-four foreign language learners of Spanish participated in the study, which operationalised the guided-inductive and deductive approaches using an adaptation of the PACE model and processing instruction (PI), respectively. Results revealed that each experimental group significantly improved after the pedagogical intervention, and that the GI group outperformed the PI group in terms of accuracy on an immediate post-test. The differences between the groups, however, did not endure: at the delayed post-test, the groups performed the same. Additional analyses revealed that the GI group spent over twice as much time on task during instruction as the PI group, with no long-term advantage in processing, calling into question the pedagogical justification for implementing GI at a curricular level.


2011 ◽  
Author(s):  
A. Siyanova-Chanturia ◽  
F. Pesciarelli ◽  
C. Cacciari

2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role in language comprehension and processing. One important aspect of recent studies of comprehension and processing is the estimation of upcoming words in a sentence or discourse, and many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing, as well as the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, the corpora and the statistical tools they applied. Moreover, although various computational models have been proposed for simulating contextual word prediction, past studies have usually relied on a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws on a large, natural and coherent discourse as the stimulus for collecting reading-time data. It trains two state-of-the-art computational models: surprisal, and semantic (dis)similarity computed from word vectors obtained by linear discriminative learning (LDL). Together these measure knowledge of both the syntagmatic and the paradigmatic structure of language. We develop a ‘dynamic approach’ to computing semantic (dis)similarity; this is the first time the two computational models have been combined. The models are evaluated using advanced statistical methods, and, to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vectors is compared with the ‘dynamic’ approach. The two computational models and the fixed-effects statistical models can be used to cross-verify the findings, ensuring that the results are reliable. All results support the conclusion that surprisal and semantic similarity make opposing contributions to predicting word reading times, although both are good predictors, and that the ‘dynamic’ approach outperforms the popular cosine method. The findings are therefore significant for gaining a better understanding of how humans process words in a real-world context and how they make predictions in language cognition and processing.
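To make the pipeline concrete, the following is a minimal, self-contained sketch (not the authors' code) of how the two predictors described above, surprisal and context-to-word semantic (dis)similarity, could enter a single regression on per-word reading times. The bigram language model, the random stand-in word vectors and the fabricated reading times are all assumptions for illustration; the study itself uses LDL-derived vectors and fixed-effects statistical models.

```python
import numpy as np
from collections import Counter

tokens = "the cat sat on the mat and the cat slept".split()

# Predictor 1: surprisal from a toy bigram model with add-one smoothing.
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
vocab = len(unigrams)

def surprisal(prev, word):
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return -np.log2(p)

# Predictor 2: cosine (dis)similarity between a word and its preceding context,
# using random vectors as stand-ins for LDL-derived word vectors.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50) for w in unigrams}

def dissimilarity(context, word):
    c = np.mean([vectors[w] for w in context], axis=0)
    v = vectors[word]
    cos = c @ v / (np.linalg.norm(c) * np.linalg.norm(v))
    return 1.0 - cos

# Regress (fabricated) reading times on the two predictors with ordinary least squares.
X, rt = [], []
for i in range(1, len(tokens)):
    X.append([1.0,
              surprisal(tokens[i - 1], tokens[i]),
              dissimilarity(tokens[:i], tokens[i])])
    rt.append(200 + 10 * i + rng.normal(scale=5))   # invented reading times in ms

beta, *_ = np.linalg.lstsq(np.array(X), np.array(rt), rcond=None)
print(dict(zip(["intercept", "surprisal", "dissimilarity"], np.round(beta, 2))))
```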


Author(s):  
Yonatan Belinkov ◽  
James Glass

The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.
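One widely surveyed family of analysis methods is the diagnostic (probing) classifier, in which a simple model is trained to predict a linguistic property from a network's frozen internal representations; high probe accuracy suggests the property is encoded in those representations. The sketch below is a generic illustration of that idea, not code from the paper: the hidden states and the binary "linguistic" labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden = rng.normal(size=(500, 64))   # stand-in for hidden states from a frozen network
labels = (hidden[:, 0] + hidden[:, 3] > 0).astype(float)   # stand-in linguistic property

# Logistic-regression probe trained with plain gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(hidden @ w + b)))
    w -= 0.5 * hidden.T @ (p - labels) / len(labels)
    b -= 0.5 * np.mean(p - labels)

accuracy = np.mean(((hidden @ w + b) > 0) == (labels == 1))
print(f"probe accuracy: {accuracy:.2f}")   # near-perfect here, since the property is linearly encoded
```

A standard caveat from this literature is that a successful probe shows the property is decodable from the representations, not that the network actually uses it.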


1998 ◽  
Vol 34 (1) ◽  
pp. 73-124 ◽  
Author(s):  
RUTH KEMPSON ◽  
DOV GABBAY

This paper informally outlines a Labelled Deductive System for on-line language processing. Interpretation of a string is modelled as a composite lexically driven process of type deduction over labelled premises forming locally discrete databases, with rules of database inference then dictating their mode of combination. The particular LDS methodology is illustrated by a unified account of the interaction of wh-dependency and anaphora resolution, the so-called ‘cross-over’ phenomenon, currently acknowledged to resist a unified explanation. The shift of perspective this analysis requires is that interpretation is defined as a proof structure for labelled deduction, and assignment of such structure to a string is a dynamic left-right process in which linearity considerations are ineliminable.
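As a very loose, informal illustration of type deduction over labelled premises, the toy sketch below gives each word a labelled premise and combines premises by a single inference rule (function application), with the label recording how the structure was built. It is not the authors' LDS formalism and omits everything distinctive about their account (locally discrete databases, wh-dependencies, anaphora, the left-right dynamics); the types and labels are invented.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Premise:
    label: str   # records which words built this piece of structure
    typ: Tuple   # ('e',) entity, ('t',) proposition, ('->', A, B) function from A to B

def combine(fn: Premise, arg: Premise) -> Optional[Premise]:
    """One inference rule: function application, when the types fit."""
    if fn.typ[0] == '->' and fn.typ[1] == arg.typ:
        return Premise(label=f"({fn.label} {arg.label})", typ=fn.typ[2])
    return None

# Lexically driven premises for the string "John saw Mary".
john = Premise("john", ('e',))
mary = Premise("mary", ('e',))
saw = Premise("saw", ('->', ('e',), ('->', ('e',), ('t',))))

vp = combine(saw, mary)   # verb + object yields a predicate of type e -> t
s = combine(vp, john)     # predicate + subject yields a proposition of type t
print(s)                  # Premise(label='((saw mary) john)', typ=('t',))
```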


2021 ◽  
pp. 146735842199389
Author(s):  
Aaron Tham ◽  
Vikki Schaffer ◽  
Laura Sinay

This study probes the ethics of intrusive technologies for experimental research in tourism through the lens of collaborative ethnography. Amidst the increasing uptake of technology to assess participant responses, the role of ethics in experimental settings has received scant attention in tourism and hospitality. While intrusive technologies such as eye tracking, skin sensors and neuroscience headgear become more ubiquitous, the ethical boundaries of using such equipment are increasingly blurred and their use is approved inconsistently. Seeking convergence on the ethics of intrusive technologies is complicated by questions of political spaces, target audiences and the management of the data obtained. Rather than viewing the role of intrusive technologies as a dichotomous outcome of ethical or unethical approaches, this paper argues that ethics needs to be contextually embedded, with increased collaboration and co-creation in the application preparation and approval process.


2020 ◽  
Vol 11 (2) ◽  
pp. 41-47
Author(s):  
Amandeep Kaur ◽  
Madhu Dhiman ◽  
Mansi Tonk ◽  
Ramneet Kaur

Artificial Intelligence is the combination of machine and human intelligence and has been a prominent research trend for many years. Artificial Intelligence programs have become capable of challenging humans in areas such as Expert Systems, Neural Networks, Robotics, Natural Language Processing, Face Recognition and Speech Recognition. Artificial Intelligence promises a bright future for technical invention in many fields. This review paper outlines the general concept of Artificial Intelligence and presents its impact on the present and future world.


Author(s):  
Sandeep Mathias ◽  
Diptesh Kanojia ◽  
Abhijit Mishra ◽  
Pushpak Bhattacharya

Gaze behaviour has been used as a way to gather cognitive information for a number of years. In this paper, we discuss the use of gaze behaviour in solving different tasks in natural language processing (NLP) without having to record it at test time, since collecting gaze behaviour is costly in terms of both time and money. We therefore focus on research done to alleviate the need for recording gaze behaviour at run time. We also describe the eye-tracking corpora currently available in multiple languages that can be used in natural language processing. We conclude by discussing applications in one domain, education, and how learning gaze behaviour can help in solving the tasks of complex word identification and automatic essay grading.
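A simplified way to picture the idea of using gaze without recording it at test time is sketched below: a gaze feature (fixation duration) is learned from word features at training time, and the predicted gaze feature is then fed into a complex-word-identification classifier. All of the data is synthetic and the two-stage linear setup is an assumption for illustration, not one of the specific models the paper reviews.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
word_len = rng.integers(2, 13, size=n).astype(float)        # word length
log_freq = rng.normal(5, 2, size=n)                         # log corpus frequency
fixation = 80 + 15 * word_len - 8 * log_freq + rng.normal(0, 10, n)   # recorded gaze (training only)
complex_word = ((word_len > 8) & (log_freq < 4)).astype(float)        # gold complex-word labels

X = np.column_stack([np.ones(n), word_len, log_freq])

# Step 1: learn to predict fixation duration from word features (gaze is needed only here).
gaze_coef, *_ = np.linalg.lstsq(X, fixation, rcond=None)
pred_fix = X @ gaze_coef

# Step 2: use the *predicted* fixation duration as an extra feature for the classifier.
feats = np.column_stack([word_len, log_freq, pred_fix])
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)    # standardise features
X_cls = np.column_stack([np.ones(n), feats])

w = np.zeros(X_cls.shape[1])
for _ in range(2000):                                       # simple logistic regression
    p = 1 / (1 + np.exp(-(X_cls @ w)))
    w -= 0.1 * X_cls.T @ (p - complex_word) / n

accuracy = np.mean(((X_cls @ w) > 0) == (complex_word == 1))
print(f"complex-word identification accuracy: {accuracy:.2f}")
```

In this toy the predicted gaze feature is itself a linear function of the word features, so it adds no new information; the point of the approaches the paper surveys is that gaze models learned from real eye-tracking corpora capture information that simple word features do not.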

