What counts as “responding”? Contingency on previous speaker contribution as a feature of interactional competence

2018 ◽ Vol 35 (3) ◽ pp. 377-401 ◽ Author(s): Daniel M. K. Lam

The ability to interact with others has gained recognition as part of the L2 speaking construct in the assessment literature and in both high- and low-stakes speaking assessments. This paper first reviews the literature on interactional competence (IC) in L2 learning and assessment. It then discusses a particular feature – producing responses contingent on previous speakers’ contributions – that emerged as a de facto construct feature of IC, oriented to by both candidates and examiners in the school-based group speaking assessment of the Hong Kong Diploma of Secondary Education (HKDSE) English Language Examination. Previous studies have similarly argued for the importance of responding to, or linking one’s own talk to, previous speakers’ contributions as a way of demonstrating comprehension of co-participants’ talk. However, what counts as such a response has yet to be explored systematically. This paper presents a conversation analytic study of candidate discourse in the assessed group interactions, identifying three conversational actions through which student-candidates construct contingent responses to co-participants. This thick description of the nature of contingent responses lays the groundwork for further empirical investigation of the relevance of this IC feature and its proficiency implications.

2009 ◽ Vol 43 (1) ◽ pp. 108-112 ◽ Author(s): XIE Qin ◽ Stephen Andrews

The Language and Literature Division (LLD) is the largest of the six divisions of the Faculty of Education, University of Hong Kong (HKU). It is currently home to 34 academic staff, who specialize in Chinese Language, English Language, and/or Literature Education, and to 60 full-time and 28 part-time doctoral students, who are researching topics as diverse as corpus-aided language learning, task-based language teaching in primary schools, the English writing of Chinese undergraduates, and the impact of school-based assessment. Staff are very active in conducting their own research, much of which is rooted in classrooms and focuses on issues that directly concern the teaching and learning of languages, such as reading literacy, school-based assessment and assessment for learning in English Language, the teaching of Chinese characters, and good practices in English Language Teaching in Hong Kong secondary schools (see http://good-practices.edb.hkedcity.net/). Colleagues in the English Language area have played important roles in the HKU Strategic Research Theme ‘Language in education and assessment’. This initiative brought together staff from a range of disciplines in various forms of language-related research collaboration, culminating in two large and highly successful international conferences in June 2008: one focusing on language awareness and the other on language issues in English-medium universities (see http://www.hku.hk/clear/).


ReCALL ◽ 2006 ◽ Vol 18 (2) ◽ pp. 193-211 ◽ Author(s): David Coniam

This paper describes an English language listening test intended as computer-based testing material for secondary school students in Hong Kong, where considerable attention is being devoted to online and computer-based testing. As well as providing a school-based testing facility, the study aims to contribute to the knowledge base regarding the efficacy and reliability of computer-based testing. The paper describes the construction of an item bank of over 400 short listening items calibrated on item response theory principles. Items from this bank were used to form a traditional paper-based listening test and an adaptive computer-based test. Both forms of the test were administered to two Hong Kong Grade 11 and Grade 12 classes. Descriptive test statistics indicated that both test types discriminated effectively between school grades. In terms of comparability between test types, performance on the two tests differed significantly for the Grade 11 classes, although not for Grade 12. Test takers generally performed better on the computer-based test than on the paper-based test, confirming earlier research. Interviews with test takers after they had taken both tests indicated an even split in preference, with boys tending to favour the computer-based test and girls the paper-based test. Correlations between test takers’ performance on the two test types were high enough to indicate the computer-based test’s potential as a low-stakes test (its intended purpose as a school-based testing facility), although not as a high-stakes test (for example, as a territory-wide test replacing a traditional paper-based test).
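The calibrate-then-select procedure the abstract describes is standard in IRT-based adaptive testing. The sketch below is a minimal illustration only, not the study’s own algorithm (which the abstract does not specify): it assumes a Rasch (one-parameter) model, picks each next item as the unused one whose calibrated difficulty is closest to the current ability estimate (the most informative choice under that model), and refines the estimate with a Newton-Raphson step after each response. All identifiers, the item bank, and the parameter values are hypothetical.

```python
import math
import random

def rasch_p(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_theta(theta: float, difficulties: list[float], responses: list[int]) -> float:
    """One Newton-Raphson step on the Rasch log-likelihood.

    difficulties: b-parameters of items administered so far
    responses:    1 = correct, 0 = incorrect, aligned with difficulties
    """
    ps = [rasch_p(theta, b) for b in difficulties]
    score = sum(u - p for u, p in zip(responses, ps))  # first derivative
    info = sum(p * (1.0 - p) for p in ps)              # Fisher information
    if info < 1e-9:                                    # guard against division by zero
        return theta
    # Clamp: with an all-correct or all-wrong record the ML estimate is infinite.
    return max(-4.0, min(4.0, theta + score / info))

def adaptive_test(item_bank: dict[str, float], answer, n_items: int = 20) -> float:
    """Administer n_items adaptively and return the final ability estimate.

    item_bank: item id -> calibrated difficulty (logits)
    answer:    callback taking an item id, returning 1 (correct) or 0
    """
    theta = 0.0                       # start at the bank's reference ability
    remaining = dict(item_bank)
    difficulties, responses = [], []
    for _ in range(min(n_items, len(remaining))):
        # Under the Rasch model, the most informative item is the one
        # whose difficulty is closest to the current ability estimate.
        item_id = min(remaining, key=lambda i: abs(remaining[i] - theta))
        difficulties.append(remaining.pop(item_id))
        responses.append(answer(item_id))
        theta = update_theta(theta, difficulties, responses)
    return theta

if __name__ == "__main__":
    # Demo: a simulated 400-item bank and a candidate of true ability +0.8 logits.
    random.seed(1)
    bank = {f"item{i:03d}": random.uniform(-3.0, 3.0) for i in range(400)}
    true_theta = 0.8
    simulate = lambda item_id: int(random.random() < rasch_p(true_theta, bank[item_id]))
    print(f"Estimated ability: {adaptive_test(bank, simulate):+.2f} logits")
```

Matching item difficulty to the provisional ability estimate is what lets an adaptive test reach a stable estimate with far fewer items than a fixed-form paper test drawn from the same bank, which is the practical appeal of the item-bank design the paper reports.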

