Proceedings of the tenth annual conference on International computing education research - ICER '14

2014
Author(s):  
Marian Petre ◽  
Kate Sanders ◽  
Robert McCartney ◽  
Marzieh Ahmadzadeh ◽  
Cornelia Connolly ◽  
...  

2021 · Vol 53 (1) · pp. 6-6
Author(s):  
Amy Ko ◽  
Renée McCauley ◽  
Jan Vahrenhold ◽  
Matthias Hauswirth

You are warmly invited to the 17th Annual ACM International Computing Education Research (ICER) conference (https://icer2021.acm.org), held online the week of 16 August 2021.


2022 · Vol 22 (1) · pp. 1-46
Author(s):  
Sarah Heckman ◽  
Jeffrey C. Carver ◽  
Mark Sherriff ◽  
Ahmed Al-zubidy

Context. Computing Education Research (CER) is critical to helping the computing education community and policy makers support the increasing population of students who need to learn computing skills for future careers. For a community to systematically advance knowledge about a topic, its members must be able to understand published work thoroughly enough to perform replications, conduct meta-analyses, and build theories. There is a need to understand whether published research allows the CER community to systematically advance knowledge and build theories.

Objectives. The goal of this study is to characterize the reporting of empiricism in the Computing Education Research literature by identifying whether publications include the content necessary for researchers to perform replications, meta-analyses, and theory building. We answer three research questions related to this goal: (RQ1) What percentage of papers in CER venues have some form of empirical evaluation? (RQ2) Of the papers that have empirical evaluation, what are the characteristics of the empirical evaluation? (RQ3) Of the papers that have empirical evaluation, do they follow reporting norms, both for including and for labeling the information needed for replication, meta-analysis, and, eventually, theory building?

Methods. We conducted a systematic literature review of the 2014 and 2015 proceedings or issues of five CER venues: the Technical Symposium on Computer Science Education (SIGCSE TS), the International Symposium on Computing Education Research (ICER), the Conference on Innovation and Technology in Computer Science Education (ITiCSE), ACM Transactions on Computing Education (TOCE), and Computer Science Education (CSE). We developed the CER Empiricism Assessment Rubric and applied it to the 427 papers accepted and published at these venues over 2014 and 2015. Two people evaluated each paper using the Base Rubric to characterize the paper. A single reviewer then applied the other rubrics to characterize the norms of reporting, as appropriate for the paper type. Discrepancies and questions were discussed among multiple reviewers until resolved.

Results. We found that over 80% of papers accepted across all five venues had some form of empirical evaluation. Quantitative evaluation methods were the most frequently reported. Papers most frequently reported results on interventions around pedagogical techniques, curriculum, community, or tools. Papers were split between those that did and did not compare an intervention against some other dataset or baseline. Most papers reported related work, following the expectations for doing so in the SIGCSE and CER community. However, many papers lacked properly reported research objectives, goals, research questions, or hypotheses; descriptions of participants; study design; data collection; and threats to validity. These results align with prior surveys of the CER literature.

Conclusions. CER authors are contributing empirical results to the literature; however, not all reporting norms are met. We encourage authors to provide clear, labeled details about their work so that readers can use the study methodologies and results for replications and meta-analyses. As our community grows, our reporting of CER should mature to help establish computing education theory that supports the next generation of computing learners.


Author(s):  
Alessio Gaspar ◽  
Sarah Langevin ◽  
Naomi Boyer

This chapter presents a case study of applying technology to facilitate undergraduate students' learning of computer programming in an Information Technology department. The authors review the evolution of the didactics of introductory programming courses, along with the learning barriers traditionally encountered by novice programmers. The computing education research community's growing interest in a transition from instructivist to constructivist strategies is then illustrated by several recent approaches. The authors discuss how these approaches have been enabled through the use of appropriate technologies in introductory and intermediate programming courses, delivered both online and face-to-face. They conclude by discussing how the integration of technology, and the switch to online environments, has the potential to enable authentic student-driven programming pedagogies as well as to facilitate formal computing education research, or action research, in this field.

