A Systematic Map of Evaluation Criteria Applicable for Evaluating E-Portfolio Systems

Author(s):  
Gary F. McKenna ◽  
Gavin J. Baxter

This chapter examines the literature on evaluation methods within e-learning with respect to their applicability for evaluating e-portfolio systems within higher education, as evaluation criteria for reviewing e-portfolio provision do not currently exist in the literature. The approach taken was to conduct two extensive literature searches and reviews. The first search, undertaken in 2009, involved reviewing over 600 articles by abstract, dating from 1995 to 2010, to develop evaluation criteria suitable for evaluating Blackboard LMS e-portfolio systems. The second search, undertaken in 2013, extended the search criteria to include further terminology and databases and returned over 4107 articles, which were read by title and abstract and dated from 2009 to 2013, in order to systematically map evaluation methods used within e-learning and assess their quality and applicability for evaluating e-portfolio systems. The research undertaken provides a starting point for further research into the development of robust e-portfolio evaluation models and frameworks. The lack of evidence uncovered in the 2009 and 2013 literature searches justifies the need for further research into the design, development, and testing of evaluation methods for e-portfolio systems.

2016 ◽  
pp. 60-106


Author(s):  
Gary F. McKenna ◽  
Mark Stansfield

The purpose of this paper is to develop e-portfolio evaluation criteria that will be used to review the Blackboard LMS e-portfolio in use at one Higher Education (HE) institution in the UK, as evaluation criteria for reviewing e-portfolio provision do not exist in the literature. The approach taken was to initiate a wide literature search, which involved reviewing over 600 articles by their abstracts, dating from 1995 to 2010. The findings show that little has been written about the development of e-portfolio effective practice frameworks. Therefore, e-learning effective practice frameworks were used as a basis from which to design and develop an e-portfolio evaluation framework, which was then applied to the university case, where a Blackboard e-portfolio is used to support Personal Development Plans. The research provides a starting point for further research into the development of robust e-portfolio evaluation models and frameworks.


Author(s):  
Eugenijus Kurilovas ◽  
Valentina Dagiene

The main research objective of the chapter is to provide an analysis of the technological quality evaluation models and make a proposal for a method suitable for the multiple criteria evaluation (decision making) and optimization of the components of e-learning systems (i.e. learning software), including Learning Objects, Learning Object Repositories, and Virtual Learning Environments. Both the learning software ‘internal quality’ and ‘quality in use’ technological evaluation criteria are analyzed in the chapter and are incorporated into comprehensive quality evaluation models. The learning software quality evaluation criteria are further investigated in terms of their optimal parameters, and an additive utility function based on experts’ judgements, including multicriteria evaluation, numerical ratings, and weights, is applied to optimize the learning software according to particular learners’ needs.
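The additive utility function described above can be illustrated with a minimal sketch. The criterion names, expert ratings, and weights below are purely hypothetical placeholders, not values from the chapter; the sketch only shows the general weighted-sum form of multicriteria evaluation.

```python
# Minimal sketch of an additive utility function for multicriteria
# evaluation of learning software. All criteria, ratings, and weights
# are illustrative assumptions, not values from the chapter.

def additive_utility(ratings, weights):
    """Weighted sum of normalized criterion ratings; weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * ratings[c] for c in weights)

# Hypothetical expert judgements for two learning objects,
# each rated 0..1 on three technological quality criteria.
weights = {"reusability": 0.4, "accessibility": 0.3, "interoperability": 0.3}
lo_a = {"reusability": 0.8, "accessibility": 0.6, "interoperability": 0.9}
lo_b = {"reusability": 0.5, "accessibility": 0.9, "interoperability": 0.7}

scores = {name: additive_utility(r, weights)
          for name, r in (("LO-A", lo_a), ("LO-B", lo_b))}
best = max(scores, key=scores.get)  # learning object with highest utility
print(scores, best)
```

In this form, "optimizing according to particular learners' needs" amounts to adjusting the weight vector to reflect a given learner profile and re-ranking the alternatives.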


Author(s):  
Gary F. McKenna ◽  
Gavin J. Baxter ◽  
Thomas Hainey

An important part of educational effective practice is performing evaluations to optimise learning. Applying evaluation criteria to virtual and personal learning environments enables educators to assess whether the technologies used are producing the intended effect. As online educational technologies become more sophisticated, so too does the need to evaluate them. This chapter suggests that traditional educational evaluation frameworks for evaluating e-learning are insufficient for application to LMS e-portfolios. To address this problem, we have developed evaluation criteria designed to assess the usability of LMS e-portfolios used within higher education. One of the main problems with evaluating the usability of LMS e-portfolios is the distinct lack of empirical evidence on evaluation criteria designed and developed for evaluating e-portfolios. This chapter describes the results of applying the newly developed LMS e-portfolio evaluation criteria within one UK higher education institution.


Author(s):  
Rina Wijayanti ◽  
Siti Napfiah

This research aims to produce a product, namely a module, to support statistics courses at the IKIP Budi Utomo Malang institution. It can be used to enhance students' ability to solve problems. This development research used the ADDIE model, which includes analysis, design, development, implementation, and evaluation. Data collection methods in this study included a legibility test. From the results of this study, we concluded that the steps in writing the statistics module were: determining competency standards, specifying the title of the module, arranging the contents of the module, designing the cover, conducting the legibility test, revision, and production. Based on the legibility test, the language, presentation, and graphic aspects were declared valid, while the worthiness aspect of the module was declared very valid. Therefore, based on the student legibility test, the module does not need to be revised.


Author(s):  
I Made Dwi Indrasanjaya . ◽  
I Made Agus Wirawan, S.Kom, M.Cs . ◽  
I Ketut Resika Arthana, S.T., M.Kom .

DIL (Dynamic Intellectual Learning) is a prototype of online learning representing a shift from e-learning towards adaptive learning. The research stages started from problem analysis, namely the distribution of a survey. The results of the questionnaire survey distributed to students of Informatics Engineering Education showed that most students agreed with media-based learning over conventional learning. The later research stages continued with designing the system, developing the system code, implementing the system, and evaluating the system with blackbox testing, whitebox testing, a media expert test, and a user response test. The development of a Mobile Dynamic Intellectual Learning Android-based application is the solution to these problems. The purpose of this research is to design and implement the "Mobile Dynamic Intellectual Learning Android-Based Application". The development of the application followed the ADDIE software development life cycle (Analysis, Design, Development, Implementation, and Evaluation). The programming language used is Java, with Eclipse as the development software. The results of this research are the design and implementation of the "Mobile Dynamic Intellectual Learning Android-Based Application"; the blackbox testing, whitebox testing, media expert test, and user response test (conducted with students) were all successfully performed. The entire set of functional requirements was successfully implemented in accordance with the design.

Keywords: DIL, Android, E-Learning, Eclipse, Response


Author(s):  
T. B. Larina

The development of e-learning, in both distance and blended forms, is especially relevant in the modern educational process. A high-quality e-learning course is developed through the efforts of two parties: the teacher, who creates the methodological content, and the programmer, who creates the electronic shell of the course. The article substantiates the importance of quality issues in the development of user interfaces for electronic educational resources, since the user of an electronic course deals with the direct presentation of educational material. The indicators for assessing the quality of software products in accordance with international and Russian standards, and their applicability for assessing user interfaces of electronic educational resources, are analyzed. A conclusion is drawn about the importance of the indicator "practicality" for this type of software product, as an indicator of the individual evaluation of the use of a product by a certain user or circle of users. The classical methods for assessing the quality of the human-machine interface, and the applicability of experimental and formal methods for assessing quality, are considered. An analysis of modern approaches to the design of user interfaces based on UX/UI design is given, along with an assessment of the requirements and criteria for evaluating the user interface from the standpoint of modern design. The tasks and features of the UX and UI components of the design process are analyzed. The essence of the modern term "usability" as an indicator of interface evaluation is explained, and the qualitative evaluation criteria for this indicator are considered. The concept of UX testing is introduced, and the main stages of this process are considered. The importance of taking into account the subjective psychological factors of interface perception is substantiated, and the indicators for assessing the quality of user interfaces based on the cognitive factors of their perception are analyzed.


Author(s):  
Stella Sylaiou ◽  
Martin White ◽  
Fotis Liarokapis

This chapter describes the evaluation of a digital heritage system called ARCO (Augmented Representation of Cultural Objects), examining the tools and methods used for its evaluation. The case study describes the knowledge acquired from several user requirement assessments, and further describes how to use this specific knowledge to provide a general framework for holistic virtual museum evaluation. This approach will help designers determine the flaws of virtual museum environments, close the gap between the technologies they use and those the users prefer, and improve these environments in order to provide interactive and engaging virtual museums. The proposed model used not only quantitative but also qualitative evaluation methods, and it is based on extensive evaluations of the ARCO system by end-users, usability experts, and domain experts. The main evaluation criteria were usability, presence, and learning.

