Using Learning Analytics to Assess Student Learning in Online Courses

2016, Vol 13 (3), pp. 110-130
Author(s): Florence Martin, Abdou Ndoye

Learning analytics can be used to enhance student engagement and performance in online courses. Using learning analytics, instructors can collect and analyze data about students and improve the design and delivery of instruction to make it more meaningful for students. In this paper, the authors review different categories of online assessments and identify data sets that can be collected and analyzed for each of them. Two data analytics and visualization tools were used: Tableau for quantitative data and Many Eyes for qualitative data. This paper has implications for instructors, instructional designers, administrators, and educational researchers who use online assessments.
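As an illustration of the kind of data collection the authors describe, the sketch below (not their actual pipeline; the column names are invented) aggregates quiz-attempt records into per-assessment summaries of the sort one might load into a tool such as Tableau:

```python
# A minimal sketch of summarizing online-assessment data; column names
# are hypothetical, not from the paper.
import pandas as pd

# Hypothetical export of quiz-attempt events from an LMS.
events = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s2", "s3"],
    "assessment": ["quiz1", "quiz2", "quiz1", "quiz2", "quiz1"],
    "score": [0.8, 0.9, 0.5, 0.7, 0.95],
    "minutes_spent": [12, 18, 25, 20, 10],
})

# Aggregate per assessment: number of students attempting it, average
# score, and average time on task -- the kinds of quantitative data
# the paper suggests visualizing.
summary = events.groupby("assessment").agg(
    attempts=("student_id", "nunique"),
    mean_score=("score", "mean"),
    mean_minutes=("minutes_spent", "mean"),
)
print(summary)
```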

2016, Vol 45 (2), pp. 165-187
Author(s): Florence Martin, Abdou Ndoye, Patricia Wilkins

Quality Matters is recognized as a rigorous set of standards that guide designers and instructors in creating quality online courses. We explore how the Quality Matters standards guide the identification and analysis of learning analytics data to monitor and improve online learning. Descriptive data were collected for frequency of use, time spent, and performance, and analyzed to identify patterns and trends in how students interact with online course components based on the Quality Matters standards. The major findings of this article provide a framework and guidance for instructors on how data might be collected and analyzed to improve online learning effectiveness.
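A minimal sketch of the descriptive analysis described above, under assumed column names: frequency of use and time spent per course component, with low-reach components flagged for review against the relevant standard:

```python
# Sketch only; the event schema is invented, not from the paper.
import pandas as pd

clicks = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s3", "s3", "s3"],
    "component": ["syllabus", "discussion", "discussion",
                  "syllabus", "quiz", "discussion"],
    "minutes": [3, 25, 30, 2, 15, 40],
})

# Frequency of use and time spent per course component.
usage = clicks.groupby("component").agg(
    users=("student_id", "nunique"),
    visits=("student_id", "size"),
    total_minutes=("minutes", "sum"),
)

# Components reaching fewer than half of the students may signal a
# design issue worth reviewing against the relevant QM standard.
n_students = clicks["student_id"].nunique()
low_reach = usage[usage["users"] < n_students / 2]
print(usage, low_reach, sep="\n\n")
```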


2016, Vol 2 (3), pp. 111-143
Author(s): Simon Knight, Karen Littleton

Accounts of the nature and role of productive dialogue in fostering educational outcomes are now well established in the learning sciences and are underpinned by strong bodies of empirical research and theorising. Allied to this, there has been longstanding interest in computer-supported collaborative learning (CSCL) in support of such dialogue. Learning analytic environments such as massive open online courses (MOOCs) and online learning environments (such as virtual learning environments, VLEs, and learning management systems, LMSs) provide ripe potential spaces for learning dialogue. In prior research, preliminary steps have been taken to detect occurrences of productive dialogue automatically through automated analysis techniques. Such advances have the potential to foster effective dialogue through learning analytic techniques that scaffold, give feedback on, and provide pedagogic contexts that promote such dialogue. However, the translation of learning science research to the online context is complex, requiring the operationalization of constructs theorized in different contexts (often face-to-face) and based on different data sets and structures (often spoken dialogue). In this paper we explore what could constitute effective analysis of this kind of productive dialogue, arguing that it requires consideration of three key facets of the dialogue: features indicative of productive dialogue; the unit of segmentation; and the interplay of features and segmentation with the temporal underpinning of learning contexts. We begin by outlining what we mean by 'productive educational dialogue', before discussing prior work on its manual and automated analysis. We then highlight ongoing challenges for the development of computational analytic approaches to such data, discussing the representation of features, segments, and temporality in computational modelling. The paper thus foregrounds, to both learning-science-oriented and computationally oriented researchers, key considerations in the analysis of dialogue data in emerging learning analytics environments. It provides a novel, conceptually driven stance on the contemporary analytic challenges faced in the treatment of dialogue as a form of data across online and offline sites of learning.
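To make the three facets concrete, here is a deliberately crude sketch using an invented keyword list rather than a trained classifier: features (reasoning markers), segmentation (one unit per speaker turn), and temporality (a timestamp kept with each segment):

```python
# Illustrative only: real systems use trained models; this marker list
# and the transcript are invented for the example.
from dataclasses import dataclass

REASONING_MARKERS = ("because", "why", "what if", "i disagree", "evidence")

@dataclass
class Turn:
    t: float          # seconds from session start (temporal anchor)
    speaker: str
    text: str         # one turn = one unit of segmentation

def is_productive(turn: Turn) -> bool:
    """Flag a turn that surfaces reasoning -- a proxy feature only."""
    lowered = turn.text.lower()
    return any(marker in lowered for marker in REASONING_MARKERS)

transcript = [
    Turn(0.0, "A", "I think the answer is 4."),
    Turn(5.2, "B", "Why do you think that?"),
    Turn(9.8, "A", "Because doubling the input doubles the output."),
]

for turn in transcript:
    print(turn.t, turn.speaker, is_productive(turn))
```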


2016, Vol 15 (2), pp. 49-55
Author(s): Pala SuriyaKala, Ravi Aditya

Human resources is traditionally an area subject to measured change, but with big data, data analytics, human capital management, talent acquisition, and performance metrics as new trends, there is bound to be a sea change in this function. This paper is conceptual and tries to introspect and outline the challenges that HRM faces with big data. Big data refers to the generation of enormous data sets, now measured in exabytes, that is revolutionizing the world; it has been the driving force behind how governments, companies, and functions will come to perform in the decades ahead. This immense amount of information, if properly utilized, can lead to efficiency in various fields like never before. But to achieve this, the cloud of suspicion, fear, and uncertainty regarding the use of big data has to be removed from those who can use it to the benefit of their respective areas of application. HR, unlike marketing or finance, has traditionally never been very data-centric in the analysis of its decisions.


2016, Vol 14, pp. 8
Author(s): Mayara Lustosa de Oliveira, Eduardo Galembeck

INTRODUCTION: In recent decades, the biological sciences have undergone an unprecedented revolution. The major focus of biology remains unchanged, but breakthrough discoveries have changed the nature of the questions asked. Although changes in the practice of biology as a science occur quickly, changes in the curriculum occur slowly. In order to transform this scenario, the Vision and Change project brings proposals that combine student-centered learning methodologies, the alignment of learning objectives with evaluations, and the use of assessment data to improve the teaching process. Massive Open Online Courses (MOOCs), associated with large-scale learning analytics tools, provide great opportunities for achieving these goals. OBJECTIVES: To develop a MOOC that allows a student-centered experience and to assess aspects related to learning, such as users' behavior flow, grades, time spent answering questions, and engagement in each activity. MATERIALS AND METHODS: The MOOC was built as a mobile application named "The Cell". The data presented here were collected from the "Chemical Composition of the Cell" module. We used two tools to treat user data: the application's own database and Google Analytics. DISCUSSION AND RESULTS: We mapped users' behavior to identify their learning strategies and performance. It was possible to identify students who were guessing and those who were seriously answering questions, and also to verify which questions students missed most frequently. Learning analytics tools reduce the time needed to tabulate data and enable specific, real-time intervention by instructors. CONCLUSION: The association between MOOCs and learning analytics tools is promising and effective in helping teachers. It provides behavior and performance indicators that allow instructors to intervene on identified weaknesses, to give continuous feedback on students' progress, and to use assessment data to enhance the learning process.
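A minimal sketch, with invented thresholds and synthetic data, of the kind of rule that can separate guessing from serious answering: very fast responses combined with low accuracy suggest guessing.

```python
# Illustrative heuristic only; the paper's actual criteria are not
# reproduced here, and real thresholds would be calibrated per item.
import pandas as pd

answers = pd.DataFrame({
    "user": ["u1"] * 4 + ["u2"] * 4,
    "seconds": [2, 3, 2, 4, 35, 50, 28, 40],   # time per question
    "correct": [0, 1, 0, 0, 1, 1, 0, 1],
})

profile = answers.groupby("user").agg(
    median_seconds=("seconds", "median"),
    accuracy=("correct", "mean"),
)

# Invented cut-offs: answering in under 5 s with below-chance accuracy
# is treated as guessing.
profile["guessing"] = (
    (profile["median_seconds"] < 5) & (profile["accuracy"] < 0.5)
)
print(profile)
```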


2012, Vol 16 (3)
Author(s): Laurie P Dringus

This essay presents a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues of learning analytics, approached through the lens of questioning the current status of applying learning analytics to online courses. The goal of the discussion is twofold: (1) to inform online learning practitioners (e.g., instructors and administrators) of the potential of learning analytics in online courses and (2) to broaden discussion in the research community about the advancement of learning analytics in online learning. In recognizing the full potential of formalizing big data in online courses, the community must also address the potentially "harmful" application of learning analytics.


2020, Vol 0 (0)
Author(s): Geraldine Cáceres Sepúlveda, Silvia Ochoa, Jules Thibault

Due to the highly competitive market and increasingly stringent environmental regulations, it is paramount to operate chemical processes at their optimal point. In a typical process, there are usually many process variables (decision variables) that need to be selected in order to achieve a set of optimal objectives for which the process will be considered to operate optimally. Because some of the objectives are often contradictory, multi-objective optimization (MOO) can be used to find a suitable trade-off among all objectives that will satisfy the decision maker. The first step is to circumscribe a well-defined Pareto domain, corresponding to the portion of the solution domain comprised of a large number of non-dominated solutions. The second step is to rank all Pareto-optimal solutions based on the preferences of an expert in the process, a step performed using visualization tools and/or a ranking algorithm. The last step is to implement the best solution to operate the process optimally. In this paper, after reviewing the main methods for solving MOO problems and selecting the best Pareto-optimal solution, four simple MOO problems are solved to clearly demonstrate the wealth of information on a given process that can be obtained from MOO rather than from a single aggregate objective. The four optimization case studies are the design of a PI controller, an SO2-to-SO3 reactor, a distillation column, and an acrolein reactor. Results of these case studies show the benefit of generating and using the Pareto domain to gain a deeper understanding of the underlying relationships between the various process variables and performance objectives.
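As a worked illustration of the first step, the sketch below filters a set of candidate solutions down to the non-dominated (Pareto-optimal) ones; both objectives are treated as minimized, and the values are toy data, not drawn from the paper's case studies:

```python
# Brute-force Pareto filtering: keep a point only if no other point is
# at least as good in every objective and strictly better in one.
import numpy as np

def pareto_front(costs: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows (all objectives minimized)."""
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominates = (
            np.all(costs <= costs[i], axis=1)
            & np.any(costs < costs[i], axis=1)
        )
        if dominates.any():
            mask[i] = False
    return mask

# Toy trade-off between two objectives (e.g., cost vs. off-spec product).
costs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(costs[pareto_front(costs)])   # [3, 4] is dominated by [2, 3]
```

Ranking the surviving points by expert preference, the second step, would then operate only on this reduced set.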


2020, Vol 10 (24), pp. 9148
Author(s): Germán Moltó, Diana M. Naranjo, J. Damian Segrelles

Cloud computing instruction requires hands-on experience with a myriad of distributed computing services from a public cloud provider. Tracking the progress of students, especially in online courses, requires automatically gathering evidence and producing learning analytics in order to determine the behavior and performance of students. With this aim, this paper describes the experience, from an online course in cloud computing with Amazon Web Services, of creating an open-source data processing tool to systematically obtain learning analytics related to the hands-on activities carried out throughout the course. These data, combined with data obtained from the learning management system, have allowed a better characterization of student behavior in the course. Insights from a population of more than 420 online students across three academic years have been assessed, and the dataset has been released for increased reproducibility. The results corroborate that course length has an impact on online student dropout. In addition, a gender analysis pointed out that there are no statistically significant differences in final marks between genders, but women show an increased degree of commitment to the activities planned in the course.
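A minimal sketch, on synthetic marks rather than the released dataset, of the reported gender comparison: a two-sided Welch t-test on final marks, where a large p-value is consistent with no statistically significant difference:

```python
# Synthetic data; the means, spreads, and group sizes are invented and
# do not reproduce the paper's results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
marks_women = rng.normal(7.4, 1.2, size=80)    # synthetic final marks
marks_men = rng.normal(7.3, 1.3, size=140)

# Welch's t-test does not assume equal variances across groups.
t, p = stats.ttest_ind(marks_women, marks_men, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")  # p > 0.05 -> fail to reject equality
```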


Author(s): A. Salman Avestimehr, Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi

Dealing with the sheer size and complexity of today's massive data sets requires computational platforms that can analyze data in a parallelized and distributed fashion. A major bottleneck in such modern distributed computing environments is that some of the worker nodes may run slowly. These nodes, a.k.a. stragglers, can significantly slow down computation, as the slowest node may dictate the overall computational time. A recent computational framework, called encoded optimization, creates redundancy in the data to mitigate the effect of stragglers. In this paper, we develop a novel mathematical understanding of this framework, demonstrating its effectiveness in much broader settings than previously understood. We also analyze the convergence behavior of iterative encoded optimization algorithms, allowing us to characterize fundamental trade-offs between convergence rate, size of the data set, accuracy, computational load (or data redundancy), and straggler toleration in this framework.
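The sketch below illustrates the underlying idea with the simplest possible encoding, replication (the paper's encoded optimization framework is more general): each data block's partial result lives on two workers, so the aggregate is recoverable even when one worker straggles:

```python
# Toy simulation: 4 blocks, 4 workers, replication factor 2 arranged
# cyclically, so any single straggler can be tolerated.
import numpy as np

rng = np.random.default_rng(1)
blocks = [rng.normal(size=3) for _ in range(4)]  # per-block partial gradients
true_grad = np.sum(blocks, axis=0)

# Each worker holds two blocks; every block lives on exactly two workers.
worker_blocks = {0: [0, 3], 1: [0, 1], 2: [1, 2], 3: [2, 3]}

straggler = 2                       # pretend worker 2 never responds
responded = [w for w in worker_blocks if w != straggler]

# The master reconstructs the sum from responding workers, counting
# each block once; the straggler's blocks are covered by replicas.
seen, recovered = set(), np.zeros(3)
for w in responded:
    for b in worker_blocks[w]:
        if b not in seen:
            seen.add(b)
            recovered += blocks[b]
print(np.allclose(recovered, true_grad))  # True despite the straggler
```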


2008, Vol 44-46, pp. 871-878
Author(s): Chu Yang Luo, Jun Jiang Xiong, R.A. Shenoi

This paper outlines a new technique to address the paucity of data in determining fatigue life and performance based on reliability concepts. Two new randomized models are presented for estimating the safe life and the p-S-N curve, using standard statistical analysis procedures to deal with small samples of incomplete data. Confidence-level formulations for the safe life and the p-S-N curve are also given. The concepts are then applied to determine the safe life and the p-S-N curve. Two sets of fatigue tests were conducted to validate the presented method, demonstrating the practical use of the proposed technique.
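As a generic illustration only (not the authors' randomized models), the sketch below fits Basquin's relation log N = a + b log S to a small synthetic sample and shifts the fit by a normal quantile of the residual scatter to approximate a p-S-N curve; a rigorous small-sample treatment would replace the plain quantile with a one-sided tolerance factor tied to a confidence level, which is the kind of refinement the paper addresses:

```python
# Synthetic fatigue data; stress amplitudes and lives are invented.
import numpy as np
from scipy import stats

S = np.array([300., 300., 250., 250., 200., 200.])        # stress, MPa
N = np.array([4.1e4, 5.6e4, 1.5e5, 1.9e5, 7.0e5, 9.3e5])  # cycles to failure

x, y = np.log10(S), np.log10(N)
b, a = np.polyfit(x, y, 1)            # median (p = 0.5) S-N curve
resid = y - (a + b * x)
sigma = resid.std(ddof=2)             # scatter about the fitted line

p = 0.95                              # survival probability
z = stats.norm.ppf(1 - p)             # negative: shifts life downward
logN_p = a + b * np.log10(225.0) + z * sigma
print(f"p-S-N life at 225 MPa (p = {p}): {10**logN_p:.3g} cycles")
```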


2020, Vol 45 (s1), pp. 535-559
Author(s): Christian Pentzold, Lena Fölsche

Our article examines how journalistic reports and online comments have made sense of computational politics. It treats the discourse around data-driven campaigns as its object of analysis and codifies four main perspectives that have structured the debates about the use of large data sets and data analytics in elections. We study American, British, and German sources on the 2016 United States presidential election, the 2017 United Kingdom general election, and the 2017 German federal election. There, groups of speakers maneuvered between enthusiastic, skeptical, agnostic, and admonitory stances, and so cannot be clearly mapped onto these four discursive positions. Alongside these inconsistent accounts, public sensemaking was marked by an atmosphere of speculation about the substance and effects of computational politics. We conclude that this equivocality helped journalists and commentators to sideline prior reporting on the issue in order to repeatedly rediscover the practices they had already covered.

