Extraction of Relevant Terms and Learning Outcomes from Online Courses

Author(s):  
Isabel Guitart ◽  
Jordi Conesa ◽  
David Baneres ◽  
Joaquim Moré ◽  
Jordi Duran ◽  
...  

Nowadays, universities (on-site and online) face strong competition to attract students. In this landscape, learning analytics can be a very useful tool, since it allows instructors (and university managers) to get a more thorough view of their context, to better understand their environment, and to identify potential improvements. To perform analytics efficiently, it is necessary to have as much information as possible about the instructional context. This paper proposes a novel approach to gathering information about different aspects of courses. In particular, the approach applies natural language processing (NLP) techniques to analyze a course’s materials and discover which concepts are taught, their relevance within the course, and their alignment with the course’s learning outcomes. The contribution of the paper is a semi-automatic system that provides a better understanding of courses. A validation experiment on a master’s program at the Open University of Catalonia is presented to show the quality of the results. The system can be used to analyze the suitability of course materials and to enrich and contextualize other analytical processes.
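The abstract does not detail the extraction pipeline; as one common way to score a concept's relevance within a course's materials, a minimal TF-IDF sketch (toy documents and plain Python, not the paper's actual system) might look like:

```python
import math
from collections import Counter

def tf_idf_terms(course_docs):
    """Rank terms in each course document by TF-IDF relevance."""
    n = len(course_docs)
    tokenized = [doc.lower().split() for doc in course_docs]
    # Document frequency: in how many documents each term appears
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    rankings = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores = {t: (tf[t] / len(tokens)) * math.log(n / df[t]) for t in tf}
        rankings.append(sorted(scores, key=scores.get, reverse=True))
    return rankings

# Hypothetical course materials from two different courses
docs = [
    "sql queries relational databases sql normalization",
    "neural networks backpropagation gradient descent networks",
]
print(tf_idf_terms(docs)[0][:2])  # most relevant terms of the first course
```

Terms frequent in one course's materials but rare across the rest score highest, which is the intuition behind treating them as the concepts that course actually teaches.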

2021 ◽  
Vol 48 (4) ◽  
pp. 41-44
Author(s):  
Dena Markudova ◽  
Martino Trevisan ◽  
Paolo Garza ◽  
Michela Meo ◽  
Maurizio M. Munafo ◽  
...  

With the spread of broadband Internet, Real-Time Communication (RTC) platforms have become increasingly popular and have transformed the way people communicate. Thus, it is fundamental that the network adopts traffic management policies that ensure appropriate Quality of Experience to users of RTC applications. A key step for this is the identification of the applications behind RTC traffic, which in turn allows the network to allocate adequate resources and make decisions based on the specific application's requirements. In this paper, we introduce a machine learning-based system for identifying the traffic of RTC applications. It builds on the domains contacted before starting a call and leverages techniques from Natural Language Processing (NLP) to build meaningful features. Our system works in real-time and is robust to the peculiarities of the RTP implementations of different applications, since it uses only control traffic. Experimental results show that our approach classifies 5 well-known meeting applications with an F1 score of 0.89.
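The paper builds features from the domains contacted before a call starts; a minimal sketch of that idea, with hypothetical application profiles and domain names (not the paper's data or classifier), could be:

```python
from collections import Counter

# Hypothetical per-application domain profiles (illustrative only)
PROFILES = {
    "AppA": ["signal.appa.example", "turn.appa.example"],
    "AppB": ["meet.appb.example", "stun.appb.example"],
}

def domain_tokens(domains):
    """Split each contacted domain into dot-separated tokens,
    treating domains as 'words' in the NLP sense."""
    toks = Counter()
    for d in domains:
        toks.update(d.split("."))
    return toks

def classify_call(contacted):
    """Pick the application whose profile shares the most domain tokens
    with the domains contacted before the call."""
    feats = domain_tokens(contacted)
    def overlap(app):
        return sum((feats & domain_tokens(PROFILES[app])).values())
    return max(PROFILES, key=overlap)

print(classify_call(["turn.appa.example", "cdn.other.example"]))  # AppA
```

Because only pre-call control traffic (domain lookups) feeds the features, the approach stays independent of how each application implements RTP, which matches the robustness claim in the abstract.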


Author(s):  
D. Thammi Raju ◽  
G. R. K. Murthy ◽  
S. B. Khade ◽  
B. Padmaja ◽  
B. S. Yashavanth ◽  
...  

Building an effective online course requires an understanding of learning analytics. The study assumes significance in the COVID-19 pandemic situation, as there has been a sudden surge in online courses. Analysis of the online course using the data generated from the Moodle Learning Management System (LMS), Google Forms and Google Analytics was carried out to understand the tenets of an effective online course. About 515 learners participated in the initial pre-training needs and expectations survey and 472 learners gave feedback at the end, apart from the real-time data generated from the LMS and Google Analytics during the course period. This case study analysed online learning behaviour and the supporting learning environment, and suggests critical factors that should take centre stage in the design and development of online courses, leading to an improved online learning experience and thus better quality of education. User needs, the quality of resources, and the effectiveness of online courses are equally important in motivating learners to take further online courses.


Vector representations of language have been shown to be useful in a number of Natural Language Processing tasks. In this paper, we aim to investigate the effectiveness of word vector representations for the problem of Sentiment Analysis. In particular, we target three sub-tasks: sentiment word extraction, detection of the polarity of sentiment words, and text sentiment prediction. We investigate the effectiveness of vector representations over different text data and evaluate the quality of domain-dependent vectors. Vector representations have been used to compute various vector-based features, and systematic experiments were conducted to demonstrate their effectiveness. Using simple vector-based features can achieve better results for app text sentiment analysis.


Author(s):  
Julia Chen ◽  
Dennis Foung

This chapter explores the possibility of adopting a data-driven approach to connecting teacher-made assessments with course learning outcomes. The authors begin by describing several key concepts, such as outcome-based education, curriculum alignment, and teacher-made assessments. Then, the context of the research site and the subject in question are described and the use of structural equation modeling (SEM) in this curriculum alignment study is explained. After that, the results of these SEM analyses are presented, and the various models derived from the analyses are discussed. In particular, the authors highlight how a data-driven curriculum model can benefit from input by curriculum leaders and how SEM provides insights into course development and enhancement. The chapter concludes with recommendations for curriculum leaders and front-line teachers to improve the quality of teacher-made assessments.


Author(s):  
Irene Li ◽  
Alexander R. Fabbri ◽  
Robert R. Tung ◽  
Dragomir R. Radev

Recent years have witnessed the rising popularity of Natural Language Processing (NLP) and related fields such as Artificial Intelligence (AI) and Machine Learning (ML). Many online courses and resources are available even for those without a strong background in the field. Often the student is curious about a specific topic but does not quite know where to begin studying. To answer the question of “what should one learn first,” we apply an embedding-based method to learn prerequisite relations for course concepts in the domain of NLP. We introduce LectureBank, a publicly available dataset containing 1,352 English lecture files collected from university courses, each classified according to an existing taxonomy, as well as 208 manually-labeled prerequisite relation topics. The dataset will be useful for educational purposes such as lecture preparation and organization, as well as applications such as reading list generation. Additionally, we experiment with neural graph-based networks and non-neural classifiers to learn these prerequisite relations from our dataset.
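The paper's classifiers are neural and graph-based; as a minimal illustration of the underlying idea of learning prerequisite direction from concept embeddings, here is a toy perceptron trained on embedding differences (the vectors and concept pairs are hand-made assumptions, not LectureBank data):

```python
# Toy concept embeddings (illustrative, not trained)
VECS = {
    "probability": [1.0, 0.0], "hmm": [1.0, 1.0],
    "linear-algebra": [0.0, 1.0], "word2vec": [0.5, 1.5],
}
# (prerequisite, dependent) training pairs
pairs = [("probability", "hmm"), ("linear-algebra", "word2vec")]

def diff(a, b):
    """Feature for the ordered pair (a, b): embedding difference b - a."""
    return [x - y for x, y in zip(VECS[b], VECS[a])]

# Perceptron: positive examples are true prerequisite pairs,
# negatives are the same pairs reversed.
w = [0.0, 0.0]
for _ in range(10):
    for a, b in pairs:
        for x, label in ((diff(a, b), 1), (diff(b, a), -1)):
            if label * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + label * xi for wi, xi in zip(w, x)]

def is_prereq(a, b):
    """Predict whether concept a is a prerequisite of concept b."""
    return sum(wi * xi for wi, xi in zip(w, diff(a, b))) > 0

print(is_prereq("probability", "hmm"))
```

The asymmetry matters: a prerequisite relation is directed, so the model must score (a, b) differently from (b, a), which the difference feature makes explicit.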


2020 ◽  
Vol 8 ◽  
Author(s):  
Majed Al-Jefri ◽  
Roger Evans ◽  
Joon Lee ◽  
Pietro Ghezzi

Objective: Many online and printed media publish health news of questionable trustworthiness, and it may be difficult for laypersons to determine the information quality of such articles. The purpose of this work was to propose a methodology for the automatic assessment of the quality of health-related news stories using natural language processing and machine learning.

Materials and Methods: We used a database from the website HealthNewsReview.org, which aims to improve the public dialogue about health care. HealthNewsReview.org developed a set of criteria for critically analyzing claims about health care interventions. In this work, we attempt to automate the evaluation process by identifying indicators of those criteria using natural language processing-based machine learning on a corpus of more than 1,300 news stories. We explored features ranging from simple n-grams to more advanced linguistic features and optimized the feature selection for each task. Additionally, we experimented with the pre-trained language model BERT.

Results: For some criteria, such as mention of costs, benefits, harms, and “disease-mongering,” the evaluation results were promising, with an F1 measure reaching 81.94%, while for others the results were less satisfactory due to the dataset size, the need for external knowledge, or the subjectivity of the evaluation process.

Conclusion: The criteria used here are more challenging than those addressed by previous work, and our aim was to investigate how much more difficult the machine learning task was, and how and why it varied between criteria. For some criteria the obtained results were promising; however, automated evaluation of the other criteria may not yet replace the manual evaluation process, in which human experts interpret the sense of the text and draw on external knowledge in their assessment.
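A minimal sketch of the simplest feature family mentioned, word n-grams, applied to a hypothetical cue list for the "mention of costs" criterion (the cue phrases are assumptions for illustration, not HealthNewsReview.org's actual indicators):

```python
def ngrams(text, n):
    """Word n-grams over a lowercased, whitespace-tokenized text."""
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

# Hypothetical cue n-grams for the "mention of costs" criterion
COST_CUES = {"per month", "out-of-pocket", "costs", "price"}

def mentions_costs(story):
    """Flag a story if any unigram or bigram matches a cost cue."""
    feats = set(ngrams(story, 1)) | set(ngrams(story, 2))
    return bool(feats & COST_CUES)

print(mentions_costs("the drug costs $100 per month"))   # True
print(mentions_costs("the trial enrolled 40 patients"))  # False
```

In the paper such n-gram features feed a trained classifier rather than a fixed cue list, which is what lets the harder, more context-dependent criteria be attempted at all.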


2021 ◽  
Author(s):  
Sena Chae ◽  
Jiyoun Song ◽  
Marietta Ojo ◽  
Maxim Topaz

The goal of this natural language processing (NLP) study was to identify patients in home healthcare with heart failure symptoms and poor self-management (SM). Preliminary lists of symptoms and poor SM status were identified, NLP algorithms were used to refine the lists, and NLP performance was evaluated using 2.3 million home healthcare clinical notes. The overall precision in identifying patients with heart failure symptoms and poor SM status was 0.86. The feasibility of the methods for identifying patients with heart failure symptoms and poor SM documented in home healthcare notes was demonstrated. This study facilitates the use of key symptom information and patients’ SM status from unstructured data in electronic health records. The results of this study can be applied to better individualize symptom management to support heart failure patients’ quality of life.
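The study's refined lists are not reproduced in the abstract; a minimal lexicon-matching sketch with a toy precision calculation (the symptom terms and notes are hypothetical) might look like:

```python
# Hypothetical heart failure symptom lexicon (not the study's actual list)
HF_SYMPTOMS = {"dyspnea", "orthopnea", "edema", "fatigue"}

def flag_note(note):
    """Flag a clinical note if it mentions any lexicon symptom."""
    words = set(note.lower().replace(",", " ").split())
    return bool(words & HF_SYMPTOMS)

# (note text, gold label) pairs for a toy evaluation
notes = [
    ("pt reports orthopnea and ankle edema", True),
    ("wound care performed, no complaints", False),
]
flags = [flag_note(n) for n, _ in notes]
# Precision = true positives / all notes flagged positive
tp = sum(f and y for (_, y), f in zip(notes, flags))
precision = tp / sum(flags)
print(precision)  # 1.0
```

At the scale of 2.3 million notes, precision is the natural first metric: every flagged note may trigger clinical follow-up, so false positives are costly.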


Author(s):  
Rahul Sharan Renu ◽  
Gregory Mocko

The objective of this research is to investigate the requirements and performance of parts-of-speech tagging of assembly work instructions. Natural Language Processing of assembly work instructions is required to perform data mining with the objective of knowledge reuse. Assembly work instructions are key process engineering elements that allow for predictable assembly quality of products and predictable assembly lead times. Authoring of assembly work instructions is a subjective process. It has been observed that most assembly work instructions are not grammatically complete sentences. It is hypothesized that this can lead to false parts-of-speech tagging (by Natural Language Processing tools). To test this hypothesis, two parts-of-speech taggers are used to tag 500 assembly work instructions (obtained from the automotive industry). The first parts-of-speech tagger is obtained from the Natural Language Toolkit (nltk.org) and the second is obtained from the Stanford Natural Language Processing Group (nlp.stanford.edu). For each of these taggers, two experiments are conducted. In the first experiment, the assembly work instructions are input to each tagger in raw form. In the second experiment, the assembly work instructions are preprocessed to make them grammatically complete, and then input to the tagger. It is found that the Stanford Natural Language Processing tagger with the preprocessed assembly work instructions produced the fewest false parts-of-speech tags.
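The preprocessing step, making terse instructions grammatically complete before tagging, might be sketched with a template like the following (the template is a hypothetical illustration; the paper's actual preprocessing rules are not given here):

```python
def preprocess(instruction):
    """Turn a terse imperative work instruction into a grammatically
    complete sentence, so a POS tagger sees the leading word as a verb
    rather than mistagging it as a noun. Template is an assumption."""
    text = instruction.strip().rstrip(".")
    return "You should " + text[0].lower() + text[1:] + "."

print(preprocess("Install bracket on frame"))
# You should install bracket on frame.
```

The motivation is that statistical taggers are trained on complete sentences, so a bare imperative like "Install bracket" is often mistagged ("Install" read as a noun); supplying an explicit subject and modal restores the sentence shape the tagger expects.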


2021 ◽  
Author(s):  
Anahita Davoudi ◽  
Hegler Tissot ◽  
Abigail Doucette ◽  
Peter E Gabriel ◽  
Ravi B. Parikh ◽  
...  

One core measure of healthcare quality set forth by the Institute of Medicine is whether care decisions match patient goals. High-quality "serious illness communication" about patient goals and prognosis is required to support patient-centered decision-making; however, current methods are not sensitive enough to measure the quality of this communication or to determine whether the care delivered matches patient priorities. Natural language processing offers an efficient method for the identification and evaluation of documented serious illness communication, which could serve as the basis for future quality metrics in oncology and other forms of serious illness. In this study, we trained NLP algorithms to identify and characterize serious illness communication with oncology patients.

