Content analytics for curriculum review: A learning analytics use case for exploration of learner context

Author(s):  
Leah P. Macfadyen

Curriculum analysis is a core component of curriculum renewal. Traditional approaches to curriculum analysis are manual, slow and subjective, but some studies have suggested that text analysis might usefully be employed for exploration of curriculum. This concise paper outlines a pilot use case of content analytics to support curriculum review and analysis. I have co-opted Quantext – a relatively user-friendly text analysis tool designed to help educators explore student writing – for analysis of the text content of the 17 courses in our online master’s program. Quantext computed descriptive metrics and readability indices for each course and identified top keywords and n-grams per course. Compilation and comparison of these revealed frequent curricular topics and networks of thematic relationships between courses, in ways that both individual educators and curriculum committees can interpret and use for decision-making. Future Quantext features will allow even more sophisticated identification of curricular gaps and redundancies.
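Quantext's internals are not described here, but the per-course compilation step the abstract outlines (descriptive metrics, top keywords, frequent n-grams) can be sketched with the Python standard library. The `STOPWORDS` list and the sample text below are illustrative assumptions, not Quantext's actual lexicon or output.

```python
from collections import Counter
import re

# Illustrative stopword list; a real tool would use a much larger one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def course_metrics(text, top_n=5):
    """Compute simple descriptive metrics, top keywords, and the top bigram
    for one course's text content."""
    words = re.findall(r"[a-z']+", text.lower())
    content = [w for w in words if w not in STOPWORDS]
    bigrams = Counter(zip(content, content[1:]))
    return {
        "word_count": len(words),
        "vocabulary": len(set(words)),
        "top_keywords": [w for w, _ in Counter(content).most_common(top_n)],
        "top_bigram": bigrams.most_common(1)[0][0] if bigrams else None,
    }

m = course_metrics("Learning analytics supports curriculum review. "
                   "Curriculum analytics reveals curriculum gaps.")
```

Running the same function over all 17 course texts and comparing the keyword lists would surface the shared topics and thematic overlaps the abstract describes.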

Crisis ◽  
2016 ◽  
Vol 37 (2) ◽  
pp. 140-147 ◽  
Author(s):  
Michael J. Egnoto ◽  
Darrin J. Griffin

Abstract. Background: Identifying precursors that will aid in the discovery of individuals who may harm themselves or others has long been a focus of scholarly research. Aim: This work set out to determine if it is possible to use the legacy tokens of active shooters and notes left from individuals who completed suicide to uncover signals that foreshadow their behavior. Method: A total of 25 suicide notes and 21 legacy tokens were compared with a sample of over 20,000 student writings for a preliminary computer-assisted text analysis to determine what differences can be coded with existing computer software to better identify students who may commit self-harm or harm to others. Results: The results support that text analysis techniques with the Linguistic Inquiry and Word Count (LIWC) tool are effective for identifying suicidal or homicidal writings as distinct from each other and from a variety of student writings in an automated fashion. Conclusion: Findings indicate support for automated identification of writings that were associated with harm to self, harm to others, and various other student writing products. This work begins to uncover the viability of larger-scale, low-cost methods for automatic detection of individuals suffering from harmful ideation.
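LIWC's lexicons are proprietary and the study's categories are not listed here, but the underlying mechanic (the proportion of a text's tokens falling in each word category) can be sketched as follows. The miniature category dictionaries are hypothetical stand-ins, not LIWC's.

```python
import re

# Hypothetical miniature category dictionaries; LIWC's real lexicons
# are proprietary and far larger.
CATEGORIES = {
    "negative_emotion": {"hurt", "pain", "hate", "afraid"},
    "first_person": {"i", "me", "my", "mine"},
}

def category_profile(text):
    """Return the fraction of tokens falling in each word category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = len(tokens) or 1
    return {name: sum(t in words for t in tokens) / total
            for name, words in CATEGORIES.items()}

profile = category_profile("I hate the pain; it will hurt me.")
```

Comparing such profiles across document classes (here, across writing types) is what allows automated discrimination between them.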


2002 ◽  
Vol 34 (1) ◽  
pp. 93-107 ◽  
Author(s):  
Eduardo Vidal-Abarca ◽  
Héctor Reyes ◽  
Ramiro Gilabert ◽  
Javier Calpe ◽  
Emilio Soria ◽  
...  

2021 ◽  
Author(s):  
Ram Isaac Orr ◽  
Michael Gilead

Attribution of mental states to self and others, i.e., mentalizing, is central to human life. Current measures lack the ability to directly gauge the extent to which individuals engage in spontaneous mentalizing. Focusing on natural language use as an expression of inner psychological processes, we developed the Mental-Physical Verb Norms (MPVN). These norms are participant-derived ratings of the extent to which common verbs reflect mental (as opposed to physical) activities and occurrences, covering ~80% of all verbs appearing within a given English text. Content validity was assessed against existing expert-compiled dictionaries of mental states and cognitive processes, as well as against normative ratings of verb concreteness. Criterion validity was assessed through natural text analysis of internet comments relating to mental health vs. physical health. Results showcase the unique contribution of the MPVN ratings as a measure of the degree to which individuals adopt the intentional stance in describing targets, by describing both self and others in mental, as opposed to physical, terms. We discuss potential uses for future research across various psychological and neurocognitive disciplines.
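Applying norms like the MPVN to a text reduces to averaging the ratings of the rated verbs it contains. The sketch below assumes a hypothetical excerpt of such norms on a 1 (clearly physical) to 5 (clearly mental) scale; the actual MPVN values are participant-derived and differ from these illustrative numbers.

```python
import re

# Hypothetical MPVN-style ratings: 1 = clearly physical, 5 = clearly mental.
# Illustrative values only; the real norms are participant-derived.
VERB_NORMS = {"think": 4.8, "believe": 4.6, "run": 1.3, "lift": 1.2, "decide": 4.4}

def mentalizing_score(text):
    """Mean mental-physical rating over the rated verbs found in the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    rated = [VERB_NORMS[t] for t in tokens if t in VERB_NORMS]
    return sum(rated) / len(rated) if rated else None

score = mentalizing_score("I believe she will decide to run.")
```

A real implementation would also need lemmatization ("believes" → "believe") so that inflected verb forms match the norm entries.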


Author(s):  
Gunnar Völkel ◽  
Simon Laban ◽  
Axel Fürstberger ◽  
Silke D Kühlwein ◽  
Nensi Ikonomi ◽  
...  

Abstract Motivation Cancer is a complex and heterogeneous disease involving multiple somatic mutations that accumulate during its progression. In the past years, the wide availability of genomic data from patients’ samples opened new perspectives in the analysis of gene mutations and alterations. Hence, visualizing and identifying genes mutated in massive sets of patients is nowadays a critical task that sheds light on more personalized intervention approaches. Results Here, we extensively review existing tools for visualization and analysis of alteration data. We compare different approaches to study mutual exclusivity and sample coverage in large-scale omics data. We complement our review with the standalone software AVAtar (‘analysis and visualization of alteration data’) that integrates diverse aspects known from different tools into a comprehensive platform. AVAtar supplements customizable alteration plots by a multi-objective evolutionary algorithm for subset identification and provides an innovative and user-friendly interface for the evaluation of concurrent solutions. A use case from personalized medicine demonstrates its unique features showing an application on vaccination target selection. Availability AVAtar is available at: https://github.com/sysbio-bioinf/avatar Contact [email protected], phone: +49 (0) 731 500 24 500, fax: +49 (0) 731 500 24 502
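The two quantities the review compares tools on, sample coverage and mutual exclusivity, are defined over a binary gene-by-sample alteration matrix. This is a minimal sketch of those definitions, not AVAtar's implementation; the toy matrix and the exclusivity definition (fraction of covered samples altered in exactly one gene of the set) are illustrative assumptions.

```python
# Toy binary alteration matrix: rows = genes, columns = patient samples.
# 1 means the gene is altered in that sample. Values are illustrative only.
ALTERATIONS = {
    "TP53": [1, 0, 0, 1, 0, 0],
    "KRAS": [0, 1, 0, 0, 0, 0],
    "EGFR": [0, 0, 1, 0, 0, 0],
}

def coverage_and_exclusivity(matrix):
    """Sample coverage (fraction of samples with >=1 alteration in the gene set)
    and exclusivity (fraction of covered samples altered in exactly one gene)."""
    n_samples = len(next(iter(matrix.values())))
    hits = [sum(row[j] for row in matrix.values()) for j in range(n_samples)]
    covered = sum(h >= 1 for h in hits)
    exclusive = sum(h == 1 for h in hits)
    return covered / n_samples, (exclusive / covered) if covered else 0.0

cov, excl = coverage_and_exclusivity(ALTERATIONS)
```

A subset-identification algorithm like the one AVAtar describes searches for gene sets that jointly maximize objectives of this kind.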


2002 ◽  
Vol 11 (03) ◽  
pp. 369-387 ◽  
Author(s):  
Petri Myllymäki ◽  
Tomi Silander ◽  
Henry Tirri ◽  
Pekka Uronen

B-Course is a free web-based online data analysis tool, which allows users to analyze their data for multivariate probabilistic dependencies. These dependencies are represented as Bayesian network models. In addition to this, B-Course also offers facilities for inferring certain types of causal dependencies from the data. The software uses a novel "tutorial style" user-friendly interface which intertwines the steps in the data analysis with support material that gives an informal introduction to the Bayesian approach adopted. Although the analysis methods, modeling assumptions and restrictions are totally transparent to the user, this transparency is not achieved at the expense of analysis power: with the restrictions stated in the support material, B-Course is a powerful analysis tool exploiting several theoretically elaborate results developed recently in the fields of Bayesian and causal modeling. B-Course can be used with most web browsers (even Lynx), and the facilities include features such as automatic missing data handling and discretization, a flexible graphical interface for probabilistic inference on the constructed Bayesian network models (for Java-enabled browsers), automatic pretty-printed layout for the networks, export of the models, and analysis of the importance of the derived dependencies. In this paper we discuss both the theoretical design principles underlying the B-Course tool, and the pragmatic methods adopted in the implementation of the software.
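B-Course scores full Bayesian network structures, which is beyond a short sketch, but the elementary building block of any such dependency analysis, measuring how strongly two discretized variables depend on each other, can be illustrated with empirical mutual information. This is a generic sketch, not B-Course's Bayesian scoring method.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete variables,
    estimated from paired observations."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A perfectly dependent pair vs. a partially dependent one.
a = [0, 0, 1, 1, 0, 1, 0, 1]
mi_dependent = mutual_information(a, a)  # equals H(a) = 1 bit for balanced binary data
mi_mixed = mutual_information(a, [0, 1, 0, 1, 0, 1, 0, 1])
```

Bayesian structure-learning tools go further by trading such dependency strength off against model complexity over all candidate networks.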


2016 ◽  
Vol 21 (1) ◽  
pp. 105-115 ◽  
Author(s):  
Michael Barlow

In this article, I provide a brief introduction to the operation and motivation behind the text analysis tool WordSkew. This program, currently available for Windows, is a variant of a typical concordance program. The distinguishing feature of the software is that it allows the user to specify the units of discourse and apposite ways of segmenting the discourse. The results of a search query are then given with respect to each segment. For example, sentences might be divided into ten segments (based on word counts) and the frequency of the search term is then provided for each segment. This process is repeated as required for other textual units.
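WordSkew itself is a Windows GUI, but the segment-counting idea the article describes (split each textual unit into n equal word-count segments, then tally a search term's frequency per segment position) can be sketched in a few lines of Python. The segment-boundary rule below is an illustrative assumption, not WordSkew's exact algorithm.

```python
def term_frequency_by_segment(sentences, term, n_segments=10):
    """Count occurrences of `term` in each of n equal word-count segments
    of each sentence, summed per segment position across sentences."""
    counts = [0] * n_segments
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            # Map word position to one of n_segments equal slices of the sentence.
            seg = min(i * n_segments // len(words), n_segments - 1)
            counts[seg] += w.strip(".,;") == term
    return counts

freq = term_frequency_by_segment(
    ["However the plan failed because however was repeated.",
     "However nobody noticed."],
    "however")
```

A concordancer built this way reveals positional preferences, e.g. whether a discourse marker like "however" clusters at sentence openings.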


2021 ◽  
Author(s):  
Russell J Jarvis ◽  
Patrick M. McGurrin ◽  
Rebecca Featherston ◽  
Marc Skov Madsen ◽  
Shivam Bansal ◽  
...  

Here we present a new text analysis tool that consists of a text analysis service and an author search service. These services were created by using or extending many existing Free and Open Source tools, including Streamlit, Requests, WordCloud, TextStat, and the Natural Language Toolkit. The tool has the capability to retrieve journal hosting links and journal article content from APIs and journal hosting websites. Together, these services allow the user to review the complexity of a scientist’s published work relative to other online text repositories. Rather than providing feedback as to the complexity of a single text as previous tools have done, the tool presented here shows the relative complexity across many texts from the same author, while also comparing the readability of the author’s body of work to a variety of other scientific and lay text types. The goal of this work is to apply a more data-driven approach that provides established academic authors with statistical insights into their body of published peer-reviewed work. By monitoring these readability metrics, scientists may be able to cater their writing to reach broader audiences, contributing to improved global communication and understanding of complex topics.
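The readability metrics the tool aggregates come from libraries like TextStat; the best-known of them, the Flesch reading ease score, can be sketched directly from its published formula. The vowel-group syllable counter below is a crude stand-in for TextStat's better heuristics, and the sample texts are illustrative.

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups (real tools use better rules)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

simple = flesch_reading_ease("The cat sat. The dog ran.")
dense = flesch_reading_ease("Multidimensional heterogeneity complicates "
                            "interpretability considerably.")
```

Computing the score per publication and plotting the distribution for one author against reference corpora is exactly the kind of comparison the tool performs.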


Author(s):  
Anna Azulai

Qualitative Inquiry and Research Design: Choosing Among Five Approaches (3rd ed.) is an informative, engaging and user-friendly book by J. W. Creswell (2012) that is focused on practical application of qualitative research methods in social inquiry. The author provided a useful comparison of the five types of qualitative inquiry (narrative, phenomenology, ethnography, grounded theory, and case study) and discussed foundational and methodological aspects of the five traditional approaches. Creswell also effectively demonstrated how the type of the approach of qualitative inquiry shaped the design or procedures of a study. This book could be particularly useful to novice researchers and graduate students who are new to qualitative research, as well as to educators teaching qualitative methods of inquiry.


2021 ◽  
Author(s):  
Stefan Buck ◽  
Lukas Pekarek ◽  
Neva Caliskan

Optical tweezers is a single-molecule technique that allows probing of intra- and intermolecular interactions that govern complex biological processes involving molecular motors, protein-nucleic acid interactions and protein/RNA folding. Recent developments in instrumentation eased and accelerated optical tweezers data acquisition, but analysis of the data remains challenging. Here, to enable high-throughput data analysis, we developed an automated Python-based analysis pipeline called POTATO (Practical Optical Tweezers Analysis TOol). POTATO automatically processes the high-frequency raw data generated by force-ramp experiments and identifies (un)folding events using predefined parameters. After segmentation of the force-distance trajectories at the identified (un)folding events, sections of the curve can be fitted independently to worm-like chain and freely-jointed chain models, and the work applied on the molecule can be calculated by numerical integration. Furthermore, the tool allows plotting of constant force data and fitting of the Gaussian distance distribution over time. All these features are wrapped in a user-friendly graphical interface (https://github.com/REMI-HIRI/POTATO), which allows researchers without programming knowledge to perform sophisticated data analysis.
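The "work applied on the molecule" step reduces to numerically integrating force over distance along a trajectory segment. This is a minimal trapezoidal-rule sketch of that calculation, not POTATO's code; the toy linear force ramp is an illustrative assumption.

```python
def work_from_force_distance(distances_nm, forces_pN):
    """Work applied along a force-distance trajectory via the trapezoidal rule.
    Returns work in pN*nm (1 pN*nm = 1e-21 J)."""
    work = 0.0
    for i in range(1, len(distances_nm)):
        dx = distances_nm[i] - distances_nm[i - 1]
        work += 0.5 * (forces_pN[i] + forces_pN[i - 1]) * dx
    return work

# Toy trajectory: force rising linearly from 0 to 10 pN over 5 nm,
# so the exact work is the triangle area 0.5 * 5 * 10 = 25 pN*nm.
d = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
f = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
w = work_from_force_distance(d, f)
```

In practice the same integral would be evaluated between the segment boundaries identified at each (un)folding event.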

