A Diagnostic Framework for the Empirical Evaluation of Learning Maps

2022 ◽  
Vol 6 ◽  
Author(s):  
W. Jake Thompson ◽  
Brooke Nash

Learning progressions and learning map structures are increasingly being used as the basis for the design of large-scale assessments. Of critical importance to these designs is the validity of the map structure used to build the assessments. Most commonly, evidence for the validity of a map structure comes from procedural evidence gathered during the learning map creation process (e.g., research literature, external reviews). However, it is also important to support the validity of the map structure with empirical evidence, using data gathered from the assessment. In this paper, we propose a framework for the empirical validation of learning maps and progressions using diagnostic classification models. Three methods are proposed within this framework, differing in the strength of their model assumptions and the types of inferences they support. The framework is then applied to the Dynamic Learning Maps® alternate assessment system to illustrate the utility and limitations of each method. Results show that each of the proposed methods has some limitations, but together they provide complementary information for evaluating the proposed structure of content standards (Essential Elements) in the Dynamic Learning Maps assessment.
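As a minimal illustration of the modeling family such a framework draws on, the sketch below evaluates the DINA item response function for a hypothetical Q-matrix and attribute profile; the items, attributes, and slip/guess values are invented for illustration and are not drawn from the DLM assessment.

```python
import numpy as np

# Hypothetical Q-matrix: rows = items, columns = attributes (e.g., linkage levels or skills).
Q = np.array([
    [1, 0],   # item 1 measures attribute 1 only
    [0, 1],   # item 2 measures attribute 2 only
    [1, 1],   # item 3 requires both attributes
])

# Illustrative slip and guess parameters per item (assumed values).
slip = np.array([0.10, 0.15, 0.20])
guess = np.array([0.20, 0.25, 0.10])

def dina_prob(alpha):
    """P(correct) for each item under the DINA model, given an attribute profile alpha."""
    eta = np.all(Q <= alpha, axis=1)   # 1 if the profile has every attribute the item requires
    return np.where(eta, 1 - slip, guess)

# Probability of a correct response for a student who has mastered only attribute 1.
print(dina_prob(np.array([1, 0])))     # -> [0.9, 0.25, 0.1]
```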

2020 ◽  
Author(s):  
Albert A Gayle

Year-to-year emergence of West Nile virus has been sporadic and notoriously hard to predict. In Europe, 2018 saw a dramatic increase in the number of cases and locations affected. In this work, we demonstrate a novel method for predicting outbreaks and understanding what drives them. This method creates a simple model for each region that directly explains how each variable affects risk. Behind the scenes, each local explanation model is produced by a state-of-the-art AI engine that unpacks and restructures output from an XGBoost machine learning ensemble. XGBoost is well known for its predictive accuracy but has long been considered a "black box"; our approach opens that box. With only minimal data curation and no "tuning", our model predicted where the 2018 outbreak would occur with an AUC of 0.97. The model was trained using data from 2010-2016 spanning many domains of knowledge: climate, sociodemographic, economic, and biodiversity data were all included. Our model furthermore explained the specific drivers of the 2018 outbreak for each affected region. These effect predictions were consistent with the research literature in terms of priority, direction, and magnitude of effect. Aggregation and statistical analysis of local effects revealed strong cross-scale interactions. From this, we concluded that the 2018 outbreak was driven by large-scale climatic anomalies enhancing the local effect of mosquito vectors. We also identified substantial areas across Europe at risk of a sudden outbreak similar to that experienced in 2018. Taken as a whole, these findings highlight the role of climate in the emergence and transmission of West Nile virus and demonstrate the crucial role that the emerging "eXplainable AI" (XAI) paradigm will play in predicting and controlling disease.
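A minimal sketch of the general pattern described above: an XGBoost ensemble whose predictions are unpacked into per-region, per-variable contributions. SHAP's TreeExplainer stands in here for the paper's XAI engine, and the feature names and data are placeholders rather than the study's actual covariates.

```python
import numpy as np
import pandas as pd
import xgboost as xgb
import shap  # local-explanation library used as a stand-in for the paper's engine

# Placeholder region-level features (climate, sociodemographic, etc.) and outbreak labels.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)),
                 columns=["temp_anomaly", "precipitation", "population_density", "bird_diversity"])
y = (X["temp_anomaly"] + 0.5 * X["bird_diversity"] + rng.normal(scale=0.5, size=500) > 1).astype(int)

# Gradient-boosted tree ensemble predicting outbreak risk per region.
model = xgb.XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X, y)

# SHAP values give one additive explanation per region: how each variable pushed risk up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values[0])   # per-feature contributions for the first region
```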


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Frank Goldhammer ◽  
Carolin Hahnel ◽  
Ulf Kroehne ◽  
Fabian Zehner

International large-scale assessments such as PISA or PIAAC have started to provide public or scientific use files for log data; that is, events, event-related attributes and timestamps of test-takers' interactions with the assessment system. Log data and the process indicators derived from it can be used for many purposes. However, the intended uses and interpretations of process indicators require validation, which here means a theoretical and/or empirical justification that inferences about (latent) attributes of the test-taker's work process are valid. This article reviews and synthesizes measurement concepts from various areas, including the standard assessment paradigm, the continuous assessment approach, the evidence-centered design (ECD) framework, and test validation. Based on this synthesis, we address the questions of how to ensure the valid interpretation of process indicators by means of an evidence-centered design of the task situation, and how to empirically challenge the intended interpretation of process indicators by developing and implementing correlational and/or experimental validation strategies. For this purpose, we explicate the process of reasoning from log data to low-level features and process indicators as the outcome of evidence identification. In this process, contextualizing information from log data is essential in order to reduce interpretative ambiguities regarding the derived process indicators. Finally, we show that empirical validation strategies can be adapted from classical approaches investigating the nomothetic span and construct representation. Two worked examples illustrate possible validation strategies for the design phase of measurements and their empirical evaluation.
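As a toy illustration of evidence identification, the sketch below derives two process indicators (time on task and response latency) from hypothetical log events for a single item; the event names and timestamps are invented and do not reflect any specific assessment system's log format.

```python
from datetime import datetime

# Hypothetical log events for one test-taker on one item: (timestamp, event type).
log = [
    ("2023-05-01T10:00:00", "itemStart"),
    ("2023-05-01T10:00:12", "textSelected"),     # contextualizing event: the stimulus was engaged with
    ("2023-05-01T10:01:40", "responseChanged"),
    ("2023-05-01T10:02:05", "itemEnd"),
]

def parse(ts):
    return datetime.fromisoformat(ts)

# Low-level features extracted from the raw events.
start = parse(log[0][0])
end = parse(log[-1][0])
first_action = parse(next(t for t, e in log if e != "itemStart"))

# Process indicators derived from those low-level features.
time_on_task = (end - start).total_seconds()                 # total engagement with the item
response_latency = (first_action - start).total_seconds()    # time before the first interaction

print(time_on_task, response_latency)   # 125.0 12.0
```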


2018 ◽  
Vol 9 (04) ◽  
pp. 20318-20344
Author(s):  
Dr. Felicia Sawyer ◽  
Dr. Bobbie Little ◽  
Dr. Darlene Cantey ◽  
Principal Lionel Martin

The purpose of this study is to analyze student progress after frequent use of a computerized reading program that provides phonics instruction and gives students independent practice in basic reading skills. Further, the study observes and analyzes the correlation between student progress in Lexia and progress report grades, report card grades, attendance, office referrals for poor behavior, Fountas and Pinnell Benchmark Assessment System (BAS) scores, Kindergarten Readiness Assessment (KRA) language and social scores, and Reading Inventory (RI) scores.
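A minimal sketch of the correlational analysis this design implies, pairing Lexia progress with two of the outcome measures named above; the column names and values are hypothetical placeholders, not study data.

```python
import pandas as pd
from scipy import stats

# Hypothetical student-level data; columns are placeholders for the measures named above.
df = pd.DataFrame({
    "lexia_units_gained": [12, 30, 22, 8, 27, 15, 34, 19],
    "report_card_grade":  [70, 92, 85, 65, 88, 74, 95, 80],
    "attendance_rate":    [0.91, 0.98, 0.95, 0.88, 0.97, 0.90, 0.99, 0.93],
})

# Pearson correlation of Lexia progress with each outcome measure.
for col in ["report_card_grade", "attendance_rate"]:
    r, p = stats.pearsonr(df["lexia_units_gained"], df[col])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")
```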


2020 ◽  
Author(s):  
Amy K. Clark ◽  
Meagan Karvonen

Alternate assessments based on alternate achievement standards (AA-AAS) have historically lacked broad validity evidence and an overall evaluation of the extent to which evidence supports intended uses of results. An expanding body of validation literature, the funding of two AA-AAS consortia, and advances in computer-based assessment have supported improvements in AA-AAS validation. This paper describes the validation approach used with the Dynamic Learning Maps® alternate assessment system, including development of the theory of action, claims, and interpretive argument; examples of evidence collected; and evaluation of the evidence in light of the maturity of the assessment system. We focus on claims and sources of evidence unique to AA-AAS in general and to the Dynamic Learning Maps system design in particular. We synthesize the evidence to evaluate the degree to which it supports the intended uses of assessment results for the targeted population. Considerations are presented for subsequent data collection efforts.


NASPA Journal ◽  
1998 ◽  
Vol 35 (4) ◽  
Author(s):  
Jackie Clark ◽  
Joan Hirt

The creation of small communities has been proposed as a way of enhancing the educational experience of students at large institutions. Using data from a survey of students living in large and small residences at a public research university, this study does not support the common assumption that small-scale social environments are more conducive to positive community life than large-scale social environments.


2021 ◽  
pp. 095679762097751
Author(s):  
Li Zhao ◽  
Jiaxin Zheng ◽  
Haiying Mao ◽  
Xinyi Yu ◽  
Jiacheng Ye ◽  
...  

Morality-based interventions designed to promote academic integrity are being used by educational institutions around the world. Although many such approaches have a strong theoretical foundation and are supported by laboratory-based evidence, they often have not been subjected to rigorous empirical evaluation in real-world contexts. In a naturalistic field study (N = 296), we evaluated a recent research-inspired classroom innovation in which students are told, just prior to taking an unproctored exam, that they are trusted to act with integrity. Four university classes were assigned to a proctored exam or one of three types of unproctored exam. Students who took unproctored exams cheated significantly more, which suggests that it may be premature to implement this approach in college classrooms. These findings point to the importance of conducting ecologically valid and well-controlled field studies that translate psychological theory into practice when introducing large-scale educational reforms.
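A minimal sketch of the kind of group comparison such a design supports, testing whether cheating counts differ across the four exam conditions with a chi-square test; the counts below are invented, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of students flagged for cheating vs. not, by exam condition.
#                   cheated  did_not_cheat
table = np.array([
    [ 2, 72],   # proctored
    [14, 58],   # unproctored, variant A
    [16, 60],   # unproctored, variant B
    [18, 56],   # unproctored, variant C
])

# Test of independence between exam condition and cheating.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```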


Author(s):  
Paul Oehlmann ◽  
Paul Osswald ◽  
Juan Camilo Blanco ◽  
Martin Friedrich ◽  
Dominik Rietzel ◽  
...  

With industries pushing toward digitalized production and adapting to the expectations and increasing requirements of modern applications, additive manufacturing (AM) has moved to the forefront of Industry 4.0. In fact, AM is a main accelerator of digital production, with its possibilities in structural design (such as topology optimization), production flexibility, customization, and product development, to name a few. Fused Filament Fabrication (FFF) is a widespread and practical tool for rapid prototyping that also demonstrates the importance of AM technologies through its accessibility to the general public via cost-effective desktop solutions. An increasing integration of systems in an intelligent production environment also enables the generation of large-scale data to be used for process monitoring and process control. Deep learning, a form of artificial intelligence (AI) and more specifically a method of machine learning (ML), is ideal for handling big data. This study uses a trained artificial neural network (ANN) model as a digital shadow to predict the force within the nozzle of an FFF printer using filament speed and nozzle temperature as input data. After the ANN model was tested using data from a theoretical model, it was implemented to predict the behavior using real-time printer data. For this purpose, an FFF printer was equipped with sensors that collect real-time printer data during the printing process. The ANN model reflected the kinematics of melting and flow predicted by currently available models for various printing speeds. The model allows for a deeper understanding of the influencing process parameters, which ultimately results in the determination of the optimum combination of process speed and print quality.
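A minimal sketch of the digital-shadow idea: a small feed-forward network mapping filament speed and nozzle temperature to a predicted nozzle force. The synthetic force relation, network size, and library choice (scikit-learn) are illustrative assumptions, not the study's trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic training data standing in for the theoretical melt-flow model:
# inputs are filament speed (mm/s) and nozzle temperature (deg C), target is nozzle force (N).
rng = np.random.default_rng(42)
speed = rng.uniform(1, 10, 1000)
temp = rng.uniform(190, 250, 1000)
force = 0.8 * speed - 0.02 * (temp - 190) * speed + rng.normal(scale=0.1, size=1000)

# Small feed-forward ANN trained on the synthetic data.
X = np.column_stack([speed, temp])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(X, force)

# Predict the nozzle force for a new (speed, temperature) operating point.
print(model.predict([[5.0, 220.0]]))
```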


2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second- and third-generation whole-genome sequencing technologies. However, genotyping these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to specific types of variants and are generally prone to various errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, uses changes in k-mer counts to predict the genotype of structural variants. We show that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants but also comparable in accuracy to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
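A toy illustration of the underlying idea, genotyping from k-mer counts without read mapping; the signature k-mers, coverage model, and thresholds are simplified assumptions, and Nebula's actual implementation lives in the linked repository.

```python
from collections import Counter

def kmers(seq, k=5):
    """All overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy example: k-mers unique to the ALT allele of a structural variant.
alt_signature = {"GATTA", "ATTAC", "TTACA"}

# Count k-mers directly from a newly sequenced sample's reads (no read mapping required).
reads = ["CCGATTACAGG", "TTGATTACATT", "AAACCCGGGTT"]
counts = Counter(km for read in reads for km in kmers(read))

# Mean depth over the signature k-mers, relative to an assumed haploid coverage, suggests the genotype.
signature_depth = sum(counts[km] for km in alt_signature) / len(alt_signature)
haploid_coverage = 1.0   # assumed for this toy example
ratio = signature_depth / (2 * haploid_coverage)
genotype = "0/0" if ratio < 0.25 else ("0/1" if ratio < 0.75 else "1/1")
print(signature_depth, genotype)   # 2.0 1/1
```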


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Bohan Liu ◽  
Pan Liu ◽  
Lutao Dai ◽  
Yanlin Yang ◽  
Peng Xie ◽  
...  

The Coronavirus Disease 2019 (COVID-19) pandemic is causing enormous loss of life globally. Prompt case identification is critical. The reference method is the real-time reverse transcription PCR (RT-PCR) assay, whose limitations may curb its prompt large-scale application. COVID-19 manifests with chest computed tomography (CT) abnormalities, some even before the onset of symptoms. We tested the hypothesis that applying deep learning (DL) to 3D CT images could help identify COVID-19 infections. Using data from 920 COVID-19 and 1,073 non-COVID-19 pneumonia patients, we developed a modified DenseNet-264 model, COVIDNet, to classify CT images into either class. When tested on an independent set of 233 COVID-19 and 289 non-COVID-19 pneumonia patients, COVIDNet achieved an accuracy of 94.3% and an area under the curve of 0.98. As of March 23, 2020, the COVIDNet system had been used 11,966 times, with a sensitivity of 91.12% and a specificity of 88.50%, in six hospitals with PCR confirmation. Applying DL to CT images may improve both the efficiency and the capacity of case detection and long-term surveillance.
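A minimal sketch of a DenseNet backbone with a two-class head in the spirit of COVIDNet; torchvision does not ship DenseNet-264, so DenseNet-201 is used as a stand-in, and the input is a placeholder batch of 2D slices rather than the study's 3D CT volumes.

```python
import torch
import torch.nn as nn
from torchvision import models

# DenseNet backbone with a two-class head (COVID-19 vs. other pneumonia).
# torchvision provides densenet201; the paper's modified DenseNet-264 is not available off the shelf.
model = models.densenet201(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 2)

# Placeholder batch of CT slices (batch, channels, height, width); real input would be 3D CT data.
x = torch.randn(4, 3, 224, 224)
logits = model(x)
probs = torch.softmax(logits, dim=1)
print(probs.shape)   # torch.Size([4, 2])
```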


Cancers ◽  
2021 ◽  
Vol 13 (13) ◽  
pp. 3247
Author(s):  
Petar Brlek ◽  
Anja Kafka ◽  
Anja Bukovac ◽  
Nives Pećina-Šlaus

Diffuse gliomas are a heterogeneous group of tumors with aggressive biological behavior and a lack of effective treatment methods. Despite new molecular findings, the differences between pathohistological types still require better understanding. In this in silico analysis, we investigated AKT1, AKT2, AKT3, CHUK, GSK3β, EGFR, PTEN, and PIK3AP1 as participants of EGFR-PI3K-AKT-mTOR signaling using data from the publicly available cBioPortal platform. Integrative large-scale analyses investigated changes in copy number aberrations (CNA), methylation, mRNA transcription and protein expression within 751 samples of diffuse astrocytomas, anaplastic astrocytomas and glioblastomas. The study showed a significant percentage of CNA in PTEN (76%), PIK3AP1 and CHUK (75% each), EGFR (74%), AKT2 (39%), AKT1 (32%), AKT3 (19%) and GSK3β (18%) in the total sample. Comprehensive statistical analyses show how genomics and epigenomics affect the expression of the examined genes differently across pathohistological types and grades, suggesting that AKT3, CHUK and PTEN behave like tumor suppressors, while AKT1, AKT2, EGFR, and PIK3AP1 show oncogenic behavior and are involved in enhanced activity of the EGFR-PI3K-AKT-mTOR signaling pathway. Our findings contribute to the knowledge of the molecular differences between pathohistological types and ultimately offer the possibility of new treatment targets and personalized therapies in patients with diffuse gliomas.
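A minimal sketch of the kind of per-gene CNA summary reported above, computed from a small GISTIC-style call table; the sample values and layout are placeholders standing in for data retrieved from cBioPortal.

```python
import pandas as pd

# Hypothetical GISTIC-style copy number calls: -2 deep deletion, -1 shallow deletion,
# 0 diploid, 1 gain, 2 amplification (rows = genes, columns = samples).
cna = pd.DataFrame(
    {"sample_1": [2, 0, -2], "sample_2": [0, 1, -1], "sample_3": [2, 0, 0]},
    index=["EGFR", "AKT1", "PTEN"],
)

# Percentage of samples with any copy number aberration (non-zero call) per gene.
pct_altered = (cna != 0).sum(axis=1) / cna.shape[1] * 100
print(pct_altered.round(1))
```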

