validation framework
Recently Published Documents

TOTAL DOCUMENTS: 207 (five years: 79)
H-INDEX: 17 (five years: 4)

Author(s):  
Amir Antonie ◽  
Andrew Mathus

The parallel-component setting constrains performance assessment and model construction: component functions should be observable without direct ties to the programming language, for example. Consequently, solutions that are composed interactively at program execution require reusable performance-monitoring interactions. Given these restrictions, this paper describes a quasi-coarse-grained Performance Evaluation (PE) approach. A performance framework for the application system can be aggregated from these data. To validate the evaluation and model-construction techniques included in the validation framework, simple components with well-known optimization models are employed.


2021 ◽  
Author(s):  
Teresa Ober ◽  
Cheng Liu ◽  
Matt Carter ◽  
Alison Cheng

We develop and present validity evidence for a new 12-item self-report measure of test anxiety, the Trait Test Anxiety Inventory - Short (TTAI-S), following Kane’s validation framework. Data were collected from three independent samples of college students in the U.S. (N=552; Mage=22.25 years). Scoring validity was evidenced by good internal consistency and a confirmed single-factor structure. Generalization validity was evidenced by scalar measurement invariance across samples (Internet vs. community) and subgroups (i.e., gender, race/ethnicity, and parental educational attainment). Extrapolation validity was evidenced by significant associations between the TTAI-S score and two theoretically relevant constructs (state test anxiety and self-efficacy). These findings support the psychometric integrity of the TTAI-S, which may be used to investigate trait test anxiety in a variety of contexts.
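The internal consistency cited as scoring-validity evidence above is commonly quantified with Cronbach's alpha. A minimal sketch of that computation (illustrative only, not the authors' code; the score matrix is a made-up example):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Perfectly correlated items yield alpha = 1, while independent items drive alpha toward 0, which is why a "good internal consistency" claim is typically backed by alpha well above 0.7.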


Epidemics ◽  
2021 ◽  
pp. 100514
Author(s):  
Sayan Dasgupta ◽  
Mia R. Moore ◽  
Dobromir T. Dimitrov ◽  
James P. Hughes

2021 ◽  
Vol 12 (5) ◽  
pp. 21-40
Author(s):  
Abdelsalam M. Maatuk ◽  
Sohil F. Alshareef ◽  
Tawfig M. Abdelaziz

Requirements engineering is a discipline of software engineering concerned with the identification and handling of user and system requirements. Aspect-Oriented Requirements Engineering (AORE) extends existing requirements engineering approaches to cope with the tangling and scattering that result from crosscutting concerns. Crosscutting concerns are considered potential aspects and can lead to the phenomenon known as the “tyranny of the dominant decomposition”. Requirements-level aspects are responsible for scattered and tangled descriptions of requirements in the requirements document. Validation of requirements artefacts is an essential task in software development: it ensures that requirements are correct and valid in terms of completeness and consistency, thereby reducing development and maintenance costs and supporting a reasonably accurate estimate of the project’s effort and completion time. In this paper, we present a validation framework for the aspectual requirements and the crosscutting relationships among concerns that result from the requirements engineering phase. The proposed framework comprises high-level and low-level validation applied to the software requirements specification (SRS). The high-level validation validates the concerns with stakeholders, whereas the low-level validation validates the aspectual requirements by requirements engineers and analysts using a checklist. The approach has been evaluated in an experimental study on two AORE approaches: a viewpoint-based approach (AORE with ArCaDe) and a lexical-analysis-based approach (Theme/Doc). The results obtained from the study demonstrate that the proposed framework is an effective validation model for AORE artefacts.
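The scattering and tangling that motivate AORE can be made concrete with a toy analysis over a concern-to-module mapping. This sketch is purely illustrative (the module and concern names are hypothetical, not from the paper): a concern appearing in several modules is "scattered" (a candidate aspect), and a module addressing several concerns is "tangled".

```python
from collections import defaultdict

# Hypothetical mapping: requirement module -> concerns it addresses
modules = {
    "login":    {"security", "logging"},
    "checkout": {"security", "logging", "persistence"},
    "catalog":  {"persistence", "search"},
}

def scattered_concerns(modules, threshold=2):
    """Concerns appearing in `threshold` or more modules (candidate aspects)."""
    count = defaultdict(int)
    for concerns in modules.values():
        for c in concerns:
            count[c] += 1
    return {c for c, n in count.items() if n >= threshold}

def tangled_modules(modules, threshold=2):
    """Modules addressing `threshold` or more concerns."""
    return {m for m, cs in modules.items() if len(cs) >= threshold}
```

In this toy mapping, "security", "logging", and "persistence" each cut across two modules, which is exactly the situation where requirements descriptions become scattered and tangled.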


Author(s):  
Silja Rohr-Mentele ◽  
Sarah Forster-Heinzer

Abstract: Competence development and measurement are of great interest to vocational education and training (VET). Although many instruments are available for measuring competence in diverse settings, in many cases the completed validation steps are neither documented nor made transparent in a comprehensible manner. Understanding what an instrument actually measures is extremely important, inter alia, for evaluating test results, conducting replication studies, and pursuing adaptations. Therefore, more thorough, high-quality validation studies are required. This paper presents an approach to facilitate validation studies, using the example of the simuLINCA test. The validation approach applied in this study was developed in the field of medicine; nevertheless, it provides a promising means of assessing the validity of (computer-based) instruments in VET. We present the approach in detail alongside a newly developed computer-based simulation (simuLINCA) that measures basic commercial knowledge and skills of apprentices in Switzerland. The strength of the presented approach is that it provides practical guidelines that structure the measurement process and increase transparency, while remaining flexible enough to accommodate different approaches to test development and validity. The approach proved practicable for VET and the measurement of occupational competence. After extending and slightly modifying it, a practical validation framework, including a description of each step and questions to support its application, is available for the VET context. The computer-based test instrument simuLINCA provides insights into how a computer-based test for measuring competence in various occupational fields can be developed and validated. SimuLINCA showed satisfactory evidence of being a valid measurement instrument; it could, however, be further developed, revised and extended.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Hung Tan Ha

Abstract: The Listening Vocabulary Levels Test (LVLT), created by McLean et al. (Language Teaching Research 19:741–760, 2015), filled an important gap in the field of second language assessment by introducing an instrument for the measurement of phonological vocabulary knowledge. However, few attempts have been made to provide further validity evidence for the LVLT, and no Vietnamese version of the test has been created to date. The present study describes the development and validation of the Vietnamese version of the LVLT. Data were collected from 311 Vietnamese university students and then analyzed under the Rasch model, following several aspects of Messick’s validation framework (Educational Measurement, 1989; American Psychologist 50:741–749, 1995). Supportive evidence for the test’s validity was provided. First, the test items showed very good fit to the Rasch model and presented a sufficient spread of difficulty. Second, the items displayed sound unidimensionality and were locally independent. Finally, the Vietnamese version of the LVLT showed a high degree of generalizability and was found to correlate positively with the IELTS listening test at 0.65.
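The Rasch-model fit reported above rests on the one-parameter logistic (1PL) response probability, from which item-fit statistics are built. A minimal sketch of the core quantities (illustrative only, not the study's analysis code):

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Rasch (1PL) probability that a person of ability `theta`
    answers an item of difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def std_residual(x: int, theta: float, b: float) -> float:
    """Standardized residual of an observed 0/1 response `x`;
    squared residuals are the building blocks of infit/outfit statistics."""
    p = rasch_p(theta, b)
    return (x - p) / math.sqrt(p * (1 - p))
```

When ability matches difficulty (theta = b) the expected probability is 0.5, and a "sufficient spread of difficulty" means the estimated b values cover the range of examinee abilities.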


2021 ◽  
Author(s):  
Chen Jiang ◽  
Zhen Hu ◽  
Yixuan Liu ◽  
Zissimos Mourelatos ◽  
David Gorsich ◽  
...  

Author(s):  
Andreea Bonea ◽  
Cristian Patachia-Sultanoiu ◽  
Marius Iordache ◽  
Ioan Constantin ◽  
Andrei Radulescu ◽  
...  