Empirical Evaluation: Recently Published Documents

Total documents: 3646 (five years: 885); H-index: 95 (five years: 10)

2022, Vol 22 (1), pp. 1-46
Author(s): Sarah Heckman, Jeffrey C. Carver, Mark Sherriff, Ahmed Al-zubidy

Context. Computing Education Research (CER) is critical to help the computing education community and policy makers support the increasing population of students who need to learn computing skills for future careers. For a community to systematically advance knowledge about a topic, the members must be able to understand published work thoroughly enough to perform replications, conduct meta-analyses, and build theories. There is a need to understand whether published research allows the CER community to systematically advance knowledge and build theories. Objectives. The goal of this study is to characterize the reporting of empiricism in Computing Education Research literature by identifying whether publications include content necessary for researchers to perform replications, meta-analyses, and theory building. We answer three research questions related to this goal: (RQ1) What percentage of papers in CER venues have some form of empirical evaluation? (RQ2) Of the papers that have empirical evaluation, what are the characteristics of the empirical evaluation? (RQ3) Of the papers that have empirical evaluation, do they follow norms (both for inclusion and for labeling of information needed for replication, meta-analysis, and, eventually, theory-building) for reporting empirical work? Methods. We conducted a systematic literature review of the 2014 and 2015 proceedings or issues of five CER venues: Technical Symposium on Computer Science Education (SIGCSE TS), International Symposium on Computing Education Research (ICER), Conference on Innovation and Technology in Computer Science Education (ITiCSE), ACM Transactions on Computing Education (TOCE), and Computer Science Education (CSE). We developed and applied the CER Empiricism Assessment Rubric to the 427 papers accepted and published at these venues over 2014 and 2015. Two people evaluated each paper using the Base Rubric for characterizing the paper. 
A single reviewer applied the other rubrics to characterize the norms of reporting, as appropriate for the paper type. Any discrepancies or questions were resolved through discussion among multiple reviewers. Results. We found that over 80% of papers accepted across all five venues had some form of empirical evaluation. Quantitative evaluation methods were the most frequently reported. Papers most frequently reported results on interventions around pedagogical techniques, curriculum, community, or tools. Papers were roughly evenly split on whether they compared an intervention against another dataset or baseline. Most papers reported related work, following the expectations for doing so in the SIGCSE and CER community. However, many papers lacked properly reported research objectives, goals, research questions, or hypotheses; descriptions of participants; study design; data collection; and threats to validity. These results align with prior surveys of the CER literature. Conclusions. CER authors are contributing empirical results to the literature; however, not all norms for reporting are met. We encourage authors to provide clear, labeled details about their work so readers can use the study methodologies and results for replications and meta-analyses. As our community grows, our reporting of CER should mature to help establish computing education theory to support the next generation of computing learners.
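Where two reviewers independently rate each paper, inter-rater agreement can be quantified before discrepancies are resolved. As a minimal illustration (the abstract does not state which agreement statistic, if any, the authors computed), Cohen's kappa corrects raw agreement for chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # expected agreement if raters labeled independently at their base rates
    p_expected = sum(ca[k] / n * cb[k] / n for k in ca)
    return (p_observed - p_expected) / (1 - p_expected)

# e.g. two hypothetical reviewers labeling 10 papers as empirical (1) or not (0)
a = [1, 1, 0, 0, 1, 1, 0, 1, 0, 0]
b = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
print(cohens_kappa(a, b))  # ≈ 0.6
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than independent labeling.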


2022, Vol 25 (1), pp. 1-36
Author(s): Savvas Savvides, Seema Kumar, Julian James Stephen, Patrick Eugster

With the advent of the Internet of Things (IoT), billions of devices are expected to continuously collect and process sensitive data (e.g., location, personal health factors). Due to the limited computational capacity available on IoT devices, the current de facto model for building IoT applications is to send the gathered data to the cloud for computation. While building private cloud infrastructures for handling large amounts of data streams can be expensive, using low-cost public (untrusted) cloud infrastructures for processing continuous queries over sensitive data raises strong concerns about data confidentiality. This article presents C3PO, a confidentiality-preserving continuous query processing engine that leverages the public cloud. The key idea is to intelligently utilize partially homomorphic and property-preserving encryption to perform as many computationally intensive operations as possible, without revealing plaintext, in the untrusted cloud. C3PO provides simple abstractions that hide from the developer the complexity of applying cryptographic primitives, reasoning about their performance, deciding which computations can be executed in an untrusted tier, and optimizing cloud resource usage. An empirical evaluation with several benchmarks and case studies shows the feasibility of our approach. We consider classes of IoT devices that differ in their computational and memory resources (from a Raspberry Pi 3 to a very small device with a Cortex-M3 microprocessor) and, through the use of optimizations, demonstrate the feasibility of using partially homomorphic and property-preserving encryption on IoT devices.
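The core idea of computing on ciphertexts in an untrusted tier can be illustrated with a toy additively homomorphic scheme. The sketch below is textbook Paillier with insecure demo parameters; C3PO's actual primitives, key sizes, and abstractions are not specified in this summary:

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic, so an untrusted
# tier can combine ciphertexts without ever seeing plaintext.
# Tiny primes for readability only -- NOT secure, and not C3PO's parameters.
p, q = 10007, 10009
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because L(g^lam mod n^2) = lam mod n when g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    l = (pow(c, lam, n2) - 1) // n  # the "L" function, L(u) = (u - 1) / n
    return (l * mu) % n

# e.g. the cloud sums two encrypted sensor readings by multiplying ciphertexts
c_sum = (encrypt(12) * encrypt(30)) % n2
print(decrypt(c_sum))  # → 42
```

Multiplying two Paillier ciphertexts yields an encryption of the sum of the plaintexts, which is exactly the kind of computationally useful operation that can be delegated to an untrusted cloud.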


Author(s): Ali Bou Nassif, Abdollah Masoud Darya, Ashraf Elnagar

This work presents a detailed comparison of the performance of deep learning models, such as convolutional neural networks, long short-term memory networks, gated recurrent units, and their hybrids, with a selection of shallow learning classifiers for sentiment analysis of Arabic reviews. The comparison also includes state-of-the-art models such as the transformer architecture and the araBERT pre-trained model. The datasets used in this study are multi-dialect Arabic hotel and book review datasets, which are among the largest publicly available datasets for Arabic reviews. Results showed that deep learning outperformed shallow learning for both binary and multi-label classification, in contrast with results of similar work reported in the literature. We attribute this discrepancy to dataset size, which we found to be proportional to the performance of deep learning models. The performance of deep and shallow learning techniques was analyzed in terms of accuracy and F1 score. The best-performing shallow learning technique was Random Forest, followed by Decision Tree and AdaBoost. The deep learning models performed similarly using a default embedding layer, while the transformer model performed best when augmented with araBERT.
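Accuracy and F1 score, the two metrics used in the comparison, are computed from confusion-matrix counts. A minimal sketch for the binary case, with toy labels rather than the paper's data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy and F1 for binary labels (1 = positive sentiment)."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

acc, f1 = binary_metrics([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
```

F1, the harmonic mean of precision and recall, is less forgiving than accuracy when the review classes are imbalanced, which is why both metrics are typically reported together.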


2022, Vol 12 (2), pp. 858
Author(s): Kentaro Imai, Takashi Hashimoto, Yuta Mitobe, Tatsuo Masuta, Narumi Takahashi, ...

Tsunami-related fires may break out in the inundation area during a major tsunami disaster, and woody debris produced by the tsunami can cause the fires to spread. To establish a practical method for predicting tsunami-related fires, we previously developed a method for evaluating the distribution of tsunami debris thickness that uses tsunami computation results and static parameters for tsunami numerical analysis, and with it successfully reproduced the observed trend of tsunami debris accumulation. We then developed an empirical building fragility function that relates the production of debris not only to inundation depth but also to the topographic gradient and the proportion of robust buildings. Using these empirical evaluation models, along with conventional tsunami numerical analysis data, we carried out a practical tsunami debris prediction for Owase City, Mie Prefecture, a potential disaster area for a Nankai Trough mega-earthquake. This prediction method can reveal hazards that go undetected by conventional tsunami inundation analysis. These results indicate that characterizing the hazard of a huge tsunami by inundation area and inundation depth alone is insufficient; in practice, the hazard must also be predicted based on the effect of tsunami debris.
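Empirical building fragility functions are often expressed as a lognormal cumulative distribution of an intensity measure such as inundation depth. The sketch below is purely illustrative: the paper's function additionally conditions on topographic gradient and the proportion of robust buildings, and its fitted parameters are not given in this summary.

```python
import math

def damage_probability(depth_m, median=2.0, beta=0.6):
    """Lognormal-CDF fragility curve: P(building produces debris | inundation depth).

    median (depth at 50% damage probability, in metres) and beta (lognormal
    standard deviation) are hypothetical values, not the paper's fit.
    """
    z = math.log(depth_m / median) / (beta * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))
```

By construction the curve is monotone in depth and equals 0.5 exactly at the median depth, which is how fragility parameters are usually interpreted.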


2022, Vol 6
Author(s): W. Jake Thompson, Brooke Nash

Learning progressions and learning map structures are increasingly being used as the basis for the design of large-scale assessments. Of critical importance to these designs is the validity of the map structure used to build the assessments. Most commonly, evidence for the validity of a map structure comes from procedural evidence gathered during the learning map creation process (e.g., research literature, external reviews). However, it is also important to support the validity of the map structure with empirical evidence, using data gathered from the assessment. In this paper, we propose a framework for the empirical validation of learning maps and progressions using diagnostic classification models. Three methods are proposed within this framework that differ in their model assumptions and the types of inferences they support. The framework is then applied to the Dynamic Learning Maps® alternate assessment system to illustrate the utility and limitations of each method. Results show that each of the proposed methods has limitations, but together they provide complementary information for evaluating the proposed structure of content standards (Essential Elements) in the Dynamic Learning Maps assessment.
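Diagnostic classification models infer a profile of mastered attributes (such as nodes in a learning map) from item responses via a Q-matrix linking items to attributes. A minimal DINA-style sketch, with made-up items, attributes, and slip/guess rates rather than the models or data actually used in the paper:

```python
from itertools import product

# Q-matrix for three hypothetical items over two map attributes:
# a 1 means the item requires mastery of that attribute.
Q = [(1, 0), (0, 1), (1, 1)]
SLIP, GUESS = 0.1, 0.2  # illustrative slip and guess rates

def has_prerequisites(profile, item_q):
    """True if the examinee's profile masters every attribute the item needs."""
    return all(a >= req for a, req in zip(profile, item_q))

def likelihood(profile, responses):
    p = 1.0
    for item_q, correct in zip(Q, responses):
        p_correct = 1 - SLIP if has_prerequisites(profile, item_q) else GUESS
        p *= p_correct if correct else 1 - p_correct
    return p

def classify(responses):
    """Maximum-likelihood attribute profile given the item responses."""
    return max(product((0, 1), repeat=2),
               key=lambda prof: likelihood(prof, responses))

profile = classify([1, 1, 1])  # all three items answered correctly
```

Comparing the fit of classifications like these under different assumed map structures is the kind of empirical evidence the proposed framework uses to evaluate a learning map.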


2022, pp. 1635-1651
Author(s): Abhishek Pandey, Soumya Banerjee

Software testing is essential for delivering error-free software, and it is widely reported to account for at least 50% of total development cost. It is therefore necessary to automate and optimize testing processes. Search-based software engineering is a discipline focused on the automation and optimization of various software engineering processes, including software testing. In this article, a novel hybrid of the firefly and genetic algorithms is applied to test data generation and selection in a regression testing environment. The proposed approach is assessed through a case study and an empirical evaluation. Results show that the hybrid approach performs well on the parameters selected in the experiments.
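As a rough illustration of the kind of hybrid the article describes, the sketch below evolves test-suite selections with a firefly-style attraction step followed by GA crossover and mutation. The encoding, the fitness function (coverage minus a cost penalty), and every parameter here are invented for illustration, not taken from the paper:

```python
import random

random.seed(1)
N_TESTS, N_REQS = 12, 20
# toy regression-test data: which requirements each test covers, and its cost
coverage = [set(random.sample(range(N_REQS), 4)) for _ in range(N_TESTS)]
cost = [random.randint(1, 5) for _ in range(N_TESTS)]

def fitness(suite):
    """Reward requirement coverage, lightly penalize execution cost."""
    covered = set().union(*(coverage[i] for i, sel in enumerate(suite) if sel))
    return len(covered) - 0.1 * sum(c for c, sel in zip(cost, suite) if sel)

def attract(dim, bright):
    """Firefly step: a dimmer suite copies random bits from a brighter one."""
    return [b if random.random() < 0.5 else d for d, b in zip(dim, bright)]

def crossover(a, b):
    cut = random.randrange(1, N_TESTS)
    return a[:cut] + b[cut:]

def mutate(suite, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in suite]

pop = [[random.randint(0, 1) for _ in range(N_TESTS)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    best = pop[0]
    pop = [best] + [mutate(attract(s, best)) for s in pop[1:]]  # firefly phase
    elite = pop[: len(pop) // 2]                                # GA phase
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]

best_suite = max(pop, key=fitness)
```

The firefly phase pulls the population toward the current brightest solution, while the GA phase recombines the elite half to maintain diversity; this division of labour is the usual motivation for such hybrids.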


2022, Vol 18 (1), pp. 0-0

The purpose of this paper is to identify factors influencing the intention to use public e-participation services and to develop a model for measuring that intention. As an added value, the paper examines the structure of demand for different levels of public e-participation services. Methodologically, the paper provides an empirical evaluation of Davis's Technology Acceptance Model extended with non-technical constructs from the Theory of Planned Behavior and the Trust Model. The validity and hypotheses of the newly proposed multidimensional structural model were tested using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results significantly confirmed three of the seven hypotheses: there is a positive and statistically significant correlation between "Expected usefulness", "Expected behaviour control", and "Trust in the Internet", respectively, and the intention to use public e-participation services (p < 0.05). On the demand side, the results show that the majority of respondents prefer public e-participation services of a higher level of complexity.
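PLS-SEM estimates relationships among latent constructs and requires dedicated tooling, but the bivariate correlation underlying such path relationships can be sketched directly. The Likert-style responses below are invented for illustration; they are not the study's data:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# hypothetical 5-point Likert responses:
# perceived usefulness of e-participation vs. intention to use it
usefulness = [2, 3, 3, 4, 5, 4, 1, 5]
intention = [1, 3, 4, 4, 5, 3, 2, 5]
r = pearson_r(usefulness, intention)
```

A positive r of this kind is the raw ingredient behind the confirmed hypotheses; PLS-SEM additionally models measurement error and latent structure before judging significance.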


2022, pp. 37-55
Author(s): Oliver Robinson, Ilham Sebah, Ana A. Avram

The Resilience Enhancement Programme for Students (REP-S) is an intervention designed to boost resilience in students. The current study involved remote delivery of the REP-S to students via an online platform, and an empirical evaluation of the intervention using a pre-post one-group quantitative design over one month, together with a post-intervention qualitative element. Fifty-six students from the University of Greenwich qualified for inclusion in the study. Results indicated that perceived stress and trait neuroticism decreased over the month of the study, while resilience increased. Engagement with the intervention also predicted a reduction in neuroticism. Students reported experiencing a complex range of difficulties over the course of the pandemic, and 80% of participants found the workshop effective in addressing these problems. Overall, participants found more positives than negatives in the online delivery of the workshop. If rolled out on a wider basis, the REP-S has the potential to improve wellbeing and mental health across the higher education sector.
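A pre-post one-group design typically compares each participant's score before and after the intervention, for example with a paired t statistic on the difference scores. The abstract does not name the exact test used, and the stress scores below are invented for illustration:

```python
import math
import statistics

def paired_t(pre, post):
    """t statistic for within-subject change (post - pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.fmean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))

# hypothetical perceived-stress scores for four participants
t = paired_t(pre=[10, 12, 11, 13], post=[8, 9, 9, 10])
```

A large negative t here reflects a consistent drop in stress across participants; in a real analysis it would be compared against a t distribution with n - 1 degrees of freedom.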


Author(s): Md. Hafiz Iqbal, Shamsun Akhter Siddiqie, Shamsun Naher

Purpose: Continuing Professional Development (CPD) is a fundamental issue for knowledge management in teaching. Teachers benefit from it through opportunities to participate in training, workshops, seminars, symposiums, mentoring programs, research work, coaching, and other activities. This study explores college teachers' perceptions of CPD for knowledge management and lifelong learning and identifies the factors that contribute to designing CPD. Methodology: An organizational case study with mixed methods and a multistage cluster sampling technique was used to carry out this research. Because of the COVID-19 pandemic, face-to-face appointments with college teachers were replaced by e-mail communication to capture data. Findings: Of the 63 scheduled appointments, 37 respondents (58.73%) sent their responses via e-mail. For the empirical evaluation, we used the Shapiro-Wilk normality test and the non-parametric Mann-Whitney test. The results suggest that age, subject, length of service, gender, in-house training, necessary skills, administrative support, networking capacity, and online facilities are important contributors to CPD and knowledge management. Implications of the study: The findings are important for policymakers and stakeholders in formulating appropriate policies.
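The Mann-Whitney test compares two independent groups by ranking the pooled observations, which is why it suits small samples like this one where normality is doubtful. A from-scratch sketch of the U statistic with illustrative data (in practice SciPy's `mannwhitneyu` would be used, which also supplies the p-value):

```python
def mann_whitney_u(x, y):
    """U statistic for sample x against sample y (average ranks for ties)."""
    pooled = sorted(x + y)
    # rank of a value = average 1-based position of its tie group
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2
        i = j
    r_x = sum(rank[v] for v in x)
    return r_x - len(x) * (len(x) + 1) / 2
```

U ranges from 0 (every x below every y) to len(x) * len(y) (the reverse); values near the midpoint indicate no group difference.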

