programming error
Recently Published Documents


TOTAL DOCUMENTS: 28 (FIVE YEARS: 13)

H-INDEX: 4 (FIVE YEARS: 0)

2021 ◽  
Vol 12 ◽  
Author(s):  
Zihe Zhou ◽  
Shijuan Wang ◽  
Yizhou Qian

Error messages provided by programming environments are often cryptic and confusing to learners. This study explored the effectiveness of enhanced programming error messages (EPEMs) in a Python-based introductory programming course. Participants were two groups of middle school students: the control group (35 students) received only raw programming error messages (RPEMs), while the treatment group (33 students) received EPEMs. During the class, students used an automated assessment tool called Mulberry to practice their programming skills. Mulberry automatically collected every solution students submitted while solving programming problems, and data analysis was based on the 6339 student solutions collected. Our results showed that EPEMs did not help to reduce student errors or improve students’ performance in debugging. The ineffectiveness of EPEMs may stem from factors such as the inaccuracy of the interpreter’s error messages or students not reading the EPEMs. However, the viewpoint of productive failure may provide a better explanation: failures in coding and difficulties in debugging can themselves be resources for learning. We recommend that researchers reconsider the role of errors in code and investigate whether and how failures and debugging contribute to the learning of programming.
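The abstract does not reproduce Mulberry's actual wording or mechanism, but the general idea behind an EPEM can be illustrated with a small, hypothetical Python sketch that wraps the raw interpreter message with a plain-language hint:

```python
# Hypothetical sketch of an enhanced programming error message (EPEM).
# Mulberry's actual wording and mechanism are not described in the abstract;
# this only illustrates augmenting a raw interpreter message (RPEM) with a hint.

HINTS = {
    "NameError": ("You used a name that has not been given a value yet. "
                  "Check the spelling, or assign the variable before this line."),
    "ZeroDivisionError": ("Your program divided by zero. "
                          "Check the divisor's value before dividing."),
}

def run_with_epem(source: str) -> None:
    """Run student code; on error, print the raw message plus an enhanced hint."""
    try:
        exec(source, {})
    except Exception as exc:
        raw = f"{type(exc).__name__}: {exc}"     # the raw message (RPEM)
        print(raw)
        hint = HINTS.get(type(exc).__name__)
        if hint:
            print("Hint:", hint)                 # the enhancement (EPEM)

run_with_epem("total = price * 2")   # prints the NameError plus the hint
```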


2021 ◽  
Author(s):  
Randal Mulder

Abstract A major customer had been returning devices for nonvolatile memory (NVM) data retention bit failures. The ppm level was low, but the continued fallout at the customer location was causing a quality and reliability concern. The customer wanted a resolution as to the cause of the failures and a corrective action. An NVM bit data retention failure occurs when a programmed bit loses its programmed data state over time and flips to the opposite data state (0 -> 1 or 1 -> 0), causing a programming error. Previous failure analysis results on several failing devices with a single NVM bit data retention failure were inconclusive. TEM analysis showed no difference between the failing bit and a neighboring passing bit. The lack of results led to questions about the accuracy of the bit map documentation and whether the TEM analysis was being performed at the correct bit location. Bit map documentation takes the failing bit's electrical address and converts it to a physical address location. If the bit map documentation is incorrect, locating the failing bit is not possible and physical failure analysis will not be performed at the correct bit location. This paper demonstrates how Atomic Force Probe (AFP) nanoprobe analysis was used first to verify the bit map documentation by determining the programming of bits at specific locations through bit cell characterization, and then to characterize the failing bit location: verifying the programming error, narrowing down the possible failure mechanism from its electrical signature, and guiding the appropriate physical analysis to determine the failure mechanism.
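The analysis hinges on the bit map conversion from a failing bit's electrical address to a physical array location. A rough, purely hypothetical sketch of such a mapping is shown below; the real array geometry and any row/column scrambling are device-specific and are not given in the abstract:

```python
# Purely hypothetical sketch of electrical-to-physical bit map conversion.
# The real array geometry and scrambling scheme are device-specific and are
# not described in the abstract; the numbers below are invented.

COLS_PER_ROW = 1024   # assumed bit lines per word line

def electrical_to_physical(bit_address: int) -> tuple[int, int]:
    """Map a linear electrical bit address to a (column, row) array location."""
    row = bit_address // COLS_PER_ROW
    col = bit_address % COLS_PER_ROW
    # Some layouts mirror alternate rows; if the documentation omits such a
    # detail, the physical location derived from it points at the wrong cell.
    if row % 2 == 1:
        col = COLS_PER_ROW - 1 - col
    return (col, row)

print(electrical_to_physical(1025))   # -> (1022, 1) with the assumed mirroring
```

Verifying a handful of known-programmed bits by nanoprobing, as the paper describes, is a direct check that the documented mapping actually lands on the intended cells.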


2021 ◽  
Author(s):  
Steven Marc Weisberg ◽  
Victor Roger Schinazi ◽  
Andrea Ferrario ◽  
Nora Newcombe

Relying on shared tasks and stimuli to conduct research can enhance the replicability of findings and allow a community of researchers to collect large data sets across multiple experiments. This approach is particularly relevant for experiments in spatial navigation, which often require the development of unfamiliar large-scale virtual environments to test participants. One challenge with shared platforms is that undetected technical errors, rather than being restricted to individual studies, become pervasive across many studies. Here, we discuss the discovery of a programming error (a bug) in a virtual environment platform used to investigate individual differences in spatial navigation: Virtual Silcton. The bug resulted in storing the absolute value of an angle in a pointing task rather than the signed angle. This bug was difficult to detect for several reasons, and it rendered the original sign of the angle unrecoverable. To assess the impact of the error on published findings, we collected a new data set for comparison. Our results revealed that the effect of the error on published data is likely to be minimal, partially explaining the difficulty in detecting the bug over the years. We also used the new data set to develop a tool that allows researchers who have previously used Virtual Silcton to evaluate the impact of the bug on their findings. We summarize the ways that shared open materials, shared data, and collaboration can pave the way for better science to prevent errors in the future.
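The bug described above is easy to reproduce in a few lines; the sketch below (function and variable names hypothetical, not Virtual Silcton's actual code) contrasts the signed pointing error with the absolute value that was stored:

```python
# Minimal sketch of the pointing-task bug described above. The names here are
# hypothetical and do not come from Virtual Silcton's code.

def signed_pointing_error(pointed_deg: float, target_deg: float) -> float:
    """Signed angular error in degrees, wrapped into (-180, 180]."""
    diff = (pointed_deg - target_deg) % 360.0
    if diff > 180.0:
        diff -= 360.0
    return diff

signed = signed_pointing_error(350.0, 10.0)   # -20.0: pointed 20 degrees to one side
stored = abs(signed)                          # 20.0: what the buggy platform recorded
# Once only abs(signed) is saved, the direction of the error (its sign) cannot be
# reconstructed from the stored data, which is why the original values were
# unrecoverable.
```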


Author(s):  
Paul Denny ◽  
James Prather ◽  
Brett A. Becker ◽  
Catherine Mooney ◽  
John Homer ◽  
...  
Keyword(s):  

Author(s):  
Brett A. Becker ◽  
Paul Denny ◽  
James Prather ◽  
Raymond Pettit ◽  
Robert Nix ◽  
...  
Keyword(s):  

2021 ◽  
pp. 401-412
Author(s):  
Bolun Yao ◽  
Wei Chen ◽  
Yeyun Gong ◽  
Bartuer Zhou ◽  
Jin Xie ◽  
...  

2020 ◽  
Vol 35 (6) ◽  
pp. 1040-1040
Author(s):  
Macallister W ◽  
Vasserman M ◽  
Fay-Mcclymont T ◽  
Mish S ◽  
Medlin C ◽  
...  

Abstract
Objective The WISC-V can now be administered in paper format or digitally. Though most subtests are comparable, the Processing Speed Index (PSI) subtests, Coding and Symbol Search, required a complete redesign for digital presentation. We initially collected data to assess the comparability of paper versus digital PSI tasks for future use. However, in March of 2020, Pearson issued an alert stating that, due to a programming error, Coding scores may be inflated secondary to timing inaccuracy; they advised against further use of digital Coding. We refocused our analyses to assess the degree to which inaccurate digital Coding impacted overall test results.
Method Children with neurological disorders (N=104) received both versions of the PSI subtests (order randomized). Correlational analyses assessed relations between versions, t-tests assessed administration order effects, and Kappa coefficients assessed agreement across platforms.
Results Correlations between paper and digital subtests (r=.570 to .853) and composites (r=.848 to .987) were robust. As expected, Coding was higher digitally (difference=1.91, p < .01, d=.52), but Symbol Search, PSI, and FSIQ were comparable (p>.05). Given evident practice effects, subsequent analyses considered the “first administered” versions; score range agreement was best when PSI tasks were administered digitally first (Kappa=.452, p < .001) versus paper first (Kappa=.153, p=.023). Agreement was strong for FSIQ regardless of order (Kappa≥.760, p < .001). Importantly, in the highest-stakes evaluations (i.e., presence versus absence of intellectual disability), agreement was extraordinarily strong (Kappa≥.93, p < .001).
Conclusions Digital Coding scores are inflated compared to the traditional paper version, but the impact of this programming error was minimal at the level of PSI and FSIQ.
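For readers less familiar with the statistics named in the Method section, the sketch below shows how paper-versus-digital scores of this kind might be compared; the scores are invented and none of the study's actual data are reproduced:

```python
# Sketch of the comparisons named in the abstract, run on invented scores.
# Requires scipy and scikit-learn; none of the study's data are reproduced.
from scipy.stats import pearsonr, ttest_rel
from sklearn.metrics import cohen_kappa_score

paper   = [98, 105, 87, 112, 95, 101, 90, 108]    # hypothetical paper PSI scores
digital = [101, 107, 90, 115, 96, 104, 93, 109]   # hypothetical digital PSI scores

r, _ = pearsonr(paper, digital)      # correlation between versions
t, p = ttest_rel(paper, digital)     # paired t-test for a mean difference

def score_range(score: int) -> str:
    """Collapse a standard score into a coarse descriptive range."""
    return "low" if score < 90 else "average" if score <= 109 else "high"

# Kappa measures agreement on the categorical score ranges across platforms.
kappa = cohen_kappa_score([score_range(s) for s in paper],
                          [score_range(s) for s in digital])
print(f"r={r:.2f}  t={t:.2f} (p={p:.3f})  kappa={kappa:.2f}")
```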

