Code Analysis Tool to Detect Extract Class Refactoring Activity in Vb.Net Classes

Author(s):  
Sharif, K.Y., et al.

Code changes due to software change requests and bug fixing are inevitable in the software lifecycle. These modifications may slowly deviate the code from its original structure, leading to unreadable code. Even though code structure does not affect software behaviour, it affects the understandability and maintainability of the software. Code refactoring is typically conducted to enhance the code structure; however, this task demands considerable developer effort. Thus, this paper aims at developing a tool that helps programmers identify possible code refactoring. We consider two aspects of refactoring: (i) refactoring activities, and (ii) a refactoring prediction model. In terms of refactoring activity, we focus on Extract Class. Object-oriented metrics are used to predict the possibility of code refactoring. The combination of the two refactoring aspects recommends the possible refactoring effort and identifies the classes involved. As a result, we achieved 79% accuracy, with the tool correctly detecting 11 out of 14 cases. Beyond supporting programmers in improving code, this work may also give more insight into how refactoring improves systems.
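
As a rough illustration of the metric-based detection idea, the sketch below flags Extract Class candidates using LCOM1, one common object-oriented cohesion metric. The paper's actual metric suite and threshold are not given here, so the metric choice, the cutoff, and all class and method names are assumptions:

```python
# Hypothetical sketch: flagging Extract Class candidates with the LCOM1
# cohesion metric. Method/attribute data would come from a VB.Net parser.
from itertools import combinations

def lcom1(method_attrs: dict[str, set[str]]) -> int:
    """LCOM1 = max(|P| - |Q|, 0), where P counts method pairs sharing no
    attributes and Q counts pairs sharing at least one."""
    p = q = 0
    for m1, m2 in combinations(method_attrs.values(), 2):
        if m1 & m2:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Invented example: attributes used by each method of one class.
order_class = {
    "total_price":   {"items", "discount"},
    "add_item":      {"items"},
    "print_invoice": {"customer", "address"},
    "update_email":  {"customer", "email"},
}

LCOM_THRESHOLD = 1  # assumed cutoff; a real tool would calibrate this
score = lcom1(order_class)
if score > LCOM_THRESHOLD:
    print(f"LCOM1 = {score}: likely Extract Class candidate")
```

Here the invoice/email methods touch a disjoint attribute set from the pricing methods, which is exactly the low-cohesion pattern Extract Class is meant to resolve.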

2022
Vol 14 (2)
pp. 896
Author(s):  
Vítor Gouveia
João P. Duarte
Hugo Sarmento
José Freitas
Ricardo Rebelo-Gonçalves
...  

Set pieces are important for the success of football teams, with the corner kick being one of the most game-defining events. The aim of this research was twofold: (1) to analyze the corner kicks of a senior amateur football team, and (2) to compare the corner kicks of successful and unsuccessful teams (of the 2020/21 sporting season). In total, 500 corners were observed using a bespoke notational analysis tool built on a specific observational instrument (8 criteria; 25 categories). Out of the 500 corner kicks, 6% resulted in a goal. A greater number of direct corners using inswing trajectories were performed (54%). Corners were delivered to central and front-post areas most frequently (79%). Five attackers were most predominantly used for offensive corners (58%), but defenders won the ball more frequently (44%). Attempts at goal following a corner occurred most commonly from outside of the box (7%). Goals were scored most frequently with the foot (16%) and head (15%). Successful teams are more effective at reaching the attackers and score more goals directly from corners. Unsuccessful teams deliver more corner kicks out of play, the first touch is more frequently from the opposition defenders, and fewer goals are scored from corner kicks. The study provides insight into the determining factors and patterns that influence corner kicks and success in football matches. Coaches should use this information to prepare teams for both offensive and defensive corner kicks to improve team success and match outcomes.


Author(s):  
Pedro Furtado

Self-tuning physical database organization involves tools that automatically determine the best solution concerning partitioning, placement, and the creation and tuning of auxiliary structures (e.g., indexes), based on the workload. To the best of our knowledge, no tool has focused on a relevant issue in parallel databases, and in particular in data warehouses running on common off-the-shelf hardware in a shared-nothing configuration: determining the adequate tradeoff for balancing load and availability against costs (storage and loading costs). In previous work, we argued that effective load and availability balancing over partitioned datasets can be obtained through chunk-wise placement and replication, together with on-demand processing. In this work, we propose ChunkSim, a simulator for system size planning, performance analysis against replication degree, and availability analysis. We apply the tool to illustrate the kind of results it can produce. The discussion in the chapter provides insight into data allocation and query processing over shared-nothing data warehouses, and into how a good simulation analysis tool can be built to predict and analyze actual systems and intended deployments.
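
A minimal sketch of the chunk-wise placement-and-replication idea that ChunkSim simulates is given below; the simulator's real cost, load, and availability models are far richer, and every parameter here (node count, placement rule, failure model) is an assumption for illustration:

```python
# Minimal sketch, assuming round-robin chunk placement with a configurable
# replication degree over a shared-nothing cluster.
import random

def place_chunks(n_chunks: int, n_nodes: int, replicas: int) -> dict[int, set[int]]:
    """Place each chunk's copies on distinct nodes, round-robin."""
    return {c: {(c + r) % n_nodes for r in range(replicas)} for c in range(n_chunks)}

def survives(placement: dict[int, set[int]], failed: set[int]) -> bool:
    """The dataset is available if every chunk keeps one replica on a live node."""
    return all(nodes - failed for nodes in placement.values())

placement = place_chunks(n_chunks=64, n_nodes=8, replicas=2)

# Monte Carlo estimate of availability when two random nodes fail at once.
trials = 10_000
ok = sum(survives(placement, set(random.sample(range(8), 2))) for _ in range(trials))
print(f"survival rate with 2 failed nodes: {ok / trials:.1%}")
```

With two replicas on adjacent nodes, any single node failure is always survivable; the double-failure experiment shows how the placement rule, not just the replication degree, drives availability.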


2019
Vol 19 (1)
Author(s):  
R. G. Singotani
F. Karapinar
C. Brouwers
C. Wagner
M. C. de Bruijne

Abstract Background Several literature reviews have been published focusing on the prevalence and/or preventability of hospital readmissions. To our knowledge, none has focused on the different causes used to evaluate the preventability of readmissions. Insight into the range of causes is crucial to understand the complex nature of readmissions. With this review we aim to: 1) evaluate the range of causes of unplanned readmissions in a patient journey, and 2) present a cause classification framework that can support future readmission studies. Methods A literature search was conducted in PUBMED and EMBASE using “readmission” and “avoidability” or “preventability” as key terms. Studies that specified causes of unplanned readmissions were included. The causes were classified into eight preliminary root causes: Technical, Organization (integrated care), Organization (hospital department level), Human (care provider), Human (informal caregiver), Patient (self-management), Patient (disease), and Other. The root causes were based on expert opinions and the root cause analysis tool of PRISMA (Prevention and Recovery Information System for Monitoring and Analysis). The range of different causes was analyzed using Microsoft Excel. Results Forty-five studies that reported 381 causes of readmissions were included. All studies reported causes related to organization of care at the hospital department level, and these causes were often reported as preventable. Twenty-two studies included causes related to patients’ self-management and 19 studies reported causes related to patients’ disease. Studies differed in which causes were seen as preventable or unpreventable. No studies reported causes related to technical failures, and causes due to integrated-care issues were reported in 18 studies. Conclusions This review showed that causes of readmissions were mainly evaluated from a hospital perspective. However, causes beyond the scope of the hospital can also play a major role in unplanned readmissions. Opinions regarding preventability seem to depend on contextual factors of the readmission. This study presents a cause classification framework that could help future readmission studies gain insight into the broad range of causes of readmissions in a patient journey.


2020
Vol V (III)
pp. 174-180
Author(s):  
Naveed Jhamat
Zeeshan Arshad
Kashif Riaz

Software reusability encourages developers to rely heavily on a variety of third-party libraries and packages, resulting in dependent software products. Although often ignored by developers because of the risk of breakage, dependent software has to adopt the security and performance updates published for its external dependencies. Existing work advocates a shift towards automatic updating of dependent software code to apply dependency updates. Emerging automatic dependency management tools notify developers of the availability of new updates, detect their impact on dependent software, and identify potential breakages or other vulnerabilities. However, support for automatic source code refactoring to fix potential breaking changes is, to the best of our knowledge, missing from these tools. This paper presents a prototype tool, DepRefactor, that assists in the programmed refactoring of software code necessitated by the automatic updating of its dependencies. To measure the accuracy and effectiveness of DepRefactor, we tested it on various student projects developed in C#.
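
For flavor, the sketch below shows one naive way such a tool might detect breaking changes: diffing the public API of two dependency versions against the calls made by the dependent project. This is not DepRefactor's implementation, and all method names and signatures are invented:

```python
# Illustrative sketch only: compare old vs. new dependency APIs as
# name -> parameter-tuple maps and flag affected client calls.
OLD_API = {"Parse": ("text",), "Render": ("doc", "style"), "Save": ("doc", "path")}
NEW_API = {"Parse": ("text", "strict"), "Render": ("doc", "style")}  # Save removed

def breaking_changes(old: dict, new: dict) -> dict[str, str]:
    """Report removed methods and changed parameter lists."""
    report = {}
    for name, params in old.items():
        if name not in new:
            report[name] = "method removed"
        elif new[name] != params:
            report[name] = f"signature changed: {params} -> {new[name]}"
    return report

client_calls = ["Parse", "Save"]          # calls found in the dependent code
issues = breaking_changes(OLD_API, NEW_API)
for call in client_calls:
    if call in issues:
        print(f"{call}: {issues[call]} -- refactor before auto-updating")
```

A real tool for C# projects would work at the syntax-tree level (e.g., via Roslyn) rather than on flat name/parameter tuples, so it could also rewrite the offending call sites.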


2004
Vol 72
pp. 9-22
Author(s):  
Anke Herder

In the context of recent studies on writing to learn, concept maps are constructed in an attempt to make knowledge structures and conceptual change explicit. These graphic representations are based on the concepts and semantic relations in a student's text. However, a concept map does not give insight into the rhetorical text structure and other rhetorical features, nor does it show the way concepts are located and connected in this structure. Since the dialectic between content knowledge and rhetorical knowledge is essential in the process of 'knowledge transforming', and consequently conceptual change, an analysis tool is needed that integrates analysis of the rhetorical text structure with analysis of the semantic structures in the text. In a pilot study for a forthcoming research project about writing to learn in the content areas in primary education, an instrument was designed for integrated text analysis and graphic representation. The analysis and representations were demonstrated with data collected from 11-to-12-year-old students, who wrote an explanatory text for younger students about a climate issue. Revision was triggered by asking each student whether they expected a younger pupil to understand the written explanation. An analysis and graphic representations of two texts written by two different students focused on the location and use of concepts, expansions of the meaning of these concepts, and connections between concepts through coherence relations, all embedded in the rhetorical text structure. It was concluded that the analysis tool proposed here makes it possible to compare students' knowledge structures and accordingly can provide insight into conceptual changes relative to writing.


Author(s):  
M. J. Ditz

Abstract A model has been developed that accurately predicts the temperature along a length of wire heated by passing a current through it. The wire is modeled as a transmission line with its thermal properties mapped onto the appropriate electrical analogs. A program describing the model circuit has been written in PSPICE such that time-versus-temperature curves can be generated for any wire given the material, diameter, length, and stressing current. The model has been dynamically temperature corrected, and its accuracy has been demonstrated for two types of fuses and one type of bond wire. This model is useful for predicting the fusing conditions of fuse elements or bond wires, assessing the reliability of a material under given operating conditions (by knowing the temperature the material would attain), derating for design, and as a failure analysis tool that provides insight into an overstress condition in which a wire has melted open.
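
As a back-of-the-envelope companion to the PSPICE model, the sketch below integrates a simplified lumped-element version of the same thermal-electrical analogy: electrical power heats the wire while heat leaks to ambient through a thermal resistance. The distributed transmission-line behavior is not captured, and all constants are illustrative rather than taken from the paper:

```python
# Lumped-element sketch: C_th * dT/dt = I^2 * R(T) - (T - T_amb) / R_th,
# with resistance rising with temperature (copper-like coefficient).
def wire_temperature(current, t_end, dt=1e-3,
                     r0=0.1, alpha=0.0039,   # ohms at 20 C, 1/C (assumed)
                     r_th=50.0, c_th=0.02,   # K/W and J/K (assumed)
                     t_amb=20.0):
    """Euler-integrate the wire temperature up to time t_end (seconds)."""
    temp = t_amb
    for _ in range(int(t_end / dt)):
        r = r0 * (1 + alpha * (temp - 20.0))   # temperature-corrected resistance
        power_in = current ** 2 * r            # Joule heating
        power_out = (temp - t_amb) / r_th      # leakage to ambient
        temp += dt * (power_in - power_out) / c_th
    return temp

for i in (1.0, 2.0, 3.0):
    print(f"I = {i} A -> T ≈ {wire_temperature(i, t_end=5.0):.0f} C")
```

Even this crude version exhibits the key fusing mechanism: because R grows with T, sufficiently large currents produce runaway heating rather than a finite steady-state temperature.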


2017
Vol 10 (22)
pp. 1-7
Author(s):  
Babita Pathik
Meena Sharma

2021
pp. rapm-2020-102201
Author(s):  
Greg Ogrinc

Misalignment of measures, measurement, and analysis with the goals and methods of quality improvement efforts in healthcare may create confusion and decrease effectiveness. In healthcare, measurement is used for accountability, research, and quality improvement, so distinguishing between these is an important first step. Using a case vignette, this paper focuses on using measurement for improvement to gain insight into the dynamic nature of healthcare systems and to assess the impact of interventions. This involves an understanding of the variation in the data over time. Statistical process control (SPC) charting is an effective and powerful analysis tool for this. SPC provides ongoing assessment of system functioning and enables an improvement team to assess the impact of its own interventions and of external forces on the system. Once improvement work is completed, the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines are a valuable tool for describing the rationale, context, and study of the interventions. SQUIRE can be used to plan improvement work as well as to structure a manuscript for publication in peer-reviewed journals.
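
To make the SPC arithmetic concrete, here is a minimal sketch of an individuals (XmR) control chart, one common SPC chart type; the weekly values are invented purely to show the calculation:

```python
# XmR chart sketch: centre line is the mean; control limits are
# mean +/- 2.66 * (average moving range), the standard XmR constant.
data = [42, 38, 45, 41, 39, 44, 40, 70, 43, 41, 37, 46]  # invented weekly values

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * mr_bar   # upper control limit
lcl = mean - 2.66 * mr_bar   # lower control limit

print(f"centre line = {mean:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}")
for week, value in enumerate(data, start=1):
    flag = "  <-- special-cause signal" if not lcl <= value <= ucl else ""
    print(f"week {week:2d}: {value}{flag}")
```

The point of charting rather than averaging is visible here: the outlying week is flagged as special-cause variation instead of being absorbed into a before/after comparison.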


2021
Vol 14 (3)
pp. 58-69
Author(s):  
Madanjit Singh
Munish Saini
Manevpreet Kaur

This paper statically investigates the source code of open source software (OSS) projects to uncover the presence of vulnerabilities in the code. The research emphasizes that the presence of vulnerabilities has adverse effects on overall software quality. The authors found an increasing trend in vulnerabilities as the lines of code (LOC) increase during software evolution. This signifies that the addition of new features or change requests to an OSS project may cause an increase in vulnerabilities. Further, the relation between software vulnerabilities and popularity is also examined; the research does not find any relationship between the two. These findings provide significant implications for developers and project managers, helping them better understand the present state of their software.
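
The kind of trend analysis described can be illustrated with a short sketch computing the Pearson correlation between LOC and vulnerability counts across releases; the numbers below are invented for illustration, not the study's data:

```python
# Correlating lines of code with vulnerability counts across releases.
import math

loc   = [12_000, 15_500, 21_000, 26_400, 30_100, 35_800]  # LOC per release
vulns = [8, 11, 15, 19, 21, 26]                           # vulnerabilities found

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(loc, vulns)
print(f"r = {r:.3f}")  # near +1: vulnerabilities grow with LOC
```

The same computation run against popularity metrics (stars, downloads) would, per the paper's finding, yield a coefficient near zero.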


2014
pp. 1673-1694
Author(s):  
Philip M. McCarthy
Shinobu Watanabe
Travis A. Lamkin

Natural language processing tools, such as Coh-Metrix and LIWC, have been tremendously successful in offering insight into quantifiable differences between text types. Such quantitative assessments have certainly been highly informative in terms of evaluating theoretical linguistic and psychological categories that distinguish text types (e.g., referential overlap, lexical diversity, positive emotion words, and so forth). Although these identifications are extremely important in revealing ability deficiencies, knowledge gaps, comprehension failures, and underlying psychological phenomena, such assessments can be difficult to interpret because they do not explicitly inform readers and researchers as to which specific linguistic features are driving the text type identification (i.e., the words and word clusters of the text). For example, a tool such as Coh-Metrix informs us that expository texts are more cohesive than narrative texts in terms of sentential referential overlap (McNamara, Louwerse, & Graesser, in press; McCarthy, 2010), but it does not tell us which words (or word clusters) are driving that cohesion. That is, we do not learn which actual words tend to be indicative of the text type differences. These actual words may tend to cluster around certain psychological, cultural, or generic differences, and, as a result, researchers and materials designers who might wish to create or modify text, so as to better meet the needs of readers, are left somewhat in the dark as to which specific language to use. What is needed is a textual analysis tool that offers qualitative output (in addition to quantitative output) that researchers and materials designers might use as a guide to the lexical characteristics of the texts under analysis. The Gramulator is such a tool.
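
As a toy illustration of the qualitative output such a tool can provide, the sketch below extracts "differential" bigrams: word pairs present in one corpus but absent from a contrasting corpus. The Gramulator's actual algorithm is more sophisticated, and the two miniature corpora here are invented:

```python
# Differential-bigram sketch: which word pairs typify one text type
# relative to another?
from collections import Counter

def bigrams(text: str) -> Counter:
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

expository = "the water cycle begins when the sun heats the water in the sea"
narrative  = "once upon a time the sea was calm and the sun was warm"

exp_counts, nar_counts = bigrams(expository), bigrams(narrative)

# A bigram is a differential of the expository corpus if it occurs there
# but never in the narrative corpus.
differentials = [bg for bg in exp_counts if nar_counts[bg] == 0]
print("expository differentials:", differentials)
```

Unlike a single cohesion score, this output names the actual word clusters driving the difference between text types, which is the qualitative guidance researchers and materials designers need.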

