software code
Recently Published Documents

TOTAL DOCUMENTS: 276 (five years: 101)
H-INDEX: 16 (five years: 3)

2021, Vol 4 (4), pp. 354-365
Author(s): Vitaliy S. Yakovyna, Ivan I. Symets

This article focuses on improving static models of software reliability by using machine learning methods to select the software code metrics that most strongly affect reliability. The study used a merged dataset from the PROMISE Software Engineering repository, which contains data on the testing of software modules from five programs and twenty-one code metrics. For the prepared sample, the most important features affecting software code quality were selected using the following feature selection methods: Boruta, Stepwise selection, Exhaustive Feature Selection, Random Forest Importance, LightGBM Importance, Genetic Algorithms, Principal Component Analysis, and the Xverse Python package. Based on a vote over the results of these feature selection methods, a static (deterministic) software reliability model was built that establishes the relationship between the probability of a defect in a software module and the metrics of its code. This model includes code metrics such as the branch count of a program, McCabe's lines of code and cyclomatic complexity, and Halstead's total number of operators and operands, intelligence, volume, and effort. The effectiveness of the different feature selection methods was also compared, in particular by studying the effect of each method on classification accuracy with the following classifiers: Random Forest, Support Vector Machine, k-Nearest Neighbors, Decision Tree, AdaBoost, and Gradient Boosting. Using any of the feature selection methods increases classification accuracy by at least ten percent compared to the original dataset, which confirms the importance of this procedure for predicting software defects from metric datasets that contain a significant number of highly correlated software code metrics. The best prediction accuracy for most classifiers was reached with the feature set obtained from the proposed static software reliability model. In addition, individual methods such as Autoencoder, Exhaustive Feature Selection, and Principal Component Analysis can be used with an insignificant loss of classification and prediction accuracy.
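As a rough illustration of the evaluation described above, the sketch below (not the authors' code) selects metrics by Random Forest importance and compares classifier accuracy on the full and reduced feature sets using scikit-learn; the dataset path and the "defects" label column are hypothetical placeholders for a PROMISE-style defect dataset.

```python
# Minimal sketch: feature selection by Random Forest importance, then accuracy
# comparison on the full vs. reduced metric set. Dataset layout is assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("promise_merged.csv")          # 21 code metrics + "defects" label (assumed)
X, y = df.drop(columns=["defects"]), df["defects"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)

# Rank metrics by Random Forest importance and keep only the strongest ones.
selector = SelectFromModel(RandomForestClassifier(n_estimators=300, random_state=42)).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

clf = RandomForestClassifier(n_estimators=300, random_state=42)
acc_full = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
acc_sel = accuracy_score(y_te, clf.fit(X_tr_sel, y_tr).predict(X_te_sel))
print(f"accuracy, all metrics: {acc_full:.3f}; selected metrics: {acc_sel:.3f}")
```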


2021, Vol 17 (11), pp. e1009481
Author(s): Haley Hunter-Zinck, Alexandre Fioravante de Siqueira, Váleri N. Vásquez, Richard Barnes, Ciera C. Martinez

Functional, usable, and maintainable open-source software is increasingly essential to scientific research, but there is a large variation in formal training for software development and maintainability. Here, we propose 10 “rules” centered on 2 best practice components: clean code and testing. These 2 areas are relatively straightforward and provide substantial utility relative to the learning investment. Adopting clean code practices helps to standardize and organize software code in order to enhance readability and reduce cognitive load for both the initial developer and subsequent contributors; this allows developers to concentrate on core functionality and reduce errors. Clean coding styles make software code more amenable to testing, including unit tests that work best with modular and consistent software code. Unit tests interrogate specific and isolated coding behavior to reduce coding errors and ensure intended functionality, especially as code increases in complexity; unit tests also implicitly provide example usages of code. Other forms of testing are geared to discover erroneous behavior arising from unexpected inputs or emerging from the interaction of complex codebases. Although conforming to coding styles and designing tests can add time to the software development project in the short term, these foundational tools can help to improve the correctness, quality, usability, and maintainability of open-source scientific software code. They also advance the principal point of scientific research: producing accurate results in a reproducible way. In addition to suggesting several tips for getting started with clean code and testing practices, we recommend numerous tools for the popular open-source scientific software languages Python, R, and Julia.
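A minimal sketch of the unit-testing practice described above, using Python and pytest; the function and test names are illustrative and not taken from the paper.

```python
# stats_utils.py -- a small, single-purpose function plus its unit tests
# (pytest will discover and run the test_* functions in this file).
import pytest

def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

def test_mean_of_simple_sequence():
    # Typical case: intended behavior is pinned down by an explicit example.
    assert mean([1, 2, 3]) == 2

def test_mean_rejects_empty_input():
    # Edge case: unexpected input should fail loudly rather than silently.
    with pytest.raises(ValueError):
        mean([])
```

Tests like these also serve as implicit usage examples for the function they cover.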


2021
Author(s): Sourobh Ghosh, Andy Wu

An innovating organization faces the challenge of how to prioritize distinct goals of novelty and value, both of which underlie innovation. Popular practitioner frameworks like Agile management suggest that organizations can adopt an iterative approach of frequent meetings to prioritize between these goals, a practice we refer to as iterative coordination. Despite iterative coordination’s widespread use in innovation management, its effects on novelty and value in innovation remain unknown. With the information technology firm Google, we embed a field experiment within a hackathon software development competition to identify the effect of iterative coordination on innovation. We find that iterative coordination causes firms to implicitly prioritize value in innovation: Although iteratively coordinating firms develop more valuable products, these products are simultaneously less novel. Furthermore, by tracking software code, we find that iteratively coordinating firms favor integration at the cost of knowledge-creating specialization. A follow-on laboratory study documents that increasing the frequency and opportunities to reprioritize goals in iterative coordination meetings reinforces value and integration, while reducing novelty and specialization. This article offers three key contributions: highlighting how processes to prioritize among multiple performance goals may implicitly favor certain outcomes; introducing a new empirical methodology of software code version tracking for measuring the innovation process; and leveraging the emergent phenomenon of hackathons to study new methods of organizing.


Author(s): Hector G. Perez-Gonzalez, Alberto S. Nunez-Varela, Francisco E. Martinez-Perez, Sandra E. Nava-Munoz, Cesar Guerra Garcia, ...

2021, Vol 15 (3), pp. 397-404
Author(s): Alireza Alikhani, Hamid Reza Hamidi

A smart contract is a digital protocol (software code) that enables automated monitoring and execution of a contract's provisions without the need for intermediaries. Blockchain technology allows smart contracts to be implemented through a distributed ledger, but it has no reliable way of enforcing legal rules. For example, in networks such as Bitcoin, it is possible to engage in illegal activities such as money laundering and dealing in weapons. In addition, it is impossible to enforce and audit legal costs such as taxes and duties. This research devises a scheme that allows official institutions to enforce rules and perform audits efficiently during the automatic execution of smart contracts. The article discusses five important challenges in applying legal rules to Blockchain: accreditation of the contracting parties and of the goods' nature, the collection of legal costs, the enforcement of territorial laws, and auditing. We present "Hyper Smart Contract", a method for regulating Blockchain-based smart contracts, and assess the limitations of the current generation of smart contracts on Ethereum to ensure a proper implementation of this scheme. The performance of the proposed method is evaluated on a motivating application.
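The sketch below is a purely hypothetical, blockchain-free illustration of the regulatory idea described above: a legal cost is withheld and an audit record is written as part of a contract's automatic execution. All names and the tax rate are invented for illustration and do not reflect the paper's Hyper Smart Contract design.

```python
# Hypothetical sketch of enforcing legal costs and audit trails during
# automatic contract execution; not tied to any real blockchain API.
from dataclasses import dataclass, field

TAX_RATE = 0.05  # assumed flat duty collected on behalf of the regulator

@dataclass
class RegulatedContract:
    regulator_account: str
    audit_log: list = field(default_factory=list)

    def transfer(self, sender: str, receiver: str, amount: float) -> None:
        tax = amount * TAX_RATE
        # The legal cost is withheld and logged before the contract's payload runs.
        self.audit_log.append({"from": sender, "to": receiver,
                               "amount": amount, "tax": tax,
                               "tax_paid_to": self.regulator_account})
        self._execute_transfer(sender, receiver, amount - tax)

    def _execute_transfer(self, sender: str, receiver: str, net_amount: float) -> None:
        print(f"{sender} -> {receiver}: {net_amount}")

contract = RegulatedContract(regulator_account="tax-authority")
contract.transfer("alice", "bob", 100.0)  # 5.0 withheld and recorded in audit_log
```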


Author(s): Nathan Rambukkana, Gemma De Verteuil

The study of platforms is on the rise in communication studies, science and technology studies (STS), game studies, internet studies, and the study of human-machine communication (HMC). Platform studies originally emerged from hardware studies as an integrated attempt to study the hardware, software, code, marketing, and use of computational technologies (especially, early on, videogame consoles, though never limited to them); its scope has since broadened to include software platforms, such as social media sites, and their user affordances, algorithmic decision making, terms of service, background code environments, and embeddedness in neoliberal capitalism: selling user data, acting as advertising media, and so on. While the field is fruitful and much work has been developed, there is a noticeable dearth of methodological theorising on the topic, even as there are numerous theoretical explorations. How exactly does one do platform studies? We propose a multidimensional approach to platform studies, in which work may be located along at least three major axes: computational–sociotechnical, pragmatic–critical, and interpersonal–structural. These three dimensions of platform studies are combinable, provisional, and subject to extension. While the three dimensions offered up for discussion here cannot speak to the entirety of what platform studies is or does, together and as a starting point they define the shape of platform studies, track the work it has already done, and offer a solid framework and model for future investigations.


2021
Author(s): Amaninder Singh Gil, Chiradeep Sen

This paper presents the development of logic rules for evaluating the fitness of function models synthesized by an evolutionary algorithm. A set of 65 rules covering twelve different function verbs is developed. The rules are abstractions of the definitions of the verbs in their original vocabularies and are stated as constraints on the quantity, type, and topology of the flows connected to the functions. The rules serve as an objective and unambiguous basis for evaluating the fitness of function models produced by a genetic algorithm. The algorithm and the rules are implemented in software, which is used to both demonstrate and validate the efficacy of the rule-based approach to converging function model synthesis using genetic algorithms (GAs).
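A hypothetical sketch of the rule-based fitness idea: each rule constrains the quantity, type, and topology of flows attached to a function verb, and fitness is the fraction of rule checks a candidate model satisfies. The two example rules and verb names below are assumptions for illustration; the paper's 65 rules are not reproduced.

```python
# Illustrative rule-based fitness evaluation for candidate function models.
from dataclasses import dataclass

@dataclass
class Function:
    verb: str
    inputs: list   # flow type names entering the function
    outputs: list  # flow type names leaving the function

def rule_transfer(f: Function) -> bool:
    # Assumed rule: "transfer" has exactly one input and one output of the same flow type.
    return len(f.inputs) == 1 and len(f.outputs) == 1 and f.inputs == f.outputs

def rule_convert(f: Function) -> bool:
    # Assumed rule: "convert" changes the flow type between input and output.
    return len(f.inputs) >= 1 and len(f.outputs) >= 1 and set(f.inputs) != set(f.outputs)

RULES = {"transfer": [rule_transfer], "convert": [rule_convert]}

def fitness(model: list) -> float:
    """Fraction of rule checks satisfied across all functions in a candidate model."""
    checks = [rule(f) for f in model for rule in RULES.get(f.verb, [])]
    return sum(checks) / len(checks) if checks else 0.0

candidate = [Function("transfer", ["electrical energy"], ["electrical energy"]),
             Function("convert", ["electrical energy"], ["rotational energy"])]
print(fitness(candidate))  # 1.0 -> all assumed rules satisfied
```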


2021
Author(s): Leyla Jael Garcia Castro, Corinne Martin, Georgi Lazarov, Dana Cernoskova, Terue Takatsuki, ...

One of the recurring questions when it comes to BioHackathons is how to measure their impact, especially when funded and/or supported by the public purse (e.g., research agencies, research infrastructures, grants). In order to do so, we first need to understand the outcomes from a BioHackathon, which can include software, code, publications, new or strengthened collaborations, along with more intangible effects such as accelerated progress and professional and personal outcomes. In this manuscript, we report on three complementary approaches to assess outcomes of three BioHackathon Europe events: survey-based, publication-based and GitHub-based measures. We found that post-event surveys bring very useful insights into what participants feel they achieved during the hackathon, including progressing much faster on their hacking projects, broadening their professional network and improving their understanding of other technical fields and specialties. With regards to published outcomes, manual tracking of publications from specific servers is straightforward and useful to highlight the scientific legacy of the event, though there is much scope to automate this via text-mining. Finally, GitHub-based measures bring insights on some of the software and data best practices (e.g., license usage) but also on how the hacking activities evolve in time (e.g., activities observed in GitHub repositories prior, during and after the event). Altogether, these three approaches were found to provide insightful preliminary evidence of outcomes, thereby supporting the value of financing such large-scale events with public funds.
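As a sketch of one possible GitHub-based measure, the snippet below counts commit activity before, during, and after an event via the public GitHub REST API; the repository name and date windows are placeholders, and pagination and rate limits are ignored for brevity.

```python
# Count commits in a repository across three time windows using the GitHub REST API.
import requests

REPO = "example-org/example-biohackathon-project"  # hypothetical repository
WINDOWS = {
    "before": ("2021-10-01T00:00:00Z", "2021-11-07T00:00:00Z"),
    "during": ("2021-11-08T00:00:00Z", "2021-11-12T23:59:59Z"),
    "after":  ("2021-11-13T00:00:00Z", "2021-12-31T23:59:59Z"),
}

for label, (since, until) in WINDOWS.items():
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/commits",
        params={"since": since, "until": until, "per_page": 100},
        timeout=30,
    )
    resp.raise_for_status()
    print(f"{label}: {len(resp.json())} commits (first page only)")
```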


2021, Vol 23 (07), pp. 924-929
Author(s): Dr. Kiran V, Akshay Narayan Pai, Gautham S, ...

Cloud computing is a technique for storing and processing data that makes use of a network of remote servers. It is gaining popularity due to its vast storage capacity, ease of access, and diverse range of services. As cloud computing advanced and technologies such as virtual machines appeared, virtualization entered the scene. When customers' computing demands for storage and servers increased, however, virtual machines were unable to meet those expectations due to scalability and resource allocation limits. As a consequence, containerization emerged. Containerization is the process of packaging software code along with all of its essential components, including frameworks, libraries, and other dependencies, so that they are isolated in their own container. A program running in a container executes reliably in any environment or infrastructure. Containers provide OS-level virtualization, which reduces the computational load on the host machine and enables programs to run faster and more reliably. Performance analysis is important for comparing the throughput of VM-based and container-based designs. To carry out this analysis, the same web application was run in both designs, and CPU usage and RAM usage were compared. The results are tabulated and conclusions are drawn from them.
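A minimal sketch (not the authors' benchmark) of how CPU and RAM usage could be sampled with psutil while the same web application is exercised inside a VM and inside a container; the sampling duration and interval are arbitrary.

```python
# Sample system-wide CPU and RAM utilization at a fixed interval and report averages.
import time
import psutil

def sample_usage(duration_s: int = 60, interval_s: float = 1.0):
    """Collect (cpu_percent, ram_percent) samples for the whole host/guest."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        ram = psutil.virtual_memory().percent
        samples.append((cpu, ram))
    return samples

if __name__ == "__main__":
    data = sample_usage(duration_s=10)
    avg_cpu = sum(c for c, _ in data) / len(data)
    avg_ram = sum(r for _, r in data) / len(data)
    print(f"avg CPU: {avg_cpu:.1f}%  avg RAM: {avg_ram:.1f}%")
```

Running the same script in both environments while the web application is under identical load gives directly comparable utilization figures.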

