unit tests
Recently Published Documents

TOTAL DOCUMENTS: 169 (five years: 28)
H-INDEX: 20 (five years: 1)

2021 ◽  
Vol 15 ◽  
pp. 130-137
Author(s):  
Tugkan Tuglular ◽  
Deniz Egemen Coşkun ◽  
Ömer Gülen ◽  
Arman Okluoğlu ◽  
Kaan Algan

As the number of microservice applications rises, different development methodologies for them are under consideration. In this manuscript, we propose a behavior-driven development method for microservice applications. The proposed method starts with writing end-to-end tests at the system or application level and then moves down to the microservice level, where component and unit tests are written. Next, code that passes these tests is developed for each level in turn. Once the user stories are covered, the method loops again to add negative tests, achieving holistic testing of both the microservices and the application. Finally, the proposed method is validated on an application comprising five microservices. The results confirm that the proposed method matches the generally accepted test pyramid.
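The test levels described above can be illustrated with a minimal sketch. All names here (the toy `place_order` operation, the inventory dictionary) are invented for illustration and do not come from the paper; the point is the ordering of concerns: a microservice-level unit test written first, and a negative test added in the second loop.

```python
def place_order(inventory, item, qty):
    """Toy 'microservice' operation: reserve stock for an order."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    if inventory.get(item, 0) < qty:
        raise ValueError("insufficient stock")
    inventory[item] -= qty
    return {"item": item, "qty": qty, "status": "confirmed"}

# Unit test (microservice level): one isolated, positive behaviour.
def test_place_order_reserves_stock():
    inventory = {"widget": 5}
    order = place_order(inventory, "widget", 2)
    assert order["status"] == "confirmed"
    assert inventory["widget"] == 3

# Negative test (added in the second loop): invalid input is rejected.
def test_place_order_rejects_oversized_order():
    inventory = {"widget": 1}
    try:
        place_order(inventory, "widget", 2)
        assert False, "expected rejection"
    except ValueError:
        pass  # rejection is the expected behaviour
```

In the method's terms, the end-to-end test for the full application would sit above both of these and be written first.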


2021 ◽  
Vol 17 (11) ◽  
pp. e1009481
Author(s):  
Haley Hunter-Zinck ◽  
Alexandre Fioravante de Siqueira ◽  
Váleri N. Vásquez ◽  
Richard Barnes ◽  
Ciera C. Martinez

Functional, usable, and maintainable open-source software is increasingly essential to scientific research, but there is a large variation in formal training for software development and maintainability. Here, we propose 10 “rules” centered on 2 best practice components: clean code and testing. These 2 areas are relatively straightforward and provide substantial utility relative to the learning investment. Adopting clean code practices helps to standardize and organize software code in order to enhance readability and reduce cognitive load for both the initial developer and subsequent contributors; this allows developers to concentrate on core functionality and reduce errors. Clean coding styles make software code more amenable to testing, including unit tests that work best with modular and consistent software code. Unit tests interrogate specific and isolated coding behavior to reduce coding errors and ensure intended functionality, especially as code increases in complexity; unit tests also implicitly provide example usages of code. Other forms of testing are geared to discover erroneous behavior arising from unexpected inputs or emerging from the interaction of complex codebases. Although conforming to coding styles and designing tests can add time to the software development project in the short term, these foundational tools can help to improve the correctness, quality, usability, and maintainability of open-source scientific software code. They also advance the principal point of scientific research: producing accurate results in a reproducible way. In addition to suggesting several tips for getting started with clean code and testing practices, we recommend numerous tools for the popular open-source scientific software languages Python, R, and Julia.
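The pairing of clean code and unit tests described above can be sketched briefly. This is an illustrative example only, not from the paper: a small, single-purpose function whose unit tests both check edge cases and double as usage documentation.

```python
def normalize(values):
    """Scale values linearly to [0, 1]; a constant list maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Each unit test interrogates one isolated behaviour and, implicitly,
# shows a reader how the function is meant to be called.
def test_normalize_scales_to_unit_interval():
    assert normalize([2, 4, 6]) == [0.0, 0.5, 1.0]

def test_normalize_handles_constant_input():
    assert normalize([3, 3]) == [0.0, 0.0]
```

Because the function is modular and side-effect free, each test exercises exactly one behaviour, which is the property that makes clean code amenable to unit testing in the first place.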


Author(s):  
Óscar Soto-Sánchez ◽  
Michel Maes-Bermejo ◽  
Micael Gallego ◽  
Francisco Gortázar

Abstract
End-to-end tests present many challenges in industry. Their long running times make it difficult to apply research on, for instance, test case prioritization or test case selection to them, as most work on these two problems is based on datasets of unit tests, which are fast to run and for which execution time is not usually a criterion. No dataset of end-to-end tests exists, owing to the infrastructure needed to run this kind of test, the complexity of the setup, and the lack of a proper characterization of the faults and their fixes. Running end-to-end tests for research is therefore hard and time-consuming, and a dataset containing regression bugs, documentation, and logs for such tests might foster their use in research. This paper presents a) a dataset for this kind of test, including six well-documented, manually injected regression bugs and their corresponding fixes in three web applications built with Java and the Spring framework; b) tools that ease the execution of these tests regardless of the infrastructure; and c) a comparative study with two well-known datasets of unit tests. The comparative study shows important differences between end-to-end and unit tests, such as execution time and resource consumption, both of which are much higher for end-to-end tests. End-to-end testing deserves attention from researchers, and our dataset is a first effort toward easing the use of end-to-end tests in research.


2021 ◽  
Vol 5 (ICFP) ◽  
pp. 1-30
Author(s):  
Yannick Zakowski ◽  
Calvin Beck ◽  
Irene Yoon ◽  
Ilia Zaichuk ◽  
Vadim Zaliva ◽  
...  

This paper presents a novel formal semantics, mechanized in Coq, for a large, sequential subset of the LLVM IR. In contrast to previous approaches, which use relationally specified operational semantics, this new semantics is based on monadic interpretation of interaction trees, a structure that provides a more compositional approach to defining language semantics while retaining the ability to extract an executable interpreter. Our semantics handles many of the LLVM IR's non-trivial language features and is constructed modularly in terms of event handlers, including those that deal with nondeterminism in the specification. We show how this semantics admits compositional reasoning principles derived from interaction trees' equational theory of weak bisimulation, which we extend here to better deal with nondeterminism, and we use them to prove that the extracted reference interpreter faithfully refines the semantic model. We validate the correctness of the semantics by evaluating it on unit tests and on LLVM IR programs generated by HELIX.


Author(s):  
Vaishali Baviskar
Keyword(s):  

Generating question papers is a tedious and time-consuming operation. This study uses Python to construct a fuzzy-logic-based model for autonomous question paper creation. Drawing on a large number of questions stored in a database, college officials can create a fully personalised question paper. The programme can produce question papers for a chosen examination level, including unit tests, and it lets the authorities pick and choose chapters from the syllabus. Building the paper from a database that may contain hundreds of questions, the programme generates a random selection in which no question is repeated. It is thus a practical tool for quickly generating question papers, saving time and effort.
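The selection step described above can be sketched in a few lines of Python. Everything here is hypothetical: the question-bank schema, field names, and the plain random draw (the paper's fuzzy-logic model is not reproduced); the sketch only shows chapter/level filtering and repetition-free sampling.

```python
import random

# Invented schema: each question has an id, a syllabus chapter,
# an exam level, and its text.
QUESTION_BANK = [
    {"id": 1, "chapter": "Ch1", "level": "unit_test", "text": "Define X."},
    {"id": 2, "chapter": "Ch1", "level": "unit_test", "text": "Explain Y."},
    {"id": 3, "chapter": "Ch2", "level": "unit_test", "text": "Derive Z."},
    {"id": 4, "chapter": "Ch2", "level": "final", "text": "Compare X and Z."},
]

def generate_paper(bank, chapters, level, n_questions, seed=None):
    """Draw up to n_questions matching the chosen chapters and exam level."""
    pool = [q for q in bank if q["chapter"] in chapters and q["level"] == level]
    rng = random.Random(seed)
    # random.sample draws without replacement, so no question repeats.
    return rng.sample(pool, min(n_questions, len(pool)))

paper = generate_paper(QUESTION_BANK, {"Ch1", "Ch2"}, "unit_test", 2, seed=42)
assert len({q["id"] for q in paper}) == 2  # all selected questions are distinct
```

Sampling without replacement is what guarantees the "no question is repeated" property; the level filter is how one paper run targets unit tests and another a final exam.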


Author(s):  
Freark I. van der Berg

Abstract
Multi-threaded unit tests for high-performance thread-safe data structures typically do not exercise all behaviour, because only a single scheduling of threads is witnessed per invocation of the tests. Model checking such unit tests makes it possible to verify all interleavings of threads. These tests can be written in, or compiled to, LLVM IR. Existing LLVM IR model checkers, such as divine and Nidhugg, use an LLVM IR interpreter to determine the next state. This paper introduces llmc, a multi-core explicit-state model checker of multi-threaded LLVM IR that translates LLVM IR to LLVM IR that is executed instead of interpreted. A test suite of 24 tests stressing data structures shows that, on average, llmc clearly outperforms the state-of-the-art tools divine and Nidhugg.
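Why a single scheduling per test run is not enough can be shown with a toy explicit-state exploration. This is a hedged sketch, not llmc's algorithm or LLVM IR: it enumerates every interleaving of two threads that each perform a non-atomic read/increment/write on a shared counter, and finds both the correct outcome and the lost-update bug that one lucky scheduling would miss.

```python
def interleavings(a, b):
    """Yield all merges of sequences a and b that preserve each one's order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    """Execute one scheduling of (thread_id, op) steps on a shared counter."""
    shared = {"x": 0}
    regs = {}
    for tid, op in schedule:
        if op == "read":
            regs[tid] = shared["x"]     # load shared value into a register
        else:
            shared["x"] = regs[tid] + 1  # write back the incremented value

def thread(tid):
    # A non-atomic increment: separate read and write steps.
    return [(tid, "read"), (tid, "write")]

def run_result(schedule):
    shared = {"x": 0}
    regs = {}
    for tid, op in schedule:
        if op == "read":
            regs[tid] = shared["x"]
        else:
            shared["x"] = regs[tid] + 1
    return shared["x"]

results = {run_result(s) for s in interleavings(thread("t1"), thread("t2"))}
# Exploring all six interleavings reaches both outcomes:
# 2 (correct) and 1 (the lost update).
assert results == {1, 2}
```

A unit test that runs the two threads once witnesses only one of these schedules; a model checker such as llmc systematically visits them all.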


Author(s):  
Mathieu Nassif ◽  
Alexa Hernandez ◽  
Ashvitha Sridharan ◽  
Martin P. Robillard
Keyword(s):  
