Leveraging the Cloud for Large-Scale Software Testing

2015 ◽  
pp. 1175-1203
Author(s):  
Anjan Pakhira ◽  
Peter Andras

Testing is a critical phase in the software life-cycle. While small-scale, component-wise testing is done routinely during the development and maintenance of large-scale software, system-level testing of the whole software is much more problematic, owing to the low coverage of potential usage scenarios by test cases and the high costs associated with wide-scale testing of large software. Here, the authors investigate the use of cloud computing to facilitate the testing of large-scale software. They discuss the aspects of cloud-based testing and provide an example application. They describe the testing of the functional importance of methods of classes in the Google Chrome software. The methods they test are predicted to be functionally important with respect to a given functionality of the software. The authors make these predictions by applying network analysis to dynamic analysis data generated by the software. They check the validity of the predictions by mutation testing of a large number of mutated variants of Google Chrome. The chapter provides details of how to set up the testing process on the cloud and discusses the relevant technical issues.
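
The workflow of building and exercising many mutated variants is naturally parallel, which is what makes it a good fit for cloud resources. The following is a minimal sketch of fanning such jobs out over a pool of cloud workers; the worker hostnames and the helper scripts `build_mutant.sh` and `run_functional_test.sh` are hypothetical placeholders, not the authors' actual harness.

```python
# Minimal sketch of fanning mutation-testing jobs out to cloud worker nodes.
# Worker names and the helper scripts are placeholders; this is not the
# chapter's actual set-up, only an illustration of the fan-out pattern.
import subprocess
from concurrent.futures import ThreadPoolExecutor

WORKERS = ["worker-01", "worker-02", "worker-03"]  # cloud VM hostnames (placeholders)
MUTANT_IDS = range(1, 101)                         # one job per mutated variant

def run_mutant(job):
    worker, mutant_id = job
    # Build the mutant and run the functional test suite remotely over SSH.
    cmd = ["ssh", worker,
           f"./build_mutant.sh {mutant_id} && ./run_functional_test.sh {mutant_id}"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # A non-zero exit code means the mutation changed observable behaviour
    # (the mutant was "killed"), which is evidence of functional importance.
    return mutant_id, result.returncode != 0

if __name__ == "__main__":
    jobs = [(WORKERS[i % len(WORKERS)], m) for i, m in enumerate(MUTANT_IDS)]
    with ThreadPoolExecutor(max_workers=len(WORKERS) * 4) as pool:
        for mutant_id, killed in pool.map(run_mutant, jobs):
            print(f"mutant {mutant_id}: {'killed' if killed else 'survived'}")
```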


Author(s):  
Adrian Jackson ◽  
Michèle Weiland

This chapter describes experiences using Cloud infrastructures for scientific computing, for both serial and parallel computing. Amazon's High Performance Computing (HPC) Cloud computing resources were compared to traditional HPC resources to quantify performance as well as to assess the complexity and cost of using the Cloud. Furthermore, a shared Cloud infrastructure is compared to standard desktop resources for scientific simulations. Whilst this is only a small-scale evaluation of these Cloud offerings, it does allow some conclusions to be drawn, particularly that the Cloud currently cannot match the parallel performance of dedicated HPC machines for large-scale parallel programs, but can match the serial performance of standard computing resources for serial and small-scale parallel programs. Also, the shared Cloud infrastructure cannot match dedicated computing resources on low-level benchmarks, although for an actual scientific code the performance is comparable.


Author(s):  
Sanjay P. Ahuja

The proliferation of public cloud providers and the services they offer necessitates that end users have benchmarking-related information that helps them compare the properties of the cloud computing environments on offer. System-level benchmarks are used to measure the performance of the overall system or of a subsystem. This chapter surveys the system-level benchmarks used for traditional computing environments that can also be used to compare cloud computing environments. Amazon's EC2 service is one of the leading public cloud services and offers many different levels of service. The research in this chapter focuses on system-level benchmarks and evaluates the memory, CPU, and I/O performance of two different tiers of hardware offered through Amazon's EC2. Using three distinct types of system benchmarks, the performance of the micro spot instance and the M1 small instance is measured and compared. In order to examine the performance and scalability of the hardware, the virtual machines are set up in cluster formations ranging from two to eight nodes.
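
The chapter relies on established benchmark suites, so the snippet below is only a sketch of the kind of per-instance measurement that such a study repeats on each instance type and cluster size: a crude CPU timing loop and a memory allocate-and-copy timing as a bandwidth proxy. The workload sizes are arbitrary.

```python
# Illustrative micro-benchmark in the spirit of system-level CPU and memory tests.
# This is not one of the benchmarks used in the chapter; it only sketches the idea
# of a repeatable, per-instance measurement that can be compared across tiers.
import time

def cpu_score(iterations=2_000_000):
    """Time a simple integer workload; shorter elapsed time means a faster CPU."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i % 7
    return time.perf_counter() - start

def memory_score(size_mb=256):
    """Time allocation and copy of a large buffer as a crude memory-bandwidth proxy."""
    start = time.perf_counter()
    buf = bytearray(size_mb * 1024 * 1024)
    copy = bytes(buf)  # forces a full read and write of the buffer
    del copy
    return time.perf_counter() - start

if __name__ == "__main__":
    # Run on each instance (e.g. micro spot vs. M1 small) and aggregate per cluster size.
    print(f"CPU time:    {cpu_score():.2f} s")
    print(f"Memory time: {memory_score():.2f} s")
```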


2018 ◽  
Vol 170 ◽  
pp. 08003
Author(s):  
L. Berge ◽  
N. Estre ◽  
D. Tisseur ◽  
E. Payan ◽  
D. Eck ◽  
...  

The future PLINIUS-2 platform of CEA Cadarache will be dedicated to the study of corium interactions in severe nuclear accidents, and will host innovative large-scale experiments. The Nuclear Measurement Laboratory of CEA Cadarache is in charge of the real-time high-energy X-ray imaging set-ups for the study of the corium-water and corium-sodium interactions and of the corium stratification process. Imaging such large and high-density objects requires a 15 MeV linear electron accelerator coupled to a tungsten target creating a high-energy Bremsstrahlung X-ray flux, with a corresponding dose rate of about 100 Gy/min at 1 m. The signal is detected by phosphor screens coupled to high-framerate scientific CMOS cameras. The imaging set-up is designed using an experimentally validated in-house simulation software (MODHERATO). The code computes quantitative radiographic signals from a description of the source, the object geometry and composition, the detector, and the geometrical configuration (magnification factor, etc.). It accounts for several noise sources (photonic and electronic noise, Swank noise, and readout noise) and for image blur due to the source spot size and the detector unsharpness. With a view to PLINIUS-2, the simulation has been improved to account for the scattered flux, which is expected to be significant. The paper presents the scattered-flux calculation using the MCNP transport code and its integration into the MODHERATO simulation. The validation of the improved simulation is then presented, through comparison with real measurement images taken on a small-scale equivalent set-up on the PLINIUS platform. Excellent agreement is achieved. This improved simulation is therefore being used to design the PLINIUS-2 imaging set-ups (source, detectors, cameras, etc.).
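
As a rough illustration of what "integrating the scattered flux" into a radiographic simulation can look like, the sketch below adds a precomputed scatter map to a direct (primary) signal before applying photonic and readout noise. The array names, the 30% scatter-to-primary ratio, and the noise parameters are assumptions for illustration; the actual MODHERATO/MCNP coupling is not reproduced here.

```python
# Schematic of folding a precomputed scattered-flux map into a simulated radiograph.
# All values are illustrative assumptions, not MODHERATO's actual model.
import numpy as np

rng = np.random.default_rng(0)

# Direct (primary) signal from the deterministic radiographic model, in photons/pixel.
direct = np.full((512, 512), 1.0e4)

# Scattered contribution, e.g. interpolated from an MCNP tally onto the detector grid.
scatter = 0.3 * direct  # assume a 30% scatter-to-primary ratio for illustration

# Total expected signal, then photonic (Poisson) noise and additive readout noise.
expected = direct + scatter
noisy = rng.poisson(expected).astype(float)
noisy += rng.normal(0.0, 50.0, size=noisy.shape)  # readout noise, arbitrary sigma

scatter_fraction = scatter.sum() / expected.sum()
print(f"Scatter fraction in the simulated image: {scatter_fraction:.2%}")
```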


Author(s):  
M. Santise ◽  
K. Thoeni ◽  
R. Roncella ◽  
S. W. Sloan ◽  
A. Giacomini

This paper presents preliminary tests of a new low-cost photogrammetric system for 4D modelling of large-scale areas for civil engineering applications. The system consists of five stand-alone units. Each unit is composed of a Raspberry Pi 2 Model B (RPi2B) single board computer connected to a PiCamera Module V2 (8 MP) and is powered by a 10 W solar panel. The acquisition of the images is performed automatically using Python scripts and the OpenCV library. Images are recorded at different times during the day and automatically uploaded onto an FTP server from where they can be accessed for processing. Preliminary tests and outcomes of the system are discussed in detail. The focus is on the performance assessment of the low-cost sensor and the quality evaluation of the digital surface models generated by the low-cost photogrammetric systems in the field under real test conditions. Two different test cases were set up in order to calibrate the low-cost photogrammetric system and to assess its performance. First comparisons with a TLS model show good agreement.
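
The abstract mentions automated acquisition with Python and OpenCV followed by FTP upload; a minimal sketch of such a capture-and-upload step is given below. The hostname, credentials, and file naming are placeholders, and the authors' actual scripts may differ (e.g. they may drive the camera through the picamera library rather than OpenCV).

```python
# Minimal sketch of an acquisition-and-upload loop: capture one frame with OpenCV
# and push it to an FTP server. Hostname, credentials and naming are placeholders.
import time
from ftplib import FTP

import cv2

FTP_HOST = "ftp.example.org"   # placeholder
FTP_USER = "unit01"            # placeholder
FTP_PASS = "secret"            # placeholder

def capture_image(path):
    cam = cv2.VideoCapture(0)          # camera exposed as a V4L2 device
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("camera capture failed")
    cv2.imwrite(path, frame)

def upload_image(path):
    with FTP(FTP_HOST) as ftp:
        ftp.login(FTP_USER, FTP_PASS)
        with open(path, "rb") as fh:
            ftp.storbinary(f"STOR {path}", fh)

if __name__ == "__main__":
    filename = time.strftime("unit01_%Y%m%d_%H%M%S.jpg")
    capture_image(filename)
    upload_image(filename)
```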


Author(s):  
Wagner Al Alam ◽  
Francisco Carvalho Junior

The efforts to make cloud computing suitable for the requirements of HPC applications have motivated us to design HPC Shelf, a cloud computing platform of services for building and deploying parallel computing systems for large-scale parallel processing. We introduce Alite, the system of contextual contracts of HPC Shelf, aimed at selecting component implementations according to application requirements, the features of the target parallel computing platforms (e.g. clusters), QoS (Quality-of-Service) properties, and cost restrictions. It is evaluated through a small-scale case study employing a component-based framework for matrix multiplication based on the BLAS library.
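
To make the idea of contextual contracts concrete, the toy sketch below filters candidate component implementations by a hard platform requirement and then ranks the feasible ones by QoS and cost. The component names, attributes, and ranking rule are invented for illustration; Alite's contract language and resolution process are considerably richer.

```python
# Toy sketch of contextual-contract-style selection: filter implementations by
# platform requirements, then rank by QoS and cost. All names/values are invented.
from dataclasses import dataclass

@dataclass
class Implementation:
    name: str
    platform: str        # e.g. "cluster", "single-node"
    gflops: float        # advertised QoS property
    cost_per_hour: float

CANDIDATES = [
    Implementation("mm-blas-serial", "single-node", 12.0, 0.05),
    Implementation("mm-blas-mpi", "cluster", 180.0, 0.90),
    Implementation("mm-blas-gpu", "cluster", 450.0, 2.40),
]

def select(platform, min_gflops, max_cost):
    feasible = [c for c in CANDIDATES
                if c.platform == platform
                and c.gflops >= min_gflops
                and c.cost_per_hour <= max_cost]
    # Rank the feasible candidates by performance per unit cost.
    return max(feasible, key=lambda c: c.gflops / c.cost_per_hour, default=None)

if __name__ == "__main__":
    choice = select(platform="cluster", min_gflops=100.0, max_cost=1.0)
    print(choice.name if choice else "no implementation satisfies the contract")
```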


2020 ◽  
Vol 8 (11) ◽  
pp. 892
Author(s):  
Laura Brakenhoff ◽  
Reinier Schrijvershof ◽  
Jebbe van der Werf ◽  
Bart Grasmeijer ◽  
Gerben Ruessink ◽  
...  

Bedform-related roughness affects both water movement and sediment transport, so it is important that it is represented correctly in numerical morphodynamic models. The main objective of the present study is to quantify, for the first time, the importance of ripple- and megaripple-related roughness for modelled hydrodynamics and sediment transport on the wave- and tide-dominated Ameland ebb-tidal delta in the north of the Netherlands. To do so, a sensitivity analysis was performed in which several types of bedform-related roughness predictors were evaluated using a Delft3D model. Modelled ripple roughness was also compared to ripple heights observed during a six-week field campaign on the Ameland ebb-tidal delta. The present study improves our understanding of how choices in model set-up influence model results. By comparing the results of the model scenarios, it was found that ripple- and megaripple-related roughness affects the depth-averaged current velocity, mainly over the shallow areas of the delta. The small-scale ripples are also important for the suspended-load sediment transport, both directly and indirectly through the affected flow. While the current magnitude changes by 10–20% through changes in bedform roughness, the sediment transport magnitude changes by more than 100%.


1995 ◽  
Vol 35 (1) ◽  
pp. 436 ◽  
Author(s):  
G.T. Cooper

The Eastern Otway Basin exhibits two near-orthogonal structural grains, specifically NE-SW and WNW-ESE trending structures dominating the Otway Ranges, Colac Trough and Torquay Embayment. The relative timing of these structures is poorly constrained, but dip analysis data from offshore seismic lines in the Torquay Embayment show that two distinct structural provinces developed during two separate extensional episodes.

The Snail Terrace comprises the southern structural province of the Torquay Embayment and is characterised by the WNW-ESE trending basin margin fault and a number of small-scale NW-SE trending faults. The Torquay Basin Deep makes up the northern structural province and is characterised by the large-scale, cuspate Snail Fault, which trends ENE-WSW, with a number of smaller NE-SW trending faults present.

Dip analysis of basement trends shows a bimodal population in the Torquay Embayment. The Snail Terrace data show extension towards the SSW (193°), but this trend changes abruptly to the NE across a hinge zone. Dip data in the Torquay Basin Deep and regions north of the hinge zone show extension towards the SSE (150°). Overall the data show the dominance of SSE extension with a mean vector of 166°.

Seismic data show significant growth of the Crayfish Group on the Snail Terrace and a lesser growth rate in the Torquay Basin Deep. Dip data from the Snail Terrace are therefore inferred to represent the direction of basement rotation during the first phase of continental extension, oriented towards the SSW during the Berriasian-Barremian? (146-125 Ma). During this phase the basin margin fault formed, as well as NE-SW trending ?transtensional structures in the Otway Ranges and Colac Trough, probably related to Palaeozoic features.

Substantial growth along the Snail Fault during the Aptian-Albian? suggests that a second phase of extension affected the area. The Colac Trough, Otway Ranges, Torquay Embayment and Strzelecki Ranges were significantly influenced by this Bassian phase of SSE extension, which probably persisted during the Aptian-Albian? (125-97 Ma). This phase of extension had little effect in the western Otway Basin, west of the Sorell Fault Zone, and was largely concentrated in areas within the northern failed Bass Strait Rift. During the mid-Cretaceous, parts of the southern margin were subjected to uplift and erosion. Apatite fission track and vitrinite reflectance analyses show elevated palaeotemperatures associated with uplift east of the Sorell Fault Zone.


2010 ◽  
Vol 133-134 ◽  
pp. 497-502 ◽  
Author(s):  
Alvaro Quinonez ◽  
Jennifer Zessin ◽  
Aissata Nutzel ◽  
John Ochsendorf

Experiments may be used to verify numerical and analytical results, but large-scale model testing is associated with high costs and lengthy set-up times. In contrast, small-scale model testing is inexpensive, non-invasive, and easy to replicate over several trials. This paper proposes a new method of masonry model generation using three-dimensional printing technology. Small-scale models are created as an assemblage of individual blocks representing the original structure’s geometry and stereotomy. Two model domes are tested to collapse due to outward support displacements, and experimental data from these tests is compared with analytical predictions. Results of these experiments provide a strong understanding of the mechanics of actual masonry structures and can be used to demonstrate the structural capacity of masonry structures with extensive cracking. Challenges for this work, such as imperfections in the model geometry and construction problems, are also addressed. This experimental method can provide a low-cost alternative for the collapse analysis of complex masonry structures, the safety of which depends primarily on stability rather than material strength.


2019 ◽  
Vol 76 (6) ◽  
pp. 1601-1609 ◽  
Author(s):  
Tania Mendo ◽  
Sophie Smout ◽  
Tommaso Russo ◽  
Lorenzo D’Andrea ◽  
Mark James

Analysis of data from vessel monitoring systems and automated identification systems in large-scale fisheries is used to describe the spatial distribution of effort, the impact on habitats, and the location of fishing grounds. To identify when and where fishing activities occur, the analysis needs to take account of different fishing practices in different fleets. Small-scale fisheries (SSFs) vessels have generally been exempted from positional reporting requirements, but recent developments of compact low-cost systems offer the potential to monitor them effectively. To characterize the spatial distribution of fishing activities in SSFs, positions should be collected with sufficient frequency to allow detection of different fishing behaviours, while minimizing demands for data transmission, storage, and analysis. This study sought to suggest optimal rates of data collection to characterize fishing activities at an appropriate spatial resolution. In an SSF case study, on-board observers collected the Global Navigation Satellite System (GNSS) position and the fishing activity every second during each trip. In the analysis, the data were re-sampled to lower temporal resolutions to evaluate the effect on the identification of the number of hauls and the area fished. The effect of estimation at different spatial resolutions was also explored. Consistent results were found for polling intervals <60 s in small vessels and <120 s in medium and large vessels. A grid cell size of 100 × 100 m resulted in the best estimates of the area fished. GNSS or equivalent data can thus be collected and analysed remotely at low cost and at sufficient resolution to infer small-scale fisheries activities. This has significant implications globally for the sustainable management of these fisheries, many of which are currently unregulated.
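
The core of the analysis, thinning 1 Hz fixes to coarser polling intervals and estimating the area fished on a fixed grid, can be sketched as follows. The column names, the synthetic track, and the use of projected coordinates in metres are assumptions for illustration; the study's own processing is more involved.

```python
# Sketch of resampling a 1 Hz GNSS track to a coarser polling interval and counting
# unique 100 m grid cells visited while fishing. Column names and the synthetic
# track are illustrative assumptions, not the study's data.
import numpy as np
import pandas as pd

def area_fished(track: pd.DataFrame, poll_s: int, cell_m: float = 100.0) -> float:
    """Return the fished area (m^2) from a 1 Hz track resampled to poll_s seconds."""
    resampled = track.set_index("time").resample(f"{poll_s}s").first().dropna()
    fishing = resampled[resampled["activity"] == "fishing"]
    cells = set(zip((fishing["x_m"] // cell_m).astype(int),
                    (fishing["y_m"] // cell_m).astype(int)))
    return len(cells) * cell_m ** 2

if __name__ == "__main__":
    # Synthetic 1 Hz track: one hour of hauling along a straight line.
    t = pd.date_range("2019-01-01 06:00", periods=3600, freq="1s")
    track = pd.DataFrame({"time": t,
                          "x_m": np.linspace(0, 1800, 3600),
                          "y_m": np.zeros(3600),
                          "activity": "fishing"})
    for poll in (10, 60, 120):
        print(f"poll {poll:>3d} s -> area fished {area_fished(track, poll):,.0f} m^2")
```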

