Measuring Software Delivery Performance Using the Four Key Metrics of DevOps

Author(s):  
Marc Sallin ◽  
Martin Kropp ◽  
Craig Anslow ◽  
James W. Quilty ◽  
Andreas Meier

Abstract The Four Key Metrics of DevOps have become very popular for measuring IT performance and DevOps adoption. However, measurement of the four metrics (deployment frequency, lead time for changes, time to restore service, and change failure rate) is often done manually and through surveys, with only a few data points. In this work we evaluated how the Four Key Metrics can be measured automatically and developed a prototype for the automatic measurement of the Four Key Metrics. We then evaluated whether the measurement is valuable for practitioners in a company. The analysis shows that the chosen measurement approach is suitable and that the results are valuable to the team for measuring and improving software delivery performance.
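As context for what automating these metrics involves, the sketch below derives three of the four metrics from a handful of deployment records. It is a minimal illustration with made-up timestamps, not the prototype described in the paper:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit time, deploy time) pairs.
# These timestamps are invented for illustration.
records = [
    (datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 1, 15, 0)),
    (datetime(2021, 3, 2, 10, 0), datetime(2021, 3, 3, 11, 0)),
    (datetime(2021, 3, 4, 8, 0), datetime(2021, 3, 4, 20, 0)),
]
failed_deploys = 1  # deployments that caused a service impairment (assumed)

window_days = 7
deployment_frequency = len(records) / window_days          # deploys per day
lead_times = [deploy - commit for commit, deploy in records]
median_lead_time = sorted(lead_times)[len(lead_times) // 2]
change_failure_rate = failed_deploys / len(records)

print(round(deployment_frequency, 2))  # deploys/day over the window
print(median_lead_time)                # median lead time for changes
print(round(change_failure_rate, 2))   # share of deploys causing failure
```

Time to restore service would be computed analogously, as the gap between an incident's start and the deployment that resolves it.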

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Hongbin Ma ◽  
Shuyuan Yang ◽  
Guangjun He ◽  
Ruowu Wu ◽  
Xiaojun Hao ◽  
...  

Author(s):  
Peter Temin

This chapter discusses how there is little of what economists call data on markets in Roman times, despite plenty of information about prices and transactions. Data, as economists consider them, consist of a set of uniform prices that can be compared with each other. According to scholars, extensive markets existed in the late Roman Republic and early Roman Empire. Even though there is a lack of data, there are enough observations of the price of wheat, the most extensively traded commodity, to perform a test. The problem is that there are only a few data points by modern standards. Consequently, the chapter explains why statistics are useful in interpreting small data sets and how one deals with the various problems that arise when there are only a few data points.


2017 ◽  
Vol 25 (0) ◽  
pp. 901-911 ◽  
Author(s):  
Yasuo Namioka ◽  
Daisuke Nakai ◽  
Kazuya Ohara ◽  
Takuya Maekawa

1980 ◽  
Vol 34 (3) ◽  
pp. 351-360 ◽  
Author(s):  
R. J. Noll ◽  
A. Pires

This paper discusses a new fitting algorithm that works with Voigt functions. The algorithm is an extension of the rapidly convergent gradient method of Fletcher and Powell, who claim faster convergence than the Newton-Raphson method used by Chang and Shaw for fitting Lorentz line widths. The Fletcher and Powell algorithm incorporates the effects of second derivatives, although second derivatives are not explicitly calculated. In our algorithm, first and second derivatives are computed not numerically but analytically, via a modification to Drayson's Voigt function subroutine. This algorithm provides rapid convergence even when there are few data points: profiles have been fitted with as few as five data points, while our typical line fits involve 40 points. The run time of the algorithm has been compared with the shrinking cube algorithm of Hillman and found to be at least 10 times faster under identical starting conditions. Sample fits of a single line, and of a single line plus background, are shown, illustrating the speed and efficiency of the new algorithm as well as the importance of good zero-order estimates to start the iterations.
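The fitting problem the abstract describes can be reproduced with off-the-shelf tools. The sketch below fits a single Voigt line to about 40 noisy samples using SciPy's `voigt_profile` and a generic least-squares routine; unlike the paper's method, derivatives here are numerical, and all line parameters are invented for illustration:

```python
import numpy as np
from scipy.special import voigt_profile  # Voigt = Gaussian convolved with Lorentzian
from scipy.optimize import curve_fit

def line(x, amp, center, sigma, gamma):
    """A single Voigt line: sigma is the Gaussian width, gamma the Lorentzian."""
    return amp * voigt_profile(x - center, sigma, gamma)

# Synthetic data: ~40 points across the line, echoing the paper's typical fits.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 41)
y = line(x, 1.0, 0.1, 0.4, 0.3) + rng.normal(0, 0.005, x.size)

# Good zero-order estimates matter, as the abstract stresses.
p0 = [1.0, 0.0, 0.5, 0.5]
bounds = ([0.0, -1.0, 0.01, 0.01], [5.0, 1.0, 2.0, 2.0])  # keep widths positive
popt, _ = curve_fit(line, x, y, p0=p0, bounds=bounds)
print(popt)  # fitted amplitude, center, Gaussian width, Lorentzian width
```

With analytic first and second derivatives, as in the paper, each iteration is cheaper and convergence is more robust when only a few points are available.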


Author(s):  
L. Angiolini ◽  
D. P. F. Darbyshire ◽  
M. H. Stephenson ◽  
M. J. Leng ◽  
T. S. Brewer ◽  
...  

Abstract The Lower Permian of the Haushi basin, Interior Oman (Al Khlata Formation to Saiwan Formation/lower Gharif member) records climate change from glaciation, through marine sedimentation in the Haushi sea, to subtropical desert. To investigate the palaeoclimatic evolution of the Haushi sea we used O, C, and Sr isotopes from 31 brachiopod shells of eight species collected bed by bed within the type-section of the Saiwan Formation. We assessed diagenesis by scanning electron microscopy of ultrastructure, cathodoluminescence, and geochemistry, and rejected fifteen shells not meeting specific preservation criteria. Spiriferids and spiriferinids show better preservation of the fibrous secondary layer than do orthotetids and productids and are therefore more suitable for isotopic analysis. δ18O values of −3.7 to −3.1‰ from brachiopods at the base of the Saiwan Formation are probably related to glacial meltwater. Above this, an increase in δ18O may indicate ice accumulation elsewhere in Gondwana or, more probably, that the Haushi sea was an evaporating embayment of the Neotethys Ocean. δ13C varies little and is within the range of published data: its trend towards heavier values is consistent with increasing aridity and oligotrophy. Saiwan Sr isotope signatures are less radiogenic than those of the Sakmarian LOWESS seawater curve, which is based on extrapolation between only a few data points. In the scenario of evaporation in a restricted Haushi basin, the variation in Sr isotope composition may reflect a fluvial component.


2010 ◽  
Vol 114 (1161) ◽  
pp. 681-688
Author(s):  
T. van der Laan ◽  
F. van Dalen ◽  
B. Vermeulen

Abstract In recent years increasing pressure has been applied to aircraft component suppliers to reduce the design lead-time and design cost of aircraft components. To achieve this reduction in lead-time and cost, advanced automation tools that automate part of the engineering process can be used. Knowledge-based engineering (KBE) tools are one such type of automation tool; they automate part of the engineering process based on existing knowledge within a company. In this paper the development process and use of a KBE application for designing machined ribs in an industrial setting are discussed. The development of the KBE tool has resulted in the standardisation of the design methodology. Furthermore, it has reduced machined rib design lead-time in a project to develop a business jet empennage by 40%.


Author(s):  
HORNG-LIN SHIEH ◽  
CHENG-CHIEN KUO

This paper proposes a new validity index for the subtractive clustering (SC) algorithm. The subtractive clustering algorithm proposed by Chiu is an effective and simple method for identifying the cluster centers of sampled data based on the concept of a density function. The SC algorithm repeatedly produces cluster centers until the final potential, compared with the original, is less than a predefined threshold; the procedure terminates when there are only a few data points around the most recent cluster. The choice of the threshold is an important factor affecting the clustering results: if it is too large, too few data points will be accepted as cluster centers; if it is too small, too many cluster centers will be generated. In this paper, a modified SC algorithm for data clustering based on a cluster validity index is proposed to obtain the optimal number of clusters. Six examples show that the proposed index achieves better performance than other cluster validity indices do.
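The potential-subtraction loop the abstract describes can be sketched compactly. This is a minimal Chiu-style implementation with assumed radii and a fixed stopping threshold `eps`, the very parameter whose sensitivity motivates the paper's validity index:

```python
import numpy as np

def subtractive_clustering(X, ra=1.0, rb=1.5, eps=0.15):
    """Chiu-style subtractive clustering sketch.

    ra:  neighbourhood radius for the density (potential) function.
    rb:  radius (commonly 1.5 * ra) used when reducing potentials.
    eps: stop when the best remaining potential falls below eps times
         the first (largest) potential -- the threshold discussed above.
    """
    alpha = 4.0 / ra**2
    beta = 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    P = np.exp(-alpha * d2).sum(axis=1)                   # initial potentials
    first = P.max()
    centers = []
    while P.max() > eps * first:
        k = int(P.argmax())
        centers.append(X[k])
        # Subtract the chosen center's influence from all potentials;
        # the chosen point itself drops to zero and cannot be re-selected.
        P = P - P[k] * np.exp(-beta * d2[k])
    return np.array(centers)

# Two well-separated blobs should yield one center per blob.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.05, (20, 2)), rng.normal(3, 0.05, (20, 2))])
centers = subtractive_clustering(X)
print(len(centers))
```

Making `eps` larger or smaller changes the number of returned centers, which is why a validity index that selects the cluster count automatically is useful.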


Author(s):  
Marlies Holkje Barendrecht ◽  
Alberto Viglione ◽  
Heidi Kreibich ◽  
Sergiy Vorogushyn ◽  
Bruno Merz ◽  
...  

Abstract. Socio-hydrological modelling studies published so far show that dynamic coupled human-flood models are a promising tool for representing the phenomena and feedbacks in human-flood systems. So far these models are mostly generic and have not been developed and calibrated to represent specific case studies. We believe that applying and calibrating this type of model to real-world case studies can help us further develop our understanding of the phenomena that occur in these systems. In this paper we propose a method to estimate the parameter values of a socio-hydrological model and test it by applying it to an artificial case study. We postulate a model that describes the feedbacks between floods, awareness, and preparedness. After simulating hypothetical time series with a given combination of parameters, we sample a few data points for our variables and try to estimate the parameters from these data points using Bayesian inference. The results show that, if we are able to collect data for our case study, we would, in theory, be able to estimate the parameter values for our socio-hydrological flood model.
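The simulate-then-recover workflow described above can be illustrated on a toy model. The sketch below stands in for the paper's coupled human-flood model with a single assumed "awareness decay" parameter, samples a few points from the simulated series, and recovers the parameter with grid-based Bayesian inference (flat prior, Gaussian likelihood); all numbers are invented:

```python
import numpy as np

def simulate(mu, a0=1.0, steps=30):
    """Toy 'awareness' dynamic: exponential memory loss at rate mu.
    A stand-in for the socio-hydrological model, not the paper's equations."""
    a = np.empty(steps)
    a[0] = a0
    for t in range(1, steps):
        a[t] = a[t - 1] * (1.0 - mu)
    return a

rng = np.random.default_rng(42)
true_mu, noise = 0.1, 0.02
series = simulate(true_mu)

# Sample only a few data points, as the abstract describes.
t_obs = np.array([2, 7, 13, 21, 28])
obs = series[t_obs] + rng.normal(0, noise, t_obs.size)

# Grid-based posterior: flat prior on mu, Gaussian observation noise.
grid = np.linspace(0.01, 0.3, 300)
log_like = np.array([
    -0.5 * ((simulate(m)[t_obs] - obs) ** 2).sum() / noise**2 for m in grid
])
post = np.exp(log_like - log_like.max())
post /= post.sum()
mu_hat = grid[post.argmax()]
print(round(mu_hat, 3))  # posterior mode, near the true decay rate
```

With more parameters a sampler (e.g. MCMC) replaces the grid, but the principle of recovering parameters from a few noisy observations is the same.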


2019 ◽  
Vol 7 (2) ◽  
pp. 113-128
Author(s):  
Dwi Putriana Nuramanah Kinding ◽  
Wahyu Budi Priatna ◽  
Lukman M. Baga

Knowing a company's performance is necessary to determine the extent to which its goals have been achieved. The final objective of this research was to analyze the performance of the Al-Ittifaq vegetable supply chain for each of its members in order to achieve a common goal, by maximizing the resources they have with their best practices. The analytical method used in this research was the Supply Chain Operations Reference (SCOR) model, considering the internal and external attributes of the foodSCOR card. The four attributes used in this study were reliability, responsiveness, agility, and assets. The results of measuring internal performance in the supply chain at all levels on the responsiveness and agility attributes had achieved superior positions on the foodSCOR card. The performance of the Al-Ittifaq vegetable supply chain on the reliability attribute was still in the advantage position for conformity with standards, while delivery performance and order fulfillment were already in a superior position. The internal performance of the Al-Ittifaq vegetable supply chain in each section for the cash-to-cash cycle time attribute had reached a superior position. The daily inventory performance was still in the advantage position; therefore, Al-Ittifaq still needs to improve performance by not keeping a daily inventory, in order to reduce storage costs and always provide fresh vegetables.
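Of the SCOR asset metrics mentioned, cash-to-cash cycle time has a standard closed form: inventory days plus receivable days minus payable days. The figures below are illustrative only, not Al-Ittifaq's data:

```python
def cash_to_cash_cycle_time(dio, dso, dpo):
    """Cash-to-cash cycle time in days.

    dio: days of inventory outstanding
    dso: days of sales outstanding (receivables)
    dpo: days of payables outstanding
    """
    return dio + dso - dpo

# Hypothetical example: 5 inventory days, 30 receivable days, 28 payable days.
print(cash_to_cash_cycle_time(5, 30, 28))  # 7 days
```

A shorter cycle means cash is tied up for less time, which is why reducing daily inventory improves this attribute.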


2021 ◽  
Vol 14 (2) ◽  
pp. 189-203
Author(s):  
Rahmawati Berlyan ◽  
Wawan Kurniawan ◽  
Indah Permata Sari

XYZ is a company engaged in fabrication. Quality is very important for the company because it affects customer satisfaction. In the production process, defects are still found on the topside of the structural construction, and they affect the quality of the part. The failure rate of the topside product is 5.28%, which exceeds the standard set by the company of 2%. Based on the percentage of defects obtained, the problem can be addressed with fishbone analysis and FMEA. The first stage uses CTQ, which aims to identify the failures that occur and determine the quality characteristics. After identifying the failures, the DPMO and sigma level can be calculated. Once the DPMO value and sigma level are obtained, the root causes of the failures are identified. The causes of problems analyzed using fishbone analysis can then be addressed using FMEA. Keywords: DPMO, sigma level, fishbone analysis, FMEA
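The DPMO and sigma-level step has a standard calculation. The sketch below uses the conventional formulas with illustrative counts (a 5.28% defect rate with one opportunity per unit), not the paper's actual production data:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Long-term sigma level with the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Illustrative: 528 defects in 10,000 units, one opportunity each (5.28%).
d = dpmo(528, 10_000, 1)
print(round(d))                   # 52800 DPMO
print(round(sigma_level(d), 2))   # corresponding sigma level
```

A 2% defect rate (20,000 DPMO) would correspond to a higher sigma level, which is the gap the fishbone/FMEA analysis aims to close.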

