GitWaterFlow: a successful branching model and tooling for achieving continuous delivery with multiple version branches

Author(s): Rayene Ben Rayana, Sylvain Killian, Nicolas Trangez, Arnaud Calmettes

2021 ◽ Vol 2 (5)
Author(s): Tuomas Granlund, Vlad Stirbu, Tommi Mikkonen

Abstract: Agile software development embraces change, valuing working software over comprehensive documentation and responding to change over following a plan. The ability to continuously release software has enabled a development approach in which experimental features are put to use and, if they stand the test of real use, remain in production. Examples of such features include machine learning (ML) models, which are usually pre-trained but can still evolve in production. However, many domains require a more plan-driven approach to avoid hazards to the environment and to humans, and to mitigate risks in the process. In this paper, we start by presenting continuous software engineering practices in a regulated context, and then apply the results to the emerging practice of MLOps, or continuous delivery of ML features. Furthermore, as a practical contribution, we present a case study of Oravizio, the first CE-certified medical software for assessing the risks of joint replacement surgeries. Towards the end of the paper, we also reflect on the Oravizio experiences in the light of MLOps in a regulated context.
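To make the idea of continuously delivering ML features under regulatory constraints more tangible, here is a minimal, hypothetical sketch of a promotion gate in a delivery pipeline; the `ModelCandidate` fields, the metric threshold, and the check list are assumptions for illustration only and are not taken from the paper. The point is simply that a model is promoted to production only if both its validation performance and the required regulatory evidence are in place.

```python
from dataclasses import dataclass

@dataclass
class ModelCandidate:
    """A trained ML model awaiting promotion (hypothetical structure)."""
    name: str
    auc: float                  # validation metric measured on a frozen test set
    risk_assessment_done: bool  # regulatory evidence, e.g. a maintained risk file
    docs_complete: bool         # required technical documentation is present

def promotion_gate(candidate: ModelCandidate, min_auc: float = 0.85) -> bool:
    """Allow promotion only if performance AND regulatory evidence checks pass."""
    checks = [
        candidate.auc >= min_auc,
        candidate.risk_assessment_done,
        candidate.docs_complete,
    ]
    return all(checks)

if __name__ == "__main__":
    candidate = ModelCandidate("surgery-risk-model-v2", auc=0.88,
                               risk_assessment_done=True, docs_complete=False)
    # The missing documentation blocks promotion; the pipeline stops here.
    print("promote to production:", promotion_gate(candidate))
```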


Author(s): Carmine Vassallo, Fiorella Zampetti, Daniele Romano, Moritz Beller, Annibale Panichella, ...

2019 ◽ pp. 59-69
Author(s): Eric Carter, Matthew Hurst

2018 ◽ Vol 50 (2) ◽ pp. 543-564
Author(s): Loïc Chaumont, Thi Ngoc Anh Nguyen

Abstract: The forest of mutations associated with a multitype branching forest is obtained by merging together all vertices in each of its clusters and preserving the connections between them. (Here, by cluster, we mean a maximal connected component of the forest in which all vertices have the same type.) We first show that the forest of mutations of any multitype branching forest is itself a branching forest. We then give its progeny distribution and describe some of its crucial properties in terms of the initial progeny distribution. We also obtain the limiting behaviour of the number of mutations both when the total number of individuals tends to ∞ and when the number of roots tends to ∞. The continuous-time case is then investigated by considering multitype branching forests with edge lengths. When mutations are non-reversible, we give a representation of their emergence times which allows us to describe their asymptotic behaviour under certain conditions on the mutation rates. These results have potential relevance for the emergence of mutations in cell populations, in particular for the genetic evolution of cancer and the development of infectious diseases.
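To make the construction concrete, the following is a small simulation sketch; it is not from the paper, and the two-type setting, Poisson offspring counts, and mutation probability are assumptions chosen only for illustration. Every edge along which the child's type differs from its parent's connects two clusters, so it becomes an edge of the forest of mutations; for a single tree, the number of clusters equals the number of such edges plus one.

```python
import math
import random

def poisson(lam: float) -> int:
    """Sample a Poisson random variable (Knuth's multiplication method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_tree(root_type: int, mean_offspring: float = 0.9,
                  mutation_prob: float = 0.05, max_pop: int = 100_000):
    """Simulate a two-type branching tree and count its mutation edges.

    A mutation edge joins a parent and a child of different types; after merging
    same-type clusters, each such edge becomes an edge of the forest of mutations.
    """
    frontier = [root_type]        # types of the current generation
    total, mutation_edges = 1, 0
    while frontier and total < max_pop:
        next_frontier = []
        for parent_type in frontier:
            for _ in range(poisson(mean_offspring)):
                mutated = random.random() < mutation_prob
                child_type = 1 - parent_type if mutated else parent_type
                mutation_edges += mutated
                next_frontier.append(child_type)
        total += len(next_frontier)
        frontier = next_frontier
    return total, mutation_edges

if __name__ == "__main__":
    random.seed(1)
    n, m = simulate_tree(root_type=0)
    # For a single tree: clusters = mutation edges + 1 = vertices of the mutation forest.
    print(f"individuals: {n}, mutation edges: {m}, mutation-forest vertices: {m + 1}")
```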


2015 ◽ Vol 24 (03) ◽ pp. 1541001
Author(s): Johannes Wettinger, Uwe Breitenbücher, Frank Leymann

Leading paradigms for developing, deploying, and operating applications, such as continuous delivery, configuration management, and the merging of development and operations (DevOps), are the foundation for various techniques and tools that implement automated deployment. To make such applications available to users and customers, these approaches are typically used in conjunction with Cloud computing to automatically provision and manage the underlying resources, such as storage and virtual servers. A major class of these automation approaches follows the idea of converging toward a desired state of a resource (e.g. a middleware component deployed on a virtual machine). This is achieved by repeatedly executing idempotent scripts until the desired state is reached. Because of major drawbacks of this approach, we discuss an alternative deployment automation approach based on compensation and fine-grained snapshots using container virtualization. We perform an evaluation comparing both approaches in terms of difficulties at design time and performance at runtime. Moreover, we discuss concepts, strategies, and implementations for effectively combining different deployment automation approaches.
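The contrast between the two automation styles can be sketched in a few lines. This is an illustrative toy model: the in-memory `State` dictionary and the function names are invented here, and the paper's snapshot mechanism based on container virtualization is reduced to plain compensation callbacks. The convergent approach re-runs idempotent steps until the observed state equals the desired one, while the compensation-based approach records an undo action per step and rolls back completed steps when one fails.

```python
from typing import Callable, Dict, List, Tuple

State = Dict[str, str]  # toy resource model, e.g. {"java": "installed"}

def converge(state: State, desired: State,
             ensure: Callable[[State, str, str], None], max_rounds: int = 5) -> State:
    """Convergent style: re-run idempotent 'ensure' steps until state == desired."""
    for _ in range(max_rounds):
        if state == desired:
            break
        for key, value in desired.items():
            if state.get(key) != value:
                ensure(state, key, value)  # idempotent: no effect once satisfied
    return state

def run_with_compensation(state: State,
                          steps: List[Tuple[Callable[[State], None],
                                            Callable[[State], None]]]) -> State:
    """Compensation style: each step carries an undo action; roll back on failure."""
    done: List[Callable[[State], None]] = []
    try:
        for action, compensate in steps:
            action(state)
            done.append(compensate)
    except RuntimeError:
        for compensate in reversed(done):  # restore the last consistent state
            compensate(state)
    return state

if __name__ == "__main__":
    def ensure(state: State, key: str, value: str) -> None:
        state[key] = value  # trivially idempotent in this toy model

    print(converge({}, {"java": "installed", "tomcat": "running"}, ensure))

    def install_db(s: State) -> None: s["db"] = "installed"
    def remove_db(s: State) -> None: s.pop("db", None)
    def start_app(s: State) -> None: raise RuntimeError("deployment step failed")
    def stop_app(s: State) -> None: s.pop("app", None)

    # The failing second step triggers compensation of the completed first step.
    print(run_with_compensation({}, [(install_db, remove_db), (start_app, stop_app)]))
```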


Sensors ◽ 2021 ◽ Vol 22 (1) ◽ pp. 128
Author(s): Tomasz Górski

Ensuring a production-ready state of the application under development is a defining feature of the Continuous Delivery (CD) approach. In a blockchain network, nodes communicate and store data in a distributed manner, and each node executes the same business application but operates in a distinct execution environment. The literature lacks research on continuous practices for blockchain and Distributed Ledger Technology (DLT); in particular, it lacks work that supports both design and deployment. The author proposes a solution that accounts for the continuous delivery of a business application to the diverse deployment environments in a DLT network. Two continuous delivery pipelines have been implemented using the Jenkins automation server: the first prepares the business application, whereas the second generates complete node deployment packages. As a result, the framework ensures that each deployment package contains the current version of the business application together with the node-specific, up-to-date deployment configuration files. The Smart Contract Design Pattern has been used when building the business application. Modeling the blockchain network installation has required the Unified Modeling Language (UML) and the UML Profile for Distributed Ledger Deployment, and the refined model-to-code transformation generates the deployment configurations for the nodes. Both the business application and the deployment configurations are stored in GitHub repositories. For verification, tests have been conducted on the electricity consumption and supply management system designed for prosumers of renewable energy.
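The packaging step of the second pipeline can be pictured roughly as follows. This is a sketch under assumptions: the `generate_node_packages` helper, the file names, and the node parameters are made up for illustration and do not reproduce the paper's UML-based model-to-code transformation. The current application artifact is combined with freshly generated, node-specific configuration into one deployment package per node.

```python
import json
from pathlib import Path
from typing import Dict, List

def generate_node_packages(app_artifact: Path,
                           node_configs: Dict[str, dict],
                           out_dir: Path) -> List[Path]:
    """Bundle the current application build with node-specific deployment
    configuration (a plain dict standing in for model-derived settings)."""
    packages: List[Path] = []
    out_dir.mkdir(parents=True, exist_ok=True)
    for node_name, config in node_configs.items():
        pkg = out_dir / f"{node_name}-deployment"
        pkg.mkdir(exist_ok=True)
        # Node-specific deployment configuration, regenerated on every run.
        (pkg / "deployment-config.json").write_text(json.dumps(config, indent=2))
        # Reference to the business application artifact in its current version.
        (pkg / "application.ref").write_text(str(app_artifact))
        packages.append(pkg)
    return packages

if __name__ == "__main__":
    nodes = {
        "node-prosumer-a": {"host": "10.0.0.11", "ledger_port": 10002},
        "node-prosumer-b": {"host": "10.0.0.12", "ledger_port": 10002},
    }
    for pkg in generate_node_packages(Path("build/business-app-1.4.2.jar"),
                                      nodes, Path("dist")):
        print("package:", pkg)
```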


2021
Author(s): Meenu Mary John

Context: With the advent of Machine Learning (ML) and especially Deep Learning (DL) technology, companies increasingly use Artificial Intelligence (AI) in their systems, alongside electronics and software. Nevertheless, the end-to-end process of developing, deploying and evolving ML and DL models in companies brings challenges related to the design and scaling of these models. For example, access to and availability of data is often a problem, and activities such as collecting, cleaning, preprocessing and storing data, as well as training, deploying and monitoring the model(s), are complex. Regardless of the level of expertise and/or access to data scientists, companies across the embedded systems domain struggle to build high-performing models due to a lack of established and systematic design methods and processes.
Objective: The overall objective is to establish systematic and structured design methods and processes for the end-to-end process of developing, deploying and successfully evolving ML/DL models.
Method: To achieve this objective, we conducted our research in close collaboration with companies in the embedded systems domain, using empirical research methods such as case studies, action research and literature review.
Results and Conclusions: This research provides six main results. First, it identifies the activities that companies undertake in parallel to develop, deploy and evolve ML/DL models, and the challenges associated with them. Second, it presents a conceptual framework for the continuous delivery of ML/DL models to accelerate AI-driven business in companies. Third, it presents a framework based on current literature to accelerate the end-to-end deployment process and advance knowledge on how to integrate, deploy and operationalize ML/DL models. Fourth, it develops a generic framework with five architectural alternatives for deploying ML/DL models at the edge, ranging from a centralized architecture that prioritizes (re)training in the cloud to a decentralized architecture that prioritizes (re)training at the edge. Fifth, it identifies key factors to help companies decide which architecture to choose for deploying ML/DL models. Finally, it explores how MLOps, as a practice that brings together data scientist teams and operations, ensures the continuous delivery and evolution of models.
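As an illustration of the spectrum of deployment alternatives, a decision over five architectures could look like the toy helper below. The alternative names and the decision factors (`data_privacy_critical`, `connectivity_reliable`, `edge_compute_available`) are assumptions made for this sketch and are not the factors identified in the thesis.

```python
from dataclasses import dataclass

# Five illustrative alternatives, ordered from cloud-centric to edge-centric
# (re)training; the ordering mirrors the spectrum described in the abstract,
# but the names and decision factors below are assumptions for this sketch.
ALTERNATIVES = [
    "cloud training / cloud inference",
    "cloud training / edge inference",
    "cloud training with edge fine-tuning",
    "federated (re)training across edge devices",
    "fully decentralized edge (re)training",
]

@dataclass
class DeploymentContext:
    data_privacy_critical: bool   # data must not leave the device
    connectivity_reliable: bool   # stable link between edge and cloud
    edge_compute_available: bool  # devices can afford (re)training workloads

def suggest_architecture(ctx: DeploymentContext) -> str:
    """Toy decision rule: push (re)training toward the edge only when privacy
    demands it and the edge hardware can carry the load."""
    if ctx.data_privacy_critical and ctx.edge_compute_available:
        return ALTERNATIVES[4] if not ctx.connectivity_reliable else ALTERNATIVES[3]
    if not ctx.connectivity_reliable:
        return ALTERNATIVES[2] if ctx.edge_compute_available else ALTERNATIVES[1]
    return ALTERNATIVES[0]

if __name__ == "__main__":
    ctx = DeploymentContext(data_privacy_critical=True,
                            connectivity_reliable=False,
                            edge_compute_available=True)
    print("suggested alternative:", suggest_architecture(ctx))
```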

