ReactiveFnJ: A Choreographed Model for Fork-Join Workflow in Serverless Computing

Author(s):  
Urmil Bharti ◽  
Anita Goel ◽  
S. C. Gupta

Function-as-a-Service (FaaS) is an event-based reactive programming model in which functions run for short durations in ephemeral, stateless containers. For building complex serverless applications, function composition is crucial to coordinate and synchronize the workflow of an application. Some serverless orchestration systems exist, but they are still primitive and do not provide inherent support for non-trivial workflows such as Fork-Join. To address this gap, we propose ReactiveFnJ, a fully serverless and scalable design model for the Fork-Join workflow. The intent of this work is to illustrate a design that is completely choreographed, reactive, and asynchronous, and that represents a dynamic composition model for serverless applications. Our design uses two innovative patterns, Relay Composition and Master-Worker Composition, to solve execution time-out challenges. As a Proof-of-Concept (PoC), a prototypical implementation of the Split-Sort-Merge use case, based on the Fork-Join workflow, is discussed and evaluated. ReactiveFnJ handles embarrassingly parallel computations, and its design does not depend on any external orchestration, messaging, or queue services. ReactiveFnJ facilitates the design of fully automated pipelines for distributed data processing systems, satisfying the Serverless Trilemma in its true essence. A file of any size can be processed with this effective and extensible design without running into execution time-outs. The proposed model is generic and can be applied to a wide range of serverless applications based on the Fork-Join workflow pattern, fostering choreographed serverless composition for complex workflows. It is useful for software engineers and developers in industry and commercial organizations, total-solution vendors, and academic researchers.
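To make the choreography concrete, here is a minimal sketch of a Split-Sort-Merge fork on AWS Lambda: the split function invokes sort workers asynchronously, and each worker relays its result straight to the join function, so no central orchestrator is involved. The function names, the chunking scheme, and the use of boto3 are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of choreographed fork-join on AWS Lambda (assumed platform).
# Function names ("sort_worker", "merge_joiner") are hypothetical.
import json
import boto3

lam = boto3.client("lambda")

def split_handler(event, context):
    """Fork: partition the input and invoke one sort worker per chunk."""
    lines = event["data"].splitlines()
    n = event.get("workers", 4)
    for i in range(n):
        lam.invoke(
            FunctionName="sort_worker",      # hypothetical worker function
            InvocationType="Event",          # async: nobody blocks waiting
            Payload=json.dumps({"chunk": lines[i::n], "part": i, "total": n}),
        )

def sort_handler(event, context):
    """Worker: sort one chunk, then relay directly to the join function."""
    lam.invoke(
        FunctionName="merge_joiner",         # hypothetical join function
        InvocationType="Event",
        Payload=json.dumps({"part": event["part"], "total": event["total"],
                            "sorted": sorted(event["chunk"])}),
    )
```

The join side would additionally need shared storage to detect when all parts have arrived, and the paper's Relay pattern re-invokes a function before it times out; both are omitted from this sketch.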

Author(s):  
András Éles ◽  
István Heckl ◽  
Heriberto Cabezas

A mathematical model is introduced to solve a mobile workforce management problem. In such a problem, a number of tasks must be executed at different locations by various teams; an example is an electricity utility company that has to deal with planned system upgrades and damage caused by storms. The aim is to determine a schedule for the teams such that the overall cost is minimal. The mobile workforce management problem involves scheduling, and the following questions must be answered: when to perform each task, how to route vehicles (the vehicle routing problem), and in which order the sites should be visited and by which teams. These problems are already complex in themselves. This paper proposes an integrated mathematical programming formulation which, through the assignment of its binary variables, can easily be embedded in heuristic algorithmic frameworks. The problem specification admits a wide range of parameters, including absolute and expected time windows for tasks, packing and unpacking in case of team movement, resource utilization, relations between tasks such as precedence, mutual exclusion, or parallel execution, and team-dependent travel and execution times and costs. To allow the model to handle larger problems, an algorithmic framework is also implemented that can find heuristic solutions in acceptable time and can serve as an alternative solution method. Computational performance is examined through a series of test cases in which the most important factors are scaled.
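The core of such a formulation is a set of binary assignment variables. The toy PuLP sketch below shows only that core: each task assigned to exactly one team at minimal cost. The data are invented, and the paper's routing, time-window, and precedence constraints are omitted.

```python
# Toy assignment core with binary variables, using PuLP (illustrative only).
import pulp

tasks, teams = ["t1", "t2", "t3"], ["A", "B"]
cost = {("t1", "A"): 4, ("t1", "B"): 6, ("t2", "A"): 3,
        ("t2", "B"): 2, ("t3", "A"): 5, ("t3", "B"): 5}

m = pulp.LpProblem("workforce", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, teams), cat="Binary")

# Objective: total execution cost over all task-team assignments.
m += pulp.lpSum(cost[t, g] * x[t][g] for t in tasks for g in teams)

# Each task is executed by exactly one team.
for t in tasks:
    m += pulp.lpSum(x[t][g] for g in teams) == 1

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, g) for t in tasks for g in teams if x[t][g].value() > 0.5])
```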


Author(s):  
Yingchun Xia ◽  
Zhiqiang Xie ◽  
Yu Xin ◽  
Xiaowei Zhang

Customized products such as electromechanical prototypes are characterized by research and trial manufacturing: their BOM structures and processing parameters vary greatly, making it difficult for a single shop to cover such a wide range of processing requirements. Given the dynamic and fuzzy manufacturing characteristics of these products, not only the coordinated transport time across multiple shops but also the fact that each product has a designated output shop must be considered. To solve this Multi-shop Integrated Scheduling Problem with Fixed Output Constraint (MISP-FOC), a constraint programming model is developed to minimize total tardiness, and a Multi-shop Integrated Scheduling Algorithm (MISA) based on an Enhanced Genetic Algorithm (EGA) and Branch and Bound (B&B) is proposed. MISA is a hybrid optimization method consisting of four parts. First, to deal with the dynamic and fuzzy manufacturing characteristics, the dynamic production process is transformed into a series of time-continuous static scheduling problems according to the proposed dynamic rescheduling mechanism. Second, a pre-scheduling scheme is generated by the EGA at each event moment. Third, the jobs in the pre-scheduling scheme are divided into three parts: dispatched jobs, jobs to be dispatched, and jobs available for rescheduling. Finally, the B&B method optimizes the jobs available for rescheduling during the period in which the dispatched jobs are executing. Google OR-Tools is used to verify the proposed constraint programming model, and the experimental results show that the proposed algorithm is effective and feasible.
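For a flavour of the constraint programming side, the sketch below minimizes total tardiness for a few single-machine jobs with OR-Tools CP-SAT, the solver family the authors use for verification. Jobs, durations, and due dates are invented; the multi-shop transport and fixed-output-shop constraints are omitted.

```python
# Minimal total-tardiness model in OR-Tools CP-SAT (illustrative data).
from ortools.sat.python import cp_model

durations, due = [4, 3, 6], [5, 6, 10]
horizon = sum(durations)

m = cp_model.CpModel()
starts, intervals, tardies = [], [], []
for j, d in enumerate(durations):
    s = m.NewIntVar(0, horizon, f"start{j}")
    e = m.NewIntVar(0, horizon, f"end{j}")
    intervals.append(m.NewIntervalVar(s, d, e, f"job{j}"))
    t = m.NewIntVar(0, horizon, f"tardy{j}")
    m.Add(t >= e - due[j])           # tardiness = max(0, end - due date)
    starts.append(s)
    tardies.append(t)

m.AddNoOverlap(intervals)            # one machine runs one job at a time
m.Minimize(sum(tardies))

solver = cp_model.CpSolver()
if solver.Solve(m) == cp_model.OPTIMAL:
    print("total tardiness:", solver.ObjectiveValue())
    print("starts:", [solver.Value(s) for s in starts])
```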


Leonardo ◽  
2015 ◽  
Vol 48 (4) ◽  
pp. 384-399 ◽  
Author(s):  
Amit Zoran ◽  
Seppo O. Valjakka ◽  
Brian Chan ◽  
Atar Brosh ◽  
Rab Gordon ◽  
...  

This article introduces the Hybrid Craft exhibition, positioning 15 hybrid projects in the context of today's Maker culture. Each project demonstrates a unique integration of contemporary making practice with traditional craft. The presenters in the show represent a wide range of professional backgrounds: independent makers, students and teachers, designers associated with research institutes, and commercial organizations. The background of Hybrid Craft and the makers and their works, spanning tool-making, jewelry, bowl-making, and interactive design, are presented. The discussion focuses on the integration of human skill and design, introducing the diverse portfolio of technologies used in this hybrid making process.


Author(s):  
Wilman Vega ◽  
Henry Umaña

Semantic Web Services offer benefits that contribute to the evolution of the Web, such as automatic discovery and invocation and dynamic composition of resources. They effectively enable interoperability between systems, allowing a wide range of new services and business opportunities on the Internet. However, the structure needed to provide these benefits makes their development a complex process, so easier and more dynamic development approaches are required to guarantee reuse, quality, and speed. Model-driven software development contributes efficiently in these respects, since it intrinsically embodies concepts such as separation of concerns, reusability, and interoperability between components. This article presents a model-driven software development approach oriented to the development of Semantic Web Services: the analysis, design, and development phases of the proposed methodology are laid out and then applied to a small case study, yielding the structure of a Semantic Web Service as a result. Keywords: Semantic Web Services, Model-Driven Development, Web Ontologies.


Author(s):  
Neal Jean ◽  
Sherrie Wang ◽  
Anshul Samar ◽  
George Azzari ◽  
David Lobell ◽  
...  

Geospatial analysis lacks methods like the word vector representations and pre-trained networks that significantly boost performance across a wide range of natural language and computer vision tasks. To fill this gap, we introduce Tile2Vec, an unsupervised representation learning algorithm that extends the distributional hypothesis from natural language — words appearing in similar contexts tend to have similar meanings — to spatially distributed data. We demonstrate empirically that Tile2Vec learns semantically meaningful representations for both image and non-image datasets. Our learned representations significantly improve performance in downstream classification tasks and, similarly to word vectors, allow visual analogies to be obtained via simple arithmetic in the latent space.
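The extension of the distributional hypothesis is typically trained with a triplet loss: an anchor tile should embed closer to a geographically nearby tile than to a distant one. The PyTorch sketch below shows that loss with a placeholder encoder; the architecture and margin are assumptions, not the paper's configuration.

```python
# Sketch of a Tile2Vec-style triplet objective (placeholder encoder).
import torch
import torch.nn as nn

encoder = nn.Sequential(               # stand-in CNN encoder, not the paper's
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)

def triplet_loss(anchor, neighbor, distant, margin=1.0):
    za, zn, zd = encoder(anchor), encoder(neighbor), encoder(distant)
    d_pos = (za - zn).norm(dim=1)      # nearby tiles: want small distance
    d_neg = (za - zd).norm(dim=1)      # distant tiles: want large distance
    return torch.relu(d_pos - d_neg + margin).mean()

# Toy batch of 8 RGB tiles, 32x32 pixels each.
a, n, d = (torch.randn(8, 3, 32, 32) for _ in range(3))
loss = triplet_loss(a, n, d)
loss.backward()                        # gradients flow into the encoder
```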


2020 ◽  
Vol 34 (04) ◽  
pp. 3817-3824
Author(s):  
Aritra Dutta ◽  
El Houcine Bergou ◽  
Ahmed M. Abdelmoniem ◽  
Chen-Yu Ho ◽  
Atal Narayan Sahu ◽  
...  

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks. However, there exists a discrepancy between theory and practice: while theoretical analysis of most existing compression methods assumes compression is applied to the gradients of the entire model, many practical implementations operate individually on the gradients of each layer of the model. In this paper, we prove that layer-wise compression is, in theory, better, because the convergence rate is upper bounded by that of entire-model compression for a wide range of biased and unbiased compression methods. However, despite the theoretical bound, our experimental study of six well-known methods shows that convergence, in practice, may or may not be better, depending on the actual trained model and compression ratio. Our findings suggest that it would be advantageous for deep learning frameworks to include support for both layer-wise and entire-model compression.
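The distinction at issue is only where the compressor is applied. The sketch below makes it concrete with top-k sparsification, one assumed compressor of the kind studied: applied per layer versus once over the concatenated gradient.

```python
# Layer-wise vs entire-model top-k sparsification (illustrative tensors).
import torch

def top_k(t, ratio=0.01):
    """Keep only the largest-magnitude entries of a tensor."""
    flat = t.flatten()
    k = max(1, int(flat.numel() * ratio))
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(t)

grads = {"layer1": torch.randn(256, 128), "layer2": torch.randn(10, 256)}

# Layer-wise: each layer's gradient is compressed independently.
layerwise = {name: top_k(g) for name, g in grads.items()}

# Entire-model: gradients are concatenated and compressed once, so layers
# with small-magnitude gradients may contribute nothing to the k survivors.
whole = top_k(torch.cat([g.flatten() for g in grads.values()]))
```

In a real framework this logic would live inside the communication hook of the data-parallel trainer rather than in standalone functions.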


Constraints ◽  
2020 ◽  
Vol 25 (3-4) ◽  
pp. 319-337 ◽  
Author(s):  
Mark Wallace ◽  
Neil Yorke-Smith

The cyclic hoist scheduling problem (CHSP) is a well-studied optimisation problem due to its importance in industry. Despite the wide range of solving techniques applied to the CHSP and its variants, the models have remained complicated and inflexible, or have failed to scale up with larger problem instances. This article re-examines modelling of the CHSP and proposes a new simple, flexible constraint programming formulation. We compare current state-of-the-art solving technologies on this formulation, and show that modelling in a high-level constraint language, MiniZinc, leads to both a simple, generic model and to computational results that outperform the state of the art. We further demonstrate that combining integer programming and lazy clause generation, using the multiple cores of modern processors, has potential to improve over either solving approach alone.
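The workflow the article advocates, one high-level model solved unchanged by different backends, can be scripted with the minizinc Python bindings (assumed installed, along with the named solvers). The model string below is a tiny placeholder, not the authors' CHSP formulation.

```python
# One MiniZinc model, several backends (placeholder model, not the CHSP).
import minizinc

model = minizinc.Model()
model.add_string("""
    var 1..10: period;                 % stand-in for the cycle length
    var 0..10: move;
    constraint move + 3 <= period;     % a hoist move must fit in the cycle
    solve minimize period;
""")

for backend in ["gecode", "chuffed"]:  # classic CP vs lazy clause generation
    solver = minizinc.Solver.lookup(backend)
    result = minizinc.Instance(solver, model).solve()
    print(backend, "->", result["period"])
```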


Author(s):  
Horacio González-Vélez ◽  
Maryam Kontagora

Performance evaluation of MapReduce using full virtualisation on a departmental cloud

This work analyses the performance of Hadoop, an implementation of the MapReduce programming model for distributed parallel computing, executing in a virtualisation environment comprised of 1+16 nodes running the VMWare Workstation software. A set of experiments using the standard Hadoop benchmarks has been designed to determine whether or not significant reductions in the execution time of computations are experienced when using Hadoop on this virtualisation platform on a departmental cloud. Our findings indicate that a significant decrease in computing times is observed under these conditions. They also highlight how overheads and virtualisation in a distributed environment hinder the possibility of achieving the maximum (peak) performance.


2006 ◽  
Vol 17 (02) ◽  
pp. 251-270 ◽  
Author(s):  
THOMAS RAUBER ◽  
GUDULA RÜNGER

Multiprocessor task (M-task) programming is a suitable parallel programming model for coding application problems with an inherent modular structure. An M-task can be executed on a group of processors of arbitrary size, concurrently to other M-tasks of the same application program. The data of a multiprocessor task program usually include composed data structures, like vectors or arrays. For distributed memory machines or cluster platforms, those composed data structures are distributed within one or more processor groups. Thus, a concise parallel programming model for M-tasks requires a standardized distributed data format for composed data structures. Additionally, functions for data re-distribution with respect to different data distributions and different processor group layouts are needed to glue program parts together. In this paper, we present a data re-distribution library which extends the M-task programming with Tlib, a library providing operations to split processor groups and to map M-tasks to processor groups.
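The group-splitting half of this picture can be sketched with mpi4py, an assumption since Tlib itself is a C library on top of MPI: ranks split into two groups that could each run an M-task concurrently, and each group gathers its blocks as a stand-in for re-distribution.

```python
# Sketch of processor-group splitting in the M-task style, using mpi4py.
# Run with e.g. "mpiexec -n 4 python sketch.py"; the layout is illustrative.
from mpi4py import MPI

world = MPI.COMM_WORLD
color = world.rank % 2                 # carve out two processor groups
group = world.Split(color, key=world.rank)

# A block of a distributed vector owned by this rank within its group.
local = [world.rank * 10 + i for i in range(4)]

# Stand-in for re-distribution: collect the group's blocks at the group
# root, which could scatter them in a new layout for the next M-task.
blocks = group.gather(local, root=0)
if group.rank == 0:
    print(f"group {color} collected {len(blocks)} blocks")
```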


2015 ◽  
Vol 2015 ◽  
pp. 1-7 ◽  
Author(s):  
Yang Liu ◽  
Wei Wei

MapReduce is a programming model and an associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster. In a cloud environment, node and task failures are no longer exceptional but a common feature of large-scale systems. The current rescheduling-based fault-tolerance method in the MapReduce framework fails to fully consider the location of distributed data and the computation and storage overhead of rescheduling failed tasks; as a result, a single node failure can increase the completion time dramatically. In this paper, a replication-based mechanism is proposed which takes both task and node failures into consideration. Experimental results show that, compared with the default mechanism in Hadoop, our mechanism can significantly improve performance at failure time, with a more than 30% reduction in execution time.
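The replication idea can be illustrated with a toy, self-contained sketch: writing a task's output to a backup node up front lets recovery read the replica instead of rescheduling the whole task. Node names and the placement rule here are invented, not the paper's mechanism.

```python
# Toy illustration of replication-based recovery vs rescheduling.
import random

nodes = ["n1", "n2", "n3", "n4"]
outputs = {}                                    # (task, node) -> result

def run_task(task, data):
    primary = random.choice(nodes)
    backup = random.choice([n for n in nodes if n != primary])
    result = sorted(data)                       # stand-in computation
    outputs[(task, primary)] = result           # primary copy
    outputs[(task, backup)] = result            # proactive replica
    return primary

def recover(task, failed_node):
    """On node failure, read a surviving replica instead of recomputing."""
    copies = [n for (t, n) in outputs if t == task and n != failed_node]
    return outputs[(task, copies[0])] if copies else None

p = run_task("map-0", [3, 1, 2])
print(recover("map-0", p))                      # replica survives the failure
```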

