Applicability of Remote Sensing Workflow in Kubernetes-Managed On-premise Cluster Environment

Proceedings, 2019, Vol 30 (1), pp. 38
Author(s): Vithlani, Marcel, Melville, Prüm, Lam, ...

The acquisition, storage, and processing of huge amounts of data, and their fast analysis to generate information, is not a new approach, but it becomes challenging when smart decisions must be made about hardware and software choices. In the specific cases of environmental protection, nature conservation, and precision farming, where fast and accurate reactions are required, drone technologies with imaging sensors are of interest to many research groups. However, post-processing of the images acquired by drone-based sensors, such as generating orthomosaics from aerial images and superimposing them on a global map to identify the exact locations of the area of interest, is computationally intensive and sometimes takes hours or even days to achieve the desired results. Initial tests have shown that photogrammetry software generates an orthomosaic in less time when run on a workstation with higher CPU, RAM, and GPU configurations. Tasks like setting up the application environment with its dependencies, making this setup portable, and managing the installed services can be challenging, especially for small- and medium-sized enterprises that have limited resources for exploring different architectures. To enhance the competitiveness of small- and medium-sized enterprises and research institutions, the proposed solution integrates open-source tools and frameworks such as Kubernetes (version v1.13.4, available online: https://kubernetes.io/) and OpenDroneMap (version 0.3, available online: https://github.com/OpenDroneMap/ODM), enabling a reference architecture that is as vendor-neutral as possible. The current work is based on an on-premise cluster computing approach for a fast and efficient photogrammetry process using open-source software such as OpenDroneMap, combined with lightweight containerization techniques such as Docker (version 17.12.1, available online: https://www.docker.io/), orchestrated by Kubernetes.
The services provided by OpenDroneMap enable a microservice-based architecture, and these container-based services can be administered easily by a container orchestrator like Kubernetes. After setting up the servers with the core OpenDroneMap services on our container-based cluster, with Kubernetes as the orchestration engine, the plan is to use Kubernetes' powerful management capabilities to maximize resource efficiency, as the basis for creating Service Level Agreements for a cloud service.
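As a rough illustration of how an OpenDroneMap processing run could be handed to Kubernetes as a one-off batch workload, the sketch below builds a `batch/v1` Job manifest as a Python dict. The image tag matches the ODM version cited above, but the resource sizes, volume claim name, and dataset path are illustrative assumptions, not values from the paper.

```python
def odm_job_manifest(job_name: str, dataset_path: str) -> dict:
    """Return a Kubernetes Job manifest that runs ODM against a mounted dataset."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": job_name},
        "spec": {
            "backoffLimit": 2,  # retry a failed photogrammetry run twice
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "odm",
                        "image": "opendronemap/odm:0.3",  # version cited in the abstract
                        "args": ["--project-path", dataset_path],
                        "resources": {  # illustrative sizing, not from the paper
                            "requests": {"cpu": "4", "memory": "16Gi"},
                            "limits": {"cpu": "8", "memory": "32Gi"},
                        },
                        "volumeMounts": [{"name": "datasets",
                                          "mountPath": dataset_path}],
                    }],
                    "volumes": [{"name": "datasets",
                                 "persistentVolumeClaim":
                                     {"claimName": "odm-datasets"}}],
                }
            },
        },
    }

manifest = odm_job_manifest("orthomosaic-run-1", "/datasets/field-survey")
```

Serialized to YAML and applied with `kubectl`, such a Job would let Kubernetes schedule the run on any node with free capacity and clean up after completion.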

Electronics, 2021, Vol 10 (7), pp. 869
Author(s): Pablo F. S. Melo, Eduardo P. Godoy, Paolo Ferrari, Emiliano Sisinni

The technical innovation of the fourth industrial revolution (Industry 4.0, I4.0) rests on three conditions: the horizontal and vertical integration of manufacturing systems, the decentralization of computing resources, and continuous digital engineering throughout the product life cycle. The reference architecture model for Industry 4.0 (RAMI 4.0) is a common model for systematizing, structuring, and mapping the complex relationships and functionalities required in I4.0 applications. Despite its adoption in I4.0 projects, RAMI 4.0 is an abstract model, not an implementation guide, which hinders its adoption and full deployment. As a result, many recent papers have studied the interactions required among the elements distributed along the three axes of RAMI 4.0 to develop solutions compatible with the model. This paper investigates RAMI 4.0 and describes our proposal for the development of an open-source control device for I4.0 applications. The control device is one of the elements in the hierarchy-level axis of RAMI 4.0. Its main contribution is the integration of open-source solutions for hardware, software, communication, and programming, covering the relationships among three layers of RAMI 4.0 (asset, integration, and communication). The implementation of a proof of concept of the control device is discussed. Experiments in an I4.0 scenario were used to validate the operation of the control device and demonstrated its effectiveness and robustness, with no interruptions, failures, or communication problems during the experiments.
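The abstract does not detail the control device's message format, but the layering it names (asset, integration, communication) can be sketched as a hypothetical helper that wraps a raw sensor reading into a serialized, MQTT-style message. The topic scheme and field names below are invented for illustration and are not the paper's protocol.

```python
import json

def to_i40_message(asset_id: str, variable: str, value: float, unit: str) -> tuple:
    """Wrap a raw reading from the asset layer in a digital representation
    (integration layer) and serialize it for an MQTT-style publish
    (communication layer). Topic scheme and payload fields are hypothetical."""
    topic = f"plant/{asset_id}/{variable}"
    payload = json.dumps({"asset": asset_id, "variable": variable,
                          "value": value, "unit": unit})
    return topic, payload

topic, payload = to_i40_message("press-01", "temperature", 72.5, "degC")
```

A real device would publish this pair through a broker; the point of the sketch is only that each RAMI 4.0 layer adds one well-defined transformation.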


2012, Vol 4 (4), pp. 68-88
Author(s): Chao-Tung Yang, Wen-Feng Hsieh

This paper's objective is to implement and evaluate a high-performance computing environment by clustering idle PCs (personal computers) with diskless slave nodes on campus, in order to extract the maximum computing potential from the hardware. Two cluster platforms, BCCD and DRBL, are compared on computing performance, and the experiments show that DRBL outperforms BCCD. Originally, DRBL was created to support instruction on a free-software teaching platform. To achieve this purpose, DRBL is deployed in a computer classroom with 32 PCs, enabling the PCs to be switched manually or automatically among different operating systems. The bioinformatics program mpiBLAST also runs smoothly on the cluster architecture. From a management perspective, the state of each computation node in the cluster is monitored by Ganglia, an existing open-source monitoring tool; the authors gather CPU, memory, and network-load information for each computation node in every network segment. By comparing performance aspects, including swap performance and different network environments, they attempt to find the best cluster configuration for a computer classroom at the school. Finally, HPL from the HPCC benchmark suite is used to demonstrate cluster performance.
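The HPL result mentioned at the end can be made concrete: HPL reports performance using the standard operation count for a dense n-by-n linear solve, 2/3 n^3 + 2 n^2 floating-point operations, divided by the wall-clock time. A small helper shows the conversion (the example numbers are illustrative, not from the paper):

```python
def hpl_gflops(n: int, seconds: float) -> float:
    """Convert an HPL run (problem size n, wall-clock seconds) to Gflop/s
    using HPL's standard operation count 2/3 * n^3 + 2 * n^2."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e9

# Illustrative: a hypothetical n = 10000 solve finishing in 60 s
rate = hpl_gflops(10000, 60.0)
```

Because the operation count grows as n^3 while communication grows more slowly, larger problem sizes usually report higher efficiency on the same cluster.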


2012, Vol 490-495, pp. 1231-1236
Author(s): Tran Van Hung, Chuan He Huang

An MMDB (main-memory database) cluster system is a memory-optimized relational database implemented on a cluster computing platform; it provides applications with extremely fast response times and the very high throughput required in a wide range of industries. Here, a new dynamic fragment allocation algorithm (DFAPR) for the partially replicated allocation scenario is proposed. The algorithm reallocates data according to the changing access pattern of each fragment: data is kept at its current site, migrated, or replicated to remote sites depending on access frequency and average response time. Simulation results show that DFAPR is well suited to MMDB clusters because it improves response time and maximizes the locality of processing, enabling parallel processing of MMDB workloads in a cluster environment.
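The abstract does not give DFAPR's exact rules, so the following is a simplified, hypothetical decision rule in its spirit: per fragment, a remote site whose access count dominates local access triggers migration, while heavy-but-not-dominant access (or slow remote reads) triggers a new replica. All parameter names and cutoffs are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class FragmentStats:
    local_site: str
    access_counts: dict    # site -> accesses to this fragment in the last window
    avg_response_ms: dict  # site -> average response time observed from that site

def reallocate(frag: FragmentStats, migrate_ratio: float = 2.0,
               replica_threshold: int = 100, slow_ms: float = 30.0) -> dict:
    """Return per-site actions: 'keep', 'migrate-here', or 'replicate-here'.
    A simplified stand-in for DFAPR's reallocation decision."""
    local = frag.access_counts.get(frag.local_site, 0)
    actions = {frag.local_site: "keep"}
    for site, count in frag.access_counts.items():
        if site == frag.local_site:
            continue
        if count > migrate_ratio * max(local, 1):
            actions[site] = "migrate-here"       # remote demand dominates: move it
        elif count > replica_threshold or frag.avg_response_ms.get(site, 0.0) > slow_ms:
            actions[site] = "replicate-here"     # heavy but not dominant: add a replica
    return actions
```

Run periodically per fragment, such a rule keeps hot data close to where it is read, which is the locality-of-processing effect the paper's simulations measure.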


2021, Vol ahead-of-print (ahead-of-print)
Author(s): Alexander Döschl, Max-Emanuel Keller, Peter Mandl

Purpose: This paper aims to evaluate different approaches for the parallelization of compute-intensive tasks. The study compares a Java multi-threaded algorithm, distributed computing solutions based on the MapReduce (Apache Hadoop) and resilient distributed dataset (RDD) (Apache Spark) paradigms, and a graphics processing unit (GPU) approach using Numba for the compute unified device architecture (CUDA).
Design/methodology/approach: The paper uses a simple but computationally intensive puzzle as a case study for experiments. To find all solutions by brute-force search, 15! permutations had to be computed and tested against the solution rules. The implementations were benchmarked on Amazon EC2 instances for performance and scalability measurements.
Findings: The comparison of the Apache Hadoop and Apache Spark solutions under Amazon EMR showed that the processing time measured in CPU minutes was up to 30% lower with Spark, whose performance especially benefits from an increasing number of tasks. With the CUDA implementation, more than 16 times faster execution is achievable for the same price compared to the Spark solution. Apart from the multi-threaded implementation, the processing times of all solutions scale approximately linearly. Finally, several application suggestions for the different parallelization approaches are derived from the insights of this study.
Originality/value: Numerous studies have examined the performance of parallelization approaches, but most deal with processing large amounts of data or mathematical problems. This work, in contrast, compares these technologies on their ability to implement computationally intensive distributed algorithms.
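The brute-force structure described above can be shown on a toy instance: 6 elements instead of 15, and a stand-in derangement rule instead of the paper's puzzle rules. The search space is partitioned into independent tasks by fixing the first element, which is the same sharding pattern the Hadoop/Spark and CUDA versions would use to distribute work.

```python
from itertools import permutations

def is_solution(p) -> bool:
    """Stand-in rule (not the paper's puzzle): no element in its
    original position, i.e. the permutation is a derangement."""
    return all(v != i for i, v in enumerate(p))

def count_in_partition(first, rest) -> int:
    """One task's share of the search: all permutations beginning with `first`."""
    return sum(1 for tail in permutations(rest)
               if is_solution((first,) + tail))

items = tuple(range(6))
# Split the space into len(items) independent tasks, one per leading element,
# exactly how the brute force would be sharded across workers or GPU blocks.
total = sum(count_in_partition(f, tuple(x for x in items if x != f))
            for f in items)
```

Each partition touches a disjoint slice of the 6! permutations, so partial counts can be computed in parallel and summed in a reduce step.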


2021, Vol 17 (2), pp. 179-195
Author(s): Priyanka Bharti, Rajeev Ranjan, Bhanu Prasad

Cloud computing provisions and allocates resources, in advance or in real time, to dynamic applications planned for execution. This is a challenging task, as the Cloud Service Providers (CSPs) may not always have sufficient resources to satisfy the resource requests of the Cloud Service Users (CSUs). Further, CSPs and CSUs have conflicting interests and may have different utilities. Service Level Agreement (SLA) negotiations between CSPs and CSUs can address these limitations. User Agents (UAs) negotiate for resources on behalf of the CSUs, helping reduce overall costs for the CSUs and enhance resource utilization for the CSPs. This research proposes a broker-based mediation framework to optimize the SLA negotiation strategies between UAs and CSPs in a Cloud environment. The impact of the proposed framework on utility, negotiation time, and request satisfaction is evaluated. The empirical results show that these strategies favor cooperative negotiation and achieve significantly higher utilities, higher satisfaction, and faster negotiation for all the entities involved.
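The paper's strategies are not reproduced here, but a minimal time-dependent concession protocol illustrates the mechanics such UA-CSP negotiations build on: each side concedes a fixed fraction of its price range per round, and a deal closes when the buyer's bid meets the seller's ask. All numbers and the linear concession schedule are illustrative assumptions.

```python
def negotiate(buyer_start: float, buyer_max: float,
              seller_start: float, seller_min: float, rounds: int = 10) -> dict:
    """Linear time-dependent concession: at round t each side has moved
    t/rounds of the way from its opening offer toward its reserve value."""
    for t in range(rounds + 1):
        frac = t / rounds
        bid = buyer_start + frac * (buyer_max - buyer_start)   # buyer concedes up
        ask = seller_start - frac * (seller_start - seller_min)  # seller concedes down
        if bid >= ask:
            return {"agreed": True, "round": t, "price": (bid + ask) / 2}
    return {"agreed": False, "round": rounds, "price": None}

deal = negotiate(buyer_start=10.0, buyer_max=20.0,
                 seller_start=30.0, seller_min=15.0)
```

A broker-based framework like the paper's would sit between many such agents, steering concession rates so agreements form earlier and at higher joint utility.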


2020, Vol 17 (12), pp. 5296-5306
Author(s): N. Keerthana, Viji Vinod, Sudhakar Sengan

Data in the Cloud refers to data that a cloud service provider (CSP) transmits, stores, or manages. An enterprise will enforce the same data-usage rules while the data resides within the enterprise, and thus extends the required cryptographic security criteria to data collected, exchanged, or handled by the CSP; the CSP's Service Level Agreements cannot override these cryptographic access measures. When data is transferred securely to the CSP, it can be securely collected, distributed, and interpreted. Data at rest refers to data stored internally, in structured and unstructured forms such as databases and file cabinets; an example is the use of cryptography to preserve the integrity of valuable data while it is processed. For cloud services, storage takes multiple forms, from recording units and repositories to many unstructured items. This paper presents a secure model for data at rest. The proposed TF-Sec model combines slicing, tokenization, and encryption. The model encrypts the given cloud data using AES-256, and the encrypted block is then sliced into chunks of data fragments using HD-Slicer. The tokenization algorithm TKNZ is applied to each chunk, an erasure-coding technique is applied to the tokens, and a data-dispersion technique scrambles the encrypted data fragments and allocates them to the storage nodes of multiple CSPs. Through these steps, this study aims to resolve the identified cloud security problems and to guarantee the confidentiality of cloud users' data, since individual encrypted data fragments would be of little benefit to a CSP.
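The encrypt-slice-disperse stage of such a pipeline can be sketched with the standard library alone. Note the cipher below is a SHA-256 keystream stand-in, not the AES-256 the model prescribes, and HD-Slicer and TKNZ are replaced by a plain fixed-size split with round-robin placement; every name and parameter here is illustrative.

```python
import hashlib

def xor_stream_encrypt(data: bytes, key: bytes) -> bytes:
    """Stand-in cipher (NOT AES-256): XOR with a SHA-256 counter keystream,
    used only to keep the sketch standard-library. Applying it twice decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def slice_and_disperse(ciphertext: bytes, chunk_size: int, nodes: list) -> dict:
    """Split ciphertext into fixed-size fragments and place them round-robin
    across storage nodes, so no single CSP holds a contiguous run of data."""
    placement = {n: [] for n in nodes}
    for idx, i in enumerate(range(0, len(ciphertext), chunk_size)):
        placement[nodes[idx % len(nodes)]].append((idx, ciphertext[i:i + chunk_size]))
    return placement

key = b"demo-key-16bytes"
data = b"sensitive record " * 4
ct = xor_stream_encrypt(data, key)
placement = slice_and_disperse(ct, 16, ["cspA", "cspB", "cspC"])
```

Reassembly requires fetching fragments from all providers and the key, which is the confidentiality argument the paper makes: one CSP's fragments alone reveal nothing useful.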


2013, Vol 4 (3), pp. 38-52
Author(s): Sai Manoj Marepalli, Razia Sultana, Andreas Christ

Cloud computing is the emerging technology that provides IT as a utility through the internet. Its benefits include, but are not limited to, service-based delivery, scalability, elasticity, a shared pool of resources, and metered use. Because of these benefits, the concept of cloud computing fits very well with m-learning, which differs from other forms of e-learning and covers a wide range of possibilities opened up by the convergence of new mobile technologies, wireless communication infrastructure, and distance-learning development. Like any other concept, cloud computing offers not only benefits but also introduces a myriad of security issues, such as a lack of transparency between cloud user and provider, a lack of standards, identity-related security concerns, and Service Level Agreement (SLA) inadequacy. Providing secure, transparent, and reliable services in a cloud computing environment is therefore an important issue. This paper introduces a secured three-layered architecture with an advanced Intrusion Detection System (advIDS), which overcomes different vulnerabilities in cloud-deployed applications. The proposed architecture can reduce the impact of different attacks by providing timely alerts, rejecting unauthorized access to services, and recording new threat profiles for future verification. The goal of this research is to give the cloud user more control over data and applications, which are now mainly controlled by the Cloud Service Provider (CSP).
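The abstract does not specify advIDS's detection logic, so the hypothetical sketch below only illustrates the three behaviors it lists: raising timely alerts, rejecting unauthorized access, and recording new threat profiles for future verification. The rule format and field names are invented for illustration.

```python
def advids_check(event: dict, rules: list, known_threats: set) -> tuple:
    """Match an access event against detection rules. Returns (authorized, alerts);
    any rule hit rejects the access and records a (source, alert) threat profile."""
    alerts = []
    for rule in rules:
        if all(event.get(k) == v for k, v in rule["match"].items()):
            alerts.append(rule["alert"])
            known_threats.add((event.get("source"), rule["alert"]))  # remember for later
    return (not alerts), alerts

rules = [{"match": {"action": "login", "status": "failed_x5"},
          "alert": "brute-force"}]
threats = set()
ok, alerts = advids_check(
    {"action": "login", "status": "failed_x5", "source": "10.0.0.9"},
    rules, threats)
```

In a layered deployment, a check like this would sit between the user-facing layer and the service layer, so rejected events never reach the protected applications.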

