Critical Analysis of Parallel and Distributed Computing and Future Research Direction of Cloud Computing

Author(s):  
Rimma Padovano

"Cloud computing" refers to large-scale parallel and distributed systems, which are essentially collections of autonomous. As a result, the “cloud organization” is made up on a wide range of ideas and experiences collected since the first digital computer was used to solve algorithmically complicated problems. Due to the complexity of established parallel and distributed computing ontologies, it is necessary for developers to have a high level of expertise to get the most out of the consolidated computer resources. The directions for future research for parallel and distributed computing are critically presented in this research: technology and application and cross-cutting concerns.

2019
Vol 28 (06)
pp. 1930006
Author(s):
Pingping Lu
Gongxuan Zhang
Zhaomeng Zhu
Xiumin Zhou
Jin Sun
...  

Scientific workflow is a common model for organizing large scientific computations. It borrows the concept of workflow from business activities to manage the complicated processes in scientific computing automatically or semi-automatically. Workflow scheduling, which maps the tasks in a workflow onto parallel computing resources, has been studied extensively over the years. In recent years, with the rise of cloud computing as a new large-scale distributed computing model, it is of great significance to study the workflow scheduling problem in the cloud. Compared with traditional distributed computing platforms, cloud platforms have unique characteristics such as the self-service resource management model and the pay-as-you-go billing model, so workflow scheduling in the cloud needs to be reconsidered. When scheduling workflows in clouds, the monetary cost and the makespan of workflow executions concern both the cloud service providers (CSPs) and the customers. In this paper, we study a series of cost-and-time-aware workflow scheduling algorithms in cloud environments, aiming to offer researchers a choice of appropriate cloud workflow scheduling approaches for various scenarios. We conducted a broad review of different cloud workflow scheduling algorithms and categorized them based on their optimization objectives and constraints. We also discuss possible future research directions for cloud workflow scheduling.
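As an illustration of the kind of trade-off such algorithms manage, the minimal sketch below assigns each task of a small workflow to a pay-as-you-go VM type by greedily minimising a weighted sum of finish time and monetary cost. It is not an algorithm from the surveyed papers; the task set, VM catalogue and weighting parameter alpha are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    work: float                      # abstract units of computation
    deps: list = field(default_factory=list)

# Hypothetical pay-as-you-go catalogue: VM type -> (relative speed, price per hour)
VM_TYPES = {"small": (1.0, 0.05), "medium": (2.0, 0.12), "large": (4.0, 0.30)}

def schedule(tasks, alpha=0.5):
    """Greedily assign each task (given in topological order) to the VM type
    minimising alpha * finish_time + (1 - alpha) * monetary_cost."""
    finish, plan, total_cost = {}, {}, 0.0
    for t in tasks:
        ready = max((finish[d] for d in t.deps), default=0.0)
        # pick the VM type with the best weighted score for this task
        _, vm = min(
            (alpha * (ready + t.work / s) + (1 - alpha) * (t.work / s) * p, vm)
            for vm, (s, p) in VM_TYPES.items()
        )
        speed, price = VM_TYPES[vm]
        plan[t.name] = vm
        finish[t.name] = ready + t.work / speed
        total_cost += (t.work / speed) * price
    return plan, max(finish.values()), total_cost

# Example: a tiny three-task chain
workflow = [Task("prepare", 4.0),
            Task("simulate", 16.0, ["prepare"]),
            Task("analyse", 6.0, ["simulate"])]
print(schedule(workflow, alpha=0.7))
```

Sweeping alpha between 0 and 1 traces the cost/makespan trade-off that the surveyed multi-objective schedulers explore more systematically.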


2020
Author(s):
Sina Faizollahzadeh Ardabili
Amir Mosavi
Pedram Ghamisi
Filip Ferdinand
Annamaria R. Varkonyi-Koczy
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities and are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the generalization and robustness of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models for predicting the COVID-19 outbreak as an alternative to SIR and SEIR models. Among the wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and given the highly complex nature of the COVID-19 outbreak and the variation in its behaviour from nation to nation, this study suggests machine learning as an effective tool for modelling the outbreak. The paper provides an initial benchmarking to demonstrate the potential of machine learning for future research, and further suggests that real novelty in outbreak prediction can be achieved by integrating machine learning and SEIR models.
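To make the MLP approach concrete, the sketch below fits a small multi-layered perceptron to lagged values of a synthetic logistic-growth curve standing in for cumulative case counts. The data, lag length and network size are illustrative assumptions, not the study's dataset or tuned model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
days = np.arange(60)
# toy logistic-growth curve standing in for cumulative case counts
cases = 1000 / (1 + np.exp(-(days - 30) / 5)) + rng.normal(0, 10, 60)

lags = 5
X = np.column_stack([cases[i:len(cases) - lags + i] for i in range(lags)])
y = cases[lags:]

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X[:40], y[:40])                     # fit on the earlier part of the curve
print("held-out R^2:", round(model.score(X[40:], y[40:]), 3))
```

The held-out score on the later days gives a rough sense of short-term predictive skill, the quantity the abstract argues standard models struggle with over longer horizons.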


Pharmaceutics
2021
Vol 13 (2)
pp. 189
Author(s):
Zhanying Zheng
Sharon Shui Yee Leung
Raghvendra Gupta

A dry powder inhaler (DPI) is a device used to deliver a drug in dry powder form to the lungs. A wide range of DPI products is currently available, with the choice of DPI device largely depending on the dose, dosing frequency and powder properties of the formulation. Computational fluid dynamics (CFD), together with various particle motion modelling tools such as discrete particle methods (DPM) and discrete element methods (DEM), has been increasingly used to optimise DPI design by revealing the details of flow patterns, particle trajectories, de-agglomeration and deposition within the device and the delivery paths. This review article focuses on the development of modelling methodologies for flow and particle behaviour in DPI devices and their application to device design in several emerging fields. Various modelling methods, including the most recent multi-scale approaches, are covered, and the latest simulation studies of different devices are summarised and critically assessed. The potential and effectiveness of the modelling tools in optimising the designs of emerging DPI devices are specifically discussed, such as devices featuring high doses, compatibility with paediatric patients and independence from patients' inhalation manoeuvres. Lastly, we summarise the challenges that remain to be addressed in DPI-related fluid and particle modelling and provide our thoughts on future research directions in this field.
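As a pointer to what the DPM side of such simulations involves at its simplest, the sketch below advances one particle through a prescribed gas velocity field under Stokes drag (one-way coupling). The particle properties, the toy velocity field and the time step are illustrative assumptions; real DPI studies couple this step to a CFD-resolved flow and add turbulence, collision and de-agglomeration models.

```python
import numpy as np

rho_p, d_p, mu = 1500.0, 3e-6, 1.8e-5      # particle density [kg/m^3], diameter [m], air viscosity [Pa s]
tau_p = rho_p * d_p ** 2 / (18 * mu)       # Stokes relaxation time of the particle

def gas_velocity(x):
    """Assumed, purely illustrative swirling flow in a mouthpiece-like region."""
    return np.array([10.0, 2.0 * np.sin(200.0 * x[0]), 0.0])

x = np.zeros(3)                            # particle position [m]
v = np.zeros(3)                            # particle velocity [m/s]
dt = 1e-5                                  # time step [s], smaller than tau_p
for _ in range(2000):
    a_drag = (gas_velocity(x) - v) / tau_p # Stokes drag acceleration
    v = v + a_drag * dt
    x = x + v * dt
print("final position [m]:", x)
```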


2014
Vol 687-691
pp. 3733-3737
Author(s):
Dan Wu
Ming Quan Zhou
Rong Fang Bie

Massive image processing places heavy demands on processor performance and memory capacity, requiring high-performance processors and large-capacity memory; single-core processors and traditional memory cannot satisfy these needs. This paper introduces cloud computing into a massive image processing system. Cloud computing expands the virtual space of the system, saves computing resources and improves the efficiency of image processing. The system processor uses a multi-core DSP parallel processor, and a visualization parameter-setting window and output display are developed with VC software. Through simulation we obtain the image processing speed curve and the system's image adaptation curve. This provides a technical reference for the design of large-scale image processing systems.
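A minimal sketch of the underlying divide-and-conquer idea is given below: an image is split into tiles that are filtered by parallel workers, which is how work can be spread over multiple cores or cloud nodes. The tile count, worker count and 3x3 mean filter are illustrative assumptions, not the paper's DSP implementation, and the strip-wise split ignores halo exchange at tile boundaries.

```python
import numpy as np
from multiprocessing import Pool

def smooth_tile(tile):
    """Apply a simple 3x3 mean filter to one tile of the image."""
    padded = np.pad(tile, 1, mode="edge")
    h, w = tile.shape
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

if __name__ == "__main__":
    image = np.random.rand(1024, 1024)                 # stand-in for a massive image
    tiles = np.array_split(image, 8, axis=0)           # split into 8 horizontal strips
    with Pool(processes=4) as pool:                    # 4 parallel workers
        result = np.vstack(pool.map(smooth_tile, tiles))
    print(result.shape)
```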


Author(s):  
Olexander Melnikov
Konstantin Petrov
Igor Kobzev
Viktor Kosenko
...  

The article considers the development and implementation of cloud services in the work of government agencies. A classification for choosing cloud service providers is offered, which can serve as a basis for decision making. The basics of cloud computing technology are analyzed. The COVID-19 pandemic has highlighted the benefits of cloud services for remote work; government agencies at all levels need to move to cloud infrastructure. The article analyzes the prospects of cloud computing in Ukraine as the basis for the development of e-governance. This is necessary for the rapid provision of quality services on a flexible, scalable and economical technological base. Moving electronic information interaction to the cloud makes it possible to reach a wide range of users at relatively low material cost. Automating processes and transferring them to the cloud environment make it possible to speed up the provision of services and to give citizens access to certain information with minimal delay. The article also lists the risks that exist in the transition to cloud services and the shortcomings that may arise in the process of using them.


2021
Vol 10 (4)
pp. e001318
Author(s):
Gemma Johns
Sara Khalil
Mike Ogonovsky
Markus Hesseling
Allan Wardhaugh
...  

The use of video consulting (VC) in the UK has expanded rapidly during the COVID-19 pandemic. Technology Enabled Care (TEC) Cymru, the Welsh Government and local health boards began implementing the National Health Service (NHS) Wales VC Service in March 2020. This has been robustly evaluated on a large-scale, all-Wales basis, across a wide range of NHS Wales specialities. Aims: To understand the early use of VC in Wales from the perspective of the NHS professionals using it. NHS professionals were approached by TEC Cymru to provide early data. Methods: Using an observational study design with descriptive methods, including a cross-sectional survey, TEC Cymru captured data on the use, benefits and challenges of VC from NHS professionals in Wales during August and September 2020. This evidence is based on the rapid adoption of VC in Wales, which mirrors that of other nations. Results: A total of 1256 NHS professionals shared their VC experience. Overall, responses were positive, and professionals expressed optimistic views regarding the use and benefit of VC, even when faced with challenges on occasion. Conclusions: This study provides evidence of general positivity, acceptance and the success of the VC service in Wales. Future research studies will now be able to explore and evaluate the implementation methods used within this study, and investigate their effectiveness in achieving better outcomes through VC.


Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the client. In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of on your computer's hard drive. With the development of cloud computing, more and more applications are migrated into the cloud. A significant element of cloud computing is the pay-as-you-go model: the cloud offers strong computational capacity to the general public at reduced cost, enabling clients with limited computational resources to outsource their large computational workloads to the cloud and economically enjoy massive computational power, bandwidth, storage and even suitable software on a pay-per-use basis. A major obstacle preventing wide adoption of this computing model is clients' concern that their confidential data may be exposed during the computation; what is needed is a mechanism that solves the problem practically while also protecting against malicious behaviour. In this paper, we examine secure outsourcing of large-scale systems of linear equations, which are among the most common problems in various engineering disciplines. Linear programming (LP) is an operations research technique: the customer's private data for an LP problem are formulated as a set of matrices and vectors, and a set of efficient privacy-preserving problem transformation techniques is developed, allowing customers to transform the original LP problem into an arbitrary one while protecting sensitive input/output information. Solving the transformed LP problem in the cloud, however, imposes extra cost on the cloud server. In this paper we therefore employ a homomorphic encryption scheme to improve performance and time efficiency.
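A minimal sketch of the problem-transformation idea for the linear-equation case is shown below: the customer masks the system Ax = b with secret random invertible matrices before sending it to the cloud and unmasks the returned solution locally. This illustrates the general transformation technique only; it is not the paper's protocol or its homomorphic-encryption construction, and the masking shown here is not claimed to meet any formal security definition.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # customer's private, well-conditioned system
b = rng.normal(size=n)

# Customer side (kept secret): mask the system as A' = P A Q, b' = P b
P = rng.normal(size=(n, n)) + n * np.eye(n)
Q = rng.normal(size=(n, n)) + n * np.eye(n)
A_masked, b_masked = P @ A @ Q, P @ b

# Cloud side: solve the masked system without seeing A, b or the true solution
y = np.linalg.solve(A_masked, b_masked)

# Customer side: unmask, since A (Q y) = b implies x = Q y
x = Q @ y
print("max residual |Ax - b|:", np.max(np.abs(A @ x - b)))
```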


2012
pp. 201-222
Author(s):
Yujian Fu
Zhijang Dong
Xudong He

The approach aims to solve the above problems by including analysis and verification at two different levels of the software development process, the design level and the implementation level, and by bridging the gap between software architecture analysis and verification and the software product. At the architecture design level, to ensure design correctness and to attack large-scale complex systems, compositional verification is used: each component is verified individually and the results are synthesized based on the underlying theory. For those properties that cannot be verified at the design level, the design model is translated to an implementation and runtime verification techniques are applied to the program. This approach can greatly reduce the work of design verification and avoid the state-explosion problem of model checking. Moreover, it can ensure both design and implementation correctness, and can thereby deliver a final software product with high confidence. The approach is based on the Software Architecture Model (SAM) proposed by Florida International University in 1999. SAM is a formal specification framework built on component-connector pairs with two formalisms: Petri nets and temporal logic. The ACV approach places strong demands on an organization to articulate the quality attributes of primary importance. It also requires a selection of benchmark combination points with which to verify integrated properties. The purpose of ACV is not to commend particular architectures, but to provide a method for verification and analysis of large-scale software systems at the architecture level. Future research falls into two directions. In the compositional verification of SAM models, it is possible that there is circular waiting on certain data among different components and connectors; this problem is not discussed in the current work. The translation of SAM to an implementation is based on restricted Petri nets because of the undecidability issues of high-level Petri nets. In the runtime analysis of the implementation, extraction of the program's execution trace is still needed to obtain a white-box view, and further analysis of the execution can provide more information about product correctness.
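To give a flavour of the runtime verification step applied at the implementation level, the sketch below shows a tiny trace monitor checking a simple response property over an execution trace. The property, event names and monitor are illustrative assumptions, not the SAM toolchain or its temporal-logic machinery.

```python
def monitor(trace):
    """Check the response property 'every open is eventually followed by a close'."""
    pending = 0
    for event in trace:
        if event == "open":
            pending += 1
        elif event == "close":
            if pending == 0:
                return False      # a close with no matching open
            pending -= 1
    return pending == 0           # every open was eventually closed

print(monitor(["open", "write", "close"]))           # True
print(monitor(["open", "write", "open", "close"]))   # False: one open never closed
```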


2018
Vol 7 (4.6)
pp. 13
Author(s):
Mekala Sandhya
Ashish Ladda
Dr. Uma N Dulhare

In this generation of the Internet, information and data are growing continuously. With the wide variety of Internet services and applications, the amount of information is increasing rapidly; hundreds of billions, even trillions, of web pages are indexed. Such large data brings people a mass of information, but at the same time makes it more difficult to discover useful knowledge in these huge amounts of data. Cloud computing can provide the infrastructure for large data. Cloud computing has two significant characteristics of distributed computing: scalability and high availability. Scalability means the system can seamlessly extend to large-scale clusters. High availability means that cloud computing can tolerate node errors: node failures will not prevent programs from running correctly. Cloud computing combined with data mining enables significant data processing on high-performance machines. Mass data storage and distributed computing provide a new approach to mass data mining and become an effective solution for distributed storage and efficient computing in data mining.
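As a minimal illustration of how distributed storage and computing support data mining, the sketch below counts term frequencies over document shards with parallel "map" workers and merges the partial counts in a "reduce" step. The shard layout, worker count and toy corpus are illustrative assumptions; a real cloud deployment would run the same pattern on a framework such as Hadoop or Spark across many nodes.

```python
from collections import Counter
from multiprocessing import Pool

def map_count(shard):
    """'Map' step: count word occurrences within one shard of documents."""
    counts = Counter()
    for doc in shard:
        counts.update(doc.lower().split())
    return counts

if __name__ == "__main__":
    docs = ["cloud computing scales out", "data mining in the cloud",
            "mining large data needs the cloud"] * 1000
    shards = [docs[i::4] for i in range(4)]        # distribute documents over 4 shards
    with Pool(processes=4) as pool:                # parallel 'map' workers
        partials = pool.map(map_count, shards)
    total = sum(partials, Counter())               # 'reduce': merge partial counts
    print(total.most_common(3))
```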


Aerospace
2018
Vol 5 (4)
pp. 104
Author(s):
Ilias Lappas
Michail Bozoudis

The development of a parametric model for the variable portion of the Cost Per Flying Hour (CPFH) of an ‘unknown’ aircraft platform and its application to diverse types of fixed- and rotary-wing aircraft development programs (F-35A, Su-57, Dassault Rafale, T-X candidates, AW189 and Airbus RACER, among others) is presented. The novelty of this paper lies in the utilization of a diverse sample of aircraft types, aiming to obtain a ‘universal’ Cost Estimating Relationship (CER) applicable to a wide range of platforms. Moreover, the model does not produce absolute cost figures but rather analogy ratios versus the F-16’s CPFH, broadening the model’s applicability. The model will enable an analyst to carry out timely and reliable Operating and Support (O&S) cost estimates for a wide range of ‘unknown’ aircraft platforms at the early stages of conceptual design, despite the lack of actual data from the utilization and support life-cycle stages. The statistical analysis is based on Ordinary Least Squares (OLS) regression, conducted with R software (v3.5.1, released on 2 July 2018). The model’s output is validated against officially published CPFH data of several existing ‘mature’ aircraft platforms, including one of the most prolific fighter jet types in the world, the F-16C/D, which is also used as a reference to compare CPFH estimates of various next-generation aircraft platforms. Actual CPFH data of the Hellenic Air Force (HAF) have been used to develop the parametric model, the application of which is expected to significantly inform high-level decision making regarding aircraft procurement, budgeting and future force structure planning, including decisions related to large-scale aircraft modifications and upgrades.
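As a toy illustration of fitting such a CER by OLS, the sketch below regresses a synthetic CPFH-ratio variable on notional aircraft parameters and reports the fitted coefficients and R². The data, predictors and coefficients are fabricated for demonstration only and bear no relation to the HAF dataset or the paper's actual model (which was built in R).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40
empty_weight = rng.uniform(5, 35, n)           # notional empty weight [tonnes]
engines = rng.integers(1, 3, n).astype(float)  # notional engine count (1 or 2)

# Fabricated "true" relationship plus noise, for demonstration only
cpfh_ratio = 0.2 + 0.04 * empty_weight + 0.5 * engines + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), empty_weight, engines])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, cpfh_ratio, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((cpfh_ratio - pred) ** 2) / np.sum((cpfh_ratio - cpfh_ratio.mean()) ** 2)
print("coefficients:", np.round(coef, 3), "R^2:", round(r2, 3))
```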

