Scheduling Hardware-Accelerated Cloud Functions

Author(s):  
Jessica Vandebon ◽  
Jose G. F. Coutinho ◽  
Wayne Luk

Abstract: This paper presents a Function-as-a-Service (FaaS) approach for deploying managed cloud functions onto heterogeneous cloud infrastructures. Current FaaS systems, such as AWS Lambda, allow domain-specific functionality, such as AI, HPC and image processing, to be deployed in the cloud while abstracting users from infrastructure and platform concerns. Existing approaches, however, use a single type of resource configuration to execute all function requests. In this paper, we present a novel FaaS approach that allows cloud functions to be effectively executed across heterogeneous compute resources, including hardware accelerators such as GPUs and FPGAs. We implement heterogeneous scheduling to tailor resource selection to each request, taking into account performance and cost concerns. In this way, our approach makes use of different processor types and quantities (e.g. 2 CPU cores), uniquely suited to handle different types of workload, potentially providing improved performance at a reduced cost. We validate our approach in three application domains: machine learning, bio-informatics, and physics, and target a hardware platform with a combined computational capacity of 24 FPGAs and 12 CPU cores. Compared to traditional FaaS, our approach achieves a cost improvement for non-uniform traffic of up to 8.9 times, while maintaining performance objectives.
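The per-request resource selection described above can be illustrated with a minimal sketch: pick the cheapest resource configuration that still meets a latency objective. The resource profiles, latencies, and prices below are made-up values for illustration, not figures from the paper, and the selection rule is an assumption rather than the authors' actual scheduler.

```python
# Hypothetical sketch of per-request heterogeneous resource selection:
# choose the cheapest configuration that meets a latency objective.
# All profiles and numbers are illustrative, not from the paper.

RESOURCES = [
    {"name": "2xCPU", "latency_s": 4.0, "cost_per_s": 0.00005},
    {"name": "GPU",   "latency_s": 0.8, "cost_per_s": 0.00090},
    {"name": "FPGA",  "latency_s": 0.5, "cost_per_s": 0.00060},
]

def select_resource(latency_objective_s):
    """Return the cheapest configuration whose expected latency meets
    the objective; fall back to the fastest one otherwise."""
    feasible = [r for r in RESOURCES if r["latency_s"] <= latency_objective_s]
    if feasible:
        # per-invocation cost = execution time * price per second
        return min(feasible, key=lambda r: r["latency_s"] * r["cost_per_s"])
    return min(RESOURCES, key=lambda r: r["latency_s"])

print(select_resource(5.0)["name"])  # relaxed objective -> cheap CPU config
print(select_resource(0.6)["name"])  # tight objective -> FPGA
```

A real scheduler would also account for queueing and current device load; this shows only the selection criterion.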

Author(s):  
Adrián Bernal ◽  
M. Emilia Cambronero ◽  
Alberto Núñez ◽  
Pablo C. Cañizares ◽  
Valentín Valero

Abstract: In this paper, we investigate how to improve profits in cloud infrastructures by using price schemes and analyzing user interactions with the cloud provider. For this purpose, we consider two different types of client behavior, namely regular and high-priority users. Regular users do not require a continuous service and can wait to be attended to. In contrast, high-priority users require a continuous service, e.g., a 24/7 service, and usually need an immediate answer to any request. A complete framework has been implemented, which includes a UML profile that allows us to define specific cloud scenarios, and automatic transformations that produce the code for cloud simulations in the Simcan2Cloud simulator. The engine of Simcan2Cloud has also been extended with specific SLAs and price schemes. Finally, we present a thorough experimental study analyzing the performance results obtained from the simulations, making it possible to draw conclusions about how to improve the profit of the studied cloud by adjusting its parameters and resource configuration.
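The two client behaviors above can be sketched with a toy price scheme: high-priority users pay a premium for immediate service, while regular users pay a base rate but may be queued. The rates and the premium multiplier are invented values, not the paper's SLA parameters.

```python
# Toy price scheme for the two client types: regular users pay a base
# rate, high-priority users pay a premium for guaranteed immediate
# service. All prices are illustrative, not from the paper.

BASE_RATE = 0.05       # price per compute-hour for regular users
PRIORITY_FACTOR = 2.5  # hypothetical premium for high-priority users

def request_price(hours, high_priority):
    """Price of a request under the illustrative scheme."""
    rate = BASE_RATE * (PRIORITY_FACTOR if high_priority else 1.0)
    return round(hours * rate, 4)

print(request_price(10, high_priority=False))  # 0.5
print(request_price(10, high_priority=True))   # 1.25
```

The paper's study varies such parameters in simulation to see which combinations maximize provider profit.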


2014 ◽  
Vol 694 ◽  
pp. 80-84
Author(s):  
Xiao Tong Yin ◽  
Chao Qun Ma ◽  
Liang Peng Qu

The analysis of urban road traffic state based on multiple kinds of floating car data builds on models and algorithms for floating car data preprocessing, map matching, and related steps. First, according to the characteristics of different types of urban roads, the division of urban road sections is elaborated and optimized. The paper then introduces a method for calculating the section average speed from single floating car data, and applies dynamic consolidation of sections to estimate the section average velocity. Next, the minimum sample size of floating car data is studied, and a section average velocity estimation model based on a single type of floating car data is built for different floating car data sample sizes. Finally, the section average speeds of the different types of floating cars are fitted to the section average car speed by the least squares method; using the section average speed as the judgment standard, a grading standard for urban road traffic state is established to obtain road traffic state information.
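The final fitting step can be sketched as a one-variable ordinary least squares fit, relating the section average speed measured from floating cars (x) to the overall section average car speed (y) via y ≈ a·x + b. The data values below are illustrative only, not from the study.

```python
# Minimal ordinary least squares fit y = a*x + b, as used in the final
# step to relate floating-car section speeds to the section average
# car speed. Data values are illustrative only.

def least_squares_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    a = sxy / sxx           # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# floating-car section speeds (km/h) vs observed section average speeds (km/h)
xs = [20.0, 30.0, 40.0, 50.0]
ys = [22.0, 33.0, 44.0, 55.0]  # exactly y = 1.1 * x in this toy data
a, b = least_squares_fit(xs, ys)
print(round(a, 3), round(abs(b), 3))  # 1.1 0.0  (abs avoids printing -0.0)
```

With the fitted section average speed in hand, each section can then be graded against the traffic-state thresholds.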


ZDM ◽  
2021 ◽  
Author(s):  
Haim Elgrably ◽  
Roza Leikin

Abstract: This study was inspired by the following question: how is mathematical creativity connected to different kinds of expertise in mathematics? Basing our work on arguments about the domain-specific nature of expertise and creativity, we looked at how participants from two groups with two different types of expertise performed in problem-posing-through-investigations (PPI) in a dynamic geometry environment (DGE). The first type of expertise—MO—involved being a candidate or a member of the Israeli International Mathematical Olympiad team. The second type—MM—was comprised of mathematics majors who excelled in university mathematics. We conducted individual interviews with eight MO participants who were asked to perform PPI in geometry, without previous experience in performing a task of this kind. Eleven MMs tackled the same PPI task during a mathematics test at the end of a 52-h course that integrated PPI. To characterize connections between creativity and expertise, we analyzed participants’ performance on the PPI tasks according to proof skills (i.e., auxiliary constructions, the complexity of posed tasks, and correctness of their proofs) and creativity components (i.e., fluency, flexibility and originality of the discovered properties). Our findings demonstrate significant differences between PPI by MO participants and by MM participants as reflected in the more creative performance and more successful proving processes demonstrated by MO participants. We argue that problem posing and problem solving are inseparable when MO experts are engaged in PPI.


2021 ◽  
Vol 11 (4) ◽  
pp. 1438
Author(s):  
Sebastián Risco ◽  
Germán Moltó

Serverless computing has introduced scalable event-driven processing in Cloud infrastructures. However, it is not trivial for multimedia processing to benefit from the elastic capabilities featured by serverless applications. To this end, this paper introduces the evolution of a framework to support the execution of customized runtime environments in AWS Lambda, in order to accommodate workloads that exceed its strict computational limits by requiring longer execution times and the ability to use GPU-based resources. This has been achieved through the integration of AWS Batch, a managed service to deploy virtual elastic clusters for the execution of containerized jobs. In addition, a Functions Definition Language (FDL) is introduced for the description of data-driven workflows of functions. These workflows can simultaneously leverage AWS Lambda, for the highly scalable execution of short jobs, and AWS Batch, for the execution of compute-intensive jobs that can profit from GPU-based computing. To assess the developed open-source framework, we executed a case study for efficient serverless video processing. The workflow automatically generates subtitles based on the audio and applies GPU-based object recognition to the video frames, thus simultaneously harnessing different computing services. This allows for the creation of cost-effective, highly parallel, scale-to-zero serverless workflows in AWS.
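The routing decision such a framework must make when dispatching a workflow step can be sketched as follows: short CPU-only jobs go to AWS Lambda, while long-running or GPU jobs go to AWS Batch. The 900-second figure is Lambda's documented maximum function timeout; the field names and the dispatch rule itself are illustrative assumptions, not the actual FDL schema.

```python
# Hypothetical sketch of dispatching a workflow step between AWS Lambda
# (short, CPU-only jobs) and AWS Batch (long or GPU jobs). Field names
# are illustrative, not the framework's actual FDL schema.

LAMBDA_MAX_SECONDS = 900  # AWS Lambda's maximum function timeout (15 min)

def route_step(step):
    """Return the service a workflow step should run on."""
    if step.get("needs_gpu") or step["expected_seconds"] > LAMBDA_MAX_SECONDS:
        return "aws-batch"
    return "aws-lambda"

print(route_step({"expected_seconds": 30}))                     # aws-lambda
print(route_step({"expected_seconds": 30, "needs_gpu": True}))  # aws-batch
print(route_step({"expected_seconds": 3600}))                   # aws-batch
```

In the video case study, subtitle generation would fit the Lambda branch and GPU object recognition the Batch branch.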


Author(s):  
Aisha Naseer ◽  
Lampros Stergiolas

The adoption of cutting-edge technologies to facilitate various healthcare operations and tasks is of growing significance. Health information systems need to be fully integrated with each other and to provide interoperability across organizational domains for ubiquitous access and sharing. The emerging technology of HealthGrids holds the promise of successfully integrating health information systems and various healthcare entities onto a common, globally shared and easily accessible platform. This chapter presents a systematic taxonomy of different types of HealthGrid resources, in which specialized resources are categorised into three major types: Data or Information or Files (DIF); Applications & Peripherals (AP); and Services. Resource discovery in HealthGrids is an emerging challenge comprising many technical issues, encapsulating the performance, consistency, compatibility, heterogeneity, integrity, aggregation and security of life-critical data. To address these challenges, a systematic search strategy could be devised and adopted, as the discovered resource should be valid, refined and relevant to the query. Standards could be implemented on domain-specific metadata. This chapter proposes potential solutions for the discovery of different types of HealthGrid resources and reflects on discovering and integrating data resources.


2020 ◽  
Vol 32 (3) ◽  
pp. 527-545 ◽  
Author(s):  
Peter Kok ◽  
Lindsay I. Rait ◽  
Nicholas B. Turk-Browne

Recent work suggests that a key function of the hippocampus is to predict the future. This is thought to depend on its ability to bind inputs over time and space and to retrieve upcoming or missing inputs based on partial cues. In line with this, previous research has revealed prediction-related signals in the hippocampus for complex visual objects, such as fractals and abstract shapes. Implicit in such accounts is that these computations in the hippocampus reflect domain-general processes that apply across different types and modalities of stimuli. An alternative is that the hippocampus plays a more domain-specific role in predictive processing, with the type of stimuli being predicted determining its involvement. To investigate this, we compared hippocampal responses to auditory cues predicting abstract shapes (Experiment 1) versus oriented gratings (Experiment 2). We measured brain activity in male and female human participants using high-resolution fMRI, in combination with inverted encoding models to reconstruct shape and orientation information. Our results revealed that expectations about shape and orientation evoked distinct representations in the hippocampus. For complex shapes, the hippocampus represented which shape was expected, potentially serving as a source of top–down predictions. In contrast, for simple gratings, the hippocampus represented only unexpected orientations, more reminiscent of a prediction error. We discuss several potential explanations for this content-based dissociation in hippocampal function, concluding that the computational role of the hippocampus in predictive processing may depend on the nature and complexity of stimuli.


1959 ◽  
Vol 81 (4) ◽  
pp. 423-426
Author(s):  
H. N. McManus ◽  
W. E. Ibele ◽  
T. E. Murphy

A series of tests to determine the effect of combustion-chamber length for three different types of fuel admission (gaseous, spray, and vaporized) upon combustion efficiency was performed in identical combustor geometries and with similar air-flow patterns. The effects of fuel-air ratio and full-section velocity were examined for individual methods of admission. The effect of fuel volatility also was examined. It was found that the vaporized-fuel type of admission was superior in efficiency to spray-fuel admission in all comparable cases. Increased fuel volatility improved performance in the case of the vaporizer but did not affect the performance of the spray nozzle. The performance of vaporizing tubes was found to vary inversely with size. An optimum size was exhibited.


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Jooyoung Lee ◽  
Jiho Yeo ◽  
Ilsoo Yun ◽  
Sanghyeok Kang

The aim of this study was to evaluate the effects of driver-related factors on crash involvement of four different types of commercial vehicles—express buses, local buses, taxis, and trucks—and to compare outcomes across types. Previous studies on commercial vehicle crashes have generally been focused on a single type of commercial vehicle; however, the characteristics of drivers as factors affecting crashes vary widely across types of commercial vehicles as well as across study sites. This underscores the need for comparative analysis between different types of commercial vehicles that operate in similar environments. Toward these ends, we analyzed 627,594 commercial vehicle driver records in South Korea using a mixed logit model able to address unobserved heterogeneity in crash-related data. The estimated outcomes showed that driver-related factors have common effects on crash involvement: greater experience had a positive effect (diminished driver crash involvement), while traffic violations, job change, and previous crash involvement had negative effects. However, the magnitude of the effects and heterogeneity varied across different types of commercial vehicles. The findings support the contention that the safety management policy of commercial drivers needs to be set differently according to the vehicle type. Furthermore, the variables in this study can be used as promising predictors to quantify potential crash involvement of commercial vehicles. Using these variables, it is possible to proactively identify groups of accident-prone commercial vehicle drivers and to implement effective measures to reduce their involvement in crashes.


2018 ◽  
Vol 15 (4) ◽  
pp. 82-96 ◽  
Author(s):  
Lei Wu ◽  
Yuandou Wang

Cloud computing, with dependable, consistent, pervasive, and inexpensive access to geographically distributed computational capabilities, is becoming an increasingly popular platform for the execution of scientific applications such as scientific workflows. Scheduling multiple workflows over cloud infrastructures and resources is well recognized to be NP-hard, and doing so while meeting various types of Quality-of-Service (QoS) requirements is critical. In this work, the authors consider a multi-objective scientific workflow scheduling framework based on a dynamic game-theoretic model. It aims at reducing makespans and cloud cost while maximizing system fairness in terms of workload distribution among heterogeneous cloud virtual machines (VMs). The authors consider randomly generated scientific workflow templates as test cases and carry out extensive real-world tests based on third-party commercial clouds. Experimental results show that the proposed framework outperforms traditional ones by achieving lower makespans, lower cost, and better system fairness.
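The three objectives named above can be combined into a single score to compare candidate schedules. The weighted-sum form and the use of Jain's fairness index for workload distribution are illustrative choices here, not the paper's game-theoretic formulation.

```python
# Simplified scoring of a candidate schedule on makespan, cost, and
# fairness of workload across VMs. The weights and the fairness measure
# (Jain's index) are illustrative, not the paper's model.

def jain_fairness(loads):
    """Jain's fairness index: equals 1.0 when all VM loads are equal."""
    n = len(loads)
    return sum(loads) ** 2 / (n * sum(x * x for x in loads))

def schedule_score(makespan, cost, loads, w=(0.4, 0.4, 0.2)):
    """Lower makespan and cost, higher fairness -> higher score."""
    wm, wc, wf = w
    return -wm * makespan - wc * cost + wf * jain_fairness(loads)

balanced = schedule_score(100.0, 50.0, [10, 10, 10])
skewed = schedule_score(100.0, 50.0, [28, 1, 1])
print(balanced > skewed)  # True: at equal makespan and cost, fairer wins
```

In practice the weights encode how the QoS requirements trade off against each other for a given deployment.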


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2779
Author(s):  
Yaoming Zhuang ◽  
Chengdong Wu ◽  
Hao Wu ◽  
Zuyuan Zhang ◽  
Yuan Gao ◽  
...  

Wireless sensor and robot networks (WSRNs) often work in complex and dangerous environments that are subject to many constraints. To obtain better monitoring performance, it is necessary to deploy different types of sensors for various complex environments and constraints. The traditional event-driven deployment algorithm is only applicable to a single type of monitoring scenario, and so cannot effectively adapt to different types of monitoring scenarios at the same time. In this paper, a multi-constrained event-driven deployment model is proposed based on the maximum entropy function, which transforms the complex event-driven deployment problem into two continuously differentiable single-objective sub-problems. Then, a collaborative neural network (CONN) event-driven deployment algorithm is proposed based on neural network methods. The CONN event-driven deployment algorithm effectively addresses the difficulty of obtaining a large amount of sensor data and environmental information in complex and dangerous monitoring environments. Unlike traditional deployment methods, the CONN algorithm can adaptively provide an optimal deployment solution for a variety of complex monitoring environments. This greatly reduces the time and cost involved in adapting to different monitoring environments. Finally, a large number of experiments verify the performance of the CONN algorithm, which can be adapted to a variety of complex application scenarios.
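The maximum entropy function mentioned above is commonly used in optimization to replace the non-smooth max operator with a continuously differentiable approximation (a scaled log-sum-exp), which is what makes the transformed sub-problems differentiable. This standalone sketch shows the approximation itself, not the paper's full deployment model.

```python
# Maximum entropy (aggregate) function: a smooth, continuously
# differentiable approximation of max(values) via scaled log-sum-exp.
# This sketch shows only the approximation, not the deployment model.

import math

def max_entropy(values, p=50.0):
    """Smooth approximation of max(values); larger p -> tighter fit."""
    m = max(values)  # subtract the max for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

vals = [1.0, 2.0, 3.5]
print(round(max_entropy(vals), 3))  # very close to the true max, 3.5
```

The approximation always bounds the true max from above and converges to it as p grows, which is why it is a convenient smooth surrogate in gradient-based methods.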

