Infrastructure as Software in Micro Clouds at the Edge

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7001
Author(s):  
Miloš Simić ◽  
Goran Sladić ◽  
Miroslav Zarić ◽  
Branko Markoski

Edge computing offers cloud services closer to data sources and end-users, laying the foundation for novel applications. Infrastructure deployment is taking off, bringing new challenges: how to use geo-distribution properly, and how to harness the advantages of having resources at a specific location? New real-time applications require multi-tier infrastructure, preferably doing data preprocessing locally but using the cloud for heavy workloads. We present a model able to organize geo-distributed nodes into micro clouds dynamically, allowing resource reorganization to best serve population needs. Such elasticity is achieved by relying on cloud organization principles adapted for a different environment. The desired state is specified descriptively, and the system handles the rest. Infrastructure is thus abstracted to the software level, enabling “infrastructure as software” at the edge. We discuss blending the proposed model into existing tools, allowing cloud providers to offer future micro clouds as a service.
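The desired-state idea described in this abstract can be illustrated with a small reconciliation-loop sketch. Everything here (region names, node names, the `reconcile` function) is hypothetical and not the authors' implementation; it only shows the pattern of declaring a state descriptively and letting the system converge toward it.

```python
# Hypothetical sketch: the operator declares how many edge nodes each
# micro cloud should hold, and a control loop reconciles the actual
# assignments toward that desired state.

# Desired state: node count per micro cloud, specified descriptively.
desired = {"region-a": 3, "region-b": 2}

# Actual state: current node assignments discovered from the environment.
actual = {"region-a": ["n1"], "region-b": ["n4", "n5"]}

# Unassigned nodes available for reorganization.
free_nodes = ["n2", "n3", "n6"]

def reconcile(desired, actual, free_nodes):
    """One pass of a reconciliation loop: move free nodes into
    under-provisioned micro clouds until the desired state is met."""
    for region, want in desired.items():
        nodes = actual.setdefault(region, [])
        while len(nodes) < want and free_nodes:
            nodes.append(free_nodes.pop(0))
    return actual

state = reconcile(desired, actual, free_nodes)
```

Running the loop repeatedly (as nodes join or leave) keeps the micro clouds converging toward the declared state, which is the essence of treating infrastructure as software.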

2020 ◽  
Vol 26 (8) ◽  
pp. 83-99
Author(s):  
Sarah Haider Abdulredah ◽  
Dheyaa Jasim Kadhim

A Tonido cloud server provides a private cloud storage solution and synchronizes customers and employees with the required cloud services across the enterprise. Generally, users access cloud services via an Internet connection, and they may encounter problems accessing these services due to a weak connection or heavy load, especially with live video streaming applications over the cloud. In this work, flexible and inexpensive access methods for real-time applications are proposed and implemented, enabling users to access cloud services locally and regionally. To realize our network connection, we use a Raspberry Pi 3 Model B+ as a wireless LAN (WLAN) router that lets users reach the cloud services through different access approaches, both wireless and wireline. As a case study for a real-time application over the cloud server, live video streaming is performed using an IP webcam and the IVIDEON cloud, where the streaming video can be accessed via the cloud server at any time by different users over the proposed practical connections. Practical experiments showed that accessing real-time applications of cloud services via wireline and wireless connections is improved by using the Tonido cloud server's facilities.


Author(s):  
Fereshteh Hoseini ◽  
Mostafa Ghobaei Arani ◽  
Alireza Taghizadeh

With the increasing use of cloud services and the number of requests to process tasks with minimum time and cost, resource allocation and scheduling, especially in real-time applications, become more challenging. Resource scheduling is one of the most important scheduling problems and is NP-hard. In this paper, we propose an efficient algorithm to schedule real-time cloud services by considering resource constraints. The simulation results show that the proposed algorithm shortens the processing time of tasks and decreases the number of canceled tasks.
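The abstract above does not publish its algorithm, so the following sketch is only illustrative: a generic earliest-deadline-first (EDF) admission policy with a resource-capacity check, which captures the flavor of scheduling real-time tasks under resource constraints and of tasks being cancelled when resources run out.

```python
# Illustrative sketch (not the paper's algorithm): admit tasks in
# earliest-deadline order while total resource demand fits the capacity;
# tasks that do not fit are cancelled.

def schedule(tasks, capacity):
    """tasks: list of (name, deadline, demand) tuples.
    Returns (admitted, cancelled) task-name lists."""
    admitted, cancelled = [], []
    used = 0
    for name, deadline, demand in sorted(tasks, key=lambda t: t[1]):
        if used + demand <= capacity:
            admitted.append(name)
            used += demand
        else:
            cancelled.append(name)
    return admitted, cancelled

admitted, cancelled = schedule(
    [("t1", 10, 4), ("t2", 5, 3), ("t3", 8, 5)], capacity=8)
```

Here `t2` and `t3` (tighter deadlines) fill the capacity of 8, so `t1` is cancelled; a better scheduler reduces exactly this cancellation count, which is the metric the abstract reports.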


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 982 ◽  
Author(s):  
Hyo Lee ◽  
Ihsan Ullah ◽  
Weiguo Wan ◽  
Yongbin Gao ◽  
Zhijun Fang

Make and model recognition (MMR) of vehicles plays an important role in automatic vision-based systems. This paper proposes a novel deep learning approach for MMR using the SqueezeNet architecture. The frontal views of vehicle images are first extracted and fed into a deep network for training and testing. The SqueezeNet variant with bypass connections between the Fire modules is employed in this study, which makes our MMR system more efficient. Experimental results on our collected large-scale vehicle datasets indicate that the proposed model achieves a 96.3% recognition rate at rank-1 with an economical inference time of 108.8 ms. For inference, the deployed deep model requires less than 5 MB of storage and is therefore highly viable for real-time applications.
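A back-of-the-envelope sketch (not the paper's code) of why SqueezeNet's Fire module keeps the model small: it squeezes channels with 1x1 convolutions before the expensive 3x3 expand path, and the bypass connections around Fire modules are identity additions that add no parameters. The channel numbers below follow the standard Fire(16, 64, 64) configuration; weight counts ignore biases.

```python
# Parameter-count comparison: a SqueezeNet Fire module vs. a plain 3x3
# convolution producing the same number of output channels.

def fire_params(c_in, squeeze, expand_1x1, expand_3x3):
    """Weights in a Fire module: a 1x1 squeeze layer feeding two
    parallel expand layers (1x1 and 3x3)."""
    return (c_in * squeeze                 # squeeze 1x1
            + squeeze * expand_1x1         # expand 1x1
            + squeeze * expand_3x3 * 9)    # expand 3x3

def conv3x3_params(c_in, c_out):
    """Weights in a plain 3x3 convolution of the same output width."""
    return c_in * c_out * 9

# Example: 128 input channels, Fire(16, 64, 64) -> 128 output channels.
fire = fire_params(128, 16, 64, 64)
plain = conv3x3_params(128, 128)
```

The Fire module uses 12,288 weights against 147,456 for the plain convolution, roughly a 12x reduction, which is why the whole deployed model fits in under 5 MB.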


2020 ◽  
Vol 13 (5) ◽  
pp. 957-964
Author(s):  
Siva Rama Krishna ◽  
Mohammed Ali Hussain

Background: In recent years, computational memory and energy conservation have become major problems in the cloud computing environment due to the increase in data size and computing resources. Moreover, different cloud providers offer different cloud services and resources, each used by only a limited number of user applications. Objective: The main objective of this work is to design and implement a cloud resource allocation and resource scheduling model in the cloud environment. Methods: In the proposed model, a novel cloud-server resource management technique is applied in a real-time cloud environment to minimize cost and time. Different types of cloud resources and services are scheduled using multi-level objective constraint programming. The proposed cloud server-based resource allocation model uses optimization functions to minimize resource allocation time and cost. Results: Experimental results showed that the proposed model achieves lower resource allocation time and cost than existing resource allocation models. Conclusion: The proposed cloud service and resource optimization model is efficiently implemented and tested on real-time cloud instances with different types of services and resource sets.
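The multi-level objective constraint programming used in this abstract is not given in detail, so the sketch below only illustrates the underlying idea of a lexicographic (multi-level) objective: among candidate servers, minimize cost first and break ties on allocation time. Server names and numbers are made up.

```python
# Hypothetical sketch of a two-level objective: pick the server with the
# lowest cost, and among equally cheap servers the lowest allocation time.

def best_server(servers):
    """servers: {name: (cost, time)}. Python compares the (cost, time)
    tuples lexicographically, giving the multi-level minimum."""
    return min(servers, key=lambda s: servers[s])

servers = {"s1": (5, 20), "s2": (3, 40), "s3": (3, 25)}
choice = best_server(servers)
```

`s2` and `s3` tie on the primary objective (cost 3), so the secondary objective (time) selects `s3`; a constraint programming solver generalizes this to many resources and constraints at once.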


2021 ◽  
Author(s):  
Jingwei Li ◽  
Wei Huang ◽  
Choon Ling Sia ◽  
Zhuo Chen ◽  
Tailai Wu ◽  
...  

BACKGROUND The SARS-CoV-2 virus and its variants are posing extraordinary challenges for public health worldwide. More timely and accurate forecasting of COVID-19 epidemics is key to timely interventions and policies and efficient resource allocation. Internet-based data sources have shown great potential to supplement traditional infectious disease surveillance, and combining different Internet-based data sources enhances epidemic forecasting accuracy more than using a single source. However, existing methods incorporating multiple Internet-based data sources only used real-time data from these sources as exogenous inputs and did not take all the historical data into account. Moreover, the predictive power of different Internet-based data sources in providing early warning for COVID-19 outbreaks has not been fully explored. OBJECTIVE The main aim of our study is to explore whether combining real-time and historical data from multiple Internet-based sources could improve COVID-19 forecasting accuracy over existing baseline models. A secondary aim is to explore COVID-19 forecasting timeliness based on different Internet-based data sources. METHODS We first used core-term and symptom-related keyword methods to extract COVID-19-related Internet-based data from December 21, 2019, to February 29, 2020. The Internet-based data we explored included 90,493,912 online news articles, 37,401,900 microblogs, and all Baidu search query data during that period. We then proposed an autoregressive model with exogenous inputs, incorporating the real-time and historical data from multiple Internet-based sources. Our proposed model was compared with baseline models, and all models were tested during the first wave of the COVID-19 epidemic in Hubei province and the rest of mainland China separately. We also used lagged Pearson correlations for the COVID-19 forecasting timeliness analysis.
RESULTS Our proposed model achieved the highest accuracy on all five accuracy measures, compared with all the baseline models, in both Hubei province and the rest of mainland China. In mainland China excluding Hubei, the forecasting accuracy differences between our proposed model and all the baseline models were statistically significant (model 1, t=–8.722, P<.001; model 2, t=–5.000, P<.001; model 3, t=–1.882, P=.063; model 4, t=–4.644, P<.001; model 5, t=–4.488, P<.001). In Hubei province, our proposed model's forecasting accuracy improved significantly compared with the baseline model using historical COVID-19 new confirmed case counts only (model 1, t=–1.732, P=.086). Our results also showed that Internet-based sources could provide an early warning for COVID-19 outbreaks 2 to 6 days in advance. CONCLUSIONS Our approach incorporating real-time and historical data from multiple Internet-based sources could improve forecasting accuracy for COVID-19 and its variants, which may help improve public health agencies' interventions and resource allocation in mitigating and controlling new waves of COVID-19 or other epidemics.
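The lagged Pearson correlation used for the timeliness analysis can be sketched compactly: shift the Internet-based signal forward by `lag` days, correlate it with the case-count series, and take the lag that maximizes the correlation as the early-warning lead time. The toy series below are invented for illustration, with the search signal leading the cases by exactly 2 days.

```python
# Sketch of lagged Pearson correlation for early-warning analysis.
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_correlation(signal, cases, lag):
    """Correlate signal[t] with cases[t + lag]."""
    return pearson(signal[:len(signal) - lag] if lag else signal, cases[lag:])

# Toy data: the online signal leads the case counts by 2 days.
signal = [1, 2, 4, 8, 16, 12, 6, 3]
cases  = [0, 0, 1, 2, 4, 8, 16, 12]

best_lag = max(range(4), key=lambda k: lagged_correlation(signal, cases, k))
```

The lag with the highest correlation (here, 2 days) estimates how far ahead the Internet-based source moves relative to confirmed cases, which is the sense in which the study reports a 2-6 day warning window.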


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3726 ◽  
Author(s):  
Bandar Almaslukh ◽  
Abdel Artoli ◽  
Jalal Al-Muhtadi

Recently, modern smartphones equipped with a variety of embedded sensors, such as accelerometers and gyroscopes, have been used as an alternative platform for human activity recognition (HAR), since they are cost-effective and unobtrusive and facilitate real-time applications. However, the majority of related works have proposed position-dependent HAR, i.e., the target subject has to fix the smartphone in a predefined position. Few studies have tackled position-independent HAR, either using handcrafted features that are less influenced by the position of the smartphone or by building a position-aware HAR system. The performance of these approaches still needs improvement to produce a reliable smartphone-based HAR system. Thus, in this paper, we propose a deep convolutional neural network model that provides a robust position-independent HAR system. We build and evaluate the proposed model on the public RealWorld HAR dataset. We find that our deep learning model increases overall performance for position-independent HAR from 84% to 88% compared with the state-of-the-art traditional machine learning method. In addition, the position detection performance of our model improves substantially, from 89% to 98%. Finally, the recognition time of the proposed model is evaluated in order to validate the applicability of the model to real-time applications.
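Smartphone HAR models of the kind described above typically consume fixed-length sliding windows of raw sensor samples, with each window becoming one input to the network. The sketch below is illustrative preprocessing only (not the authors' pipeline), using a stand-in integer stream in place of accelerometer readings.

```python
# Illustrative sketch: segment a sensor stream into overlapping
# fixed-length windows, the usual input format for a HAR CNN.

def sliding_windows(samples, size, step):
    """Split a stream into overlapping windows of `size` samples,
    advancing `step` samples between windows."""
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]

stream = list(range(10))          # stand-in for accelerometer readings
windows = sliding_windows(stream, size=4, step=2)
```

With a 50% overlap (step = size / 2), every sample except the ends appears in two windows, which smooths predictions at activity boundaries.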


Author(s):  
Chitra A. Dhawale ◽  
Krtika Dhawale

Artificial intelligence (AI) is going through its golden era, playing an important role in various real-time applications. Most AI applications use machine learning (ML), which represents the most promising path to strong AI. Deep learning (DL), itself a kind of ML, is becoming more and more popular and successful across different use cases and is at the peak of its development, making DL a leader in this domain. To foster the growth of the DL community, many open-source frameworks implementing DL algorithms are available, each based on algorithms suited to specific applications. This chapter provides a brief qualitative review of the most popular and comprehensive DL frameworks and informs end users of trends in DL frameworks, helping them make an informed decision and choose the DL framework that best suits their needs, resources, and applications, and thus a proper career path.


Author(s):  
Libor Wasziwoski ◽  
Zdenek Hanzalek

The aim of this chapter is to show how a multitasking real-time application running under a real-time operating system can be modeled by timed automata. The application under consideration consists of several preemptive tasks and interrupt service routines that can be synchronized by events and can share resources. A real-time operating system compliant with the OSEK/VDX standard is considered for demonstration. The model-checking tool UPPAAL is used to verify timing and logical properties of the proposed model. Since the complexity of model-checking verification grows exponentially with the number of clocks used in a model, the proposed model uses only one clock to measure the execution time of all modeled tasks.


The objective of face recognition is, given an image of a human face, to identify the class to which the face belongs. Face classification is a useful task and can serve as a base for many real-time applications such as authentication, tracking, and fraud detection. Given a photo of a person, we humans can easily identify who the person is without any effort, but manual systems are biased, labor-intensive, and expensive. Automatic face recognition has been an important research topic due to its importance in real-time applications. Recent advances in GPUs have taken applications such as image classification, handwritten digit recognition, and object recognition to the next level. According to the literature, deep CNN (convolutional neural network) features can effectively represent an image. In this paper we propose to use deep CNN-based features for the face recognition task and investigate the effectiveness of different deep CNN models for this task. Initially, facial features are extracted from pretrained CNN models such as VGG16, VGG19, ResNet50, and Inception V3. A deep neural network is then used for the classification task. To show the effectiveness of the proposed model, the ORL dataset is used for our experimental studies. Based on the experimental results, we claim that deep CNN-based features give better performance than existing handcrafted features. We also observe that, among all the pretrained CNN models we used, ResNet scores the highest performance.
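The final stage of a pipeline like the one above can be sketched with a minimal classifier over feature vectors. The paper uses a deep neural network on pretrained CNN features; the sketch below instead uses a cosine-similarity nearest-neighbor rule over made-up three-dimensional "features", purely to show how extracted features drive recognition. The gallery labels and vectors are hypothetical.

```python
# Hedged sketch: nearest-neighbor face identification over feature
# vectors, standing in for the paper's deep-network classifier.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def classify(query, gallery):
    """gallery: {label: feature vector}. Return the label whose
    enrolled features are most similar to the query features."""
    return max(gallery, key=lambda label: cosine(query, gallery[label]))

gallery = {"alice": [0.9, 0.1, 0.0], "bob": [0.1, 0.8, 0.3]}
label = classify([0.85, 0.2, 0.05], gallery)
```

In practice each gallery vector would be the 512- or 2048-dimensional output of a pretrained backbone such as VGG16 or ResNet50, and a trained network replaces the similarity rule; the data flow is the same.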


Author(s):  
Munesh Chandra Trivedi ◽  
Virendra Kumar Yadav ◽  
Avadhesh Kumar Gupta

A data warehouse generally contains both historical and current data from various data sources. In the world of computing, a data warehouse can be defined as a system created for the analysis and reporting of both types of data. These analysis reports are then used by an organization to make decisions that support its growth. Construction of a data warehouse appears simple: collecting data from data sources into one place (after extraction, transformation, and loading). But construction involves several issues, such as inconsistent data, logic conflicts, user acceptance, cost, quality, security, stakeholders' contradictions, REST alignment, etc. These issues must be overcome, or they will lead to unfortunate consequences affecting the organization's growth. The proposed model tries to solve issues such as REST alignment and stakeholders' contradictions by involving experts from various domains (technical, analytical, decision makers, management representatives, etc.) during the initialization phase to better understand the requirements, and by mapping these requirements to data sources during the design phase of the data warehouse.
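The extraction, transformation, and loading step mentioned in this abstract can be sketched minimally. The sources, field names, and schema below are invented for illustration; the point is that transformation reconciles the inconsistent data (mismatched field names and types) that the abstract lists among the construction issues.

```python
# Minimal illustrative ETL sketch: extract rows from heterogeneous
# sources, normalize them into one schema, load them into the warehouse.

sources = {
    "crm":   [{"Name": "Ada", "spend": "120"}],   # strings, capitalized keys
    "sales": [{"name": "bob", "SPEND": 80}],      # numbers, shouting keys
}

def transform(row):
    """Normalize inconsistent field names and value types into one schema."""
    name = (row.get("Name") or row.get("name")).title()
    spend = float(row.get("spend") or row.get("SPEND"))
    return {"name": name, "spend": spend}

# Load: one consistent table built from all sources.
warehouse = [transform(row) for rows in sources.values() for row in rows]
```

Resolving such inconsistencies mechanically is the easy part; the abstract's argument is that the harder issues (stakeholder contradictions, requirement mapping) need domain experts involved before this code is ever written.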

