A Survey on Service Level Components in Big-Cloud-IoT Systems with Hybrid Meta-heuristic Techniques

Author(s):  
Xueqiang Yin ◽  
Athreya Tao Chen

Processing the huge volumes of data produced by modern networks is a central challenge in big data. The growing number of IoT devices in a network collects ever more data, which is traditionally processed and stored in centralized cloud storage. To overcome the performance and latency issues of large-scale data computation, big-data cloud processing systems incorporate edge computing, a key component of IoT in which large numbers of IoT devices execute services at their nearby network edge. In this paper, we combine big data with cloud and edge computing into a hybrid edge computing system. Because data sharing and transmission between the various service components can degrade system performance, the main aim of this research article is to reduce the delay of data transfer between components. This optimization goal is achieved by a new Hybrid Meta-heuristic Optimization (HMeO) algorithm, designed to deploy service components onto IoT devices by selecting the edge node with minimum latency. The proposed HMeO algorithm is compared with existing genetic and ant colony algorithms. The results show that HMeO performs better and is more efficient at in-depth data analysis and component placement in a big-data cloud environment.
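The abstract gives no pseudocode, so the following is only a minimal illustrative sketch of a hybrid meta-heuristic placement search of the kind HMeO describes: a genetic-style global search combined with greedy local refinement, minimizing placement latency under a per-node capacity cap. The latency matrix, capacity limit, and every parameter here are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of a hybrid meta-heuristic (HMeO-style) component
# placement: genetic-style recombination plus greedy local refinement.
# Latencies, capacity, and parameters are invented, not from the paper.
import random

N_COMP, N_EDGE, CAP = 8, 4, 3   # components, edge nodes, per-node capacity
random.seed(1)
# latency[c][e]: transfer delay of component c on edge node e (made-up data)
latency = [[random.uniform(1, 10) for _ in range(N_EDGE)] for _ in range(N_COMP)]

def cost(p):
    """Total delay plus a penalty for overloading any edge node."""
    load = [p.count(e) for e in range(N_EDGE)]
    penalty = sum(max(0, l - CAP) for l in load) * 100.0
    return sum(latency[c][e] for c, e in enumerate(p)) + penalty

def crossover(a, b):
    cut = random.randrange(1, N_COMP)
    return a[:cut] + b[cut:]

def mutate(p):
    q = p[:]
    q[random.randrange(N_COMP)] = random.randrange(N_EDGE)
    return q

def local_refine(p):
    """Greedy pass: move each component to its cheapest feasible node."""
    q = p[:]
    for c in range(N_COMP):
        q[c] = min(range(N_EDGE), key=lambda e: cost(q[:c] + [e] + q[c+1:]))
    return q

pop = [[random.randrange(N_EDGE) for _ in range(N_COMP)] for _ in range(20)]
for _ in range(50):                      # hybrid loop: GA step + local search
    pop.sort(key=cost)
    elite = pop[:10]
    children = [mutate(crossover(*random.sample(elite, 2))) for _ in range(10)]
    pop = elite + [local_refine(ch) for ch in children]

best = min(pop, key=cost)
print("best placement:", best, "delay:", round(cost(best), 2))
```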

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3071 ◽  
Author(s):  
Jun-Hong Park ◽  
Hyeong-Su Kim ◽  
Won-Tae Kim

Edge computing is proposed to solve the problems of centralized cloud computing caused by the large number of IoT (Internet of Things) devices. IoT protocols need to be modified for the edge computing paradigm, in which the edge computing devices that analyze IoT data are distributed to the edge networks. The MQTT (Message Queuing Telemetry Transport) protocol, a data distribution protocol widely adopted in many international IoT standards, is suitable for cloud computing because it uses a centralized broker to effectively collect and transmit data. However, standard MQTT may suffer from serious traffic congestion on the broker, causing long transfer delays when massive numbers of IoT devices are connected to it. In addition, the large volume of data exchanged between the IoT devices and the broker reduces the network capability of the edge networks. In this paper, the authors propose a novel MQTT with a multicast mechanism to minimize data transfer delay and network usage for massive IoT communications. The proposed MQTT reduces data transfer delays by establishing bidirectional SDN (Software Defined Networking) multicast trees between the publishers and the subscribers, bypassing the centralized broker. As a result, it reduces transmission delay by 65% and network usage by 58% compared with standard MQTT.
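For context, this is the broker-centric publish/subscribe pattern whose congestion the paper's SDN multicast trees bypass, sketched with the widely used paho-mqtt client (1.x API). The broker host and topic below are placeholders, and this shows only the standard flow, not the proposed multicast variant.

```python
# Standard broker-centric MQTT flow (paho-mqtt 1.x API): every message
# traverses the central broker -- the bottleneck the paper bypasses with
# SDN multicast trees. Broker host and topic are placeholders.
import paho.mqtt.client as mqtt

BROKER, TOPIC = "broker.example.com", "sensors/temperature"

def on_message(client, userdata, msg):
    print(f"received on {msg.topic}: {msg.payload.decode()}")

sub = mqtt.Client()                 # subscriber
sub.on_message = on_message
sub.connect(BROKER, 1883)
sub.subscribe(TOPIC)
sub.loop_start()

pub = mqtt.Client()                 # publisher
pub.connect(BROKER, 1883)
pub.publish(TOPIC, "23.5")          # routed publisher -> broker -> subscriber
```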


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4375 ◽  
Author(s):  
Yuxuan Wang ◽  
Jun Yang ◽  
Xiye Guo ◽  
Zhi Qu

As one of the information industry's future development directions, the Internet of Things (IoT) has been widely used. To reduce the pressure on the network caused by the long distance between the processing platform and the terminal, edge computing provides a new paradigm for IoT applications. In many scenarios, IoT devices are distributed in remote areas or extreme terrain and cannot be accessed directly through the terrestrial network; data transmission can only be achieved via satellite. However, traditional satellites are highly customized, and on-board resources are designed for specific applications rather than universal computing. Therefore, we propose to transform the traditional satellite into a space edge computing node. It can dynamically load software in orbit, flexibly share on-board resources, and provide services coordinated with the cloud. The corresponding hardware structure and software architecture of the satellite are presented. Through modeling analysis and simulation experiments of the application scenarios, the results show that the space edge computing system takes less time and consumes less energy than a traditional satellite constellation. The quality of service mainly depends on the number of satellites, satellite performance, and the task offloading strategy.
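To make the offloading trade-off concrete, here is a toy local-versus-satellite comparison using a generic transmit/compute cost model. All constants and the model itself are invented for illustration; the paper's actual equations and measured parameters are not reproduced.

```python
# Toy offload-vs-local decision for a space edge computing node.
# Generic transmit/compute model with invented parameters.
DATA_BITS   = 5e6      # task input size (bits)
CYCLES      = 2e9      # CPU cycles the task needs
F_LOCAL     = 5e8      # terminal CPU speed (cycles/s)
F_SAT       = 5e9      # satellite edge node CPU speed (cycles/s)
UPLINK_BPS  = 2e6      # terminal-to-satellite link rate (bits/s)
P_TX, P_CPU = 1.0, 0.5 # transmit and local-compute power (W)

def local_cost():
    t = CYCLES / F_LOCAL
    return t, P_CPU * t                      # (time s, terminal energy J)

def offload_cost():
    t_tx = DATA_BITS / UPLINK_BPS            # send data up to the satellite
    t_exec = CYCLES / F_SAT                  # execute on the edge node
    return t_tx + t_exec, P_TX * t_tx        # terminal pays only transmit energy

for name, (t, e) in [("local", local_cost()), ("offload", offload_cost())]:
    print(f"{name}: {t:.2f} s, {e:.2f} J")
```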


Author(s):  
Yao Wu ◽  
Long Zheng ◽  
Brian Heilig ◽  
Guang R Gao

As the attention given to big data grows, cluster computing systems for distributed processing of large data sets have become mainstream and a critical requirement in high performance distributed system research. One of the most successful systems is Hadoop, which uses MapReduce as a programming/execution model and uses disks as intermediate storage to process huge volumes of data. Spark, as an in-memory computing engine, can solve iterative and interactive problems more efficiently. However, there is currently a consensus that neither is the final solution to big data, due to their MapReduce-like programming model, synchronous execution model, the constraint of supporting only batch processing, and so on. A new solution, indeed a fundamental evolution, is needed to bring big data solutions into a new era. In this paper, we introduce a new cluster computing system called HAMR which supports both batch and streaming processing. To achieve better performance, HAMR integrates high performance computing approaches, i.e. dataflow fundamentals, into a big data solution. More specifically, HAMR is fully designed around in-memory computing to reduce unnecessary disk access overhead; task scheduling and memory management are fine-grained to expose more parallelism; and asynchronous execution improves the efficiency of computation resource usage and improves workload balance across the whole cluster. The experimental results show that HAMR can outperform Hadoop MapReduce and Spark by up to 19x and 7x respectively, in the same cluster environment. Furthermore, HAMR can handle data sizes scaling well beyond the capabilities of Spark.
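HAMR's engine is not shown in the abstract; as a loose illustration of the asynchronous, fine-grained dataflow execution it describes (in contrast to MapReduce's synchronous stage barriers), here is a small Python asyncio sketch in which a consumer task updates state as soon as each input chunk arrives. This is purely a sketch of the execution style, not HAMR's actual runtime.

```python
# Illustrative asynchronous dataflow in the spirit of HAMR's description:
# fine-grained tasks run as soon as inputs arrive instead of waiting for a
# synchronous map/reduce barrier. A sketch only, not HAMR's engine.
import asyncio

async def producer(queue):
    for chunk in (["a", "b"], ["b", "c"], ["a", "c"]):   # streamed records
        await queue.put(chunk)
        await asyncio.sleep(0.01)                        # simulate arrival
    await queue.put(None)                                # end-of-stream marker

async def counter(queue, counts):
    while (chunk := await queue.get()) is not None:      # no global barrier:
        for key in chunk:                                # state updates per chunk
            counts[key] = counts.get(key, 0) + 1

async def main():
    queue, counts = asyncio.Queue(), {}
    await asyncio.gather(producer(queue), counter(queue, counts))
    print(counts)                                        # {'a': 2, 'b': 2, 'c': 2}

asyncio.run(main())
```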


Author(s):  
Janusz Bobulski ◽  
Mariusz Kubanek

Big Data in medicine involves the fast processing of large data volumes, both current and historical, to support the diagnosis and treatment of patients' diseases. Support systems for such activities may include pre-programmed rules based on data obtained from the medical interview, while automatic analysis of diagnostic test results leads to the classification of observations to a specific disease entity. The current revolution using Big Data significantly expands the role of computer science in achieving these goals, which is why we propose a computer data processing system using artificial intelligence to analyse and process medical images. We conducted research that confirms the need to use GPUs in Big Data systems that process medical images. The use of this type of processor increases system performance.
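As a minimal sketch of the kind of GPU offload the authors argue for, the snippet below filters an image on the GPU using CuPy, one possible GPU array library (the authors' own pipeline is not described in the abstract). It requires a CUDA GPU, and the random array stands in for a real medical image.

```python
# Minimal sketch of GPU-accelerated image filtering of the kind the authors
# advocate, using CuPy as one possible library. Requires a CUDA GPU; the
# random "image" and Gaussian filter stand in for real medical data.
import cupy as cp
from cupyx.scipy import ndimage

image_gpu = cp.random.rand(2048, 2048, dtype=cp.float32)  # placeholder scan
smoothed = ndimage.gaussian_filter(image_gpu, sigma=2.0)  # runs on the GPU
result = cp.asnumpy(smoothed)                             # copy back to host
print(result.shape, result.dtype)
```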


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Wang Zhouhuo

To solve the problem of classifying large volumes of human resources data, this study proposes a new parallel classification algorithm for human resources big data based on the Spark platform. On the Spark platform, the algorithm updates the clustering centers of the human resources big data, computes distances, and designs the big data clustering process. On this basis, the K-means clustering method is introduced to mine frequent itemsets of the big data and optimize the aggregation of similar data, and a fuzzy genetic algorithm is used to assess the balance of the big data. The study adopts a selective ensemble method for the imbalanced human resources data classifiers in the process of transmission, introduces a decision contour matrix to construct an anomaly support model for the set of imbalanced classifiers, identifies the features of the human resources big data in parallel, repairs the relevance of the data, and introduces an improved ant colony algorithm, finally realizing the design of the parallel classification algorithm for human resources big data. The experimental results show that the proposed algorithm has low time cost, a good classification effect, and acceptable parallel classification rule complexity.
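Below is a minimal PySpark sketch of the K-means clustering stage the study builds on, using Spark's standard `pyspark.ml.clustering.KMeans`. The toy feature vectors, column interpretation, and `k` value are placeholders; the paper's fuzzy-genetic balancing and improved ant colony steps are not reproduced here.

```python
# Minimal PySpark sketch of the K-means stage the study builds on.
# Toy data and k are placeholders; the fuzzy-genetic and ant colony
# components of the paper are not reproduced.
from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("hr-clustering").getOrCreate()
rows = [(Vectors.dense([3.0, 52.0]),),   # e.g. (years of service, weekly hours)
        (Vectors.dense([1.0, 40.0]),),
        (Vectors.dense([9.0, 45.0]),),
        (Vectors.dense([8.0, 44.0]),)]
df = spark.createDataFrame(rows, ["features"])

model = KMeans(k=2, seed=1).fit(df)      # cluster centers updated in parallel
model.transform(df).show()               # each record with its cluster id
spark.stop()
```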


Author(s):  
Hayoung Oh

Cognitive IoT is growing exponentially because of various real-time and robust applications built on sensor networks and big data analysis. IoT network-layer protocols include RPL, CoAP, and others based on IETF standards. However, collision problems and security-aware fair transmission across large numbers of IoT devices have not been adequately solved. In an open wireless LAN based cognitive IoT system, an IoT node that is continuously stripped of its transmission opportunity will keep accumulating packets to be sent in its buffer, and spoofing attacks prevent data transfer opportunities from being fair. Therefore, in this paper, we propose a method to reduce the average wait time of all packets in the system by dynamically controlling the contention window (CW) in a wireless LAN based cognitive IoT environment containing nodes that lack fair transmission opportunities due to spoofing attacks. Through the performance evaluation, we show that the proposed technique improves various performance metrics by up to 80% compared with basic WLAN 802.11 based IoT.
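The abstract does not give the control rule, so the following is a toy sketch of dynamic CW adjustment in the same spirit: a node starved of airtime gets a smaller window (higher priority), a dominant node a larger one. The thresholds and halving/doubling rule are invented for illustration.

```python
# Toy dynamic contention-window (CW) control in the spirit of the paper:
# starved nodes get a smaller CW, channel-hogging nodes a larger one.
# Thresholds are invented; the paper's actual rule is not reproduced.
CW_MIN, CW_MAX = 15, 1023   # 802.11-style bounds

def adjust_cw(cw, my_share, fair_share):
    """Shrink CW when a node's recent airtime share falls below fair share."""
    if my_share < 0.8 * fair_share:          # starved (e.g. under attack)
        return max(CW_MIN, cw // 2)
    if my_share > 1.2 * fair_share:          # hogging the channel
        return min(CW_MAX, cw * 2 + 1)
    return cw

cw = 127
for share in (0.05, 0.06, 0.30, 0.10):       # observed airtime shares
    cw = adjust_cw(cw, share, fair_share=0.125)  # 8 nodes -> 1/8 each
    print("CW ->", cw)
```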


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1434
Author(s):  
Yustus Eko Oktian ◽  
Sang-Gon Lee ◽  
Byung-Gook Lee

The state-of-the-art centralized Internet of Things (IoT) data flow pipeline has started aging, since it cannot cope with the vast number of newly connected IoT devices. As a result, the community has begun the transition to a decentralized pipeline to encourage data and resource sharing. However, the move is not trivial. With many instances allocating data or services arbitrarily, how can we guarantee the correctness of IoT data or processes that other parties offer? Furthermore, in case of dispute, how can the IoT data assist in determining which party is guilty of faulty behavior? Finally, the number of Service Level Agreements (SLAs) increases as sharing grows. The problem then becomes how to provide SLA generation and verification that can be automated, instead of going through a manual and tedious legalization process with a trusted third party. In this paper, we explore blockchain solutions to answer those issues and propose continued data integrity services for IoT big data management. Specifically, we design five integrity protocols across three phases of IoT operations: during the transmission of IoT data (data in transit), when we physically store the data in the database (data at rest), and at the time of data processing (data in process). In each phase, we first lay out our motivations and survey the related blockchain solutions from the literature. We then use curated papers from our surveys as building blocks in designing the protocol. Using our proposal, we augment the overall value of IoT data and commands generated in the IoT system, as they are now tamper-proof, verifiable, non-repudiable, and more robust.
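To illustrate the tamper-evidence idea underlying such blockchain integrity services, here is a minimal hash-chain sketch: each record commits to the hash of the previous one, so altering any stored entry breaks verification downstream. This shows only the generic mechanism, not the paper's five specific protocols.

```python
# Minimal hash-chain sketch of the tamper-evidence idea behind blockchain
# integrity services: each IoT record commits to the previous record's
# hash, so altering any entry invalidates the chain. Generic mechanism
# only, not the paper's protocols.
import hashlib, json

def record_hash(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain, payload):
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def verify(chain):
    return all(chain[i]["prev"] == record_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, {"sensor": "temp-01", "value": 23.5})
append(chain, {"sensor": "temp-01", "value": 23.7})
print(verify(chain))                     # True
chain[0]["payload"]["value"] = 99.9      # tamper with stored data (at rest)
print(verify(chain))                     # False: integrity check fails
```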


2021 ◽  
Vol 17 (3) ◽  
pp. 1-23
Author(s):  
Borui Li ◽  
Wei Dong ◽  
Gaoyang Guan ◽  
Jiadong Zhang ◽  
Tao Gu ◽  
...  

Many IoT applications have requirements for complex IoT event processing (e.g., speech recognition) that is hardly supported by low-end IoT devices due to limited resources. Most existing approaches enable complex IoT event processing on low-end IoT devices by statically allocating tasks to the edge or the cloud. In this article, we present Queec, a QoE-aware edge computing system for complex IoT event processing under dynamic workloads. With Queec, complex IoT event processing tasks that are relatively computation-intensive for low-end IoT devices can be transparently offloaded to nearby edge nodes at runtime. We formulate the scheduling of multi-user tasks to multiple edge nodes as an optimization problem, which minimizes the overall offloading latency of all tasks while avoiding overloading. We implement Queec on low-end IoT devices, edge nodes, and the cloud. We conduct extensive evaluations, and the results show that Queec reduces the offloading latency by 56.98% on average compared with the state-of-the-art under dynamic workloads, while incurring acceptable overhead.
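To make the scheduling formulation concrete, below is a toy greedy assignment of tasks to edge nodes that reduces total offloading latency while respecting a per-node load cap (the overload-avoidance constraint). Latencies, capacities, and the greedy heuristic itself are invented; Queec's actual solver is not reproduced.

```python
# Toy greedy scheduler in the spirit of Queec's formulation: assign each
# task to the lowest-latency edge node that still has capacity, avoiding
# overload. All data is invented; not the paper's actual solver.
TASKS = ["t1", "t2", "t3", "t4", "t5"]
EDGES = {"e1": 2, "e2": 2, "e3": 1}           # per-node task capacity
LAT = {                                        # offloading latency (ms)
    "t1": {"e1": 12, "e2": 30, "e3": 25},
    "t2": {"e1": 14, "e2": 16, "e3": 40},
    "t3": {"e1": 22, "e2": 15, "e3": 18},
    "t4": {"e1": 35, "e2": 17, "e3": 20},
    "t5": {"e1": 28, "e2": 33, "e3": 10},
}

load, plan = {e: 0 for e in EDGES}, {}
for task in sorted(TASKS, key=lambda t: min(LAT[t].values())):
    free = [e for e in EDGES if load[e] < EDGES[e]]       # nodes with capacity
    best = min(free, key=lambda e: LAT[task][e])
    plan[task], load[best] = best, load[best] + 1

total = sum(LAT[t][e] for t, e in plan.items())
print(plan, "total latency:", total, "ms")
```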


Author(s):  
Rabia Latif ◽  
Malik Uzair Ahmed ◽  
Shahzaib Tahir ◽  
Seemab Latif ◽  
Waseem Iqbal ◽  
...  

Edge computing is a distributed architecture that features decentralized processing of data near the sources/devices where the data are being generated. These devices are known as Internet of Things (IoT) devices or edge devices. As we continue to rely on IoT devices, the amount of data they generate has increased significantly, making it infeasible to transfer all the data to the Cloud for processing. Since these devices have limited storage and processing power, the edge computing paradigm has emerged. In edge computing, data are processed by edge devices and only the required data are sent to the Cloud, increasing robustness and decreasing overall network overhead. IoT edge devices inherently suffer from various security risks and attacks, causing a lack of trust between devices. To reduce this malicious behavior, a lightweight trust management model is proposed that maintains the trust of a device and manages service-level trust along with quality of service (QoS). The model calculates the overall trust of a device by evaluating QoS parameters through assigned weights. Trust management models using QoS parameters show improved results that can be helpful in identifying malicious edge nodes in edge computing networks and can be used for industrial purposes.
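As a minimal sketch of the weighted-QoS trust computation the model describes, the snippet below combines normalized QoS observations with assigned weights into a single trust score. The parameter names, weights, and trust threshold are all invented for illustration; the paper's exact formula is not reproduced.

```python
# Minimal sketch of a weighted-QoS trust score of the kind the model
# describes: normalized QoS parameters combined with assigned weights.
# Names, weights, and the threshold are invented for illustration.
QOS_WEIGHTS = {"availability": 0.3, "reliability": 0.3,
               "response_time": 0.2, "packet_delivery": 0.2}

def trust_score(qos):
    """Weighted sum of normalized QoS observations for one edge device."""
    return sum(QOS_WEIGHTS[k] * qos[k] for k in QOS_WEIGHTS)

device = {"availability": 0.95, "reliability": 0.90,
          "response_time": 0.60,        # already normalized: higher is better
          "packet_delivery": 0.85}

score = trust_score(device)
print(f"trust = {score:.3f} ->", "trusted" if score >= 0.7 else "suspect")
```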


Author(s):  
Janusz Bobulski ◽  
Mariusz Kubanek

Big Data in medicine includes the fast processing of large data sets, both current and historical, with the purpose of supporting the diagnosis and therapy of patients' diseases. Support systems for these activities may include pre-programmed rules based on data obtained from the medical interview, while automatic analysis of diagnostic test results will lead to the classification of observations to a specific disease entity. The current revolution using Big Data significantly expands the role of computer science in achieving these goals, which is why we propose a Big Data processing system that uses artificial intelligence to analyze and process medical images.

