Scalable and Reliable Deep Learning Model to Handle Real-Time Streaming Data

Real-time data is everywhere and is produced every day, much of it coming from IoT sensors, GPS positions, web transactions, and social media updates. Because it is generated continuously, such real-time data is called a data stream. Data streams are transient, leaving very little time to process each item in the stream. Performing analytics on rapidly flowing, high-velocity data is a great challenge. Another issue is the percentage of incoming data that is considered for analytics: the higher the percentage, the greater the accuracy. Considering these two issues, the proposed work aims to gain insight into real-time streaming data with minimum response time and greater accuracy. This paper combines two technology giants, TensorFlow and Apache Kafka: Kafka is used to handle the real-time streaming data, while TensorFlow provides analytics support with deep learning algorithms. Training and testing are done on the Uber connected-vehicle public data set RideAustin. The experimental results on RideAustin show the predicted failure under each type of vehicle parameter, and the comparative analysis showed a 16% improvement over the traditional machine learning algorithm. A minimal sketch of such a Kafka-plus-TensorFlow pipeline is given below.
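
The following is a minimal sketch of how a Kafka consumer could feed streaming records into a trained TensorFlow model for failure prediction. The topic name, feature fields, and model file are illustrative assumptions and do not reproduce the paper's actual RideAustin schema or model.

```python
# Minimal sketch of a Kafka + TensorFlow streaming pipeline.
# Topic name, feature fields, and the model file are assumptions for
# illustration, not the paper's actual setup.
import json

import numpy as np
import tensorflow as tf
from kafka import KafkaConsumer  # kafka-python client

# Load a previously trained failure-prediction model (hypothetical path).
model = tf.keras.models.load_model("vehicle_failure_model.h5")

consumer = KafkaConsumer(
    "ride-austin-telemetry",                      # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # Assumed vehicle parameters; substitute the actual stream schema.
    features = np.array([[record["engine_temp"],
                          record["oil_pressure"],
                          record["battery_voltage"]]], dtype=np.float32)
    failure_prob = float(model.predict(features, verbose=0)[0][0])
    if failure_prob > 0.5:
        print(f"Predicted failure for vehicle {record.get('vehicle_id')}: "
              f"p={failure_prob:.2f}")
```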

2016
Vol 7 (3)
pp. 38-55
Author(s):
Srinivasa K.G.
Ganesh Hegde
Kushagra Mishra
Mohammad Nabeel Siddiqui
Abhishek Kumar
...

With the advancement of portable devices and sensors, there has been a need to build a universal framework that can serve as a nodal point to aggregate data from different kinds of devices and sensors. We propose a unified framework that provides a robust set of guidelines for sensors of varying degrees of complexity connected to a common set of System-on-Chip (SoC) platforms. These guidelines help to monitor, control, and visualize real-time data coming from the different types of sensors connected to these SoCs. We have defined a set of APIs that allow sensors to register with the server; these APIs are the standard to which sensors comply while streaming data to the client platforms. A minimal sketch of such a registration and streaming API is shown below.
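
The following is an illustrative sketch, in Python with Flask, of the kind of registration and streaming API the framework describes. The endpoint names and payload fields are assumptions for illustration, not the authors' actual specification.

```python
# Sketch of a sensor registration / streaming API.
# Endpoints and payload fields are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
registered_sensors = {}   # sensor_id -> metadata (type, SoC, units, ...)
latest_readings = {}      # sensor_id -> last reported reading

@app.route("/api/sensors/register", methods=["POST"])
def register_sensor():
    """A sensor announces itself before it may stream data."""
    info = request.get_json()
    sensor_id = info["sensor_id"]
    registered_sensors[sensor_id] = info
    return jsonify({"status": "registered", "sensor_id": sensor_id}), 201

@app.route("/api/sensors/<sensor_id>/data", methods=["POST"])
def push_reading(sensor_id):
    """Registered sensors stream readings to this endpoint."""
    if sensor_id not in registered_sensors:
        return jsonify({"error": "sensor not registered"}), 403
    latest_readings[sensor_id] = request.get_json()
    return jsonify({"status": "accepted"}), 200

@app.route("/api/sensors/<sensor_id>/data", methods=["GET"])
def read_latest(sensor_id):
    """Client platforms monitor and visualize the most recent reading."""
    return jsonify(latest_readings.get(sensor_id, {})), 200

if __name__ == "__main__":
    app.run(port=8080)
```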


2020
Vol 12 (23)
pp. 10175
Author(s):
Fatima Abdullah
Limei Peng
Byungchul Tak

The volume of streaming sensor data from various environmental sensors continues to increase rapidly as IoT devices are deployed at much greater scales than ever before. This, in turn, causes a massive increase in fog and cloud network traffic, which leads to heavily delayed network operations. In streaming data analytics, the ability to obtain real-time data insight is crucial for computational sustainability in many IoT-enabled applications such as environmental monitoring, pollution and climate surveillance, traffic control, and even e-commerce. However, such network delays prevent us from achieving high-quality real-time analytics of environmental information. To address this challenge, we propose the Fog Sampling Node Selector (Fossel) technique, which can significantly reduce IoT network and processing delays by algorithmically selecting an optimal subset of fog nodes to perform the sensor data sampling. In addition, our technique performs simple query executions within the fog nodes to further reduce network delays by processing data near the devices that produce it. Our extensive evaluations show that Fossel outperforms the state-of-the-art in latency reduction as well as in bandwidth consumption, network usage, and energy consumption. A simplified sketch of the node-selection idea follows.
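
To make the idea concrete, here is a simplified sketch of choosing a latency-minimizing subset of fog nodes for sampling. It uses a greedy coverage heuristic with invented node attributes; it is not the actual Fossel selection algorithm.

```python
# Greedy sketch: pick low-latency fog nodes until every sensor is covered.
# Node attributes and the heuristic itself are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FogNode:
    node_id: str
    latency_ms: float       # measured delay to the analytics tier
    covered_sensors: set    # sensors whose data this node can sample

def select_sampling_nodes(nodes, all_sensors):
    """Greedily pick low-latency nodes until every sensor is covered."""
    selected, covered = [], set()
    for node in sorted(nodes, key=lambda n: n.latency_ms):
        new_coverage = node.covered_sensors - covered
        if new_coverage:
            selected.append(node)
            covered |= new_coverage
        if covered >= all_sensors:
            break
    return selected

nodes = [
    FogNode("fog-1", 12.0, {"s1", "s2"}),
    FogNode("fog-2", 30.0, {"s2", "s3"}),
    FogNode("fog-3", 18.0, {"s3", "s4"}),
]
chosen = select_sampling_nodes(nodes, {"s1", "s2", "s3", "s4"})
print([n.node_id for n in chosen])  # e.g. ['fog-1', 'fog-3']
```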


Author(s):  
Giovanni Capobianco
Umberto Di Giacomo
Tommaso Di Tusa
Francesco Mercaldo
Antonella Santone

2021
Vol 2021
pp. 1-12
Author(s):
Shihong Dang
Wei Tang

Traditional real-time data scheduling methods ignore the optimization of job data, which leads to delayed delivery, high inventory costs, and low equipment utilization. This paper proposes a novel real-time data scheduling method based on deep learning and an improved fuzzy algorithm for flexible operations in the papermaking workshop. The algorithm has three parts: the first describes the flexible job-shop scheduling problem; the second constructs the fuzzy scheduling model of flexible job data in the papermaking workshop; and the third uses a genetic algorithm to obtain the optimal solution of that fuzzy scheduling model. The results show that the proposed method finds the optimal solution in 48 seconds at the 23rd iteration, which is much better than the three traditional scheduling methods against which we compared our results. Hence, this paper improves the work efficiency and quality of the papermaking workshop and reduces the operating costs of the papermaking enterprise. A condensed sketch of the genetic-algorithm step appears below.
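
The following is a condensed sketch of the genetic-algorithm step (the third part above) applied to a toy machine-assignment problem. Processing times, machine counts, and GA parameters are illustrative assumptions; the paper's fuzzy scheduling model of the papermaking workshop is not reproduced here.

```python
# Toy genetic algorithm for assigning jobs to machines (illustrative only).
import random

random.seed(42)

N_JOBS, N_MACHINES = 6, 3
# Assumed processing time of each job on each machine (rows: jobs).
proc_time = [[random.randint(2, 9) for _ in range(N_MACHINES)]
             for _ in range(N_JOBS)]

def makespan(assignment):
    """Fitness: completion time of the busiest machine (lower is better)."""
    load = [0] * N_MACHINES
    for job, machine in enumerate(assignment):
        load[machine] += proc_time[job][machine]
    return max(load)

def crossover(a, b):
    """One-point crossover of two machine-assignment chromosomes."""
    cut = random.randint(1, N_JOBS - 1)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    """Randomly reassign some jobs to a different machine."""
    return [random.randrange(N_MACHINES) if random.random() < rate else g
            for g in chrom]

# Evolve machine assignments for a fixed number of generations.
population = [[random.randrange(N_MACHINES) for _ in range(N_JOBS)]
              for _ in range(30)]
for generation in range(50):
    population.sort(key=makespan)
    parents = population[:10]                       # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=makespan)
print("best assignment:", best, "makespan:", makespan(best))
```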


2011
Vol 5 (1)
pp. 85-110
Author(s):
Krasimira Kapitanova
Yuan Wei
Woo-Chul Kang
Sang-H. Son
