Large-Scale Full-Coverage Traffic Speed Estimation under Extreme Traffic Conditions Using a Big Data and Deep Learning Approach: Case Study in China

2019 · Vol. 145 (5) · pp. 05019001
Author(s): Fan Ding, Zhen Zhang, Yang Zhou, Xiaoxuan Chen, Bin Ran
2015 · Vol. 2015 · pp. 1-19
Author(s): Zongjian He, Buyang Cao, Yan Liu

Real-time traffic speed is indispensable for many ITS applications, such as traffic-aware route planning and eco-driving advisory systems. Existing traffic speed estimation solutions assume that vehicles travel along roads at a constant speed. However, this assumption does not hold under dynamic traffic conditions and can lead to inaccurate estimates in the real world. In this paper, we propose a novel in-network traffic speed estimation approach using infrastructure-free vehicular networks. The proposed solution uses a macroscopic traffic flow model to estimate traffic conditions. The selected model relies only on vehicle density, which is less sensitive to traffic dynamics. In addition, we demonstrate an application of the proposed solution to real-time route planning. Extensive evaluations using both traffic-trace-based large-scale simulation and a testbed-based implementation have been performed. The results show that our solution outperforms existing approaches in terms of accuracy and efficiency in traffic-aware route planning applications.
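As a minimal illustration of a density-only macroscopic model, the sketch below uses the classic Greenshields speed-density relation; the paper's actual model is not specified here, and the free-flow speed and jam density values are assumptions.

```python
# Minimal sketch: estimating link speed from vehicle density using the
# classic Greenshields macroscopic relation. The paper's exact model may
# differ; free_flow_speed and jam_density are illustrative values.

def estimate_speed(density: float,
                   free_flow_speed: float = 120.0,    # km/h, assumed
                   jam_density: float = 150.0) -> float:  # veh/km, assumed
    """Estimate link speed (km/h) from vehicle density (veh/km)."""
    density = min(max(density, 0.0), jam_density)  # clamp to valid range
    return free_flow_speed * (1.0 - density / jam_density)

# Example: 60 veh/km on a link -> 72.0 km/h estimated speed
print(estimate_speed(60.0))
```

Because the estimate depends only on density, it remains usable when individual vehicle speeds fluctuate, which matches the paper's motivation for choosing a density-based model.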


2020
Author(s): Anusha Ampavathi, Vijaya Saradhi T

Big data approaches are broadly useful in the healthcare and biomedical sectors for disease prediction. For minor symptoms, it is often difficult to consult a doctor at the hospital at any time, so big data can provide essential information about diseases based on a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. However, the conventional medical care model relies on structured input, which limits accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized to bring the attribute values into a common range. Then weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to amplify large-scale deviations. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed into hybrid deep learning models, namely the Deep Belief Network (DBN) and Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and RNN are optimized using the same hybrid optimization algorithm. A comparative evaluation against existing models confirms the effectiveness of the proposed prediction approach across various performance measures.
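The sketch below illustrates the normalization and weighted feature extraction phases under stated assumptions: min-max scaling stands in for the paper's normalization, and a random weight vector stands in for the JA-MVO-optimized weights, which are not reproduced here.

```python
import numpy as np

# Minimal sketch of the normalization + weighted feature extraction phases.
# The weight vector is random here for illustration; in the paper it is
# optimized with the hybrid JA-MVO meta-heuristic, not reproduced below.

def min_max_normalize(X: np.ndarray) -> np.ndarray:
    """Scale each attribute (column) into [0, 1]."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / np.where(x_max > x_min, x_max - x_min, 1.0)

def weighted_features(X_norm: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Multiply each normalized attribute by its (optimized) weight."""
    return X_norm * w

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(5, 4))   # toy dataset: 5 samples, 4 attributes
w = rng.uniform(0.5, 2.0, size=4)      # stand-in for JA-MVO-optimized weights
features = weighted_features(min_max_normalize(X), w)
print(features.shape)                  # (5, 4), ready for the DBN/RNN stage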


Big data is large-scale data collected for knowledge discovery and has been widely used in various applications. Big data often includes image data from these applications and requires effective techniques to process it. This paper surveys big image data research to analyze the performance of existing methods. Deep learning techniques deliver better performance than other methods, including wavelet-based methods. However, deep learning techniques require more computational time, which can be mitigated by lightweight methods.
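As one concrete example of a lightweight method, the sketch below compares the multiply-accumulate cost of a standard convolution with that of a depthwise separable convolution, a common lightweight building block; the surveyed works may rely on other techniques.

```python
# Minimal sketch of why lightweight methods cut computation: a depthwise
# separable convolution needs far fewer multiply-accumulates (MACs) than a
# standard convolution of the same shape. Shapes below are illustrative.

def standard_conv_macs(h, w, c_in, c_out, k):
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    depthwise = h * w * c_in * k * k   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1 x 1 convolution to mix channels
    return depthwise + pointwise

std = standard_conv_macs(56, 56, 64, 128, 3)
sep = depthwise_separable_macs(56, 56, 64, 128, 3)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
# -> roughly an 8x reduction for this configuration
```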


2021 · Vol. 48 (1) · pp. 55-71
Author(s): Xiao-Bo Tang, Wei-Gang Fu, Yan Liu

The scale of knowledge is growing rapidly in the big data environment, and traditional knowledge organization and services face the dilemma of semantic inaccuracy and untimeliness. From a knowledge fusion perspective, combining the precise semantics of traditional ontologies with the large-scale graph processing power and predicate-attribute expressiveness of property graphs, this paper presents an ontology and property graph fusion framework (OPGFF). The fusion process is divided into content layer fusion and constraint layer fusion. The result of the fusion, i.e., the knowledge representation model, is called a knowledge big graph. In addition, this paper applies the knowledge big graph model to the ownership network in China's financial sector and builds a financial ownership knowledge big graph. Furthermore, this paper designs and implements six consistency inference algorithms for finding contradictory data and filling in missing data in the financial ownership knowledge big graph, five of which are completely domain agnostic. The correctness and validity of the algorithms have been experimentally verified on actual data. The OPGFF fusion framework and the implementation method of the knowledge big graph could provide a technical reference for big data knowledge organization and services.
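The paper's six inference algorithms are not detailed here, so the sketch below shows one plausible consistency rule of the kind such algorithms might implement: flagging companies whose incoming ownership stakes sum to more than 100%. The edge format and the rule itself are illustrative assumptions, not the paper's actual algorithms.

```python
from collections import defaultdict

# Minimal sketch of a consistency check over a toy ownership graph:
# each edge is (owner, company, stake_percent). A company whose incoming
# stakes exceed 100% holds contradictory data.

edges = [
    ("HoldCo A", "Target Ltd", 60.0),
    ("HoldCo B", "Target Ltd", 55.0),   # contradictory: total stake > 100%
    ("Fund C",   "Other Ltd",  40.0),
]

def find_overclaimed(edges):
    """Return companies whose incoming ownership stakes sum above 100%."""
    total = defaultdict(float)
    for owner, company, stake in edges:
        total[company] += stake
    return {c: s for c, s in total.items() if s > 100.0}

print(find_overclaimed(edges))   # {'Target Ltd': 115.0}
```

A rule of this shape is domain agnostic in the same sense the paper describes: it only inspects graph structure and edge attributes, not financial semantics.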

