Parallel Shortest Path Big Data Graph Computations of US Road Network Using Apache Spark: Survey, Architecture, and Evaluation

Author(s): Yasir Arfat, Sugimiyanto Suma, Rashid Mehmood, Aiiad Albeshri

Author(s): Muhammad Junaid, Shiraz Ali Wagan, Nawab Muhammad Faseeh Qureshi, Choon Sung Nam, Dong Ryeol Shin

2021, Vol. 464, pp. 432-437
Author(s): Mario Juez-Gil, Álvar Arnaiz-González, Juan J. Rodríguez, Carlos López-Nozal, César García-Osorio
Keyword(s): Big Data

2021, pp. 369-389

Author(s): Atsushi Takizawa, Yutaka Kawagishi

Abstract: When a disaster such as a large earthquake occurs, the resulting breakdown of public transportation leaves urban areas with many people struggling to return home. With people from the surrounding areas gathered in the city, unusually heavy congestion may occur on the roads when these commuters all start walking home at once. In this chapter, it is assumed that a large Nankai Trough earthquake occurs at 2 p.m. on a weekday in Osaka City, where there are many commuters. We then assume a scenario in which people evacuate from the resulting tsunami in the flooded area and return home on foot in the other areas. Evacuation and returning-home routes with the shortest possible travel times are obtained by solving the evacuation planning problem. However, the size of the road network big data for Osaka City makes such optimization difficult. Therefore, we propose methods for simplifying the large network while keeping the properties necessary for solving the optimization problem, and then recovering the original network. The obtained routes are then checked by large-scale pedestrian simulation, and the effect of the optimization is evaluated.
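The abstract only sketches the network simplification, so the following is a minimal illustration of one common simplification step of this kind: contracting chains of degree-2 nodes while preserving path lengths. It assumes an undirected NetworkX graph whose edges carry a "length" attribute; the function name and toy graph are illustrative and not taken from the chapter.

    # Minimal sketch of one road-network simplification step, assuming an
    # undirected NetworkX graph with a "length" attribute on every edge;
    # names and toy data are illustrative only.
    import networkx as nx

    def contract_degree2_nodes(g):
        """Collapse chains of degree-2 nodes into single edges whose length is
        the sum of the removed segments, so path lengths are preserved."""
        g = g.copy()
        changed = True
        while changed:
            changed = False
            for node in list(g.nodes):
                if g.degree(node) != 2:
                    continue
                nbrs = list(g.neighbors(node))
                if len(nbrs) != 2 or g.has_edge(*nbrs):
                    continue  # skip self-loops and already-existing shortcuts
                u, v = nbrs
                length = g[u][node]["length"] + g[node][v]["length"]
                g.remove_node(node)
                g.add_edge(u, v, length=length)
                changed = True
        return g

    # Toy usage: a four-node chain a-b-c-d collapses to one edge a-d of length 3.
    g = nx.Graph()
    g.add_edge("a", "b", length=1.0)
    g.add_edge("b", "c", length=1.0)
    g.add_edge("c", "d", length=1.0)
    print(contract_degree2_nodes(g).edges(data=True))

Because each contracted edge carries the summed length of the segments it replaces, shortest-path distances between the remaining nodes are unchanged, which is the property the optimization needs; the removed nodes can later be reinserted to recover routes on the original network.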


Author(s): J. Boehm, K. Liu, C. Alis

In the geospatial domain we have now reached the point where the data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore attractive to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not natively supported by existing big data frameworks. Instead, such file formats are supported by software libraries that are restricted to single-CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the ability of the proposed method to load billions of points into a commodity hardware compute cluster, and we discuss the implications for scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
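The abstract does not give implementation details, but the map-based ingestion idea can be sketched in PySpark as follows. The use of laspy as the single-threaded point cloud reader, the file paths, and the helper name are assumptions made for illustration; the paper's actual file format library is not named here.

    # Minimal PySpark sketch of map-based point cloud ingestion. Assumes the
    # laspy library is installed on every worker node and that LAS files sit
    # under an illustrative path; neither detail comes from the paper.
    import glob

    import laspy
    from pyspark.sql import SparkSession

    def read_points(path):
        """Read one LAS file on a worker and return its (x, y, z) tuples."""
        las = laspy.read(path)
        return list(zip(las.x, las.y, las.z))

    spark = SparkSession.builder.appName("point-cloud-ingestion").getOrCreate()
    sc = spark.sparkContext

    # Distribute only the file paths; each worker parses its own files with the
    # single-threaded reader, and flatMap concatenates the per-file point lists.
    paths = glob.glob("/data/pointclouds/*.las")          # assumed location
    rdd = sc.parallelize(paths, numSlices=max(1, len(paths)))
    points = rdd.flatMap(read_points)

    print("ingested points:", points.count())
    spark.stop()

Distributing only the file paths keeps driver memory usage small; each worker opens and parses its assigned files locally, so the existing single-CPU library is reused without modification, provided it is installed on every node.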

