Failure-Tolerant Monitoring Based on Spatial-Temporal Correlation via Mobile Sensors for Large-Scale Acyclic Flow Systems in Smart Cities

Author(s):  
Haihan Zhang ◽  
Junbin Liang ◽  
Victor C.M. Leung
Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 218


Author(s):  
Ala’ Khalifeh ◽  
Khalid A. Darabkh ◽  
Ahmad M. Khasawneh ◽  
Issa Alqaisieh ◽  
Mohammad Salameh ◽  
...  

The advent of various wireless technologies has paved the way for the realization of new infrastructures and applications for smart cities. Wireless Sensor Networks (WSNs) are among the most important of these technologies. WSNs are widely used in various applications in our daily lives. Due to their cost effectiveness and rapid deployment, WSNs can be used for securing smart cities by providing remote monitoring and sensing for many critical scenarios, including hostile environments, battlefields, and areas subject to natural disasters such as earthquakes, volcano eruptions, and floods, or to large-scale accidents such as nuclear plant explosions or chemical plumes. The purpose of this paper is to propose a new framework in which WSNs are adopted for remote sensing and monitoring in smart city applications. We propose using Unmanned Aerial Vehicles (UAVs) to act as data mules that offload the sensor nodes and transfer the monitoring data securely to the remote control center for further analysis and decision making. Furthermore, the paper provides insights into the implementation challenges involved in realizing the proposed framework. In addition, the paper provides an experimental evaluation of the proposed design in outdoor environments, in the presence of different types of obstacles common to typical outdoor fields. The experimental evaluation revealed several inconsistencies between the performance metrics advertised in the hardware data sheets and those observed in the field. In particular, we found that both the advertised coverage distance and the advertised signal strength disagreed with our experimental measurements. Therefore, it is crucial that network designers and developers conduct field tests and device performance assessments before designing and implementing a WSN for a real field setting.
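The abstract does not describe the authors' measurement procedure, but the field check it recommends can be sketched as a comparison between measured RSSI and a standard log-distance path-loss prediction. A minimal sketch, assuming a generic log-distance model; the reference values, path-loss exponent, and field readings below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def expected_rssi(d, rssi_ref=-40.0, d_ref=1.0, n=2.7):
    """Log-distance path-loss prediction of RSSI (dBm) at distance d (m).
    rssi_ref is the RSSI at reference distance d_ref; n is the path-loss
    exponent (~2.0 in free space, higher with obstacles). All assumed."""
    return rssi_ref - 10.0 * n * np.log10(d / d_ref)

# Hypothetical field measurements: (distance in m, measured RSSI in dBm)
field = [(10, -68), (25, -79), (50, -88), (75, -95), (100, -101)]

for d, measured in field:
    predicted = expected_rssi(d)
    print(f"{d:4d} m: measured {measured} dBm, model {predicted:.1f} dBm, "
          f"gap {measured - predicted:+.1f} dB")
```

A systematic gap between the model column and the field readings is exactly the kind of data-sheet mismatch the authors report.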


Author(s):  
Bassel Al Homssi ◽  
Akram Al-Hourani ◽  
Kagiso Magowe ◽  
James Delaney ◽  
Neil Tom ◽  
...  
Smart Cities ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 662-685
Author(s):  
Stephan Olariu

Under present-day practices, the vehicles on our roadways and city streets are mere spectators that witness traffic-related events without being able to participate in mitigating their effects. This paper lays the theoretical foundations of a framework for harnessing the on-board computational resources of vehicles stuck in urban congestion in order to assist transportation agencies with preventing or dissipating congestion through large-scale signal re-timing. Our framework is called VACCS: Vehicular Crowdsourcing for Congestion Support in Smart Cities. What makes this framework unique is the suggestion that, in such situations, vehicles have the potential to cooperate with various transportation authorities to solve problems that would otherwise either take an inordinate amount of time to solve or go unsolved for lack of adequate municipal resources. VACCS offers direct benefits to both the driving public and the Smart City. By developing timing plans that respond to current traffic conditions, overall traffic flow will improve, carbon emissions will be reduced, and the economic impacts of congestion on citizens and businesses will be lessened. It is expected that drivers will be willing to donate under-utilized on-board computing resources in their vehicles to develop improved signal timing plans in return for the direct benefits of time savings and reduced fuel consumption. VACCS allows the Smart City to respond dynamically to traffic conditions while simultaneously reducing the investment in computational resources that would be required for traditional adaptive traffic signal control systems.
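The abstract specifies the crowdsourcing idea but not a task decomposition. A minimal sketch of the pattern it implies, with a thread pool standing in for participating vehicles and an invented stub scoring candidate timing plans; none of this is the VACCS protocol itself:

```python
from concurrent.futures import ThreadPoolExecutor  # stand-in for vehicle nodes
import random

def simulated_delay(plan):
    """Hypothetical stand-in for the per-vehicle task: score one candidate
    signal-timing plan (lower is better). Not the paper's delay model."""
    green_split, cycle_s, offset_s = plan
    random.seed(hash(plan))              # deterministic toy score
    return (1.0 - green_split) * cycle_s + random.uniform(0, 5) + offset_s * 0.1

# Candidate timing plans: (green split, cycle length in s, offset in s)
candidates = [(g / 10, c, o)
              for g in range(3, 8) for c in (60, 90, 120) for o in (0, 10, 20)]

# Each "vehicle" evaluates a share of the plans; the agency keeps the best.
with ThreadPoolExecutor(max_workers=8) as vehicles:
    scores = list(vehicles.map(simulated_delay, candidates))

best_score, best_plan = min(zip(scores, candidates))
print(f"best plan {best_plan} with simulated delay {best_score:.1f}")
```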


Energies ◽  
2018 ◽  
Vol 11 (10) ◽  
pp. 2741 ◽  
Author(s):  
George Lavidas ◽  
Vengatesan Venugopal

In autonomous electricity grids, Renewable Energy (RE) contributes significantly to energy production. Offshore resources benefit from higher energy density, smaller visual impact, and higher availability. Offshore locations to the west of Crete exhibit wind availability of ≈80%; combining this with the potential to install large-scale modern wind turbines (high rated power), the expected annual benefits are immense. Temporal variability of production is a limiting factor for wider adoption of large offshore farms. To this end, multi-generation with wave energy can alleviate periods of non-generation for wind. The spatio-temporal correlation of wind and wave energy production shows that hybrid wind-wave stations can contribute significant amounts of clean energy while reducing spatial constraints and public acceptance issues. Offshore technologies can be combined as co-located or not, altering the contribution profile of wave energy to non-operating wind turbine production. In this study, a co-located option contributes up to 626 h per annum, while a non-co-located solution is found to complement over 4000 h of non-operative wind turbine production. The findings indicate opportunities not only for capital expenditure reduction but also for the ever-important issues of renewable variability and grid stability.
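The study's headline numbers count hours in which wave output covers wind non-operation. That metric can be illustrated with a short sketch over synthetic hourly series; the cut-in/cut-out thresholds, distributions, and operating floor below are assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
wind_speed = rng.weibull(2.0, hours) * 8.0   # synthetic hourly wind, m/s
wave_power = rng.gamma(2.0, 15.0, hours)     # synthetic wave flux, kW/m

CUT_IN, CUT_OUT = 3.0, 25.0                  # assumed turbine limits, m/s
WAVE_MIN = 10.0                              # assumed WEC operating floor, kW/m

wind_off = (wind_speed < CUT_IN) | (wind_speed > CUT_OUT)
wave_on = wave_power >= WAVE_MIN

complement_hours = int(np.sum(wind_off & wave_on))
print(f"wind non-operative: {int(wind_off.sum())} h/yr; "
      f"wave covers {complement_hours} of those hours")
```

Run over co-located versus separated site pairs, this count is what distinguishes the 626 h and 4000+ h results in the abstract.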


Author(s):  
Fan Zuo ◽  
Abdullah Kurkcu ◽  
Kaan Ozbay ◽  
Jingqin Gao

Emergency events affect human security and safety as well as the integrity of local infrastructure. Emergency response officials are required to make decisions using limited information and time. During emergency events, people post updates to social media networks, such as tweets, containing information about their status, help requests, incident reports, and other useful information. In this research project, the Latent Dirichlet Allocation (LDA) model is used to automatically classify incident-related tweets and incident types using Twitter data. Unlike previous social media information models proposed in the related literature, LDA is an unsupervised learning model that can be utilized directly, without prior knowledge or data preparation, saving time during emergencies. Twitter data, including messages and geolocation information, from two recent events in New York City, the Chelsea explosion and Hurricane Sandy, are used as case studies to test the accuracy of the LDA model in extracting incident-related tweets and labeling them by incident type. Results showed that the model could extract and classify emergency events for both small- and large-scale events, and that the model's hyper-parameters can be shared across similar language environments to save model training time. Furthermore, the list of keywords generated by the model can be used as prior knowledge for emergency event classification and for training supervised classification models such as support vector machines and recurrent neural networks.
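The abstract names LDA but not an implementation. A minimal sketch of unsupervised topic extraction over toy tweets using scikit-learn's LatentDirichletAllocation; the corpus and topic count are invented for illustration, not the study's data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for collected tweets (the study used Chelsea explosion
# and Hurricane Sandy data; these examples are invented).
tweets = [
    "loud explosion near 23rd street please stay away",
    "need help power out and water rising in basement",
    "flooding on our block after the storm surge",
    "smoke everywhere after the blast, streets closed",
    "great coffee this morning downtown",
    "traffic is slow but nothing unusual today",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)   # per-tweet topic mixtures

# Top words per topic act as the keyword lists the paper reuses as
# prior knowledge for supervised classifiers (SVM, RNN).
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```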


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Minyu Shi ◽  
Yongting Zhang ◽  
Huanhuan Wang ◽  
Junfeng Hu ◽  
Xiang Wu

The innovation of deep learning modeling schemes plays an important role in promoting research on complex problems handled with artificial intelligence in smart cities and in the development of the next generation of information technology. With the widespread use of smart interactive devices and systems, the exponential growth of data volume and increasingly complex modeling requirements raise the difficulty of deep learning modeling, and the classical centralized deep learning modeling scheme has encountered bottlenecks in improving model performance and diversifying smart application scenarios. Parallel processing systems in deep learning link the virtual information space with the physical world. Although distributed deep learning has become a crucial research concern owing to its unique advantages in training efficiency, improving the availability of trained models and preventing privacy disclosure remain the main challenges faced by related research. To address these issues in distributed deep learning, this research developed a clonal selection optimization system based on the federated learning framework for model training on large-scale data. The system adopts a heuristic clonal selection strategy in local model optimization and improves the effect of federated training. First, this process enhances the adaptability and robustness of the federated learning scheme and improves modeling performance and training efficiency. Furthermore, this research attempts to improve the privacy defense capability of the federated learning scheme for big data through differential privacy preprocessing. The simulation results show that the proposed clonal selection optimization system based on federated learning significantly improves basic model performance, stability, and privacy.
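The abstract does not give the algorithmic details, but the two ingredients it names compose naturally: a clone-and-mutate selection step on each client's local weights, followed by federated averaging. A minimal numpy sketch under those assumptions (the linear model, fitness function, mutation scale, and client data are invented, and the differential privacy preprocessing is omitted):

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(w, X, y):
    """Assumed fitness: negative mean squared error of a linear model."""
    return -np.mean((X @ w - y) ** 2)

def clonal_select(w, X, y, n_clones=15, sigma=0.1):
    """One heuristic clonal selection step on local weights: clone,
    mutate, and keep the best of {original, clones}."""
    clones = w + rng.normal(0.0, sigma, size=(n_clones, w.size))
    pool = np.vstack([w, clones])
    scores = [fitness(c, X, y) for c in pool]
    return pool[int(np.argmax(scores))]

# Synthetic data split across three "clients"
w_true = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ w_true + rng.normal(0, 0.1, 50)))

w_global = np.zeros(3)
for _ in range(60):                               # federated rounds
    local_ws = [clonal_select(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)          # FedAvg-style aggregation

print("true weights:     ", w_true)
print("recovered weights:", np.round(w_global, 2))
```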


Ocean Science ◽  
2016 ◽  
Vol 12 (5) ◽  
pp. 1067-1090 ◽  
Author(s):  
Marie-Isabelle Pujol ◽  
Yannice Faugère ◽  
Guillaume Taburet ◽  
Stéphanie Dupuy ◽  
Camille Pelloquin ◽  
...  

Abstract. The new DUACS DT2014 reprocessed products have been available since April 2014. Numerous innovative changes have been introduced at each step of an extensively revised data processing protocol. The use of a new 20-year altimeter reference period in place of the previous 7-year reference significantly changes the sea level anomaly (SLA) patterns and thus has a strong user impact. The use of up-to-date altimeter standards and geophysical corrections, reduced smoothing of the along-track data, and refined mapping parameters, including spatial and temporal correlation-scale refinement and measurement errors, all contribute to an improved high-quality DT2014 SLA data set. Although all of the DUACS products have been upgraded, this paper focuses on the enhancements to the gridded SLA products over the global ocean. As part of this exercise, 21 years of data have been homogenized, allowing us to retrieve accurate large-scale climate signals such as global and regional mean sea level (MSL) trends and interannual signals, as well as better refined mesoscale features. An extensive assessment exercise has been carried out on this data set, allowing us to establish a consolidated error budget. The errors at mesoscale are about 1.4 cm² in low-variability areas, increase to an average of 8.9 cm² in coastal regions, and reach nearly 32.5 cm² in high mesoscale activity areas. Compared to the previous DT2010 version, the DT2014 products retain signals at wavelengths below ∼250 km, inducing increases in SLA variance and mean eddy kinetic energy (EKE) of +5.1% and +15%, respectively. Comparisons with independent measurements highlight the improved mesoscale representation within this new data set. The error reduction at the mesoscale reaches nearly 10% of the error observed with DT2010. DT2014 also presents an improved coastal signal, with a nearly 2 to 4% mean error reduction. High-latitude areas are also more accurately represented in DT2014, with improved consistency between spatial coverage and sea ice edge position. An error budget is used to highlight the limitations of the new gridded products, with notable errors in areas with strong internal tides.
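The "refined mapping parameters" refer to the objective analysis that turns sparse along-track SLA into a grid, where spatial and temporal correlation scales and measurement-error variances enter the covariance model. A toy one-dimensional sketch of that kind of optimal interpolation; the correlation scale, signal variance, and noise level below are invented, not DUACS values:

```python
import numpy as np

rng = np.random.default_rng(1)

L = 100.0        # assumed spatial correlation scale, km
VAR = 0.01       # assumed signal variance, m^2
NOISE = 0.02     # assumed measurement error std, m

def cov(a, b):
    """Gaussian signal covariance between position arrays a, b (km)."""
    return VAR * np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * L**2))

# Sparse "along-track" observations of a smooth SLA signal
x_obs = np.sort(rng.uniform(0, 500, 15))
sla_obs = 0.1 * np.sin(2 * np.pi * x_obs / 300) + rng.normal(0, NOISE, 15)

# Objective analysis: analysis = C_go (C_oo + R)^(-1) obs
x_grid = np.linspace(0, 500, 101)
C_oo = cov(x_obs, x_obs) + NOISE**2 * np.eye(15)
C_go = cov(x_grid, x_obs)
sla_grid = C_go @ np.linalg.solve(C_oo, sla_obs)

print(f"gridded SLA range: {sla_grid.min():.3f} to {sla_grid.max():.3f} m")
```

Shrinking L and NOISE is the one-dimensional analogue of the correlation-scale and measurement-error refinements that let DT2014 retain wavelengths below ∼250 km.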


Sensor Review ◽  
2019 ◽  
Vol 39 (2) ◽  
pp. 208-217 ◽  
Author(s):  
Jinghan Du ◽  
Haiyan Chen ◽  
Weining Zhang

Purpose
In large-scale monitoring systems, sensors deployed at different locations collect massive amounts of useful time-series data, which can support real-time data analytics and related applications. However, because of faults in the hardware devices themselves, sensor nodes often fail to work, so the collected data are commonly incomplete. The purpose of this study is to predict and recover the missing data in sensor networks.
Design/methodology/approach
Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, the deep belief network (DBN). Specifically, when a sensor fails, its own historical time-series data and the real-time data from surrounding sensor nodes with high similarity to the failed node, identified using the proposed similarity filter, are collected first. Then, the high-level feature representation of these spatio-temporally correlated data is extracted by the DBN. Moreover, a reconstruction error-based algorithm is proposed to determine the structure of the DBN model. Finally, the missing data are predicted from these features by a single-layer neural network.
Findings
This paper uses a noise data set collected from an airport monitoring system for experiments. Various comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several classical models, and the experimental results show that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness.
Originality/value
A deep learning method is investigated for the data recovery task and proved effective compared with previous methods. This may provide practical experience in applying deep learning methods.
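The paper's DBN feature extractor is not reproduced here, but the surrounding pipeline can be sketched: a correlation-based similarity filter picks neighbor sensors, and a regressor predicts the failed sensor's readings from them. In this minimal sketch, scikit-learn's MLPRegressor stands in for the DBN-plus-output-layer predictor; the data, correlation threshold, and network size are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Synthetic readings: 6 sensors x 500 time steps; sensor 0 will "fail"
base = np.sin(np.linspace(0, 40, 500))
readings = base + rng.normal(0, 0.1, (6, 500))
readings[3] += rng.normal(0, 1.0, 500)      # one unrelated noisy sensor

failed, history_end = 0, 400                # sensor 0 fails at t = 400

# Similarity filter: keep neighbors whose history correlates with the
# failed sensor's history above an assumed threshold.
hist = readings[:, :history_end]
corr = [np.corrcoef(hist[failed], hist[i])[0, 1] for i in range(6)]
neighbors = [i for i in range(6) if i != failed and corr[i] > 0.8]

# MLPRegressor stands in for DBN features + single-layer output.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(hist[neighbors].T, hist[failed])

recovered = model.predict(readings[neighbors, history_end:].T)
truth = readings[failed, history_end:]
print(f"selected neighbors: {neighbors}")
print(f"recovery RMSE: {np.sqrt(np.mean((recovered - truth) ** 2)):.3f}")
```

The filter should exclude the deliberately noisy sensor 3, mirroring how the paper restricts recovery to spatially correlated nodes.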


Author(s):  
Eduardo Felipe Zambom Santana ◽  
Nelson Lago ◽  
Fabio Kon ◽  
Dejan S. Milojicic
