A Robust P2P Information Sharing System and its Application to Communication Support in Natural Disasters

Author(s):  
Takuma Oide ◽  
Akiko Takahashi ◽  
Atsushi Takeda ◽  
Takuo Suganuma

To provide stable and continuous network services during large-scale natural disasters, computers must use extremely limited network and computational resources effectively without imposing additional administrative burdens. The authors propose a P2P information sharing system for affected areas based on their structured P2P network, the Well-distribution Algorithm for an Overlay Network (WAON). By applying the WAON framework, the system configures the P2P network autonomously from the remaining nodes and achieves load balancing dynamically without additional network maintenance costs. The system can therefore perform well in an unstable network environment such as that found during a disaster. The authors designed and implemented the system and evaluated its overall behavior and performance in simulations based on a real scenario from the Great East Japan Earthquake. Results show that the system can distribute victims' safety confirmation information efficiently among the remaining nodes.
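The WAON algorithm itself is not detailed in the abstract, so the following is only a minimal sketch assuming a generic consistent-hashing overlay: it illustrates how surviving nodes could autonomously store, retrieve, and rebalance safety-confirmation records as peers join or fail. All node and record names are hypothetical.

```python
# Minimal sketch (not the authors' WAON implementation): a consistent-hashing
# ring showing how surviving nodes could share and rebalance
# safety-confirmation records without central administration.
import bisect
import hashlib


def h(key: str) -> int:
    """Map a key onto the ring's identifier space."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)


class Ring:
    def __init__(self):
        self.ids = []        # sorted node identifiers
        self.nodes = {}      # node id -> {record key: value}

    def _owner(self, key: str) -> int:
        """The clockwise successor of the key's hash is responsible for it."""
        i = bisect.bisect(self.ids, h(key)) % len(self.ids)
        return self.ids[i]

    def join(self, name: str):
        nid = h(name)
        bisect.insort(self.ids, nid)
        self.nodes[nid] = {}
        self._rebalance()

    def leave(self, name: str):
        nid = h(name)
        orphans = self.nodes.pop(nid)
        self.ids.remove(nid)
        self._rebalance(orphans)

    def _rebalance(self, extra=None):
        """Reassign records so every key sits on its current owner."""
        records = dict(extra or {})
        for store in self.nodes.values():
            records.update(store)
            store.clear()
        for k, v in records.items():
            self.nodes[self._owner(k)][k] = v

    def put(self, key: str, value: str):
        self.nodes[self._owner(key)][key] = value

    def get(self, key: str):
        return self.nodes[self._owner(key)].get(key)


ring = Ring()
for n in ["shelter-A", "shelter-B", "city-hall"]:   # hypothetical nodes
    ring.join(n)
ring.put("victim:12345", "safe, shelter-A")
ring.leave("shelter-B")                 # node failure: records migrate
print(ring.get("victim:12345"))         # record is still retrievable
```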

2001 ◽  
Vol 16 (3) ◽  
pp. 128-137 ◽  
Author(s):  
Frederick M. Burkle ◽  
Robin Hayden

Abstract Management of large-scale disasters is impeded by inadequately designed organizational infrastructure. The vertical organizational structures of most agencies responding to disasters contribute to a poorly integrated response, especially when collaboration, information sharing, and coordination are required. Horizontal (or lateral) organizations have assisted traditionally vertical civilian and military agencies by enhancing their capacity to operate successfully in complex human emergencies and large-scale natural disasters. Because of the multiagency and highly technical multidisciplinary requirements for decision-making in chemical and biological disasters, similar horizontal management options must be considered.


2019 ◽  
Vol 141 (11) ◽  
Author(s):  
Ayush Raina ◽  
Christopher McComb ◽  
Jonathan Cagan

Abstract Humans as designers have quite versatile problem-solving strategies. Computer agents, on the other hand, can access large-scale computational resources to solve certain design problems. Hence, if agents can learn from human behavior, a synergistic human-agent problem-solving team can be created. This paper presents an approach to extract human design strategies and implicit rules, purely from historical human data, and use them for design generation. A two-step framework that learns to imitate human design strategies from observation is proposed and implemented. This framework uses deep learning constructs to learn to generate designs without any explicit information about objectives and performance metrics. The framework is designed to interact with the problem through a visual interface, as humans did when solving the problem. It is trained to imitate a set of human designers by observing their design state sequences without inducing problem-specific modeling bias or extra information about the problem. Furthermore, an end-to-end agent is developed that uses this deep learning framework as its core, in conjunction with image processing, to map pixels to design moves as a mechanism to generate designs. Finally, the designs generated by a computational team of these agents are compared with actual human data for teams solving a truss design problem. Results demonstrate that these agents are able to create feasible and efficient truss designs without guidance, showing that this methodology allows agents to learn effective design strategies.
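As a rough illustration of the behavioural-cloning idea described above (not the paper's actual network), the sketch below assumes a small convolutional policy in PyTorch that maps a rendered design-state image to a discrete design move, trained only on observed state-action pairs with no objective or performance information.

```python
# Minimal sketch with a hypothetical architecture and move space: a
# convolutional policy imitating human design moves from image states.
import torch
import torch.nn as nn

N_MOVES = 64  # assumed size of the discretized design-move space

policy = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
    nn.Linear(128, N_MOVES),          # logits over candidate design moves
)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a batch of (design-state image, human move) observations.
states = torch.rand(8, 1, 64, 64)
human_moves = torch.randint(0, N_MOVES, (8,))

for _ in range(10):                   # behavioural-cloning update loop
    opt.zero_grad()
    loss = loss_fn(policy(states), human_moves)
    loss.backward()
    opt.step()

# At generation time the agent picks its own next move from the image.
next_move = policy(states[:1]).argmax(dim=1)
```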


2021 ◽  
Vol 4 ◽  
Author(s):  
Andreas Zeiselmair ◽  
Bernd Steinkopf ◽  
Ulrich Gallersdörfer ◽  
Alexander Bogensperger ◽  
Florian Matthes

The energy system is becoming increasingly decentralized. This development requires integrating and coordinating a rising number of actors and small units in a complex system. Blockchain could provide a base infrastructure for new tools and platforms that address these tasks in various aspects, ranging from dispatch optimization or dynamic load adaptation to (local) market mechanisms. Many of these applications are currently in development and subject to research projects. In decentralized energy markets especially, the optimized allocation of energy products demands complex computation. Combining this with distributed ledger technologies leads to bottlenecks and challenges regarding privacy requirements and performance due to limited storage and computational resources. Verifiable computation techniques promise a solution to these issues. This paper presents an overview of verifiable computation technologies, including trusted oracles, zkSNARKs, and multi-party computation. We further analyze their application in blockchain environments with a focus on energy-related applications. We evaluate these solution approaches as applied to a specific optimization problem for renewable energy certificates, and finally demonstrate an implementation of a simplex optimization using zkSNARKs as a case study. We conclude with an assessment of the applicability of the described verifiable computation techniques and address limitations for large-scale deployment, followed by an outlook on current development trends.
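To make the certificate-allocation step concrete, here is a minimal sketch with hypothetical supply, demand, and cost figures: the allocation is posed as a linear program and solved with an off-the-shelf LP solver (scipy.optimize.linprog). The zkSNARK layer that would prove the correctness of such a solution, which is the paper's actual contribution, is omitted here.

```python
# Minimal sketch (hypothetical data): renewable-energy-certificate allocation
# as a transportation-style linear program. A verifiable-computation layer
# would prove that the published allocation really is optimal; that part is
# not shown.
import numpy as np
from scipy.optimize import linprog

# x[i, j] = certificates allocated from producer i to consumer j (MWh)
supply = np.array([120.0, 80.0])          # producer capacities
demand = np.array([90.0, 60.0, 50.0])     # consumer requests
cost = np.array([[1.0, 2.0, 3.0],         # e.g. distance-based preference
                 [2.5, 1.0, 1.5]])

n_p, n_c = cost.shape
c = cost.ravel()

# Each producer may allocate at most its supply.
A_ub = np.zeros((n_p, n_p * n_c))
for i in range(n_p):
    A_ub[i, i * n_c:(i + 1) * n_c] = 1.0
b_ub = supply

# Each consumer must receive exactly its demand.
A_eq = np.zeros((n_c, n_p * n_c))
for j in range(n_c):
    A_eq[j, j::n_c] = 1.0
b_eq = demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
allocation = res.x.reshape(n_p, n_c)
print(allocation)   # the result a verifier would check against a proof
```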


2019 ◽  
Vol 214 ◽  
pp. 03023
Author(s):  
F G Sciacca ◽  
M Weber

Predictions of the requirements for LHC computing in Run 3 and Run 4 (HL-LHC) over the course of the next 10 years show a considerable gap between required and available resources, assuming budgets will remain globally flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. The use of large-scale computational resources at HPC centres worldwide is expected to substantially increase the cost-efficiency of the processing. In order to pave the path towards HL-LHC data processing, the Swiss Institute of Particle Physics (CHIPP) has taken the strategic decision to migrate the processing of all the Tier-2 workloads for ATLAS and other LHC experiments from a dedicated x86_64 cluster, which has been in continuous operation and evolution since 2007, to Piz Daint, the current European flagship HPC, which ranks third in the TOP500 at the time of writing. We report on the technical challenges and solutions adopted to migrate to Piz Daint, and on the experience and measured performance for ATLAS over more than one year of running in production.
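As a back-of-envelope illustration of such a resource gap (the abstract quotes no figures, so both growth rates below are purely assumed), a flat budget improved only by technology gains diverges quickly from requirements that grow faster:

```python
# Purely hypothetical numbers: project how flat-budget capacity falls behind
# growing processing requirements over a 10-year horizon.
YEARS = 10
tech_gain = 1.15           # assumed yearly capacity gain per unit cost
need_growth = 1.30         # assumed yearly growth of processing requirements

capacity, need = 1.0, 1.0  # normalised to year 0
for year in range(1, YEARS + 1):
    capacity *= tech_gain  # flat budget: only technology improvements help
    need *= need_growth
    print(f"year {year:2d}: need/capacity = {need / capacity:4.1f}x")
```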


Author(s):  
Ayush Raina ◽  
Christopher McComb ◽  
Jonathan Cagan

Abstract Humans as designers have quite versatile problem-solving strategies. Computer agents, on the other hand, can access large-scale computational resources to solve certain design problems. Hence, if agents can learn from human behavior, a synergistic human-agent problem-solving team can be created. This paper presents an approach to extract human design strategies and implicit rules, purely from historical human data, and use them for design generation. A two-step framework that learns to imitate human design strategies from observation is proposed and implemented. This framework uses deep learning constructs to learn to generate designs without any explicit information about objectives and performance metrics. The framework is designed to interact with the problem through a visual interface, as humans did when solving the problem. It is trained to imitate a set of human designers by observing their design state sequences without inducing problem-specific modelling bias or extra information about the problem. Furthermore, an end-to-end agent is developed that uses this deep learning framework as its core, in conjunction with image processing, to map pixels to design moves as a mechanism to generate designs. Finally, the designs generated by a computational team of these agents are compared with actual human data for teams solving a truss design problem. Results demonstrate that these agents are able to create feasible and efficient truss designs without guidance, showing that this methodology allows agents to learn effective design strategies.


2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Abstract Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second- and third-generation whole-genome sequencing technologies. However, the genotyping of these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to only specific types of variants and are generally prone to various errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, utilizes the changes in the count of k-mers to predict the genotype of structural variants. We have shown that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants, but also has accuracy comparable to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
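As a rough sketch of the underlying idea (not Nebula's actual model), one can count signature k-mers of the variant allele in the read set and compare the observed count with the expected sequencing depth; the thresholds and k-mer selection below are illustrative only.

```python
# Minimal sketch of k-mer-based SV genotyping: compare the observed support
# for a variant's signature k-mers against the expected support at a given
# sequencing depth. Thresholds are illustrative assumptions.
K = 31

def kmers(seq: str, k: int = K) -> set:
    """All k-mers of a sequence (e.g. the breakpoint-spanning alt allele)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def count_in_reads(signature: set, reads: list) -> int:
    """Number of read k-mers that match the variant's signature k-mers."""
    return sum(kmer in signature for read in reads for kmer in kmers(read))

def genotype(signature: set, reads: list, depth: float) -> str:
    # Expected hits if both chromosome copies carried the allele.
    expected = depth * len(signature)
    ratio = count_in_reads(signature, reads) / expected if expected else 0.0
    if ratio < 0.2:
        return "0/0"       # allele essentially absent
    if ratio < 0.7:
        return "0/1"       # roughly half of the expected support
    return "1/1"
```

In a real genotyper the signature k-mers would be pre-selected to be unique to the variant allele, and the decision would be statistical rather than threshold-based; the functions above only show where the k-mer counts enter the call.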


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1342
Author(s):  
Borja Nogales ◽  
Miguel Silva ◽  
Ivan Vidal ◽  
Miguel Luís ◽  
Francisco Valera ◽  
...  

5G communications have become an enabler for the creation of new and more complex networking scenarios, bringing together different vertical ecosystems. Such behavior has been fostered by the network function virtualization (NFV) concept, whose orchestration and virtualization capabilities make it possible to supply network resources dynamically according to service needs. Nevertheless, the integration and performance of heterogeneous network environments, each one supported by a different provider and with specific characteristics and requirements, within a single NFV framework are not straightforward. In this work we propose an NFV-based framework capable of supporting the flexible, cost-effective deployment of vertical services through the integration of two distinct mobile environments and their networks: small-sized unmanned aerial vehicles (SUAVs), supporting a flying ad hoc network (FANET), and vehicles, forming a vehicular ad hoc network (VANET). In this context, a use case involving the public safety vertical is used as an illustrative example to showcase the potential of this framework. This work also includes the technical implementation details of the proposed framework, allowing us to analyse and discuss the delays in the network service deployment process. The results show that the deployment times can be significantly reduced through a distributed VNF configuration function based on the publish–subscribe model.
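As an illustration of why a publish–subscribe configuration step helps, the following minimal sketch (an in-process broker with hypothetical topic and VNF names, not the authors' implementation) shows an orchestrator publishing configuration once while every subscribed VNF applies it independently.

```python
# Minimal sketch: a tiny publish-subscribe broker distributing configuration
# to many VNF instances in parallel instead of configuring them one by one.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic: str, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict):
        for callback in self.subscribers[topic]:
            callback(message)

class VNF:
    """Stand-in for a virtual network function waiting for its configuration."""
    def __init__(self, name: str, broker: Broker):
        self.name = name
        broker.subscribe("config/" + name, self.apply)
        broker.subscribe("config/all", self.apply)

    def apply(self, cfg: dict):
        print(f"{self.name}: applying {cfg}")

broker = Broker()
vnfs = [VNF(n, broker) for n in ("suav-router-1", "vanet-gw-1", "edge-proxy")]

# The orchestrator publishes once; every subscribed VNF configures itself,
# which is the behaviour that shortens the overall deployment time.
broker.publish("config/all", {"dns": "10.0.0.53", "mtu": 1400})
broker.publish("config/suav-router-1", {"role": "fanet-relay"})
```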


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2648
Author(s):  
Muhammad Aamir ◽  
Tariq Ali ◽  
Muhammad Irfan ◽  
Ahmad Shaf ◽  
Muhammad Zeeshan Azam ◽  
...  

Natural disasters not only disturb the human ecological system but also destroy the property and critical infrastructure of human societies, and can even lead to permanent changes in the ecosystem. Disasters can be caused by naturally occurring events such as earthquakes, cyclones, floods, and wildfires. Many deep learning techniques have been applied by various researchers to detect and classify natural disasters and thereby reduce losses in ecosystems, but detection of natural disasters still faces issues due to the complex and imbalanced structure of the images. To tackle this problem, we propose a multilayered deep convolutional neural network. The proposed model works in two blocks: a Block-I convolutional neural network (B-I CNN) for the detection and occurrence of disasters, and a Block-II convolutional neural network (B-II CNN) for the classification of natural disaster intensity types with different filters and parameters. The model is tested on 4428 natural images, and performance is calculated and expressed as different statistical values: sensitivity (SE), 97.54%; specificity (SP), 98.22%; accuracy rate (AR), 99.92%; precision (PRE), 97.79%; and F1-score (F1), 97.97%. The overall accuracy of the whole model is 99.92%, which is competitive with and comparable to state-of-the-art algorithms.
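As an illustration of the two-block structure described above (layer sizes and class counts are assumptions, not the paper's exact architecture), a minimal PyTorch sketch could look as follows, with Block-I deciding whether an image shows a disaster and Block-II classifying the intensity type.

```python
# Minimal sketch of a two-block CNN: Block-I detects whether an image shows a
# disaster; Block-II classifies the intensity type. All sizes are assumed.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

class TwoBlockCNN(nn.Module):
    def __init__(self, n_intensity_classes=4):
        super().__init__()
        self.block1 = nn.Sequential(           # B-I: disaster / no disaster
            conv_block(3, 16), conv_block(16, 32),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )
        self.block2 = nn.Sequential(           # B-II: intensity type
            conv_block(3, 32), conv_block(32, 64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_intensity_classes),
        )

    def forward(self, x):
        return self.block1(x), self.block2(x)

model = TwoBlockCNN()
images = torch.rand(4, 3, 128, 128)            # stand-in natural images
detect, intensity = model(images)
is_disaster = detect.argmax(1).bool()
# Intensity predictions are only meaningful where a disaster was detected.
intensity_pred = intensity.argmax(1)[is_disaster]
```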

