Locating the Source of Diffusion in Complex Networks via Gaussian-Based Localization and Deduction

2019 ◽  
Vol 9 (18) ◽  
pp. 3758 ◽  
Author(s):  
Xiang Li ◽  
Xiaojie Wang ◽  
Chengli Zhao ◽  
Xue Zhang ◽  
Dongyun Yi

Locating the source of a diffusion-like process is a fundamental and challenging problem in complex networks; solving it can help inhibit the outbreak of epidemics among humans, suppress the spread of rumors on the Internet, prevent cascading failures of power grids, and more. However, our ability to accurately locate the diffusion source is strictly limited by the incomplete information of nodes and the inevitable randomness of the diffusion process. In this paper, we propose an efficient optimization approach via maximum likelihood estimation to locate the diffusion source in complex networks with limited observations. By modeling the informed times of the observers, we derive an optimal source localization solution for arbitrary trees and then extend it to general graphs via proper approximations. Numerical analyses on synthetic and real networks indicate that our method is superior to several benchmark methods in terms of average localization accuracy, high-precision localization, and approximate area localization. In addition, its low computational cost enables our method to be widely applied to the source localization problem in large-scale networks. We believe that our work provides valuable insights into the interplay between information diffusion and source localization in complex networks.
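
The observer-based likelihood idea can be illustrated with a minimal sketch, assuming `networkx` and a roughly Gaussian per-hop delay; the scoring below is a generic illustration of this family of estimators, not the authors' exact method. Each candidate source is scored by how well the observers' informed times grow linearly with their distance from it.

```python
import networkx as nx
import numpy as np

def locate_source(G, observer_times):
    """Score each node as a candidate diffusion source.

    observer_times: dict {observer_node: informed_time}. For a candidate
    source s, informed times should grow roughly linearly with the
    shortest-path distance d(s, o); we pick the candidate whose residual
    variance under that linear model is lowest (illustrative heuristic).
    """
    observers = list(observer_times)
    times = np.array([observer_times[o] for o in observers], dtype=float)
    best, best_score = None, np.inf
    for s in G.nodes:
        d = np.array([nx.shortest_path_length(G, s, o) for o in observers],
                     dtype=float)
        # Fit t ~ t0 + mu * d by least squares; score by residual variance.
        A = np.vstack([np.ones_like(d), d]).T
        coef, res, *_ = np.linalg.lstsq(A, times, rcond=None)
        score = res[0] if res.size else 0.0
        if coef[1] > 0 and score < best_score:  # propagation speed must be > 0
            best, best_score = s, score
    return best
```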

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Franz Kaiser ◽  
Vito Latora ◽  
Dirk Witthaut

Abstract In our daily lives, we rely on the proper functioning of supply networks, from power grids to water transmission systems. A single failure in these critical infrastructures can lead to a complete collapse through a cascading failure mechanism. Counteracting strategies are thus heavily sought after. In this article, we introduce a general framework to analyse the spreading of failures in complex networks and demonstrate that not only decreasing but also increasing the connectivity of the network can be an effective method to contain damage. We rigorously prove the existence of certain subgraphs, called network isolators, that can completely inhibit any failure spreading, and we show how to create such isolators in synthetic and real-world networks. The addition of selected links can thus prevent large-scale outages, as demonstrated for power transmission grids.
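
The failure-spreading computation that underlies this kind of analysis can be sketched in a linear (DC) flow model, where the effect of a single line outage on every other line follows from the pseudoinverse of the weighted graph Laplacian. This is a generic sketch with `networkx`/`numpy`, not the authors' code:

```python
import numpy as np
import networkx as nx

def flow_changes_after_failure(G, failed_edge, injections):
    """Flow change on every surviving line after one line fails.

    Linear flow sketch: nodal balance L @ theta = p, with L the weighted
    graph Laplacian (edge weights = susceptances); the outage effect is
    given by line outage distribution factors built from pinv(L).
    """
    nodes = list(G.nodes)
    i = {n: k for k, n in enumerate(nodes)}
    L = nx.laplacian_matrix(G, nodelist=nodes, weight="weight").toarray().astype(float)
    X = np.linalg.pinv(L)
    p = np.array([injections.get(n, 0.0) for n in nodes])
    theta = X @ p                                # nodal potentials / angles

    a, b = failed_edge
    w_ab = G[a][b].get("weight", 1.0)
    f0 = w_ab * (theta[i[a]] - theta[i[b]])      # pre-failure flow on (a, b)
    den = 1.0 - w_ab * (X[i[a], i[a]] + X[i[b], i[b]] - 2 * X[i[a], i[b]])

    changes = {}
    for u, v in G.edges:
        if {u, v} == {a, b}:
            continue                             # skip the failed line itself
        w = G[u][v].get("weight", 1.0)
        num = w * (X[i[u], i[a]] - X[i[u], i[b]] - X[i[v], i[a]] + X[i[v], i[b]])
        changes[(u, v)] = num / den * f0         # den ~ 0 if (a, b) is a bridge
    return changes
```

An isolator, in this picture, is a subgraph for which these flow changes vanish identically on one side of the network, whatever line fails on the other side.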


Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1216
Author(s):  
Jedidiah Yanez-Sierra ◽  
Arturo Diaz-Perez ◽  
Victor Sosa-Sosa

One of the main problems in graph analysis is the correct identification of relevant nodes for spreading processes. Spreaders are crucial for accelerating or hindering information diffusion, increasing product exposure, controlling diseases and rumors, and more. Correct identification of spreaders is therefore a relevant task for optimally using the network structure and ensuring a more efficient flow of information. Additionally, network topology has proven to play a relevant role in spreading processes. However, most existing methods based on local, global, or hybrid centrality measures select relevant nodes based only on their ranking values; they do not intentionally account for the nodes' distribution across the graph. In this paper, we propose a simple yet effective method that takes advantage of the underlying graph topology to guarantee that the selected nodes are not only relevant but also well-scattered. Our proposal also suggests how to define the number of spreaders to select. The approach is composed of two phases: first, graph partitioning; and second, identification and distribution of relevant nodes. We have tested our approach by applying the SIR spreading model over nine real complex networks. The experimental results showed that the set of relevant nodes identified by our approach is more influential and better scattered than those selected by several reference algorithms, including degree, closeness, betweenness, VoteRank, HybridRank, and IKS. The results further showed an improvement in the propagation influence when combining our distribution strategy with classical metrics, such as degree, outperforming computationally more complex strategies. Moreover, our proposal has low computational complexity and can be applied to large-scale networks.
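
A minimal sketch of the two-phase idea (partition first, then pick one relevant node per part) might look like the following. Greedy modularity communities stand in for the paper's partitioning step, and degree stands in for its relevance ranking; both are assumptions:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def scattered_spreaders(G, k):
    """Pick k spreaders that are both central and well-scattered.

    Phase 1: partition the graph into exactly k communities.
    Phase 2: take the highest-degree node from each part, so the
    selected spreaders are spread out over the graph by construction.
    """
    parts = greedy_modularity_communities(G, cutoff=k, best_n=k)
    return [max(part, key=G.degree) for part in parts]

# Example: 10 spreaders on a scale-free test graph.
G = nx.barabasi_albert_graph(1000, 3, seed=1)
print(scattered_spreaders(G, 10))
```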


2019 ◽  
Vol 9 (18) ◽  
pp. 3644 ◽  
Author(s):  
Xiang Li ◽  
Xiaojie Wang ◽  
Chengli Zhao ◽  
Xue Zhang ◽  
Dongyun Yi

Epidemic source localization is one of the most meaningful areas of research in complex networks, as it helps address the problem of infectious disease spread. Limited by the incomplete information of nodes and the inevitable randomness of the spread process, locating the epidemic source is difficult. In this paper, we propose an efficient algorithm via Bayesian estimation to locate the epidemic source and find the initial time in complex networks with sparse observers. By modeling the infected times of the observers, we put forward a valid epidemic source localization method for tree networks and further extend it to general networks via a maximum spanning tree. Numerical analyses on synthetic and empirical networks show that our algorithm has higher source localization accuracy than the comparison algorithms. In particular, our algorithm performs better as the randomness of the spread path increases. We believe that our method can provide an effective reference for epidemic spread and source localization in complex networks.
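
The extension step named in the abstract, reducing a general network to a tree before applying a tree-based estimator (such as the one sketched earlier), can be written in a few lines with `networkx`. The "weight" edge attribute, assumed here to encode per-edge spread likelihood, is hypothetical:

```python
import networkx as nx

def localize_on_general_graph(G, observer_times, tree_estimator):
    """Approximate a general network by its maximum spanning tree,
    then run a tree-based source estimator on the tree.

    Higher-weight edges are treated as the more probable spread paths,
    so the maximum spanning tree keeps the likeliest spread skeleton.
    """
    T = nx.maximum_spanning_tree(G, weight="weight")
    return tree_estimator(T, observer_times)
```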


2017 ◽  
Vol 29 (1) ◽  
pp. 26-36 ◽  
Author(s):  
Ryu Takeda ◽  
Kazunori Komatani

[Figure: sound source localization and the problem setting] We focus on the problem of localizing soft/weak voices recorded by small humanoid robots, such as NAO. Sound source localization (SSL) for such robots requires fast processing and noise robustness owing to their restricted resources and the internal noise close to the microphones. Multiple signal classification using generalized eigenvalue decomposition (GEVD-MUSIC) is a promising method for SSL. It achieves noise robustness by whitening the robot's internal noise using prior noise information. However, whitening increases the computational cost and creates a direction-dependent bias in the localization score, which degrades the localization accuracy. We have thus developed a new implementation of GEVD-MUSIC based on steering vector transformation (TSV-MUSIC). Applying a transformation equivalent to whitening to the steering vectors in advance reduces the real-time computational cost of TSV-MUSIC. Moreover, normalization of the transformed vectors cancels the direction-dependent bias and improves the localization accuracy. Experiments using simulated data showed that TSV-MUSIC had the highest accuracy of the methods tested. An experiment using real recorded data showed that TSV-MUSIC outperformed GEVD-MUSIC and other MUSIC methods in terms of localization accuracy by about 4 points under low signal-to-noise-ratio conditions.
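
All of these variants refine the same MUSIC pseudospectrum, which fits in a few lines of numpy. This is a generic sketch with placeholder steering vectors, not the authors' implementation; the steering-vector normalization in the loop echoes the normalization TSV-MUSIC uses to cancel the direction-dependent bias:

```python
import numpy as np

def music_spectrum(X, steering, n_sources):
    """Plain MUSIC pseudospectrum (the baseline the GEVD/TSV variants refine).

    X: (n_mics, n_frames) observations.
    steering: (n_dirs, n_mics) complex steering vectors, one per candidate
    direction (placeholder array geometry).
    """
    R = X @ X.conj().T / X.shape[1]            # spatial covariance estimate
    w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = V[:, : X.shape[0] - n_sources]        # noise subspace
    P = np.empty(len(steering))
    for k, a in enumerate(steering):
        a = a / np.linalg.norm(a)              # per-direction normalization
        P[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return P                                   # peaks indicate source directions
```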


2014 ◽  
Vol 2014 ◽  
pp. 1-9
Author(s):  
Diana Irazú Escalona-Vargas ◽  
Ivan Lopez-Arevalo ◽  
David Gutiérrez

We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics to a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single-source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, so they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate how little effect the selection of a particular metaheuristic, and variations in its operational parameters, have on this optimization problem.
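
A standard instance of such a nonparametric multicompare test is the Friedman test over matched problems, available in scipy. The error values below are synthetic placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical localization errors (mm) of four metaheuristics on the
# same benchmark problems (rows = problems, columns = SA, GA, PSO, DE).
rng = np.random.default_rng(0)
errors = rng.gamma(shape=2.0, scale=3.0, size=(20, 4))

# Friedman test: do the methods differ when compared over matched problems?
stat, p = stats.friedmanchisquare(*errors.T)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")
# If p is small, a post-hoc pairwise test (e.g., Wilcoxon with a
# Holm/Bonferroni correction) identifies which methods actually differ.
```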


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3747 ◽  
Author(s):  
Zhixin Liu ◽  
Rui Wang ◽  
Yongjun Zhao

Current bias compensation methods for distributed localization consider the noise in time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements but ignore the negative influence of sensor location uncertainties on source localization accuracy. In this paper, a new bias compensation method for distributed localization is therefore proposed to improve localization accuracy. We derive the theoretical bias of the maximum likelihood estimate (MLE) when both sensor location errors and measurement noise are present; subtracting this theoretical bias from the rough MLE yields a more accurate source location estimate. Theoretical analysis and simulation results indicate that the theoretical bias derived in this paper matches the actual bias well at moderate noise levels, confirming the correctness of the derivation. Furthermore, after bias compensation, the estimation accuracy of the proposed method improves noticeably over existing methods.
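
The compensation step itself, subtracting a theoretically derived bias from a biased estimate, can be illustrated on a toy nonlinear estimator; the ranging setup below is a made-up stand-in for the TDOA/FDOA problem, not the paper's derivation:

```python
import numpy as np

# Toy bias compensation: estimate r**2 from noisy range measurements.
# Since E[(r + n)**2] = r**2 + sigma**2 for zero-mean Gaussian noise n,
# the theoretical bias of the naive estimator is exactly sigma**2.
rng = np.random.default_rng(1)
r_true, sigma = 100.0, 2.0
r_meas = r_true + sigma * rng.standard_normal(100_000)

naive = r_meas ** 2                        # biased estimates of r**2
compensated = naive - sigma ** 2           # subtract the theoretical bias

print(np.mean(naive) - r_true ** 2)        # ~ sigma**2 = 4 (residual bias)
print(np.mean(compensated) - r_true ** 2)  # ~ 0 (bias removed)
```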


2021 ◽  
Author(s):  
Ghazi D. AL-Qahtani ◽  
Noah Berlow

Abstract Multilateral wells are an evolution of horizontal wells in which several wellbore branches radiate from the main borehole. In the last two decades, multilateral wells have been increasingly utilized in producing hydrocarbon reservoirs. The main advantage of this technology over conventional single-bore wells is the additional access to reservoir rock: it maximizes reservoir contact with fewer resources. Today, multilateral wells are rapidly becoming more complex in both design and architecture (i.e., extended-reach wells, maximum reservoir contact, and extreme reservoir contact wells). Certain multilateral design templates prevail in the industry, such as the fork and fishbone types, which tend to be populated throughout the reservoir of interest with no significant changes to the original architecture and therefore may not fully realize the reservoir's potential. Optimal placement of multilateral wells is a multivariable problem: determining the best well locations and trajectories in a hydrocarbon reservoir with the ultimate objectives of maximizing productivity and recovery. The placement of multilateral wells can be subject to many constraints, such as the number of wells required, maximum length limits, and overall economics. This paper introduces a novel technology for the placement of multilateral wells in hydrocarbon reservoirs utilizing a transshipment network optimization approach. The method generates scenarios of multiple wells with different designs, honoring the most favorable completion points in a reservoir. In addition, the algorithm finds the most favorable locations and trajectories for the multilateral wells in both local and global terms, and a partitioning algorithm is uniquely utilized to reduce the computational cost of the process. The proposed method will not only create different multilateral designs; it will also justify the trajectory of every borehole section generated. The method is capable of constructing hundreds of multilateral wells with design variations in large-scale reservoirs. As reservoir complexity (e.g., active forces that influence fluid mobility) and heterogeneity dictate variability in performance across different areas of the reservoir, multilateral wells should be constructed to capture the most productive zones. The new method also allows different levels of branching for the laterals (i.e., laterals can emanate from the motherbore, from other laterals, or from subsequent branches). These features set the stage for a new generation of multilateral wells that achieve the most effective reservoir contact.
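
A transshipment formulation of this kind can be sketched as a minimum-cost network flow, for which `networkx` has a solver. Every node name, demand, capacity, and cost below is a hypothetical placeholder, not the paper's reservoir model:

```python
import networkx as nx

# Transshipment sketch: completion points "supply" flow that must reach a
# producing well through candidate borehole segments; the min-cost solution
# selects which segments (trajectories) to build.
G = nx.DiGraph()
G.add_node("well", demand=3)                        # sink: collects 3 units
for cp in ("cp1", "cp2", "cp3"):
    G.add_node(cp, demand=-1)                       # sources: completion points
G.add_edge("cp1", "junction", weight=4, capacity=2) # candidate lateral segments
G.add_edge("cp2", "junction", weight=2, capacity=2)
G.add_edge("cp3", "well", weight=7, capacity=1)     # direct lateral to the well
G.add_edge("junction", "well", weight=1, capacity=3)

flow = nx.min_cost_flow(G)   # segments carrying positive flow form the design
print(flow)
```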


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Marcelo V. W. Zibetti ◽  
Gabor T. Herman ◽  
Ravinder R. Regatte

Abstract In this study, a fast data-driven optimization approach, named bias-accelerated subset selection (BASS), is proposed for learning efficacious sampling patterns (SPs) with the purpose of reducing scan time in large-dimensional parallel MRI. BASS is applicable when Cartesian fully-sampled k-space measurements of a specific anatomy are available for training and the reconstruction method for undersampled measurements is specified; such information is used to define the efficacy of any SP for recovering the values at the non-sampled k-space points. BASS produces a sequence of SPs with the aim of finding one of a specified size with (near) optimal efficacy. BASS was tested with five reconstruction methods for parallel MRI based on low-rankness and sparsity that allow a free choice of the SP. Three datasets were used for testing: two of high-resolution brain images (T2-weighted and T1ρ-weighted images, respectively) and one of knee images for quantitative mapping of the cartilage. The proposed approach has low computational cost and fast convergence; in the tested cases it obtained SPs up to 50 times faster than the currently best greedy approach. Reconstruction quality increased by up to 45% over that provided by variable-density and Poisson-disk SPs for the same scan time. Optionally, the scan time can be nearly halved without loss of reconstruction quality. Quantitative MRI and prospective accelerated MRI results show improvements. Compared with greedy approaches, BASS rapidly learns effective SPs for various reconstruction methods, using larger SPs and larger datasets, enabling better selection of sampling-reconstruction pairs for specific MRI problems.
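
The shape of the problem BASS solves can be sketched as a generic subset-selection loop over candidate k-space points. This is not BASS itself; the `efficacy` callback and the single-point swap rule are placeholders for the paper's bias-accelerated update:

```python
import numpy as np

def select_sampling_pattern(n_candidates, efficacy, size, iters=100, seed=0):
    """Generic stochastic subset selection for sampling patterns.

    efficacy: maps a boolean mask over candidate k-space points to a
    reconstruction-quality score (higher is better). The loop proposes
    single-point swaps and keeps any swap that does not hurt the score.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_candidates, dtype=bool)
    mask[rng.choice(n_candidates, size, replace=False)] = True
    score = efficacy(mask)
    for _ in range(iters):
        out_i = rng.choice(np.flatnonzero(mask))    # drop one sampled point
        in_i = rng.choice(np.flatnonzero(~mask))    # add one unsampled point
        mask[out_i], mask[in_i] = False, True
        new = efficacy(mask)
        if new >= score:
            score = new                              # keep the improving swap
        else:
            mask[out_i], mask[in_i] = True, False    # revert the swap
    return mask, score
```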


2021 ◽  
Author(s):  
Amita Giri ◽  
Lalan Kumar ◽  
Nilesh Kurwale ◽  
Tapan K. Gandhi

Abstract Brain Source Localization (BSL) using the Electroencephalogram (EEG) has been a useful noninvasive modality for the diagnosis of epileptogenic zones and the study of event-related potentials and brain disorders. The inverse solution of BSL is limited by high computational cost and localization error. Performance is additionally limited by the head shape assumption and the corresponding harmonic basis functions. In this work, a BSL based on anatomical harmonic bases (Spherical Harmonics (SH) and, more particularly, Head Harmonics (H2)) is presented. The spatio-temporal four-shell head model is formulated in the SH domain. The performance of spatial-subspace-based Multiple Signal Classification (MUSIC) and Recursively Applied and Projected (RAP) MUSIC is compared with that of the proposed SH and H2 counterparts on simulated data. SH- and H2-domain processing effectively resolves the problem of high computational cost without sacrificing inverse source localization accuracy. The proposed H2 MUSIC was additionally validated for epileptogenic zone localization on clinical EEG data. The proposed framework offers clinicians an effective solution for automated and time-efficient seizure localization.
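
The basis construction in the SH domain can be sketched with scipy's spherical harmonics; the H2 refinement is not reproduced here, and the angle conventions and column ordering are assumptions:

```python
import numpy as np
from scipy.special import sph_harm

def sh_basis(theta, phi, order):
    """Spherical-harmonic basis matrix for sensor directions.

    theta: polar angles, phi: azimuths (arrays, radians). Returns an
    (n_sensors, (order + 1)**2) complex matrix whose columns are Y_n^m;
    the forward model is then expressed in this low-dimensional basis.
    """
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            # scipy convention: sph_harm(m, n, azimuth, polar)
            cols.append(sph_harm(m, n, phi, theta))
    return np.stack(cols, axis=-1)
```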

