Predictive Sampling Method for Spread Models in Networks

2021 ◽  
Vol 23 ◽  
Author(s):  
Caijun Qin

This paper proposes a novel, exploration-based network sampling algorithm called caterpillar quota walk sampling (CQWS), inspired by the caterpillar tree. Network sampling identifies a subset of nodes and edges from a network, creating an induced graph. Beginning from an initial node, exploration-based sampling algorithms grow the induced set by traversing and tracking unvisited neighboring nodes from the original network. Tunable and trainable parameters allow CQWS to maximize the sum of the degrees of the induced graph over multiple trials when sampling dense networks. A network spread model finds effective use in various applications, including tracking the spread of epidemics, visualizing information transmission through social media, and modeling the cell-to-cell spread of neurodegenerative diseases. CQWS generates a spread model as its sample by visiting the highest-degree neighbors of previously visited nodes. For each previously visited node, a top proportion of the highest-degree neighbors fulfills a quota, and each branches into a new caterpillar tree. Sampling more high-degree nodes is a common objective across applications. Many exploration-based sampling algorithms suffer drawbacks that limit the sum of degrees of visited nodes and thus the number of high-degree nodes visited. Furthermore, a strategy may not adapt to volatile degree frequencies throughout the original network, which influences how deep into the original network an algorithm can sample. This paper analyzes CQWS against four other exploration-based network sampling algorithms in tackling these two problems by sampling sparse and dense randomly generated networks.
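The quota mechanism described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' exact CQWS specification: the function name, the `quota` parameter, and the adjacency-dict representation are assumptions.

```python
from collections import deque

def cqws_sample(adj, start, quota=0.5, max_nodes=50):
    """Sketch of a quota-walk sampler: from each visited node, only the
    top `quota` fraction of its highest-degree unvisited neighbors are
    queued, each branching into a new caterpillar tree.
    `adj` maps each node to a set of neighbors."""
    visited = {start}
    frontier = deque([start])
    while frontier and len(visited) < max_nodes:
        node = frontier.popleft()
        # Rank unvisited neighbors by their degree in the original network.
        neighbors = sorted((n for n in adj[node] if n not in visited),
                           key=lambda n: len(adj[n]), reverse=True)
        if not neighbors:
            continue
        k = max(1, int(quota * len(neighbors)))  # the quota
        for n in neighbors[:k]:
            visited.add(n)
            frontier.append(n)
    return visited
```

Favoring high-degree neighbors at each step is what biases the induced graph toward a large degree sum, the objective the paper optimizes over multiple trials.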

2016 ◽  
Vol 27 (05) ◽  
pp. 1650052 ◽  
Author(s):  
Zeinab S. Jalali ◽  
Alireza Rezvanian ◽  
Mohammad Reza Meybodi

Due to their large scale and limitations on access, it is hard or infeasible to directly study and analyze most online social networks in a reasonable amount of time. Hence, network sampling has emerged as a suitable technique for studying and analyzing real networks. The main goal of sampling online social networks is to construct a small-scale sampled network that preserves the most important properties of the original network. In this paper, we propose two sampling algorithms for online social networks based on spanning trees. The first proposed algorithm finds several spanning trees from randomly chosen starting nodes; the edges in these spanning trees are then ranked according to the number of times each edge appears in the set of found spanning trees. The sampled network is then constructed as a sub-graph of the original network containing a fraction of nodes that are incident on highly ranked edges. To avoid traversing the entire network, the second sampling algorithm uses partial spanning trees; it is otherwise similar to the first. Several experiments are conducted to examine the performance of the proposed sampling algorithms on well-known real networks. The obtained results, in comparison with other popular sampling methods, demonstrate the efficiency of the proposed algorithms in terms of Kolmogorov–Smirnov distance (KSD), skew divergence distance (SDD), and normalized distance (ND).
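The edge-ranking step can be illustrated with randomized depth-first spanning trees. This is a minimal sketch of the general idea, assuming randomized traversal from random roots; the authors' exact tree-construction procedure may differ.

```python
import random
from collections import Counter

def spanning_tree_edge_ranks(adj, num_trees=10, seed=0):
    """Build several randomized spanning trees (random root, shuffled
    DFS), then rank each edge by how many trees it appears in.
    `adj` maps each node to a set of neighbors; edges are stored as
    frozensets so {u,v} and {v,u} count as one edge."""
    rng = random.Random(seed)
    counts = Counter()
    nodes = list(adj)
    for _ in range(num_trees):
        root = rng.choice(nodes)
        visited = {root}
        stack = [root]
        while stack:
            u = stack.pop()
            nbrs = list(adj[u])
            rng.shuffle(nbrs)
            for v in nbrs:
                if v not in visited:
                    visited.add(v)
                    counts[frozenset((u, v))] += 1  # tree edge found
                    stack.append(v)
    return counts
```

The sampled sub-graph would then keep nodes incident on the most frequently counted edges; bridge-like edges that every spanning tree must use naturally rank highest.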


Author(s):  
D. V. Iatsenko ◽  
B. B. Zhmaylov

In many pattern recognition problems solved using convolutional neural networks (CNNs), one important characteristic of the network architecture is the size of the convolution kernel, since it coincides with the size of the largest element that can act as a recognition feature. However, increasing the size of the convolution kernel greatly increases the number of tunable network parameters. The method of the effective receptive field was first applied in AlexNet in 2012. This article discusses the practical application of increasing the effective receptive field without increasing the convolution kernel size. A presented example of a small network designed to recognize fire in a picture demonstrates the use of an effective receptive field built from a stack of smaller convolutions. Comparison of the original network with a large convolution kernel against a modified network with a stack of smaller kernels shows that, with network characteristics such as prediction accuracy and prediction time held equal, the number of tunable parameters in the network with an effective receptive field is significantly reduced.


2021 ◽  
Vol 7 (9) ◽  
pp. 173
Author(s):  
Eduardo Paluzo-Hidalgo ◽  
Rocio Gonzalez-Diaz ◽  
Miguel A. Gutiérrez-Naranjo ◽  
Jónathan Heras

Simplicial-map neural networks are a recent neural network architecture induced by simplicial maps defined between simplicial complexes. It has been proved that simplicial-map neural networks are universal approximators and that they can be refined to be robust to adversarial attacks. In this paper, the refinement toward robustness is optimized by reducing the number of simplices (i.e., nodes) needed. We have shown experimentally that such a refined neural network is equivalent to the original network as a classification tool but requires much less storage.


2004 ◽  
Vol 18 (17n19) ◽  
pp. 2394-2400 ◽  
Author(s):  
L. P. CHI ◽  
X. CAI

Through the study of the US airport network, we find that the network displays a high degree of error tolerance and an extreme vulnerability to attacks. The topological properties, including average degree, clustering coefficient, diameter, and efficiency, are only slightly affected when a few of the least connected airports are removed, but change drastically with the removal of a few of the most connected ones. The degree distribution and the weight distribution under errors behave similarly to those of the original network. Under attacks, the degree distribution changes from a two-segment power law to a monotonic one, while the weight distribution still displays a power-law tail, with the exponent changing from 1.50 to 1.24.
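The error-versus-attack contrast can be reproduced in miniature: removing a low-degree node barely changes a hub-dominated graph, while removing the hub collapses it. A toy sketch (the star graph and helper name are illustrative, not the paper's data):

```python
def remove_and_average_degree(adj, targets):
    """Average degree of the graph induced by deleting `targets`.
    `adj` maps each node to a set of neighbors."""
    keep = {u: {v for v in nbrs if v not in targets}
            for u, nbrs in adj.items() if u not in targets}
    return sum(len(nbrs) for nbrs in keep.values()) / max(len(keep), 1)

# A star: node 0 is the hub, nodes 1..5 are leaves.
star = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}
error_case = remove_and_average_degree(star, {5})   # remove a least-connected node
attack_case = remove_and_average_degree(star, {0})  # remove the most connected node
```

Here the "error" leaves the average degree near its original value, while the "attack" drives it to zero, the same asymmetry the paper measures on the real airport network.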


Photonics ◽  
2021 ◽  
Vol 8 (9) ◽  
pp. 400
Author(s):  
Zhe Yang ◽  
Yu-Ming Bai ◽  
Li-Da Sun ◽  
Ke-Xin Huang ◽  
Jun Liu ◽  
...  

We propose a concurrent single-pixel imaging, object location, and classification scheme based on deep learning (SP-ILC). We used multitask learning, developed a new loss function, and created a dataset suitable for this project. The dataset consists of scenes that contain different numbers of possibly overlapping objects of various sizes. The results we obtained show that SP-ILC runs concurrent processes to locate objects in a scene with a high degree of precision in order to produce high quality single-pixel images of the objects, and to accurately classify objects, all with a low sampling rate. SP-ILC has potential for effective use in remote sensing, medical diagnosis and treatment, security, and autonomous vehicle control.


2021 ◽  
Author(s):  
Amanda Lucas Pereira ◽  
Manoela Kohler ◽  
Marco Aurélio C. Pacheco

Most state-of-the-art Convolutional Neural Network (CNN) architectures are manually crafted by experts, usually drawing on background knowledge from extensive working experience in this research field. This manner of designing CNNs is therefore highly limited, and many approaches have been developed to make the procedure more automatic. This paper presents a case study that tackles the architecture search problem by using a Genetic Algorithm (GA) to optimize an existing CNN architecture. The proposed methodology uses VGG-16 convolutional blocks as its building blocks; each individual in the GA corresponds to a possible model built from these blocks with varying filter sizes, keeping the original network's architectural connections fixed. The selection of the fittest individuals is done according to their weighted F1-score when training from scratch on the available data. To evaluate the best individual found by the proposed methodology, its performance is compared to a VGG-16 model trained from scratch on the same data.
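Because the connections are fixed and only the per-block filter configuration varies, a GA individual can be a flat list of genes, one per VGG-16 block. The operators below are a generic sketch of that encoding (the gene choices and operator details are assumptions, not the paper's exact setup):

```python
import random

def mutate(individual, choices, rate, rng):
    """Each gene (a filter setting for one VGG-16-style block) is
    resampled from `choices` with probability `rate`."""
    return [rng.choice(choices) if rng.random() < rate else g
            for g in individual]

def crossover(a, b, rng):
    """Single-point crossover: the fixed block layout is preserved,
    only per-block settings are exchanged."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]
```

In a full run, each individual would be decoded into a model, trained from scratch, and assigned its weighted F1-score as fitness; the fittest individuals are then selected, crossed, and mutated for the next generation.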


2020 ◽  
Vol 66 (1) ◽  
pp. 54-63 ◽  
Author(s):  
E.I. Olekhnovich ◽  
A.I. Manolov ◽  
A.V. Pavlenko ◽  
D.N. Konanov ◽  
D.E. Fedorov ◽  
...  

Numerous studies confirm the high degree of involvement of the intestinal microbiota in most processes in the human body. There is evidence for the effect of the intestinal microbiota on the success of chemo- and immunotherapy of oncological diseases. It is assumed that the intestinal microbiota exerts an indirect effect on antitumor therapy through such mechanisms as general immunomodulation, an increase in cells that specifically respond to antigens of both microbial and tumor origin, and the metabolism and degradation (utilization) of drug compounds. The intestinal microbiota is currently considered an additional but important target for studying the effective use of antitumor therapy and reducing its toxicity, as well as a predictor of the success of immunotherapy. In this review, we highlight the results of studies published to date that confirm the relationship between the gut microbiome and the antitumor efficacy of immune checkpoint inhibitors. Despite the promising and theoretically substantiated conclusions, there are still some discrepancies among the existing data that will have to be addressed in order to facilitate the further development of this direction.


2014 ◽  
Vol 15 (3-4) ◽  
pp. 25-37 ◽  
Author(s):  
A. V. Bogovin

This article covers the dynamics of the current state of natural ecosystems under the rapid increase in anthropogenic impact in recent decades, with its negative effects on the environment and on the normal reproduction and survival of the biota, a resource and the most crucial integral part of the biosphere. It is noted, in particular, that anthropogenic influence has become a powerful factor in the evolution of the biosphere: biological systems have begun to function within an anthropogenically transformed circulation of substances that often severely impairs the harmonization of their self-recovery processes, forcing society to revise its behavior in the "man - nature - economy - living environment" system. The conceptual aspects of a strategic approach to the environmental and anthropogenic use of ecological and biological systems are presented. Against this background, the article argues for the necessity of a transition from the unitary-consumptive use of biotic systems to a system- (biosphere-) balanced one, in which their component parts (soil, plants, animals, and other forms of terrestrial and aquatic ecosystems) are considered not only as sources of products necessary and useful for humans, as basic production resources, and as objects of the application of labor, but also as inseparable, functionally interacting parts of a natural whole, beyond which development and existence are impossible. Depending on the task, the assessment of ecological and biological formations and the optimization of their use can be carried out at (1) the global-biosphere, (2) the landscape-ecological, and (3) the elementary biogeocenotic levels of organization of natural and anthropogenically transformed systems. The article presents methodological principles for assessing ecological and biological systems under their biosphere-balanced use.
It is stated that the main focus of their study and assessment is a systematic approach that, in addition to traditional methods of identifying structural-elementary indicators and group-biomorphological, environmental, rhythmic, and many other functional features, draws on the study of the nonlinear dynamics of processes in complex open ecosystems with a deterministic-chaotic type of development and a high degree of random factors in their formation. The high appropriateness of accounting for the hemeroby of representatives of the biota is mentioned, that is, their genetic and physiological responses to the disturbance of edaphotopes or cultivated land, for establishing the degree of degradation of natural ecosystems and acceptable thresholds of anthropogenic load on them. It is noted that implementing a balanced use of the natural resources of the biosphere requires changes in traditional thinking and the development of skills in an innovative systemic approach to the analysis of the surrounding material world: the ability to see the invisible behind the visible phenomena of nature, that is, intra- and intersystem communications and the laws of the present and future development of ecological systems, and on this foundation to properly build models of effective use. Human disturbance of the balance in one or more parts of a system, due to the action of intra- and intersystem balancing, inevitably changes the entire system and shifts it into new modes of functioning, which are not always desirable. The task is to prevent the anthropogenic variability of natural systems from exceeding the limits of their adaptive stability.


2021 ◽  
Vol 15 (4) ◽  
pp. 1-27
Author(s):  
Nesreen K. Ahmed ◽  
Nick Duffield ◽  
Ryan A. Rossi

Temporal networks representing a stream of timestamped edges are seemingly ubiquitous in the real world. However, the massive size and continuous nature of these networks make them fundamentally challenging to analyze and leverage for descriptive and predictive modeling tasks. In this work, we propose a general framework for temporal network sampling with unbiased estimation. We develop online, single-pass sampling algorithms, and unbiased estimators for temporal network sampling. The proposed algorithms enable fast, accurate, and memory-efficient statistical estimation of temporal network patterns and properties. In addition, we propose a temporally decaying sampling algorithm with unbiased estimators for studying networks that evolve in continuous time, where the strength of links is a function of time, and the motif patterns are temporally weighted. In contrast to the prior notion of a Δt-temporal motif, the proposed formulation and algorithms for counting temporally weighted motifs are useful for forecasting tasks in networks such as predicting future links, or a future time-series variable of nodes and links. Finally, extensive experiments on a variety of temporal networks from different domains demonstrate the effectiveness of the proposed algorithms. A detailed ablation study is provided to understand the impact of the various components of the proposed framework.
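The general pattern of single-pass sampling with unbiased estimation can be illustrated by classic reservoir sampling over an edge stream paired with a Horvitz-Thompson-style estimator. This is a textbook sketch of the idea, not the paper's specific algorithms:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Single-pass reservoir sampling over a stream of timestamped
    edges (u, v, t); each item ends up in the sample with
    probability k/n, where n is the stream length."""
    rng = random.Random(seed)
    sample = []
    n = 0
    for item in stream:
        n += 1
        if len(sample) < k:
            sample.append(item)
        else:
            j = rng.randrange(n)       # replace with probability k/n
            if j < k:
                sample[j] = item
    return sample, n

def ht_estimate(sample, n, predicate):
    """Horvitz-Thompson estimate of how many stream items satisfy
    `predicate`: each sampled match is weighted by the inverse of
    its inclusion probability k/n."""
    k = len(sample)
    return sum(1 for e in sample if predicate(e)) * n / k
```

Because each item's inclusion probability is known, counts estimated this way are unbiased regardless of the stream's order, which is the property the paper's estimators generalize to temporally weighted motifs.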

