Identifying influential spreaders in complex networks by an improved gravity model

2021
Vol 11 (1)
Author(s):  
Zhe Li
Xinyu Huang

Abstract Identification of influential spreaders remains a challenging issue in network science. It therefore attracts increasing attention from both the computer science and physics communities, and many algorithms for identifying influential spreaders have been proposed. Degree centrality, the most widely used neighborhood-based centrality, was introduced to evaluate the spreading ability of nodes. However, degree centrality assigns the same value to too many nodes, which leads to a resolution limitation in distinguishing the real influence of these nodes and, in turn, degrades the ranking efficiency of the algorithm. The k-shell decomposition method faces the same problem. To solve this resolution-limit problem, we propose a high-resolution index that combines degree centrality with the k-shell decomposition method. Furthermore, based on the proposed index and the well-known gravity law, we propose an improved gravity model to measure the importance of nodes in propagation dynamics. Experiments on ten real networks show that our model outperforms most state-of-the-art methods, with better ranking performance as measured by Kendall's rank correlation and better ranking efficiency as measured by the monotonicity value.
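As a rough illustration of the gravity-model idea (not the authors' exact formulation), the sketch below computes a gravity-style centrality with NetworkX. Each node's "mass" is a hypothetical combined degree/k-shell index, used here only as a stand-in for the paper's high-resolution index, and the neighborhood radius r is likewise an assumption for illustration.

```python
import networkx as nx

def gravity_centrality(G, r=3):
    """Gravity-style centrality: each node attracts others within r hops,
    with 'mass' given by a combined degree/k-shell index (illustrative
    stand-in for the paper's high-resolution index)."""
    core = nx.core_number(G)        # k-shell index of every node
    deg = dict(G.degree())
    max_deg = max(deg.values())
    # Hypothetical combined index: k-shell plus a fractional degree term,
    # so nodes in the same shell remain distinguishable.
    mass = {v: core[v] + deg[v] / (max_deg + 1) for v in G}

    scores = {}
    for i in G:
        # Shortest-path distances to all nodes within r hops of i.
        dists = nx.single_source_shortest_path_length(G, i, cutoff=r)
        scores[i] = sum(mass[i] * mass[j] / d ** 2
                        for j, d in dists.items() if d > 0)
    return scores

# Example: rank the nodes of a small test graph.
G = nx.karate_club_graph()
ranking = sorted(gravity_centrality(G).items(), key=lambda kv: -kv[1])
print(ranking[:5])
```

A ranking produced this way can then be compared against simulated spreading outcomes with Kendall's rank correlation, which is how the paper evaluates ranking performance.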

Author(s):  
T. Yanaka
K. Shirota

It is significant to note the field aberrations (chromatic field aberration, coma, astigmatism, and blurring due to curvature of field, as defined by Glaser's aberration theory relative to the Blenden Freien System) of the objective lens in connection with the following points: field aberrations increase as the resolution of the axial point is improved by increasing the lens excitation (k²) and decreasing the half-width (d) of the axial lens field distribution; and when one or all of the imaging lenses have axial imperfections, such as beam deflection in image space caused by asymmetrical magnetic leakage flux, the apparent axial point exhibits field aberrations that prevent the theoretical resolution limit from being reached.
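For orientation, a commonly quoted textbook form of Glaser's bell-shaped axial field model is given below; identifying its half-width and excitation parameter with the d and k² mentioned above is an interpretation, not something stated in the text.

```latex
% Glaser's bell-shaped axial field distribution and its excitation parameter
% (standard form; U^* denotes the relativistically corrected accelerating voltage).
B(z) = \frac{B_0}{1 + (z/d)^2},
\qquad
k^2 = \frac{e\,B_0^2\,d^2}{8\,m_0\,U^{*}}
```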


Author(s):  
Erik Paul
Holger Herzog
Sören Jansen
Christian Hobert
Eckhard Langer

Abstract This paper presents an effective device-level failure analysis (FA) method that uses a high-resolution low-kV Scanning Electron Microscope (SEM) in combination with an integrated state-of-the-art nanomanipulator to locate and characterize single defects in failing CMOS devices. The presented case studies utilize several FA techniques in combination with SEM-based nanoprobing for nanometer-node technologies and demonstrate how these methods are used to investigate the root cause of IC device failures. The methodology represents a highly efficient physical failure analysis flow for 28 nm and larger technology nodes.


Author(s):  
Wei Huang
Xiaoshu Zhou
Mingchao Dong
Huaiyu Xu

Abstract Robust, high-performance visual multi-object tracking is a major challenge in computer vision, especially in drone scenarios. In this paper, an online Multi-Object Tracking (MOT) approach for UAV systems is proposed to handle small-target detection and class-imbalance challenges; it integrates the merits of a deep high-resolution representation network and a data-association method in a unified framework. Specifically, while applying a tracking-by-detection architecture in our tracking framework, a Hierarchical Deep High-resolution network (HDHNet) is proposed, which encourages the model to handle targets of different types and scales and to extract more effective and comprehensive features during online learning. The extracted features are then fed into different prediction networks to recognize targets of interest. In addition, an adjustable fusion loss function is proposed by combining focal loss and GIoU loss to address class imbalance and hard samples. During tracking, the detection results in each frame are passed to an improved DeepSORT MOT algorithm, which makes full use of target appearance features to match detections to existing tracks. The experimental results on the VisDrone2019 MOT benchmark show that the proposed UAV MOT system achieves the highest accuracy and the best robustness compared with state-of-the-art methods.
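As a hedged sketch of how such a fusion loss can be assembled (the paper's exact weighting scheme is not reproduced here), the PyTorch snippet below combines a binary focal loss with a GIoU box loss through a single hypothetical balancing factor lam.

```python
import torch
import torch.nn.functional as F
import torchvision.ops as ops

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Binary focal loss: down-weights easy examples to counter class imbalance.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def giou_loss(pred_boxes, gt_boxes):
    # Boxes in (x1, y1, x2, y2) format; GIoU also penalizes non-overlapping boxes.
    giou = ops.generalized_box_iou(pred_boxes, gt_boxes).diagonal()
    return (1.0 - giou).mean()

def fusion_loss(cls_logits, cls_targets, pred_boxes, gt_boxes, lam=1.0):
    # Hypothetical fusion: lam trades the classification term against localization.
    return focal_loss(cls_logits, cls_targets) + lam * giou_loss(pred_boxes, gt_boxes)
```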


2021
Vol 15 (3)
pp. 1-28
Author(s):  
Xueyan Liu
Bo Yang
Hechang Chen
Katarzyna Musial
Hongxu Chen
...  

The stochastic blockmodel (SBM) is a widely used statistical network representation model with good interpretability, expressiveness, generalization, and flexibility, and it has become prevalent and important in network science in recent years. However, learning an optimal SBM for a given network is an NP-hard problem. This severely limits the application of SBMs to large-scale networks because of the significant computational overhead of existing SBM variants and their learning methods. Reducing the cost of SBM learning and making it scalable to large-scale networks, while maintaining the good theoretical properties of the SBM, remains an unresolved problem. In this work, we address this challenging task from the novel perspective of model redefinition. We propose a redefined SBM with a Poisson distribution, together with a block-wise learning algorithm, that can efficiently analyse large-scale networks. Extensive validation on both artificial and real-world data shows that the proposed method significantly outperforms state-of-the-art methods in terms of the trade-off between accuracy and scalability.
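To make the modeling idea concrete, the following Python sketch evaluates the log-likelihood of a basic Poisson SBM for a given block assignment; the paper's redefined model and its block-wise learning algorithm are more elaborate, so treat this only as an illustration of the Poisson-SBM likelihood.

```python
import numpy as np

def poisson_sbm_loglik(A, z, K):
    """Log-likelihood (up to constants) of a Poisson SBM for adjacency matrix A
    and block assignment z (integer labels in 0..K-1). Minimal sketch only."""
    sizes = np.bincount(z, minlength=K)
    m = np.zeros((K, K))        # edge counts between blocks
    n_pairs = np.zeros((K, K))  # number of ordered node pairs between blocks
    for r in range(K):
        for s in range(K):
            m[r, s] = A[np.ix_(z == r, z == s)].sum()
            n_pairs[r, s] = sizes[r] * sizes[s] if r != s else sizes[r] * (sizes[r] - 1)
    # Maximum-likelihood Poisson rates per block pair.
    rates = np.where(n_pairs > 0, m / np.maximum(n_pairs, 1), 0.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = np.where(m > 0, m * np.log(rates), 0.0) - n_pairs * rates
    return ll.sum()
```

A simple block-wise learner in this spirit would repeatedly move each node to the block that most increases this likelihood until no move improves it.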


1980
Vol 2
Author(s):  
Fernando A. Ponce

ABSTRACT The structure of the silicon-sapphire interface of CVD silicon on a (1102) sapphire substrate has been studied in cross section by high-resolution transmission electron microscopy. Multibeam images of the interface region have been obtained in which both the silicon and sapphire lattices are directly resolved. The interface is observed to be planar and abrupt to the instrument resolution limit of 3 Å. No interfacial phase is evident. Defects are inhomogeneously distributed at the interface: relatively defect-free regions are observed in the silicon layer in addition to regions with a high concentration of defects.


2018
Author(s):  
Rishi Rajalingham
Elias B. Issa
Pouya Bashivan
Kohitij Kar
Kailyn Schmidt
...  

ABSTRACT Primates—including humans—can typically recognize objects in visual images at a glance, even in the face of naturally occurring identity-preserving image transformations (e.g., changes in viewpoint). A primary neuroscience goal is to uncover neuron-level mechanistic models that quantitatively explain this behavior by predicting primate performance for each and every image. Here, we applied this stringent behavioral prediction test to the leading mechanistic models of primate vision (specifically, deep convolutional artificial neural networks; ANNs) by directly comparing their behavioral signatures against those of humans and rhesus macaque monkeys. Using high-throughput data collection systems for human and monkey psychophysics, we collected over one million behavioral trials for 2400 images over 276 binary object discrimination tasks. Consistent with previous work, we observed that state-of-the-art deep, feed-forward convolutional ANNs trained for visual categorization (termed DCNNIC models) accurately predicted primate patterns of object-level confusion. However, when we examined behavioral performance for individual images within each object discrimination task, we found that all tested DCNNIC models were significantly non-predictive of primate performance, and that this prediction failure was not accounted for by simple image attributes, nor rescued by simple model modifications. These results show that current DCNNIC models cannot account for the image-level behavioral patterns of primates, and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision. To this end, large-scale, high-resolution primate behavioral benchmarks—such as those obtained here—could serve as direct guides for discovering such models.

SIGNIFICANCE STATEMENT Recently, specific feed-forward deep convolutional artificial neural network (ANN) models have dramatically advanced our quantitative understanding of the neural mechanisms underlying primate core object recognition. In this work, we tested the limits of those ANNs by systematically comparing the behavioral responses of these models with the behavioral responses of humans and monkeys at the resolution of individual images. Using these high-resolution metrics, we found that all tested ANN models significantly diverged from primate behavior. Going forward, these high-resolution, large-scale primate behavioral benchmarks could serve as direct guides for discovering better ANN models of the primate visual system.
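A minimal illustration of an image-level comparison is sketched below, assuming per-image accuracy vectors are available for both systems; the study itself uses noise-corrected consistency metrics rather than a raw correlation, so this is only a starting point.

```python
import numpy as np
from scipy.stats import pearsonr

def image_level_consistency(model_acc, primate_acc):
    """Correlate per-image accuracy vectors (fraction of correct trials per image)
    between a model and primates over the same set of test images."""
    r, _ = pearsonr(model_acc, primate_acc)
    return r

# Hypothetical per-image accuracies for 2400 test images.
rng = np.random.default_rng(0)
model_acc = rng.uniform(0.5, 1.0, size=2400)
primate_acc = rng.uniform(0.5, 1.0, size=2400)
print(image_level_consistency(model_acc, primate_acc))
```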

