Exploiting structural similarity in network reliability analysis using graph learning

Author(s):  
Ping Zhang ◽  
Min Xie ◽  
Xiaoyan Zhu

In large-scale networks, which can represent the arrangement of components in a unit, a transportation system, a supply chain, a social network, and so on, some nodes have similar topological structures and thus play similar roles in network and system analysis; this usually complicates the analysis and results in considerable duplicated computation. In this paper, we present a graph learning approach to define and identify structural similarity between the nodes in a network or the components in a network system. Based on this structural similarity, we investigate component clustering at various significance levels that represent different extents of similarity. We further specify a spectral-graph-wavelet-based graph learning method to measure structural similarity and present its application to easing the computational load of evaluating the system survival signature and system reliability. The numerical examples and the application demonstrate the insights offered by structural similarity and the effectiveness of the graph learning approach. Finally, we discuss potential applications of graph-learning-based structural similarity and conclude that the proposed structural similarity, component clustering, and graph learning approach are effective in reducing the complexity of network systems and the computational cost of complex network analysis.
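As a rough illustration of the idea, the sketch below computes heat-kernel spectral graph wavelet signatures for every node and groups nodes whose signatures are close, with the clustering threshold playing the role of a significance level. It is a minimal, assumption-laden sketch rather than the authors' method: the scale `s`, the sorted-coefficient signature, and the use of average-linkage clustering are all illustrative choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def wavelet_signatures(adj, s=1.0):
    """Heat-kernel spectral graph wavelets: psi = U exp(-s*Lambda) U^T."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                       # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(lap)          # eigen-decomposition (symmetric matrix)
    return U @ np.diag(np.exp(-s * lam)) @ U.T   # row i: wavelet centred at node i

def cluster_by_structural_similarity(adj, s=1.0, threshold=0.1):
    """Group nodes whose (permutation-tolerant) wavelet signatures are close."""
    psi = wavelet_signatures(adj, s)
    sig = np.sort(psi, axis=1)            # sort rows so other nodes' labels don't matter
    z = linkage(pdist(sig), method="average")
    return fcluster(z, t=threshold, criterion="distance")

# Star graph: the three leaf nodes are structurally identical and fall in one cluster.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
print(cluster_by_structural_similarity(adj))
```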

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2325
Author(s):  
Nefeli Lamprinou ◽  
Nikolaos Nikolikos ◽  
Emmanouil Z. Psarakis

Compared with pairwise registration, groupwise registration is capable of handling a large-scale population of images simultaneously in an unbiased way. In this work we improve upon the state-of-the-art pixel-level, Least-Squares (LS)-based groupwise image registration methods. Specifically, the registration technique is adapted through the use of Self Quotient Images (SQI) so that it becomes capable of solving the groupwise registration of photometrically distorted, partially occluded, unimodal, and multimodal images. Moreover, the complexity of the proposed groupwise technique is linear in the cardinality of the image set, so the problem can be solved on large image sets at low cost. The proposed technique was applied in a series of experiments on the groupwise registration of photometrically and geometrically distorted, partially occluded faces as well as unimodal and multimodal magnetic resonance image sets and compared with the Lucas–Kanade Entropy (LKE) algorithm; the results are very promising in terms of alignment quality, measured by the mean Peak Signal-to-Noise Ratio (mPSNR) and mean Structural Similarity (mSSIM), and in terms of computational cost.
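For context, a common form of the Self Quotient Image divides each image by a smoothed version of itself, suppressing slowly varying illumination while preserving structure. The sketch below is an assumption-based illustration of that preprocessing step (Gaussian smoothing with an illustrative `sigma`), not the authors' implementation; the groupwise LS alignment would then operate on the SQI versions of the image set.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(img, sigma=3.0, eps=1e-6):
    """SQI = I / (smoothed I); eps guards against division by zero."""
    img = np.asarray(img, dtype=np.float64)
    smooth = gaussian_filter(img, sigma=sigma)
    return img / (smooth + eps)

# The groupwise LS alignment is then run on the SQI versions of the images.
images = [np.random.rand(64, 64) for _ in range(10)]   # placeholder image set
sqi_set = [self_quotient_image(im) for im in images]
```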


Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 892 ◽  
Author(s):  
Wazir Muhammad ◽  
Supavadee Aramvith

Single image super-resolution (SISR) aims to reconstruct a high-resolution (HR) image from a low-resolution (LR) image. To address the SISR problem, deep convolutional neural networks (CNNs) have recently achieved remarkable progress in terms of accuracy and efficiency. In this paper, an innovative technique, namely a multi-scale inception-based super-resolution (SR) deep learning approach, or MSISRD, is proposed for fast and accurate SISR reconstruction. The proposed network employs a deconvolution layer to upsample the LR image to the desired HR size, in contrast to existing approaches that use interpolation techniques to upscale the LR image. Interpolation techniques are not designed for this purpose and introduce undesired noise into the model. Moreover, existing methods mainly focus on a shallow network or on stacking multiple layers to create a deeper architecture, a design that causes the vanishing-gradient problem during training and increases the computational cost of the model. Our proposed method does not use any hand-designed pre-processing steps, such as bicubic interpolation. Furthermore, an asymmetric convolution block is employed to reduce the number of parameters, in addition to an inception block adopted from GoogLeNet to reconstruct multi-scale information. Experimental results demonstrate that the proposed model outperforms twelve state-of-the-art methods in terms of average peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), with a reduced number of parameters, for scale factors of 2×, 4×, and 8×.
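As a hedged illustration of the asymmetric convolution idea (not the paper's released code), the PyTorch sketch below factorizes a 3×3 convolution into a 3×1 followed by a 1×3 convolution, cutting the weight count from roughly 9·C² to 6·C² when the input and output channel counts both equal C, and shows a transposed-convolution layer of the kind used for upsampling instead of bicubic interpolation; the channel counts and kernel sizes are illustrative.

```python
import torch
import torch.nn as nn

class AsymmetricConvBlock(nn.Module):
    """3x3 convolution factorised into 3x1 followed by 1x3 (fewer parameters)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv_v = nn.Conv2d(in_ch, out_ch, kernel_size=(3, 1), padding=(1, 0))
        self.conv_h = nn.Conv2d(out_ch, out_ch, kernel_size=(1, 3), padding=(0, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv_h(self.act(self.conv_v(x))))

# Learned upsampling by a transposed convolution (x2) instead of bicubic interpolation.
upsample = nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 64, 32, 32)            # toy LR feature map
y = upsample(AsymmetricConvBlock(64, 64)(x))
print(y.shape)                            # torch.Size([1, 64, 64, 64])
```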


Author(s):  
V. Skibchyk ◽  
V. Dnes ◽  
R. Kudrynetskyi ◽  
O. Krypuch

Purpose. To increase the efficiency of the technological processes of grain harvesting by large-scale agricultural producers through the rational use of the combine harvesters available on the farm. Methods. The research used the methods of system analysis and synthesis, induction and deduction, system-factor and system-event approaches, and a graphical method. Results. Characteristic events that occur during the harvesting of grain crops, both within a single production unit and across the entire enterprise, are identified. A method for predicting the time intervals of use and downtime of the combine harvesters of production units has been developed. A roadmap for substantiating a rational seasonal scenario for the use of combine harvesters by large-scale agricultural producers is developed; it allows the efficiency of each scenario of multivariate placement of combine harvesters on fields to be estimated, taking into account the influence of natural production and agrometeorological factors on crop harvesting. Conclusions. 1. Known scientific and methodological approaches to the optimization of machine use in agriculture do not take into account the risk of crop losses due to late harvesting or the seasonal natural and agrometeorological conditions of each production unit, which calls for a new approach to the rational seasonal use of combine harvesters by large-scale agricultural producers. 2. The developed approach to substantiating a rational seasonal scenario for the use of combine harvesters by large-scale agricultural producers takes into account both the cost of grain harvesting and the cost of the crop lost to late harvesting when selecting optimal options for engaging additional free combine harvesters, and thus provides more profit. 3. The practical application of the developed roadmap will allow large-scale agricultural producers to use combine harvesters more efficiently and to reduce harvesting costs. Keywords: combine harvesters, use, production divisions, risk, seasonal scenario, large-scale agricultural producers.


Electronics ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 955
Author(s):  
Vasyl Teslyuk ◽  
Andriy Sydor ◽  
Vincent Karovič ◽  
Olena Pavliuk ◽  
Iryna Kazymyra

Technical systems in the modern global world are rapidly evolving and improving. In most cases these are large-scale multi-level systems, and one of the problems that arises in their design is determining their reliability. Accordingly, in this paper a mathematical model based on the Weibull distribution is developed for determining the reliability of a computer network. To simplify the calculation of the reliability characteristics, the system is considered hierarchical, branched to level 2, with bypass through the level. The developed model allows us to determine the following parameters: the probability distribution of the count of working output elements, the availability function of the system, the duration of the system's stay in each of its working states, and the duration of the system's stay in the prescribed availability condition. The accuracy of the developed model is high, and it can be used to determine the reliability parameters of large, hierarchical, branched systems. The results of modelling a local area computer network are presented. In particular, we obtained the following best option for connecting workstations: 4 of them are connected to the main hub and the remaining 16 to the second-level hub, giving a time to failure of 4818 h.
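As a small numerical illustration of the kind of quantities such a model provides, the sketch below assumes independent, identical components with Weibull time-to-failure and computes the component reliability, the distribution of the count of working output elements, and the mean time to failure; the shape and scale values are illustrative, not those of the paper, and the hierarchical bypass structure is not modelled here.

```python
from math import comb, exp, gamma

def weibull_reliability(t, beta, eta):
    """Component reliability R(t) = exp(-(t/eta)^beta)."""
    return exp(-(t / eta) ** beta)

def working_outputs_pmf(t, n, beta, eta):
    """P(k of n identical, independent output elements still working at time t)."""
    r = weibull_reliability(t, beta, eta)
    return [comb(n, k) * r**k * (1 - r) ** (n - k) for k in range(n + 1)]

def mean_time_to_failure(beta, eta):
    """MTTF of one Weibull component: eta * Gamma(1 + 1/beta)."""
    return eta * gamma(1 + 1 / beta)

print(weibull_reliability(t=1000.0, beta=1.5, eta=5000.0))
print(working_outputs_pmf(t=1000.0, n=20, beta=1.5, eta=5000.0))
print(mean_time_to_failure(beta=1.5, eta=5000.0))
```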


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daiji Ichishima ◽  
Yuya Matsumura

Large-scale computation by the molecular dynamics (MD) method is often challenging or even impractical due to its computational cost, in spite of its wide application in a variety of fields. Although recent advances in parallel computing and the introduction of coarse-graining methods have enabled large-scale calculations, macroscopic analyses are still not realizable. Here, we present renormalized molecular dynamics (RMD), a renormalization group of MD in thermal equilibrium derived using the Migdal–Kadanoff approximation. The RMD method improves the computational efficiency drastically while retaining the advantage of MD. The computational efficiency is improved by a factor of $$2^{n(D+1)}$$ over conventional MD, where D is the spatial dimension and n is the number of applied renormalization transforms. We verify RMD by conducting two simulations: the melting of an aluminum slab and the collision of aluminum spheres. Both problems show that the expectation values of physical quantities are in good agreement after the renormalization, while the computation time is reduced as expected. To observe the behavior of RMD near the critical point, the critical exponent of the Lennard-Jones potential is extracted by calculating the specific heat on the mesoscale. The critical exponent is obtained as $$\nu = 0.63 \pm 0.01$$. In addition, the renormalization group of dissipative particle dynamics (DPD) is derived. Renormalized DPD is equivalent to RMD in isothermal systems under the condition that the Deborah number satisfies $$De \ll 1$$.
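A worked example of the quoted efficiency gain: with D = 3 spatial dimensions, each renormalization transform contributes a factor of 2^(D+1) = 16, so n = 2 transforms yield 2^{n(D+1)} = 256.

```python
def rmd_speedup(n, D=3):
    """Efficiency gain of RMD over conventional MD: 2**(n*(D+1))."""
    return 2 ** (n * (D + 1))

print(rmd_speedup(n=1))   # 16
print(rmd_speedup(n=2))   # 256
```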


Author(s):  
Mahdi Esmaily Moghadam ◽  
Yuri Bazilevs ◽  
Tain-Yen Hsia ◽  
Alison Marsden

A closed-loop lumped parameter network (LPN) coupled to a 3D domain is a powerful tool for modeling the global dynamics of the circulatory system. Coupling a 0D LPN to a 3D CFD domain is a numerically challenging problem, often associated with instabilities, extra computational cost, and loss of modularity. A computationally efficient finite element framework has recently been proposed that achieves numerical stability without sacrificing modularity [1]. This type of coupling introduces new challenges for the linear algebraic equation solver (LS), producing a strong coupling between flow and pressure that leads to an ill-conditioned tangent matrix. In this paper we exploit this strong coupling to obtain a novel and efficient algorithm for the linear solver. We illustrate the efficiency of this method on several large-scale cardiovascular blood flow simulation problems.


2006 ◽  
Vol 18 (12) ◽  
pp. 2959-2993 ◽  
Author(s):  
Eduardo Ros ◽  
Richard Carrillo ◽  
Eva M. Ortigosa ◽  
Boris Barbour ◽  
Rodrigo Agís

Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
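To make the event-driven idea concrete, the sketch below is a simplified single-queue version (not the paper's two-stage ED-LUT queue): spikes sit in a priority queue ordered by delivery time, and a neuron's state is advanced only when an event actually reaches it, with `update_and_fire` standing in for the precomputed look-up tables; all names are illustrative assumptions.

```python
import heapq

def simulate(initial_spikes, connections, update_and_fire, t_end):
    """initial_spikes: [(time, neuron_id)]; connections: neuron_id -> [(target_id, delay)];
    update_and_fire(old_state, t): returns (new_state, fired), looked up from precomputed tables."""
    queue = list(initial_spikes)
    heapq.heapify(queue)                    # earliest delivery time first
    state = {}
    while queue:
        t, n = heapq.heappop(queue)
        if t > t_end:
            break
        # advance this neuron analytically from its last update to time t
        state[n], fired = update_and_fire(state.get(n), t)
        if fired:                           # propagate the spike with axonal delays
            for target, delay in connections.get(n, []):
                heapq.heappush(queue, (t + delay, target))
    return state
```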

