Using the cluster "Sergey Korolev" for modelling computer networks

Author(s):  
D Y Polukarov ◽  
A P Bogdan

Modelling large-scale networks requires significant computational resources on the computer performing the simulation. Moreover, the complexity of the calculations increases nonlinearly with the size of the simulated network. Cluster computing, on the other hand, has gained considerable popularity in recent years, so the idea of using cluster computing structures for modelling computer networks arises naturally. This paper describes the creation of software that combines an interactive mode of operation, including a graphical user interface for the OMNeT++ environment, with the batch mode of operation more natural to the high-performance cluster "Sergey Korolev". The architecture of such a solution is developed, and an example of using this approach is given.
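The batch side of such a setup typically runs OMNeT++ without its GUI. Below is a minimal sketch of that idea in Python; the binary name `mynet` and config name `LargeNet` are placeholders, while `-u Cmdenv`, `-c`, and `-r` are standard OMNeT++ command-line options for selecting the command-line environment, a configuration, and a run number.

```python
# Hypothetical sketch: dispatching OMNeT++ runs in batch (Cmdenv) mode.
# The binary "mynet" and config "LargeNet" are illustrative placeholders.
import subprocess
from concurrent.futures import ProcessPoolExecutor

RUNS = range(20)  # repetition/run numbers to simulate

def run_simulation(run_id: int) -> int:
    """Launch one OMNeT++ run without the GUI (batch mode)."""
    cmd = ["./mynet", "-u", "Cmdenv", "-c", "LargeNet", "-r", str(run_id)]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    # On a real cluster each run would be a separate scheduler job;
    # here a local process pool stands in for the batch system.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for run_id, rc in zip(RUNS, pool.map(run_simulation, RUNS)):
            print(f"run {run_id} exited with code {rc}")
```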

Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1747
Author(s):  
Hansaka Angel Dias Edirisinghe Kodituwakku ◽  
Alex Keller ◽  
Jens Gregor

The complexity and throughput of computer networks are rapidly increasing as a result of the proliferation of interconnected devices, data-driven applications, and remote working. Providing situational awareness for computer networks requires monitoring and analysing network data to understand normal activity and identify abnormal activity. A scalable platform that processes and visualizes data in real time for large-scale networks enables security analysts and researchers not only to monitor and study network flow data but also to experiment and develop novel analytics. In this paper, we introduce InSight2, an open-source platform for manipulating both streaming and archived network flow data in real time that aims to address the scalability, extensibility, and flexibility limitations of existing solutions. Case studies demonstrate applications in monitoring network activity, identifying network attacks and compromised hosts, and detecting anomalies.
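As a toy illustration of streaming anomaly detection on flow records (not InSight2's actual pipeline), the following sketch flags hosts whose flow volume deviates strongly from their running mean; all names and thresholds are invented for clarity.

```python
# Illustrative only: flag flows whose byte count far exceeds the
# source host's running mean, a minimal streaming anomaly heuristic.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst: str
    nbytes: int

class VolumeMonitor:
    def __init__(self, threshold: float = 3.0):
        self.count = defaultdict(int)
        self.mean = defaultdict(float)
        self.threshold = threshold

    def observe(self, flow: Flow) -> bool:
        """Return True if this flow looks anomalous for its source host."""
        self.count[flow.src] += 1
        n = self.count[flow.src]
        mu = self.mean[flow.src]
        anomalous = n > 10 and flow.nbytes > self.threshold * mu
        # incremental (running) mean update
        self.mean[flow.src] = mu + (flow.nbytes - mu) / n
        return anomalous

monitor = VolumeMonitor()
flows = [Flow("10.0.0.1", "10.0.0.2", 500)] * 20 + [Flow("10.0.0.1", "10.0.0.9", 50_000)]
for f in flows:
    if monitor.observe(f):
        print(f"anomalous flow: {f}")
```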


2014 ◽  
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
Vinícius da Fonseca Vieira ◽  
Carolina Ribeiro Xavier ◽  
Nelson Francisco Favilla Ebecken ◽  
Alexandre Gonçalves Evsukoff

Community structure detection is one of the major research areas of network science, and it is particularly useful for applications involving large real networks. This work presents an in-depth study of the most widely discussed algorithms for community detection based on the modularity measure: Newman’s spectral method with a fine-tuning stage and the method of Clauset, Newman, and Moore (CNM) with its variants. The computational complexity of the algorithms is analysed to guide the development of high-performance code that accelerates their execution without compromising the quality of the results, as measured by modularity. The implemented code generates partitions with modularity values consistent with the literature and scales beyond 1 million nodes with Newman’s spectral method. The code was applied to a wide range of real networks, and the performance of the algorithms is evaluated.
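For readers who want to experiment with the CNM approach discussed above, networkx ships a greedy modularity implementation; the snippet below uses the small karate club graph as a stand-in for a large real network.

```python
# Greedy modularity maximization (CNM-style agglomerative merging)
# on a small benchmark graph, scored with the modularity measure Q.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()

# Merge communities greedily by modularity gain.
partition = community.greedy_modularity_communities(G)

# Score the resulting partition.
Q = community.modularity(G, partition)
print(f"{len(partition)} communities, modularity Q = {Q:.3f}")
```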


2021 ◽  
Author(s):  
Eleonora De Filippi ◽  
Anira Escrichs ◽  
Matthieu Gilson ◽  
Marti Sanchez-Fibla ◽  
Estela Camara ◽  
...  

In the past decades, there has been growing scientific interest in characterizing the neural correlates of meditation training. Nonetheless, the mechanisms underlying meditation remain elusive. In the present work, we investigated meditation-related changes in structural and functional connectivity (SC and FC, respectively). For this purpose, we scanned experienced meditators and control (naive) subjects using magnetic resonance imaging (MRI) to acquire structural and functional data during two conditions, resting-state and meditation (focused attention on breathing). In this way, we aimed to characterize and distinguish both short-term and long-term modifications in the brain's structure and function. First, we performed a network-based analysis of anatomical connectivity. Then, to analyze the fMRI data, we calculated whole-brain effective connectivity (EC) estimates, relying on a dynamical network model to replicate the spatio-temporal structure of BOLD signals, akin to FC with lagged correlations. We compared the estimated EC, FC, and SC links as features for training classifiers to predict behavioral conditions and group identity. The whole-brain SC analysis revealed strengthened anatomical connectivity across large-scale networks for meditators compared to controls. We found that differences in SC were reflected in the functional domain as well. We demonstrated through a machine-learning approach that EC features were more informative than FC or SC alone. Using EC features, we reached high accuracy for condition-based classification within each group and moderately high accuracy when comparing the two groups in each condition. Moreover, we showed that the most informative EC links discriminating between meditators and controls involved the same large-scale networks previously found to have increased anatomical connectivity. Overall, the results of our whole-brain model-based approach revealed a mechanism underlying meditation by providing causal relationships at the structure-function level.
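The classification step can be pictured as follows: connectivity matrices are vectorized into feature vectors and fed to a standard classifier. This hedged sketch uses synthetic data in place of the authors' EC/FC/SC estimates; the dimensions and labels are invented.

```python
# Hedged sketch of the idea: cross-validated classification of
# condition labels from vectorized connectivity features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_links = 40, 300                # e.g., EC links as features
X = rng.normal(size=(n_subjects, n_links))   # stand-in connectivity features
y = rng.integers(0, 2, size=n_subjects)      # 0 = rest, 1 = meditation

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)    # cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```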


2018 ◽  
Vol 7 (2.20) ◽  
pp. 236
Author(s):  
Anantula Jyothi ◽  
Baddam Indira

High-performance computing (HPC) has become one of the predominant techniques for processing large-scale applications, and cloud environments are often chosen to provide the required services and to process these high-demand applications. Managing such applications poses three major challenges: network feasibility, computational feasibility, and data security. Several research endeavours have focused on network load and on computing over cloud data and have produced good outcomes, but those approaches still lack standard mechanisms for data security. On the other hand, auditing features for cloud-based data have been addressed by various researchers, but their performance is poor; the complexity of the audit process has proven to be a bottleneck for application performance, as it consumes the computational resources of the application itself. Hence, this work proposes a novel framework for cloud data auditing at multiple levels: access requests are audited level by level, and only upon satisfying the conditions of one level is the connection request passed on to the more complex levels, reducing the computational load. The proposed framework achieves a substantial reduction in the computational load on the cloud server, improving application performance and the use of the infrastructure.
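The multi-level idea can be illustrated with a short, entirely hypothetical sketch: cheap checks run first, and a request only reaches the costlier audit levels after passing the previous ones. The level names and checks below are invented.

```python
# Hypothetical illustration of multi-level auditing: escalate through
# increasingly expensive levels, rejecting as early as possible.
from typing import Callable

Request = dict
AuditLevel = Callable[[Request], bool]

def level_ip_allowlist(req: Request) -> bool:      # cheap check
    return req.get("ip", "").startswith("10.")

def level_token_valid(req: Request) -> bool:       # moderate cost
    return len(req.get("token", "")) >= 16

def level_deep_log_audit(req: Request) -> bool:    # expensive check
    return req.get("history_clean", False)

LEVELS: list[AuditLevel] = [level_ip_allowlist, level_token_valid, level_deep_log_audit]

def audit(req: Request) -> bool:
    """Run levels in order; all() short-circuits on the first failure."""
    return all(level(req) for level in LEVELS)

print(audit({"ip": "10.1.2.3", "token": "x" * 20, "history_clean": True}))
```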


Gigabyte ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Ben Duggan ◽  
John Metzcar ◽  
Paul Macklin

Modern agent-based models (ABM) and other simulation models require evaluation and testing of many different parameters. Managing that testing for large-scale parameter sweeps (grid searches), as well as storing simulation data, requires multiple, potentially customizable steps that may vary across simulations. Furthermore, parameter testing, processing, and analysis are slowed if simulation and processing jobs cannot be shared across teammates or computational resources. While high-performance computing (HPC) has become increasingly available, models can often be tested faster by combining multiple computers and HPC resources. To address these issues, we created the Distributed Automated Parameter Testing (DAPT) Python package. By hosting parameters in an online (and often free) “database”, multiple individuals can run parameter sets simultaneously in a distributed fashion, enabling ad hoc crowdsourcing of computational power. Combining this with a flexible, scriptable tool set, teams can evaluate models and assess their underlying hypotheses quickly. Here, we describe DAPT and provide an example demonstrating its use.
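The following is not DAPT's actual API but a minimal sketch of the pattern it describes: workers claim unfinished parameter sets from a shared store, run the model, and mark them done, so several machines can work through a sweep in parallel. The SQLite table and column names are invented stand-ins for the hosted database.

```python
# Hypothetical sketch of distributed parameter sweeping via a shared
# store (SQLite here; a hosted database in practice).
import json
import sqlite3

def claim_next(conn: sqlite3.Connection):
    """Claim one pending parameter set (locking simplified for clarity)."""
    row = conn.execute(
        "SELECT id, params FROM sweep WHERE status='pending' LIMIT 1"
    ).fetchone()
    if row:
        conn.execute("UPDATE sweep SET status='running' WHERE id=?", (row[0],))
        conn.commit()
    return row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sweep (id INTEGER PRIMARY KEY, params TEXT, status TEXT)")
conn.executemany("INSERT INTO sweep (params, status) VALUES (?, 'pending')",
                 [(json.dumps({"rate": r}),) for r in (0.1, 0.2, 0.3)])

while (job := claim_next(conn)) is not None:
    params = json.loads(job[1])
    # ... run the actual model with `params` here ...
    conn.execute("UPDATE sweep SET status='done' WHERE id=?", (job[0],))
    conn.commit()
    print("finished", params)
```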


2018 ◽  
Vol 106 (4) ◽  
Author(s):  
Jean-Paul Courneya ◽  
Alexa Mayo

Despite having an ideal setup in their labs for wet work, researchers often lack the computational infrastructure to analyze the magnitude of data that result from “-omics” experiments. In this innovative project, the library supports analysis of high-throughput data from global molecular profiling experiments by offering a high-performance computer with open-source software along with expert bioinformationist support. The audience for this new service is faculty, staff, and students for whom using the university’s large-scale CORE computational resources is not warranted because these resources exceed the needs of smaller projects. In the library’s approach, users are empowered to analyze high-throughput data that they otherwise would not be able to on their own computers. To develop the project, the library’s bioinformationist identified the ideal computing hardware and a group of open-source bioinformatics software packages to provide analysis options for experimental data such as scientific images, sequence reads, and flow cytometry files. To close the loop between learning and practice, the bioinformationist developed self-guided learning materials and workshops or consultations on topics such as the National Center for Biotechnology Information’s BLAST, Bioinformatics on the Cloud, and ImageJ. Researchers apply the data analysis techniques that they learned in the classroom in an ideal computing environment.
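For a flavour of the command-line work covered in such workshops, here is an illustrative Python driver for a local NCBI BLAST+ search. The file names are placeholders; makeblastdb and blastn are the standard BLAST+ tools.

```python
# Illustrative only: build a local nucleotide database and search
# query sequences against it with NCBI BLAST+ (placeholder file names).
import subprocess

# Build a local database from a reference FASTA file.
subprocess.run(["makeblastdb", "-in", "reference.fa", "-dbtype", "nucl"], check=True)

# Search query reads against it; outfmt 6 produces tabular output.
subprocess.run([
    "blastn",
    "-query", "reads.fa",
    "-db", "reference.fa",
    "-out", "hits.tsv",
    "-outfmt", "6",
], check=True)
print("results written to hits.tsv")
```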


2013 ◽  
Vol 42 (5) ◽  
pp. e32-e32 ◽  
Author(s):  
Jun Li ◽  
Hairong Wei ◽  
Tingsong Liu ◽  
Patrick Xuechun Zhao

The accurate construction and interpretation of gene association networks (GANs) is challenging, but crucial, to the understanding of gene function, interaction and cellular behavior at the genome level. Most current state-of-the-art computational methods for genome-wide GAN reconstruction require high-performance computational resources. However, even high-performance computing cannot fully address the complexity of constructing GANs from very large-scale expression profile datasets, especially for organisms with medium to large genomes, such as most plant species. Here, we present a new approach, GPLEXUS (http://plantgrn.noble.org/GPLEXUS/), which integrates a series of novel algorithms in a parallel-computing environment to construct and analyze genome-wide GANs. GPLEXUS adopts an ultra-fast estimator for pairwise mutual information that is similar in accuracy and sensitivity to the Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNE) method and runs ∼1000 times faster. GPLEXUS integrates the Markov Clustering Algorithm to effectively identify functional subnetworks. Furthermore, GPLEXUS includes a novel ‘condition-removing’ method to identify, from very large-scale gene expression datasets spanning many experimental conditions, the major conditions under which each subnetwork operates, which allows users to annotate the various subnetworks with experiment-specific conditions. We demonstrate GPLEXUS’s capabilities by constructing global GANs and analyzing subnetworks related to defense against biotic and abiotic stress, cell cycle growth and division in Arabidopsis thaliana.
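GPLEXUS's ultra-fast estimator is not reproduced here; as a plain baseline, the sketch below computes histogram-based pairwise mutual information between toy expression profiles, which is the quantity on which such gene association networks are built.

```python
# Baseline (not GPLEXUS's estimator): histogram-based pairwise mutual
# information between gene expression profiles.
import numpy as np
from sklearn.metrics import mutual_info_score

def pairwise_mi(expr: np.ndarray, bins: int = 8) -> np.ndarray:
    """expr: genes x conditions matrix; returns a genes x genes MI matrix."""
    n_genes = expr.shape[0]
    # Discretize each gene's expression profile into equal-width bins.
    binned = np.stack([np.digitize(row, np.histogram_bin_edges(row, bins))
                       for row in expr])
    mi = np.zeros((n_genes, n_genes))
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            mi[i, j] = mi[j, i] = mutual_info_score(binned[i], binned[j])
    return mi

rng = np.random.default_rng(1)
expr = rng.normal(size=(5, 100))                 # 5 genes, 100 conditions (toy)
expr[1] = expr[0] + 0.1 * rng.normal(size=100)   # gene 1 tracks gene 0
print(np.round(pairwise_mi(expr), 2))
```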

