Scalable Dynamic Fuzzy Biomolecular Network Models for Large Scale Biology

Author(s):  
Bahrad A. Sokhansanj
Suman Datta
Xiaohua Hu

The success of the Program of housing stock renovation in Moscow depends on the efficiency of resource management. One of the main urban planning documents determining the nature of the reorganization of residential areas included in the renovation Program is the territory planning project. Implementing a planning project is a complex process with a defined start and end that comprises a set of interdependent activities carried out partly in parallel and partly in sequence. From an organizational point of view, it is convenient to apply network planning and management methods to project implementation. These methods are based on the construction of network models, including variants such as the Gantt chart. A special application has been developed to simulate the implementation of planning projects. The article describes the basic principles and elements of the modeling, presents the list of the main implementation parameters of the renovation Program obtained with the developed modeling software, and proposes options for using these results in a comprehensive analysis of the implementation of large-scale urban projects.
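At the core of the network planning methods the abstract refers to is a critical-path computation over the activity graph. The sketch below (Python; the activities, durations, and dependency structure are hypothetical, not drawn from the Moscow renovation model) shows the earliest-finish forward pass and the backtracking of one critical path:

```python
# Minimal sketch of the critical-path computation underlying network
# planning (CPM); activity names and durations are hypothetical.
from graphlib import TopologicalSorter

# activity -> (duration in weeks, set of predecessor activities)
activities = {
    "design":       (8,  set()),
    "demolition":   (6,  {"design"}),
    "utilities":    (10, {"design"}),
    "construction": (30, {"demolition", "utilities"}),
    "landscaping":  (4,  {"construction"}),
}

order = list(TopologicalSorter(
    {a: preds for a, (_, preds) in activities.items()}).static_order())

earliest_finish = {}
for a in order:
    dur, preds = activities[a]
    start = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[a] = start + dur

print("Project duration:", max(earliest_finish.values()), "weeks")

# Backtrack one critical path: follow a predecessor whose finish time
# equals the current activity's start time.
path = [max(earliest_finish, key=earliest_finish.get)]
while activities[path[-1]][1]:
    dur, preds = activities[path[-1]]
    start = earliest_finish[path[-1]] - dur
    path.append(next(p for p in preds if earliest_finish[p] == start))
print("Critical path:", " -> ".join(reversed(path)))
```

For the toy data above this reports a 52-week duration along design -> utilities -> construction -> landscaping; a Gantt chart is essentially a plot of the start/finish intervals this pass produces.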


2021
Vol 11 (1)
Author(s):
Giuseppe Giacopelli
Domenico Tegolo
Emiliano Spera
Michele Migliore

The brain’s structural connectivity plays a fundamental role in determining how neuron networks generate, process, and transfer information within and between brain regions. The underlying mechanisms are extremely difficult to study experimentally and, in many cases, large-scale model networks are of great help. However, the implementation of these models relies on experimental findings that are often sparse and limited. Their predictive power ultimately depends on how closely a model’s connectivity represents the real system. Here we argue that the data-driven probabilistic rules, widely used to build neuronal network models, may not be appropriate to represent the dynamics of the corresponding biological system. To solve this problem, we propose to use a new mathematical framework able to use sparse and limited experimental data to quantitatively reproduce the structural connectivity of biological brain networks at the cellular level.
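For context, the kind of data-driven probabilistic rule the authors question can be sketched in a few lines: connect each ordered pair of neurons with a probability that decays with distance. The maximum probability, length scale, and neuron positions below are illustrative assumptions, not values from the paper:

```python
# Sketch of a distance-dependent probabilistic connectivity rule of the
# kind the abstract refers to; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 500
positions = rng.uniform(0.0, 1000.0, size=(n, 2))  # micrometres

# Pairwise distances and an exponentially decaying connection probability.
d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
p = 0.3 * np.exp(-d / 150.0)        # hypothetical p_max and length scale
np.fill_diagonal(p, 0.0)            # no self-connections

adjacency = rng.random((n, n)) < p  # directed adjacency matrix
print("mean in-degree:", adjacency.sum(axis=0).mean())
```

Such a rule reproduces first-order statistics like mean degree, but, as the authors argue, it need not capture the higher-order structure of the biological network.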


1997
pp. 931-935
Author(s):
Anders Lansner
Örjan Ekeberg
Erik Fransén
Per Hammarlund
Tomas Wilhelmsson

2020
Vol 34 (05)
pp. 9282-9289
Author(s):
Qingyang Wu
Lei Li
Hao Zhou
Ying Zeng
Zhou Yu

Many social media news writers are not professionally trained, so social media platforms have to hire professional editors to adjust amateur headlines to attract more readers. We propose to automate this headline editing process with neural network models to provide more immediate writing support for these writers. To train such a neural headline editing model, we collected a dataset that contains articles with their original and professionally edited headlines. However, collecting a large number of professionally edited headlines is expensive. To address this low-resource problem, we design an encoder-decoder model that leverages large-scale pre-trained language models. We further improve the pre-trained model's quality by introducing headline generation as an intermediate task before the headline editing task. We also propose a Self Importance-Aware (SIA) loss to address the different levels of editing in the dataset by down-weighting the importance of easily classified tokens and sentences. With the help of Pre-training, Adaptation, and SIA, the model learns to generate headlines in the professional editor's style. Experimental results show that our method significantly improves the quality of headline editing compared with previous methods.
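The paper's exact SIA formulation is not reproduced here, but the underlying idea of down-weighting easily classified tokens can be sketched as a focal-style weighting of the per-token cross-entropy. In the PyTorch sketch below, the exponent gamma, the padding convention, and the tensor shapes are assumptions:

```python
# Sketch of a token-level loss that down-weights easily classified tokens,
# in the spirit of the Self Importance-Aware (SIA) loss described above.
# This is a focal-style approximation, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def importance_weighted_loss(logits, targets, gamma=2.0, pad_id=0):
    # logits: (batch, seq_len, vocab); targets: (batch, seq_len)
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p_correct = nll.neg().exp()          # model confidence in the target token
    weight = (1.0 - p_correct) ** gamma  # easy tokens -> small weight
    mask = (targets != pad_id).float()   # ignore padding positions
    return (weight * nll * mask).sum() / mask.sum()

logits = torch.randn(2, 7, 100)
targets = torch.randint(1, 100, (2, 7))
print(importance_weighted_loss(logits, targets))
```

Tokens the model already predicts confidently contribute little to the loss, so gradient signal concentrates on the genuinely edited spans.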


Author(s):  
Sacha J. van Albada
Jari Pronold
Alexander van Meegen
Markus Diesmann

We are entering an age of ‘big’ computational neuroscience, in which neural network models are increasing in size and in the number of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other’s work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey to their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as the simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models, from the raw experimental data to the visualization of the results; they expose the challenges and give guidance for the construction of an ICT infrastructure for neuroscience.
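The multi-area model itself spans many files, but the basic PyNEST workflow it builds on (create populations, connect them, simulate, record) can be sketched briefly. The sketch assumes NEST 3; all population sizes, rates, and synaptic parameters are illustrative, not values from the model:

```python
# Minimal PyNEST sketch of the create/connect/simulate/record workflow
# that the multi-area model automates at scale; all numbers illustrative.
import nest

nest.ResetKernel()

neurons = nest.Create("iaf_psc_alpha", 100)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_recorder")  # "spike_detector" in NEST 2.x

# Random sparse recurrent connectivity plus external Poisson drive.
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 20.0, "delay": 1.5})
nest.Connect(noise, neurons, syn_spec={"weight": 10.0, "delay": 1.5})
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)  # milliseconds
print("spikes recorded:", recorder.get("n_events"))
```

The published model wraps exactly these primitives in a parameterized, data-driven specification so the full network can be rebuilt reproducibly from the raw connectivity data.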


2021
Author(s):
Damoun Langary
Anika Kueken
Zoran Nikoloski

Balanced complexes in biochemical networks are at the core of several theoretical and computational approaches that make statements about the properties of the steady states supported by the network. Recent computational approaches have employed balanced complexes to reduce metabolic networks while ensuring preservation of particular steady-state properties; however, the factors underlying the formation of balanced complexes have not yet been studied. Here, we present a number of factorizations providing insights into the mechanisms that lead to the origins of the corresponding balanced complexes. The proposed factorizations enable us to categorize balanced complexes into four distinct classes, each with specific origins and characteristics. They also provide the means to efficiently determine whether a balanced complex in a large-scale network belongs to a particular class of the categorization. The results are obtained under very general conditions and irrespective of the network kinetics, rendering them broadly applicable across a variety of network models. Application of the categorization shows that all classes of balanced complexes are present in large-scale metabolic models across all kingdoms of life, paving the way to study their relevance with respect to different properties of the steady states supported by these networks.
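Under one common formalization (an assumption here, not necessarily the paper's notation), the stoichiometric matrix factors as N = Y A, where Y gives the species composition of each complex and A is the incidence matrix of the reaction graph; a complex is then balanced if its row of A is orthogonal to the null space of N, so its net flux vanishes in every steady state. The sketch below tests this condition on a hypothetical toy network; it is a brute-force check, not one of the paper's factorizations:

```python
# Hedged sketch of testing balancedness: complex c is balanced when
# (A v)_c = 0 for EVERY steady-state flux v (N v = 0), i.e. row c of the
# incidence matrix A is orthogonal to null(N), with N = Y A.
# Toy network: A + B <-> C, with exchanges 0 -> A, 0 -> B, C -> 0.
import numpy as np
from scipy.linalg import null_space

#                  r1  r2  r3  r4  r5
A_inc = np.array([
    [ 0,  0, -1, -1,  1],   # complex 0 (exchange complex)
    [-1,  1,  0,  0,  0],   # complex A+B
    [ 1, -1,  0,  0, -1],   # complex C
    [ 0,  0,  1,  0,  0],   # complex A
    [ 0,  0,  0,  1,  0],   # complex B
])
Y = np.array([               # species (A, B, C) x complexes
    [0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
])
N = Y @ A_inc                # stoichiometric matrix

K = null_space(N)            # basis of the steady-state flux space
balanced = np.all(np.abs(A_inc @ K) < 1e-10, axis=1)
for name, b in zip(["0", "A+B", "C", "A", "B"], balanced):
    print(f"complex {name}: {'balanced' if b else 'not balanced'}")
```

In this toy case only the complex C is balanced: species C appears in no other complex, so its steady-state condition forces the net flux around the complex to zero, one simple origin of the kind the categorization makes systematic.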


Author(s):  
Jiang Xie
Weibing Feng
Shihua Zhang
Songbei Li
Guoyong Mao
...  

Author(s):  
Vo Ngoc Phu
Vo Thi Ngoc Tran

Artificial intelligence (ARTINT) and information science have been prominent fields for many years. Many different areas have advanced rapidly on the basis of ARTINT and information technologies, creating significant value for national economies, other sciences, companies, and organizations worldwide. As these economies have grown, massive corporations and large organizations have been established rapidly, and they in turn generate vast amounts of information and large-scale data sets. Processing and storing these data successfully has become a major challenge for commercial applications and research alike. To address this problem, many algorithms have been proposed for processing big data sets.


2018
Vol 7 (3.15)
pp. 95
Author(s):
M Zabir
N Fazira
Zaidah Ibrahim
Nurbaity Sabri

This paper evaluates the accuracy of pre-trained Convolutional Neural Network (CNN) models, namely AlexNet and GoogLeNet, alongside one custom CNN. AlexNet and GoogLeNet have proven capabilities, having entered the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with relatively good results. The evaluation is based on the accuracy, loss, and time taken by the training and validation processes. The dataset used is Caltech101, published by the California Institute of Technology (Caltech), which contains 101 object categories. The results reveal that the custom CNN architecture achieves 91.05% accuracy, whereas AlexNet and GoogLeNet both achieve a similar accuracy of 99.65%. GoogLeNet converges at an early training stage and attains the minimum error compared to the other two models.
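As a rough illustration of the transfer-learning setup such evaluations use, the sketch below loads an ImageNet-pretrained AlexNet from torchvision and replaces its final layer for the 101 Caltech101 categories. The data path and training hyperparameters are assumptions, not the paper's settings:

```python
# Sketch of fine-tuning a pretrained AlexNet on Caltech101; the dataset
# path and hyperparameters are hypothetical, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 101)   # new head for 101 classes

preprocess = transforms.Compose([
    transforms.Lambda(lambda im: im.convert("RGB")),  # some images are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("caltech101/101_ObjectCategories",
                               transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Starting from ImageNet weights rather than random initialization is what lets the pretrained models reach high accuracy on a comparatively small dataset like Caltech101.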

