HIGHLY SCALABLE PARALLEL COMPUTATIONAL MODELS FOR LARGE-SCALE RTM PROCESS MODELING SIMULATIONS, PART 1: THEORETICAL FORMULATIONS AND GENERIC DESIGN

1999 ◽  
Vol 36 (3) ◽  
pp. 265-285 ◽  
Author(s):  
R. Kanapady, K. K. Tamma, A. Mark


2020 ◽  
Vol 27 ◽  
Author(s):  
Zaheer Ullah Khan ◽  
Dechang Pi

Background: S-sulfenylation (S-sulphenylation, or sulfenic acid) of proteins is a special kind of post-translational modification that plays an important role in various physiological and pathological processes such as cytokine signaling, transcriptional regulation, and apoptosis. Given this significance, and to complement existing wet-lab methods, several computational models have been developed for predicting sulfenylation cysteine sites. However, their performance has been unsatisfactory owing to inefficient feature schemes, severe class imbalance, and the lack of an intelligent learning engine.

Objective: In this study, our motivation is to establish a strong, novel computational predictor for discriminating sulfenylation from non-sulfenylation sites.

Methods: We report an innovative bioinformatics feature encoding tool, named DeepSSPred, in which the encoded features are obtained via an n-segmented hybrid feature scheme; the synthetic minority oversampling technique (SMOTE) was then employed to cope with the severe imbalance between SC-sites (minority class) and non-SC-sites (majority class). A state-of-the-art 2D convolutional neural network was trained and validated under a rigorous 10-fold jackknife cross-validation protocol.

Results: With a strongly discriminative feature space, a capable machine learning engine, and an unbiased presentation of the underlying training data, the proposed framework yielded an excellent model that outperforms all existing established studies. The proposed approach is 6% higher in MCC than the first-best method, which did not provide sufficient details on an independent dataset. Compared with the second-best method, the model gained 7.5% in accuracy, 1.22% in Sn, 12.91% in Sp, and 13.12% in MCC on the training data, and 12.13% in ACC, 27.25% in Sn, 2.25% in Sp, and 30.37% in MCC on an independent dataset. These empirical analyses show the superior performance of the proposed model over both the training and independent datasets in comparison with existing literature.

Conclusion: In this research, we have developed a novel sequence-based automated predictor for SC-sites, called DeepSSPred. Empirical results on the training and independent validation datasets reveal the efficacy of the proposed model, which stems from several factors: novel discriminative feature encoding schemes, the SMOTE technique, and careful construction of the prediction model through a tuned 2D-CNN classifier. We believe this work will provide insight into the further prediction of S-sulfenylation characteristics and functionalities, and we hope the developed predictor will be significantly helpful for large-scale discrimination of unknown SC-sites in particular and for designing new pharmaceutical drugs in general.
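
The pipeline described above combines a hand-crafted feature encoding, SMOTE oversampling, and a 2D-CNN classifier. Below is a minimal sketch of that combination; the feature dimensions (a 20x20 grid per peptide window), network shape, and training settings are illustrative assumptions, not the authors' exact DeepSSPred configuration.

```python
# Sketch: SMOTE rebalancing of the minority (SC-site) class, then a 2D CNN.
# Data and dimensions are hypothetical stand-ins for the encoded peptides.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.random((1000, 400))           # hypothetical encoded peptide features
y = rng.random(1000) < 0.1            # severe imbalance: ~10% SC-sites

# Balance the training data before fitting the classifier.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_bal = X_bal.reshape(-1, 20, 20, 1)  # reshape flat features to a 2D "image"

model = models.Sequential([
    layers.Input(shape=(20, 20, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # SC-site vs. non-SC-site
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_bal, y_bal.astype("float32"), epochs=5, batch_size=32, verbose=0)
```

Oversampling is applied only to the training folds in practice; applying SMOTE before splitting would leak synthetic copies of test-fold minority samples into training.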


Cancers ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 2111
Author(s):  
Bo-Wei Zhao ◽  
Zhu-Hong You ◽  
Lun Hu ◽  
Zhen-Hao Guo ◽  
Lei Wang ◽  
...  

Identification of drug-target interactions (DTIs) is a significant step in the drug discovery and repositioning process. Compared with time-consuming and labor-intensive in vivo experimental methods, computational models can provide high-quality DTI candidates almost instantly. In this study, we propose a novel method called LGDTI to predict DTIs based on large-scale graph representation learning. LGDTI captures both the local and global structural information of the graph: the first-order neighbor information of nodes is aggregated by a graph convolutional network (GCN), while the high-order neighbor information is learned by the graph embedding method DeepWalk. Finally, the two kinds of features are fed into a random forest classifier to train and predict potential DTIs. The results show that our method obtained an area under the receiver operating characteristic curve (AUROC) of 0.9455 and an area under the precision-recall curve (AUPR) of 0.9491 under 5-fold cross-validation. We also compare the presented method with existing state-of-the-art methods. These results imply that LGDTI can efficiently and robustly capture undiscovered DTIs, and we expect the proposed model to bring new inspiration and provide novel perspectives to relevant researchers.
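
As described, LGDTI fuses two views of the graph: GCN-style aggregation of first-order neighbors and DeepWalk embeddings of higher-order structure, concatenated and passed to a random forest. The sketch below illustrates that two-view design on a toy graph. A single untrained degree-normalized propagation step stands in for a learned GCN, and the graph, walk lengths, and embedding sizes are assumptions for illustration, not the paper's settings.

```python
# Sketch: combine one-hop neighbor aggregation with DeepWalk embeddings,
# then classify node pairs (candidate interactions) with a random forest.
import numpy as np
import networkx as nx
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

G = nx.karate_club_graph()                  # stand-in for a drug-target graph
A = nx.to_numpy_array(G) + np.eye(len(G))   # adjacency with self-loops
D_inv = np.diag(1.0 / A.sum(axis=1))
X = np.eye(len(G))                          # one-hot initial node features

# Local structure: one degree-normalized propagation step (the core of a GCN layer).
local = D_inv @ A @ X

# Global structure: DeepWalk = random walks + skip-gram embeddings.
rng = np.random.default_rng(0)
walks = []
for _ in range(10):                         # 10 walks of length 20 per node
    for node in G.nodes:
        walk, cur = [str(node)], node
        for _ in range(20):
            cur = rng.choice(list(G.neighbors(cur)))
            walk.append(str(cur))
        walks.append(walk)
w2v = Word2Vec(walks, vector_size=16, window=5, min_count=1, sg=1, seed=0)
glob = np.array([w2v.wv[str(n)] for n in G.nodes])

# Concatenate both views and train a random forest on node pairs.
feats = np.hstack([local, glob])
pairs = [(u, v) for u in G.nodes for v in G.nodes if u < v]
Xp = np.array([np.concatenate([feats[u], feats[v]]) for u, v in pairs])
yp = np.array([G.has_edge(u, v) for u, v in pairs])   # 1 = interacting pair
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xp, yp)
```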


2021 ◽  
Vol 376 (1821) ◽  
pp. 20190765 ◽  
Author(s):  
Giovanni Pezzulo ◽  
Joshua LaPalme ◽  
Fallon Durant ◽  
Michael Levin

Nervous systems’ computational abilities are an evolutionary innovation, specializing and speed-optimizing ancient biophysical dynamics. Bioelectric signalling originated in cells' communication with the outside world and with each other, enabling cooperation towards adaptive construction and repair of multicellular bodies. Here, we review the emerging field of developmental bioelectricity, which links the field of basal cognition to state-of-the-art questions in regenerative medicine, synthetic bioengineering and even artificial intelligence. One of the predictions of this view is that regeneration and regulative development can restore correct large-scale anatomies from diverse starting states because, like the brain, they exploit bioelectric encoding of distributed goal states, in this case pattern memories. We propose a new interpretation of recent stochastic regenerative phenotypes in planaria, by appealing to computational models of memory representation and processing in the brain. Moreover, we discuss novel findings showing that bioelectric changes induced in planaria can be stored in tissue for over a week, thus revealing that somatic bioelectric circuits in vivo can implement a long-term, re-writable memory medium. A consideration of the mechanisms, evolution and functionality of basal cognition makes novel predictions and provides an integrative perspective on the evolution, physiology and biomedicine of information processing in vivo. This article is part of the theme issue ‘Basal cognition: multicellularity, neurons and the cognitive lens’.


2018 ◽  
Vol 373 (1742) ◽  
pp. 20170031 ◽  
Author(s):  
Steven E. Hyman

An epochal opportunity to elucidate the pathogenic mechanisms of psychiatric disorders has emerged from advances in genomic technology, new computational tools and the growth of international consortia committed to data sharing. The resulting large-scale, unbiased genetic studies have begun to yield new biological insights and with them the hope that a half century of stasis in psychiatric therapeutics will come to an end. Yet a sobering picture is coming into view; it reveals daunting genetic and phenotypic complexity portending enormous challenges for neurobiology. Successful exploitation of results from genetics will require eschewal of long-successful reductionist approaches to investigation of gene function, a commitment to supplanting much research now conducted in model organisms with human biology, and development of new experimental systems and computational models to analyse polygenic causal influences. In short, psychiatric neuroscience must develop a new scientific map to guide investigation through a polygenic terra incognita. This article is part of a discussion meeting issue ‘Of mice and mental health: facilitating dialogue between basic and clinical neuroscientists’.


2018 ◽  
Author(s):  
Yang Xu ◽  
Barbara Claire Malt ◽  
Mahesh Srinivasan

One way that languages are able to communicate a potentially infinite set of ideas through a finite lexicon is by compressing emerging meanings into words, such that over time, individual words come to express multiple, related senses of meaning. We propose that overarching communicative and cognitive pressures have created systematic directionality in how new metaphorical senses have developed from existing word senses over the history of English. Given a large set of pairs of semantic domains, we used computational models to test which domains have been more commonly the starting points (source domains) and which the ending points (target domains) of metaphorical mappings over the past millennium. We found that a compact set of variables, including externality, embodiment, and valence, explain directionality in the majority of about 5000 metaphorical mappings recorded over the past 1100 years. These results provide the first large-scale historical evidence that metaphorical mapping is systematic, and driven by measurable communicative and cognitive principles.
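
A worked example of this kind of analysis: the sketch below predicts the direction of a metaphorical mapping from differences in externality, embodiment, and valence between two domains, using logistic regression. The data are synthetic stand-ins (the study analyzed roughly 5000 recorded mappings over 1100 years), so only the shape of the analysis, not the coefficients, is meaningful.

```python
# Sketch: model the probability that domain A serves as the metaphorical
# source as a function of how it differs from domain B on a few variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
# Difference scores between the two domains in each candidate pair.
d_external = rng.normal(size=n)   # how much more "external" domain A is
d_embodied = rng.normal(size=n)   # how much more embodied domain A is
d_valence = rng.normal(size=n)    # difference in affective valence
X = np.column_stack([d_external, d_embodied, d_valence])

# Toy generative assumption: more external/embodied domains tend to serve
# as sources, loosely mirroring the directionality reported above.
logits = 1.5 * d_external + 1.0 * d_embodied + 0.3 * d_valence
y = rng.random(n) < 1 / (1 + np.exp(-logits))  # 1 = domain A is the source

model = LogisticRegression()
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```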


Qui Parle ◽  
2021 ◽  
Vol 30 (1) ◽  
pp. 119-157
Author(s):  
Brett Zehner

This methodologically important essay aims to trace a genealogical account of Herbert Simon’s media philosophy and to contest the histories of artificial intelligence that overlook the organizational capacities of computational models. As Simon’s work demonstrates, humans’ subjection to large-scale organizations and divisions of labor is at the heart of artificial intelligence. As such, questions of procedures are key to understanding the power assumed by institutions wielding artificial intelligence. Most media-historical accounts of the development of contemporary artificial intelligence stem from the work of Warren S. McCulloch and Walter Pitts, especially the 1943 essay “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Yet Simon’s revenge is perhaps that reinforcement learning systems adopt his prescriptive approach to algorithmic procedures. Computer scientists criticized Simon for the performative nature of his artificially intelligent systems, mainly for his positivism, but he defended his positivism based on his belief that symbolic computation could stand in for any reality and in fact shape that reality. Simon was not looking to actually re-create human intelligence; he was using coercion, bad faith, and fraud as tactical weapons in the reordering of human decision-making. Artificial intelligence was the perfect medium for his explorations.


2020 ◽  
Author(s):  
Yu Wang ◽  
ZAHEER ULLAH KHAN ◽  
Shaukat Ali ◽  
Maqsood Hayat

Background: A bacteriophage, or phage, is a type of virus that replicates itself inside bacteria; it consists of genetic material surrounded by a protein structure. Bacteriophages play a vital role in phage therapy and genetic engineering, and phage and hydrolase enzyme proteins have a significant impact on the cure of pathogenic bacterial infections and disease treatment. Accurate identification of bacteriophage proteins is important for host subcellular localization, for further understanding of the interaction between phage and hydrolases, and for designing antibacterial drugs. Given this significance, several computational models have been developed alongside wet-laboratory methods, but their performance has been limited by inefficient feature schemes, redundancy, noise, and the lack of an intelligent learning engine. We therefore developed an innovative bi-layered model named DeepEnzyPred. A hybrid feature vector was obtained via a novel Multi-Level Multi-Threshold Subset Feature Selection (MLMT-SFS) algorithm, and a two-dimensional convolutional neural network was adopted as the baseline classifier.

Results: A discriminative hybrid feature set was obtained via a serial combination of CTD and KSAACGP features, and the optimal features were selected via the MLMT-SFS algorithm. Under 5-fold jackknife cross-validation, layer 1 recorded an accuracy of 91.6%, a sensitivity of 63.39%, a specificity of 95.72%, an MCC of 0.6049, and an ROC value of 0.8772, while layer 2 achieved an accuracy of 96.05%, a sensitivity of 96.22%, a specificity of 95.91%, an MCC of 0.9219, and an ROC value of 0.9899.

Conclusion: This paper presents a robust and effective classification model for bacteriophage proteins and their types. Primitive features were extracted via CTD and KSAACGP, and a novel method (MLMT-SFS) was devised to derive an optimal hybrid feature space from them. The results obtained over the hybrid feature space with the 2D-CNN show excellent classification performance. Based on these results, we believe the developed predictor will be a valuable resource for large-scale discrimination of unknown phage and hydrolase enzymes in particular and for new antibacterial drug design in pharmaceutical companies in general.
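
A minimal sketch of the bi-layered design described above. The exact MLMT-SFS algorithm is specific to the paper, so a simple per-family mutual-information filter (a different cutoff per feature block) stands in for it, and a random forest stands in for the tuned 2D-CNN to keep the sketch compact; feature sizes, block boundaries, and labels are hypothetical.

```python
# Sketch: filter a hybrid feature space block by block, then train
# layer 1 (phage vs. non-phage) and layer 2 (enzyme type) classifiers.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((600, 300))        # hypothetical CTD + KSAACGP hybrid features
y1 = rng.integers(0, 2, 600)      # layer 1: phage (1) vs. non-phage (0)
y2 = rng.integers(0, 3, 600)      # layer 2: enzyme type, used for phage rows

# Stand-in for MLMT-SFS: rank features by mutual information with the
# layer-1 label and keep a different top-k within each feature family.
mi = mutual_info_classif(X, y1, random_state=0)
keep = np.zeros(X.shape[1], dtype=bool)
keep[np.argsort(mi[:150])[-60:]] = True        # CTD block: keep best 60
keep[150 + np.argsort(mi[150:])[-40:]] = True  # KSAACGP block: keep best 40
X_sel = X[:, keep]

# Layer 1 screens for phage proteins; layer 2 types the positives.
layer1 = RandomForestClassifier(random_state=0).fit(X_sel, y1)
phage = y1 == 1
layer2 = RandomForestClassifier(random_state=0).fit(X_sel[phage], y2[phage])
```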

