Computational ecology as an emerging science

2012 ◽  
Vol 2 (2) ◽  
pp. 241-254 ◽  
Author(s):  
Sergei Petrovskii ◽  
Natalia Petrovskaya

It has long been recognized that numerical modelling and computer simulations can be used as a powerful research tool to understand, and sometimes to predict, the tendencies and peculiarities in the dynamics of populations and ecosystems. It has been much less appreciated, however, that the context of modelling and simulations in ecology is essentially different from that normally found in other natural sciences. In our paper, we review the computational challenges arising in modern ecology in the spirit of computational mathematics, i.e. with our main focus on the choice and use of adequate numerical methods. Somewhat paradoxically, the complexity of ecological problems does not always require the use of complex computational methods. This paradox is easily resolved, however, once we recall that the application of sophisticated computational methods usually requires a clear and unambiguous mathematical problem statement as well as clearly defined benchmark information for model validation. At the same time, many ecological problems still lack a mathematically accurate and unambiguous description, and the available field data are often very noisy, so it can be hard to understand how the results of computations should be interpreted from the ecological viewpoint. In this scientific context, computational ecology has to deal with a new paradigm: conventional issues of numerical modelling, such as convergence and stability, become less important than the qualitative analysis that can be provided with the help of computational techniques. We discuss this paradigm by considering computational challenges arising in several specific ecological applications.

Author(s):  
Wai-Tat Fu ◽  
Mingkun Gao ◽  
Hyo Jin Do

From the Arab Spring to presidential elections, various forms of online social media, forums, and networking platforms have been playing increasingly significant roles in our societies. These emerging socio-computer interactions demand new methods for understanding how various design features of online tools may moderate the percolation of information and gradually shape social opinions, influence social choices, and moderate collective action. This chapter starts with a review of the literature on the different ways technologies impact social phenomena, with a special focus on theories that characterize how social processes are moderated by various design features of user interfaces. It then reviews different theory-based computational methods derived from these theories to study socio-computer interaction at various levels. Specific examples of computational techniques are reviewed to illustrate how they can be useful for influencing social processes for various purposes. The chapter ends with a discussion of how future technologies should be designed to improve socio-computer interaction.


2019 ◽  
Vol 11 (1) ◽  
pp. 59-82 ◽  
Author(s):  
Yongyang Cai

Computational methods are required to solve problems without closed-form solutions in environmental and resource economics. Efficiency, stability, and accuracy are key elements of computational methods. This review discusses state-of-the-art computational methods applied in environmental and resource economics, including optimal control methods for deterministic models, advances in value function iteration and time iteration for general dynamic stochastic problems, nonlinear certainty equivalent approximation, robust decision making, real option analysis, bilevel optimization, solution methods for continuous-time problems, and so on. This review also clarifies the so-called curse of dimensionality and discusses computational techniques such as approximation methods that avoid the curse of dimensionality and time-dependent approximation domains. Many existing economic models use simplifying and/or unrealistic assumptions, citing computational feasibility, but these assumptions might be relaxed by choosing an efficient computational method discussed in this review.
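To make the flavour of value function iteration concrete, here is a minimal sketch for a textbook deterministic growth model with log utility (the model, grid and parameters are illustrative choices, not taken from the review):

```python
import numpy as np

# Value function iteration for a toy growth model:
#   V(k) = max_{k'} { log(f(k) - k') + beta * V(k') },  f(k) = k**alpha
# Grid-based Bellman iteration; parameters are purely illustrative.
alpha, beta = 0.3, 0.9
grid = np.linspace(0.05, 0.5, 200)           # capital grid
f = grid ** alpha                            # output at each grid point

V = np.zeros_like(grid)
for _ in range(500):                         # iterate the Bellman operator
    # c[i, j] = f(k_i) - k'_j; infeasible choices get utility -inf
    c = f[:, None] - grid[None, :]
    u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    V_new = np.max(u + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:     # sup-norm convergence check
        V = V_new
        break
    V = V_new

policy = grid[np.argmax(u + beta * V[None, :], axis=1)]  # optimal k'
```

For this particular model a closed-form solution exists (k' = αβ·k^α), so the grid policy can be checked against it; in the general stochastic problems the review discusses, no such closed form is available and the iteration itself is the method of choice.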


Molecules ◽  
2020 ◽  
Vol 25 (9) ◽  
pp. 2229 ◽  
Author(s):  
Valentina Tortosa ◽  
Valentina Pietropaolo ◽  
Valentina Brandi ◽  
Gabriele Macari ◽  
Andrea Pasquadibisceglie ◽  
...  

Butylated hydroxytoluene (BHT) is one of the most commonly used synthetic antioxidants in food, cosmetic, pharmaceutical and petrochemical products. BHT is considered safe for human health; however, its widespread use, together with its potential toxicological effects, has increased consumers' concern about the use of this synthetic food additive. In addition, the estimated daily intake of BHT has been demonstrated to exceed the recommended acceptable threshold. In the present work, using BHT as a case study, the usefulness of computational techniques, such as reverse screening and molecular docking, in identifying the protein–ligand interactions of food additives that underlie their toxicological effects has been probed. The computational methods employed here have been useful for identifying several potential unknown targets of BHT, suggesting a possible explanation for its toxic effects. In silico analyses can be employed to identify new macromolecular targets of synthetic food additives and to explore their functional mechanisms or side effects. Notably, this could be important in cases where there is an evident lack of experimental studies, as is the case for BHT.


Molecules ◽  
2020 ◽  
Vol 25 (20) ◽  
pp. 4783
Author(s):  
Reinier Cárdenas ◽  
Javier Martínez-Seoane ◽  
Carlos Amero

Experimental methods are indispensable for the study of the function of biological macromolecules, not just as static structures, but as dynamic systems that change conformation, bind partners, perform reactions, and respond to different stimuli. However, providing a detailed structural interpretation of the results is often a very challenging task. While experimental and computational methods are often considered two different and separate approaches, the power and utility of combining them is undeniable. The integration of experimental data with computational techniques can assist and enrich the interpretation, providing new, detailed molecular understanding of the systems. Here, we briefly describe the basic principles of how experimental data can be combined with computational methods to obtain insights into the molecular mechanism and expand the interpretation through the generation of detailed models.


2019 ◽  
Vol 21 (10) ◽  
pp. 789-797 ◽  
Author(s):  
Tianyun Wang ◽  
Lei Chen ◽  
Xian Zhao

Aim and Objective: Several diseases have complicated mechanisms that a single drug cannot treat well, because such diseases involve several targets and single-target drugs cannot modulate these targets simultaneously. Drug combination is an effective way to treat such diseases. However, determining effective drug combinations by traditional methods is time-consuming and costly, so quick and inexpensive methods are urgently needed. Designing effective computational methods that incorporate advanced computational techniques to predict drug combinations is an alternative and feasible way. Method: In this study, we proposed a novel network embedding method that extracts topological features of each drug combination from a drug network constructed using chemical-chemical interaction information retrieved from STITCH. These topological features were combined with individual features of drug combinations reported in one previous study. Several advanced computational methods were employed to construct an effective prediction model, including the synthetic minority oversampling technique (SMOTE), which was used to tackle the imbalanced dataset, and the minimum redundancy maximum relevance (mRMR) and incremental feature selection (IFS) methods, which were adopted to analyze features and extract optimal features for building an optimal support vector machine (SVM) classifier. Results and Conclusion: The constructed optimal SVM classifier yielded an MCC of 0.806, which is superior to a classifier using only individual features, with or without SMOTE. The performance of the classifier can be improved by combining the topological features and essential features of a drug combination.
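As an illustration of one ingredient of the pipeline above, the following is a minimal numpy sketch of SMOTE-style oversampling on a synthetic imbalanced dataset (a simplification of the standard algorithm for illustration, not the authors' code):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize n_new minority samples by
    interpolating a random minority sample toward one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    neighbours = np.argsort(d, axis=1)[:, :k]   # k nearest per sample
    base = rng.integers(0, n, size=n_new)       # seed samples
    nbr = neighbours[base, rng.integers(0, k, size=n_new)]
    lam = rng.random((n_new, 1))                # interpolation weights
    return X_min[base] + lam * (X_min[nbr] - X_min[base])

# balance a toy imbalanced dataset: 100 majority vs 20 minority samples
rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, (100, 3))
X_min = rng.normal(3.0, 1.0, (20, 3))
X_syn = smote(X_min, n_new=80, k=5, rng=1)
X_bal = np.vstack([X_maj, X_min, X_syn])        # now 100 vs 100
```

Because each synthetic point is a convex combination of two minority samples, the oversampled class stays inside the minority region of feature space, which is what makes SMOTE preferable to simple duplication when training an SVM on imbalanced data.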


2005 ◽  
Vol 128 (1) ◽  
pp. 355-359 ◽  
Author(s):  
Abhijit Gosavi ◽  
Shantanu Phatakwala

Background: Form-error measurement is mandatory for the quality assurance of manufactured parts and plays a critical role in precision engineering. There is now a significant literature on analytical methods of form-error measurement, which either use mathematical properties of the relevant objective function or develop a surrogate for the objective function that is more suitable in optimization. On the other hand, computational or numerical methods, which only require the numeric values of the objective function, are less studied in the literature on form-error metrology. Method of Approach: In this paper, we develop a methodology based on the theory of finite-differences derivative descent, which is of a computational nature, for measuring form error in a wide spectrum of features, including straightness, flatness, circularity, sphericity, and cylindricity. For measuring form-error in cylindricity, we also develop a mathematical model that can be used suitably in any computational technique. A goal of this research is to critically evaluate the performance of two computational methods, namely finite-differences and Nelder-Mead, in form-error metrology. Results: Empirically, we find encouraging evidence with the finite-differences approach. Many of the data sets used in experimentation are from the literature. We show that the finite-differences approach outperforms the Nelder-Mead technique in sphericity and cylindricity. Conclusions: Our encouraging empirical evidence with computational methods (like finite differences) indicates that these methods may require closer research attention in the future as the need for more accurate methods increases. A general conclusion from our work is that when analytical methods are unavailable, computational techniques form an efficient route for solving these problems.
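The flavour of a finite-differences approach can be sketched for circularity: the width of the radial band between the largest and smallest point-to-centre distances is minimized over the centre position by descent along a finite-difference gradient (a simplified illustration on synthetic data, not the authors' implementation):

```python
import numpy as np

def circularity_error(c, pts):
    """Radial band width: max minus min distance from centre c to the points."""
    r = np.linalg.norm(pts - c, axis=1)
    return r.max() - r.min()

def fd_descent(f, x0, pts, h=1e-6, step=0.1, iters=2000):
    """Derivative-free descent using central finite differences
    (illustrative sketch of the finite-differences idea)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = np.array([(f(x + h * e, pts) - f(x - h * e, pts)) / (2 * h)
                      for e in np.eye(len(x))])
        x_new = x - step * g
        if f(x_new, pts) >= f(x, pts):   # no improvement: shrink the step
            step *= 0.5
            if step < 1e-12:
                break
        else:
            x = x_new
    return x

# synthetic measurement data: points on a circle of radius 5 centred at (2, -1)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 60)
r = 5 + rng.normal(0, 0.01, 60)          # small form deviation
pts = np.c_[2 + r * np.cos(theta), -1 + r * np.sin(theta)]

centre = fd_descent(circularity_error, pts.mean(axis=0), pts)
error = circularity_error(centre, pts)
```

The objective here is nonsmooth (a max minus a min), which is exactly why derivative-free methods such as finite differences and Nelder-Mead, which need only numeric function values, are attractive in form-error metrology.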


2013 ◽  
Vol 93 (2) ◽  
pp. 767-802 ◽  
Author(s):  
Dan Gordon ◽  
Rong Chen ◽  
Shin-Ho Chung

The discovery of new drugs that selectively block or modulate ion channels has great potential to provide new treatments for a host of conditions. One promising avenue revolves around modifying or mimicking certain naturally occurring ion channel modulator toxins. This strategy appears to offer the prospect of designing drugs that are both potent and specific. The use of computational modeling is crucial to this endeavor, as it has the potential to provide lower cost alternatives for exploring the effects of new compounds on ion channels. In addition, computational modeling can provide structural information and theoretical understanding that is not easily derivable from experimental results. In this review, we look at the theory and computational methods that are applicable to the study of ion channel modulators. The first section provides an introduction to various theoretical concepts, including force-fields and the statistical mechanics of binding. We then look at various computational techniques available to the researcher, including molecular dynamics, Brownian dynamics, and molecular docking systems. The latter section of the review explores applications of these techniques, concentrating on pore blocker and gating modifier toxins of potassium and sodium channels. After first discussing the structural features of these channels, and their modes of block, we provide an in-depth review of past computational work that has been carried out. Finally, we discuss prospects for future developments in the field.
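As a toy illustration of the Brownian dynamics technique mentioned above, the following integrates a single particle in a one-dimensional harmonic well with the Euler–Maruyama scheme; reduced units and all parameters are illustrative and unrelated to any real channel:

```python
import numpy as np

# Overdamped Brownian dynamics:
#   x_{n+1} = x_n - (dt/gamma) * U'(x_n) + sqrt(2 kT dt / gamma) * N(0, 1)
# with U(x) = k x^2 / 2 (harmonic well), all in reduced units.
kT, gamma, k_spring = 1.0, 1.0, 4.0
dt, n_steps = 1e-3, 500_000

rng = np.random.default_rng(0)
kicks = np.sqrt(2 * kT * dt / gamma) * rng.standard_normal(n_steps)

x = 0.0
traj = np.empty(n_steps)
for i in range(n_steps):
    x += dt * (-k_spring * x) / gamma + kicks[i]   # Euler-Maruyama step
    traj[i] = x

# equipartition check: Var(x) should approach kT / k_spring = 0.25
var_est = traj[n_steps // 2:].var()
```

Checking the sampled variance against the analytic equilibrium value is the one-dimensional analogue of the validation that real Brownian dynamics studies of ion permeation perform against known conductance data.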


2018 ◽  
Vol 15 (138) ◽  
pp. 20170821 ◽  
Author(s):  
Aurore Lyon ◽  
Ana Mincholé ◽  
Juan Pablo Martínez ◽  
Pablo Laguna ◽  
Blanca Rodriguez

Widely used for clinical screening, electrocardiogram (ECG) recordings capture the cardiac electrical activity from the body surface. ECG analysis can therefore be a crucial first step to help diagnose, understand and predict cardiovascular disorders, which are responsible for 30% of deaths worldwide. Computational techniques, and more specifically machine learning techniques and computational modelling, are powerful tools for classification, clustering and simulation, and they have recently been applied to the analysis of medical data, especially ECG data. This review describes the computational methods in use for ECG analysis, with a focus on machine learning and 3D computer simulations, as well as their accuracy, clinical implications and contributions to medical advances. The first section focuses on heartbeat classification and the techniques developed to extract and distinguish abnormal beats from regular ones. The second section focuses on patient diagnosis from whole recordings, applied to different diseases. The third section presents real-time diagnosis and applications to wearable devices. The fourth section highlights the recent field of personalized ECG computer simulations and their interpretation. Finally, the discussion section outlines the challenges of ECG analysis and provides a critical assessment of the methods presented. The computational methods reported in this review are a strong asset for medical discoveries, and their translation to the clinical world may lead to promising advances.
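For a sense of what heartbeat classification involves at its simplest, here is a toy nearest-centroid classifier on synthetic beat waveforms; it is a deliberately minimal stand-in for the far richer machine learning pipelines surveyed in the review, and all signals are synthetic:

```python
import numpy as np

# Synthetic "beats" are short waveforms: a narrow tall bump for regular
# beats and a wide low bump for abnormal ones, plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 50)

def beat(width, amp, n):
    # Gaussian bump as a crude beat template
    return amp * np.exp(-(t / width) ** 2) + rng.normal(0, 0.05, (n, t.size))

X_train = np.vstack([beat(0.2, 1.0, 40), beat(0.5, 0.6, 40)])
y_train = np.array([0] * 40 + [1] * 40)        # 0 = regular, 1 = abnormal

# nearest-centroid rule: one mean waveform per class
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

X_test = np.vstack([beat(0.2, 1.0, 10), beat(0.5, 0.6, 10)])
y_test = np.array([0] * 10 + [1] * 10)
d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=-1)
y_pred = d.argmin(axis=1)
accuracy = (y_pred == y_test).mean()
```

Real ECG pipelines replace the raw-waveform distance with engineered or learned features and far stronger classifiers, but the template-versus-beat comparison is the same underlying idea.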


2019 ◽  
Vol 21 (5) ◽  
pp. 1676-1696 ◽  
Author(s):  
Zhen Chen ◽  
Pei Zhao ◽  
Fuyi Li ◽  
Yanan Wang ◽  
A Ian Smith ◽  
...  

RNA post-transcriptional modifications play a crucial role in a myriad of biological processes and cellular functions. To date, more than 160 RNA modifications have been discovered; therefore, accurate identification of RNA-modification sites is fundamental for a better understanding of RNA-mediated biological functions and mechanisms. However, due to limitations in experimental methods, systematic identification of different types of RNA-modification sites remains a major challenge. Recently, more than 20 computational methods have been developed to identify RNA-modification sites in tandem with high-throughput experimental methods, with most of these capable of predicting only single types of RNA-modification sites. These methods show high diversity in their dataset size, data quality, core algorithms, features extracted, feature selection techniques and evaluation strategies. Therefore, there is an urgent need to revisit these methods and summarize their methodologies, in order to improve and further develop computational techniques to identify and characterize RNA-modification sites from the large amounts of sequence data. With this goal in mind, first, we provide a comprehensive survey of a large collection of 27 state-of-the-art approaches for predicting N1-methyladenosine and N6-methyladenosine sites. We cover a variety of important aspects that are crucial for the development of successful predictors, including dataset quality, operating algorithms, sequence and genomic features, feature selection, model performance evaluation and software utility. In addition, we also provide our thoughts on potential strategies to improve model performance. Second, we propose a computational approach called DeepPromise, based on deep learning techniques, for simultaneous prediction of N1-methyladenosine and N6-methyladenosine.
To extract the sequence context surrounding the modification sites, three feature encodings, including enhanced nucleic acid composition, one-hot encoding, and RNA embedding, were used as the input to seven consecutive layers of convolutional neural networks (CNNs), respectively. Moreover, DeepPromise further combined the prediction score of the CNN-based models and achieved around 43% higher area under receiver-operating curve (AUROC) for m1A site prediction and 2–6% higher AUROC for m6A site prediction, respectively, when compared with several existing state-of-the-art approaches on the independent test. In-depth analyses of characteristic sequence motifs identified from the convolution-layer filters indicated that nucleotide presentation at proximal positions surrounding the modification sites contributed most to the classification, whereas those at distal positions also affected classification but to different extents. To maximize user convenience, a web server was developed as an implementation of DeepPromise and made publicly available at http://DeepPromise.erc.monash.edu/, with the server accepting both RNA sequences and genomic sequences to allow prediction of two types of putative RNA-modification sites.
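Two of the feature encodings named above can be sketched in a few lines of numpy; the ENAC window size below is an illustrative guess, not necessarily the value used by DeepPromise, and the sequence is a toy fragment:

```python
import numpy as np

ALPHABET = "ACGU"

def one_hot(seq):
    """One-hot encoding: a (len(seq), 4) binary matrix, one row per base."""
    idx = np.array([ALPHABET.index(n) for n in seq])
    return np.eye(4)[idx]

def enac(seq, window=5):
    """Enhanced nucleic acid composition, interpreted here as sliding-window
    nucleotide frequencies: one row of 4 frequencies per window position."""
    rows = []
    for i in range(len(seq) - window + 1):
        w = seq[i:i + window]
        rows.append([w.count(n) / window for n in ALPHABET])
    return np.array(rows)

seq = "GGACUACGUA"          # toy fragment around a putative modification site
X_onehot = one_hot(seq)     # shape (10, 4), rows are unit indicator vectors
X_enac = enac(seq)          # shape (6, 4), each row sums to 1
```

Matrices of this shape (positions × channels) are exactly what a 1D convolutional network consumes, which is why such encodings pair naturally with the CNN layers described above.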


2013 ◽  
Vol 09 (01) ◽  
pp. 13-26 ◽  
Author(s):  
AMIT KUMAR ◽  
BABBAR NEETU ◽  
ABHINAV BANSAL

In this paper, we discuss two new computational techniques for solving a generalized fully fuzzy linear system (FFLS) with arbitrary triangular fuzzy numbers (m, α, β). The methods eliminate the non-negativity restriction on the fuzzy coefficient matrix that has been imposed by almost every method in the literature, and rely on the decomposition of the dual FFLS into a crisp linear system that can be further solved by a variety of classical methods. To illustrate the proposed methods, numerical examples are solved and the obtained results are discussed. The methods offer several advantages over the existing methods for solving a simple or dual FFLS.
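For orientation, the classical special case that the paper generalizes can be sketched directly: when the crisp coefficient matrix is non-negative, a system with triangular fuzzy numbers (m, α, β) splits into three crisp linear systems for the centres and the two spreads. The numbers below are illustrative, and removing this non-negativity restriction is precisely the paper's contribution:

```python
import numpy as np

# A x~ = b~ with triangular fuzzy numbers (m, alpha, beta) and a crisp
# NON-NEGATIVE coefficient matrix A: the system decomposes into three
# crisp systems, one for centres and one for each spread.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # crisp non-negative coefficients

b_m = np.array([10.0, 11.0])               # centres of the fuzzy RHS
b_alpha = np.array([2.0, 2.0])             # left spreads
b_beta = np.array([3.0, 3.0])              # right spreads

x_m = np.linalg.solve(A, b_m)              # centres of the fuzzy solution
x_alpha = np.linalg.solve(A, b_alpha)      # left spreads of the solution
x_beta = np.linalg.solve(A, b_beta)        # right spreads of the solution
```

The solution is meaningful only if the computed spreads come out non-negative; when A has entries of mixed sign, this simple decomposition breaks down, which is the situation the paper's dual-FFLS methods are designed to handle.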

