General models of ecological diversification. II. Simulations and empirical applications

Paleobiology ◽  
2016 ◽  
Vol 42 (2) ◽  
pp. 209-239 ◽  
Author(s):  
Philip M. Novack-Gottshall

Abstract: Models of functional ecospace diversification within life-habit frameworks (functional-trait spaces) are increasingly used across community ecology, functional ecology, and paleoecology. In general, these models can be represented by four basic processes, three that have driven causes and one that occurs through a passive process. The driven models include redundancy (caused by forms of functional canalization), partitioning (specialization), and expansion (divergent novelty), but they also share important dynamical similarities with the passive neutral model. In this second of two companion articles, Monte Carlo simulations of these models are used to illustrate their basic statistical dynamics across a range of data structures and implementations. Ecospace frameworks with greater numbers of characters (functional traits) and ordered (multistate) character types provide more distinct dynamics and greater ability to distinguish the models, but the general dynamics tend to be congruent across all implementations. Classification-tree methods are proposed as a powerful means to select among multiple candidate models when using multivariate data sets. Well-preserved Late Ordovician (type Cincinnatian) samples from the Kope and Waynesville formations are used to illustrate how these models can be inferred in empirical applications. Initial simulations overestimate the ecological disparity of actual assemblages, confirming that actual life habits are highly constrained. Modifications incorporating more realistic assumptions (such as weighting potential life habits according to actual frequencies and adding a parameter controlling the strength of each model’s rules) provide better correspondence to actual assemblages. Samples from both formations are best fit by partitioning (and to a lesser extent redundancy) models, consistent with a role for local processes. When aggregated as an entire formation, the Kope Formation pool remains best fit by the partitioning model, whereas the entire Waynesville pool is better fit by the redundancy model, implying greater beta diversity within this unit. The ‘ecospace’ package is provided to implement the simulations and to calculate their dynamics using the R statistical language.
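As a point of reference, the kind of stochastic ecospace filling described here can be sketched in a few lines of base R. The sketch below is an illustrative stand-in, not the API of the author’s ‘ecospace’ package: the function names, parameter values, and the ten-character binary framework are assumptions chosen only to contrast the neutral and redundancy dynamics.

```r
## Illustrative stand-in (base R) for stochastic ecospace filling under two of
## the four models; NOT the 'ecospace' package API. Framework: 10 binary
## functional characters, taxa added one at a time.
set.seed(1)
n_char <- 10
n_taxa <- 50

# Neutral model: each new taxon draws a life habit at random (passive filling).
simulate_neutral <- function(n_taxa, n_char) {
  matrix(sample(0:1, n_taxa * n_char, replace = TRUE), nrow = n_taxa)
}

# Redundancy model: each new taxon copies an existing life habit, flipping
# characters only rarely (functional canalization).
simulate_redundancy <- function(n_taxa, n_char, p_flip = 0.02) {
  eco <- matrix(NA, n_taxa, n_char)
  eco[1, ] <- sample(0:1, n_char, replace = TRUE)
  for (i in 2:n_taxa) {
    parent <- eco[sample(i - 1, 1), ]
    flip <- runif(n_char) < p_flip
    eco[i, ] <- ifelse(flip, 1 - parent, parent)
  }
  eco
}

# Disparity through time: mean pairwise distance among the taxa added so far.
disparity_curve <- function(eco) {
  sapply(2:nrow(eco), function(i) mean(dist(eco[1:i, ], method = "manhattan")))
}

neutral_disp   <- disparity_curve(simulate_neutral(n_taxa, n_char))
redundant_disp <- disparity_curve(simulate_redundancy(n_taxa, n_char))

# Redundancy stays clustered (low disparity) while neutral filling rises toward
# the random expectation -- the contrast that model selection exploits.
matplot(cbind(neutral_disp, redundant_disp), type = "l", lty = 1,
        xlab = "Taxa added", ylab = "Mean pairwise distance")
```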

2020 ◽  
Vol 51 (1) ◽  
pp. 533-560 ◽  
Author(s):  
Joseph A. Tobias ◽  
Jente Ottenburghs ◽  
Alex L. Pigot

The origin, distribution, and function of biological diversity are fundamental themes of ecology and evolutionary biology. Research on birds has played a major role in the history and development of these ideas, yet progress was for many decades limited by a focus on patterns of current diversity, often restricted to particular clades or regions. Deeper insight is now emerging from a recent wave of integrative studies combining comprehensive phylogenetic, environmental, and functional trait data at unprecedented scales. We review these empirical advances and describe how they are reshaping our understanding of global patterns of bird diversity and the processes by which it arises, with implications for avian biogeography and functional ecology. Further expansion and integration of data sets may help to resolve longstanding debates about the evolutionary origins of biodiversity and offer a framework for understanding and predicting the response of ecosystems to environmental change.


Paleobiology ◽  
2016 ◽  
Vol 42 (2) ◽  
pp. 185-208 ◽  
Author(s):  
Philip M. Novack-Gottshall

Abstract: Evolutionary paleoecologists have proposed many explanations for Phanerozoic trends in ecospace utilization, including escalation, seafood through time, filling of an empty ecospace, and tiering, among others. These hypotheses can be generalized into four models of functional diversification within a life-habit ecospace framework (functional-trait space). The models also incorporate concepts in community assembly, functional diversity, evolutionary diversification, and morphological disparity. The redundancy model produces an ecospace composed of clusters of functionally similar taxa. The partitioning model produces an ecospace that is progressively subdivided by taxa along life-habit gradients. The expansion model produces an ecospace that becomes progressively enlarged by the accumulation of taxa with novel life habits. These models can be caused by a wide range of ecological and evolutionary processes, but they are all caused by particular “driven” mechanisms. A fourth, neutral model also exists, in which ecospace is filled at random by life habits: this model can serve as a passive null model. Each model produces distinct dynamics for functional diversity/disparity statistics in stochastic simulations of ecospace diversification. In this first of two companion articles, I summarize the theoretical bases of these models, describe their expected statistical dynamics, and discuss their relevance to important paleoecological trends and theories. Although most synoptic interpretations of Phanerozoic ecological history invoke one or more of the driven models, I argue that this conclusion is premature until tests are conducted that provide better statistical support for them over simpler passive models.


2021 ◽  
Vol 12 (2) ◽  
pp. 317-334
Author(s):  
Omar Alaqeeli ◽  
Li Xing ◽  
Xuekui Zhang

The classification tree is a widely used machine learning method with multiple implementations as R packages: rpart, ctree, evtree, tree, and C5.0. The details of these implementations are not the same, and hence their performances differ from one application to another. We are interested in their performance in the classification of cells using single-cell RNA-sequencing data. In this paper, we conducted a benchmark study using 22 single-cell RNA-sequencing data sets. Using cross-validation, we compared the packages’ prediction performance based on precision, recall, F1-score, and area under the curve (AUC). We also compared the complexity and run time of these R packages. Our study shows that rpart and evtree have the best precision; evtree is the best in recall, F1-score, and AUC; C5.0 prefers more complex trees; and tree is consistently much faster than the others, although its complexity is often higher.
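A minimal sketch of this kind of cross-validated comparison, assuming rpart only, is given below; ctree, evtree, tree, and C5.0 slot into the same loop with their own fitting calls. The data frame cells and its factor column cell_type are hypothetical placeholders for one of the single-cell RNA-sequencing data sets.

```r
## Minimal sketch of a cross-validated benchmark, assuming rpart only.
## `cells` and `cell_type` are hypothetical placeholders.
library(rpart)

cv_macro_f1 <- function(data, label, k = 5) {
  folds  <- sample(rep(1:k, length.out = nrow(data)))
  scores <- numeric(k)
  for (i in 1:k) {
    train <- data[folds != i, ]
    test  <- data[folds == i, ]
    fit   <- rpart(as.formula(paste(label, "~ .")), data = train, method = "class")
    pred  <- predict(fit, newdata = test, type = "class")
    truth <- test[[label]]
    # Macro-averaged F1 across cell types
    f1 <- sapply(levels(truth), function(cl) {
      tp        <- sum(pred == cl & truth == cl)
      precision <- tp / max(sum(pred == cl), 1)
      recall    <- tp / max(sum(truth == cl), 1)
      if (precision + recall == 0) 0 else 2 * precision * recall / (precision + recall)
    })
    scores[i] <- mean(f1)
  }
  mean(scores)
}

# Example call on a placeholder data set:
# cv_macro_f1(cells, "cell_type", k = 5)
```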


2017 ◽  
Vol 26 (11) ◽  
pp. 1750124 ◽  
Author(s):  
E. Ebrahimi ◽  
H. Golchin ◽  
A. Mehrabi ◽  
S. M. S. Movahed

In this paper, we investigate the ghost dark energy model in the presence of a nonlinear interaction between dark energy and dark matter. We also extend the analysis to the so-called generalized ghost dark energy (GGDE), for which [Formula: see text]. The model contains three free parameters, [Formula: see text] and [Formula: see text] (the coupling coefficient of the interaction). We propose three kinds of nonlinear interaction terms and discuss the behavior of the equation-of-state, deceleration, and dark energy density parameters of the model. We also find the squared sound speed and search for signs of stability of the model. To compare the interacting GGDE model with observational data sets, we use recent observational results, namely SNIa from the JLA catalog, the Hubble parameter, baryon acoustic oscillations, and the most relevant CMB parameters, including the positions of the acoustic peaks, the shift parameters, and the redshift to recombination. For GGDE with the first nonlinear interaction, the joint analysis indicates that [Formula: see text], [Formula: see text] and [Formula: see text] at the 1σ level. For the second interaction, the best-fit values at [Formula: see text] confidence are [Formula: see text], [Formula: see text] and [Formula: see text]. According to the combination of all observational data sets considered in this paper, the best-fit values for the third nonlinearly interacting model are [Formula: see text], [Formula: see text] and [Formula: see text] at the [Formula: see text] confidence level. Finally, we find that the presence of interaction in these models is compatible with current observational data sets.
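The inline formulas were lost from this record. For orientation only, the density form commonly used for generalized ghost dark energy, the interacting continuity equation, and a representative nonlinear interaction term can be written as below; these are assumptions drawn from the wider GGDE literature, not expressions recovered from the paper.

```latex
% Assumed reference forms (not recovered from the paper): GGDE density,
% interacting continuity equation, and a representative nonlinear coupling.
\rho_{D} = \alpha H + \beta H^{2}, \qquad
\dot{\rho}_{D} + 3H\,(1 + w_{D})\,\rho_{D} = -Q, \qquad
Q = 3 b^{2} H \,\frac{\rho_{D}\,\rho_{m}}{\rho_{D} + \rho_{m}}
```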


Author(s):  
Uzma Raja ◽  
Marietta J. Tretter

Open Source Software (OSS) has reached new levels of sophistication and acceptance by users and commercial software vendors. This research creates, tests, and validates a model for predicting the successful development of OSS projects. Widely available archival data on OSS projects was obtained from Sourceforge.net and analyzed with multiple data mining techniques. Initially, three competing models are created using logistic regression, decision trees, and neural networks. These models are compared for precision and refined in several phases. Text mining is used to create new variables that improve the predictive power of the models. The final model is chosen based on the best fit to separate training and validation data sets and the ability to explain the relationships among variables. Model robustness is determined by testing the final model on a new data set extracted from the Sourceforge repository. The results indicate that end-user involvement, project age, functionality, usage, project management techniques, project type, and team communication methods have a significant impact on the development of OSS projects.
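A minimal sketch of one model-comparison phase (logistic regression versus a decision tree, compared on precision against a held-out validation set) is given below in R. The data frame oss, its columns, and the synthetic data generator are hypothetical placeholders, not the study’s actual SourceForge variables.

```r
## Sketch of comparing two candidate models on precision. `oss` and its
## columns are synthetic placeholders, not the study's variables.
library(rpart)

set.seed(42)
n   <- 500
oss <- data.frame(project_age    = rpois(n, 24),
                  downloads      = rpois(n, 200),
                  num_developers = rpois(n, 3))
oss$success <- as.integer(oss$downloads + 20 * oss$num_developers +
                            rnorm(n, sd = 50) > 250)

idx      <- sample(n, 0.7 * n)
train    <- oss[idx, ]
validate <- oss[-idx, ]

# Logistic regression model of project success.
glm_fit  <- glm(success ~ project_age + downloads + num_developers,
                data = train, family = binomial)
glm_pred <- as.integer(predict(glm_fit, validate, type = "response") > 0.5)

# Decision tree on the same predictors.
tree_fit  <- rpart(factor(success) ~ project_age + downloads + num_developers,
                   data = train, method = "class")
tree_pred <- as.integer(as.character(predict(tree_fit, validate, type = "class")))

# Precision of each candidate model on the validation set.
precision <- function(pred, truth) sum(pred == 1 & truth == 1) / max(sum(pred == 1), 1)
c(logistic = precision(glm_pred, validate$success),
  tree     = precision(tree_pred, validate$success))
```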


2019 ◽  
Vol 26 (2) ◽  
pp. 290-310 ◽  
Author(s):  
Balaraju Jakkula ◽  
Govinda Raj M. ◽  
Murthy Ch.S.N.

Purpose: The load haul dumper (LHD) is one of the main ore-transporting machines used in the underground mining industry. The reliability of the LHD is crucial to achieving expected production targets, so the performance of the equipment should be maintained at its highest level. This can be accomplished only by reducing sudden breakdowns of components/subsystems in a complex system. Defective components/subsystems can be identified through downtime analysis, and it is therefore important to develop proper maintenance strategies for their replacement or repair; suitable maintenance management actions improve the performance of the equipment. This paper aims to discuss this issue.

Design/methodology/approach: Reliability analysis (a renewal approach) has been used to analyze the performance of the LHD machines. Best-fit distributions were assigned to the data sets using the Kolmogorov–Smirnov (K–S) test, and the parameters of the theoretical probability distributions were estimated by the maximum likelihood estimation (MLE) method.

Findings: The assumption that the data sets are independent and identically distributed (IID) was validated through trend and serial correlation tests; on the basis of the test results, the data sets conform to the IID assumption, so the renewal process approach was used for further investigation. The reliability of each individual subsystem was computed according to its best-fit distribution, and reliability-based preventive maintenance (PM) time schedules were calculated for an expected 90 percent reliability level.

Research limitations/implications: Because reliability analysis is a complex technique, it requires strategic decision-making knowledge to select the methodology. As the present case study is from a public sector company operating under financial constraints, the conclusions/findings may not be universally applicable.

Originality/value: The present study shows that this equipment needs a tailored maintenance schedule, partly because of the peculiar mining conditions under which it operates. The study focuses on estimating the performance of four well-mechanized LHD systems through reliability, availability, and maintainability (RAM) modeling. Based on the results, reasons for the performance drop of each machine were identified, and suitable recommendations were made to enhance the performance of this capital-intensive production equipment. Because maintenance management is the principal means of improving machine performance, PM time intervals were estimated with respect to the expected reliability level.
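A minimal sketch of the distribution-fitting and PM-interval step is shown below, assuming Weibull-distributed time-between-failure data simulated for illustration; the study’s field data and candidate distributions are not reproduced here.

```r
## Sketch of the distribution-fitting and PM-interval step, using simulated
## time-between-failure (TBF) data in place of the study's field data.
library(MASS)

set.seed(7)
tbf <- rweibull(60, shape = 1.4, scale = 120)   # synthetic TBF values, hours

# MLE parameter estimation for a candidate (Weibull) distribution.
fit       <- fitdistr(tbf, densfun = "weibull")
shape_hat <- fit$estimate["shape"]
scale_hat <- fit$estimate["scale"]

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution.
ks.test(tbf, "pweibull", shape = shape_hat, scale = scale_hat)

# Reliability-based PM interval: the operating time t at which R(t) = 0.90,
# i.e. the 10th percentile of the fitted time-to-failure distribution.
pm_interval <- qweibull(0.10, shape = shape_hat, scale = scale_hat)
pm_interval
```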


1995 ◽  
Vol 348 (1324) ◽  
pp. 203-209 ◽  

A seven-compartment model of the mixed layer ecosystem was used to fit a time series of observations derived from data obtained during the 1989 JGOFS North Atlantic Bloom Experiment. A nonlinear optimization technique was used to obtain the best fit to the combined observation set. It was discovered that a solution which gave a good fit to primary production gave a bad fit to zooplankton and vice versa. The solution which fitted primary production also showed good agreement with a number of other independent data sets, but overestimated bacterial production. Further development is necessary to create a model capable of reproducing all the important features of the nitrogen flows within the mixed layer.
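The fitting strategy, minimizing the misfit between model output and the combined observation set by nonlinear optimization, can be illustrated with a toy one-compartment example in R; the logistic uptake curve and parameter values below are assumptions for illustration, not the seven-compartment model itself.

```r
## Toy illustration of the fitting strategy: nonlinear optimization of model
## parameters against observations, here a one-compartment logistic curve
## rather than the seven-compartment mixed layer ecosystem model.
set.seed(3)
days   <- 1:60
true_p <- c(r = 0.15, K = 4)
obs    <- true_p["K"] / (1 + exp(-true_p["r"] * (days - 30))) + rnorm(60, sd = 0.2)

model <- function(p, t) p["K"] / (1 + exp(-p["r"] * (t - 30)))

# Cost function: sum of squared residuals over the combined observation set.
sse <- function(p) sum((obs - model(p, days))^2)

fit <- optim(c(r = 0.05, K = 2), sse)   # Nelder-Mead minimization
fit$par                                 # best-fit parameter estimates
```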


2019 ◽  
Vol 35 (06) ◽  
pp. 2050021
Author(s):  
Mohammad Nizam ◽  
Suman Bharti ◽  
Suprabh Prakash ◽  
Ushak Rahaman ◽  
S. Uma Sankar

The long-baseline neutrino experiments, T2K and NO[Formula: see text]A, have taken a significant amount of data in each of four channels: (a) [Formula: see text] disappearance, (b) [Formula: see text] disappearance, (c) [Formula: see text] appearance, and (d) [Formula: see text] appearance. There is a mild tension between the disappearance and the appearance data sets of T2K. A more serious tension exists between the [Formula: see text] appearance data of T2K and the [Formula: see text] appearance data of NO[Formula: see text]A. This tension is significant enough that T2K rules out the best-fit point of NO[Formula: see text]A at 95% confidence level, whereas NO[Formula: see text]A rules out the T2K best-fit point at 90% confidence level. We explain why these tensions arise. We also perform a combined fit of the T2K and NO[Formula: see text]A data and comment on its results.
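The idea of a combined fit, and why tension leaves an imprint on it, can be illustrated with a toy R example in which each experiment is approximated by a Gaussian chi-square in a single parameter; the numbers are arbitrary and are not the actual T2K or NOvA results.

```r
## Toy illustration of a combined fit: each experiment approximated by a
## Gaussian chi-square in one parameter, with best-fit values in tension.
## Numbers are arbitrary placeholders.
delta   <- seq(-pi, pi, length.out = 1000)
chi2_a  <- ((delta - (-1.8)) / 0.7)^2   # "experiment A"
chi2_b  <- ((delta - 0.5) / 0.9)^2      # "experiment B"
chi2_ab <- chi2_a + chi2_b              # combined fit: add the chi-squares

delta[which.min(chi2_ab)]   # joint best fit lies between the two experiments
min(chi2_ab)                # a nonzero minimum is the imprint of the tension
```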


2015 ◽  
Vol 14 (03) ◽  
pp. 521-533
Author(s):  
M. Sariyar ◽  
A. Borg

Deterministic record linkage (RL) is frequently regarded as a rival to more sophisticated strategies such as probabilistic RL. We investigate the effect of combining deterministic linkage with other linkage techniques. For this task, we use a simple deterministic linkage strategy as a preceding filter: a data pair is classified as ‘match’ if all values of the attributes considered agree exactly, and otherwise as ‘nonmatch’. This strategy is separately combined with two probabilistic RL methods based on the Fellegi–Sunter model and with two classification tree methods (CART and Bagging). An empirical comparison was conducted on two real data sets, using four different partitions into training and test data to increase the validity of the results. In almost all cases, applying deterministic linkage as a preceding filter leads to better results than omitting such a pre-filter, and overall the classification trees exhibited the best results. On all data sets, probabilistic RL profited from deterministic linkage only when the underlying probabilities were estimated before applying deterministic linkage. When a pre-filter is used to remove definite cases, the underlying population of data pairs changes; it is crucial to take this into account for model-based probabilistic RL.
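A minimal sketch of the pre-filter-plus-classification-tree combination is given below in R; the comparison-vector data frame pairs is synthetic, and the deterministic rule plus CART step stand in for the paper’s full setup (Fellegi–Sunter estimation is omitted).

```r
## Sketch of the deterministic pre-filter followed by CART on the remaining
## pairs. `pairs` is a synthetic comparison-vector data set (1 = attribute
## values agree exactly).
library(rpart)

set.seed(11)
n <- 2000
pairs <- data.frame(first_name = rbinom(n, 1, 0.6),
                    last_name  = rbinom(n, 1, 0.6),
                    birth_year = rbinom(n, 1, 0.7),
                    zip        = rbinom(n, 1, 0.5))
pairs$is_match <- factor(rbinom(n, 1, plogis(2 * rowSums(pairs) - 5)))

# Deterministic pre-filter: all attributes agree exactly -> classify as match.
attrs <- c("first_name", "last_name", "birth_year", "zip")
exact <- rowSums(pairs[, attrs]) == length(attrs)
prefilter_matches <- pairs[exact, ]
remaining         <- pairs[!exact, ]

# Classification tree (CART) on the pairs the pre-filter did not decide.
fit  <- rpart(is_match ~ first_name + last_name + birth_year + zip,
              data = remaining, method = "class")
pred <- predict(fit, remaining, type = "class")
table(predicted = pred, truth = remaining$is_match)
```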


2001 ◽  
Vol 89 (6) ◽  
Author(s):  
K. Vercammen ◽  
M.A. Glaus ◽  
Luc R. Van Loon

The complexation of Th(IV) and Eu(III) by α-isosaccharinic acid (ISA) has been studied in the pH range from 10.7 to 13.3 by batch sorption experiments, and the influence of Ca on the complexation was investigated. Sixteen data sets – each determined at variable ISA concentrations – are used to determine the stoichiometry of the complexation reactions and the stability constants. Based on best-fit analysis of the sorption data, it is postulated that 1:1 Th:ISA complexes are formed in the absence of Ca according to the complexation reaction: Th + ISA ↔ Th(ISA)
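For the postulated 1:1 stoichiometry, the corresponding conditional stability constant follows from mass action; the expression below omits the hydroxide/Ca speciation and charges, which are not recoverable from this record.

```latex
% Mass-action definition of the conditional stability constant for the
% postulated 1:1 complex (speciation and charges omitted):
\mathrm{Th} + \mathrm{ISA} \rightleftharpoons \mathrm{Th(ISA)}, \qquad
\beta_{1:1} = \frac{[\mathrm{Th(ISA)}]}{[\mathrm{Th}]\,[\mathrm{ISA}]}
```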

