Statistical Machine Learning
Recently Published Documents


TOTAL DOCUMENTS: 284 (FIVE YEARS: 166)

H-INDEX: 18 (FIVE YEARS: 7)

Author(s): Harish P., Sreedhar S., Kunhikoyamu, Namboothiri M., Devi S., ...

Artificial intelligence (AI) can be defined as intelligence demonstrated by machines. AI research has gone through different phases, such as simulating the brain, modeling human problem solving, formal logic, large knowledge databases, and imitating animal behavior. Since the beginning of the 21st century, highly mathematical statistical machine learning has dominated the field and has proved useful in solving many challenging problems throughout industry and academia. The field was founded on the assumption that human intelligence can be simulated by machines, an assumption that raises questions about the mind and the ethics of creating artificial beings with human-like intelligence. Myth, fiction, and philosophy have all been involved in the creation of this field, and the ongoing debate also points to concerns about misuse of the technology.


2022, pp. 571-599
Author(s): Andrew F. Siegel, Michael R. Wagner

Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract: This data preparation chapter is of paramount importance for implementing statistical machine learning methods for genomic selection. We present the basic linear mixed model that gives rise to best linear unbiased estimates (BLUEs) and best linear unbiased predictors (BLUPs), and we explain how to decide when effects should be treated as fixed or random. R code for fitting the linear mixed model to data is given in small examples. We emphasize tools for computing BLUEs and BLUPs for many linear combinations of interest in genomic-enabled prediction and plant breeding. We present tools for cleaning and imputing data, computing minor and major allele frequencies, marker recodification, the frequency of heterozygous genotypes, the frequency of missing values (NAs), and three methods for computing the genomic relationship matrix. In addition, scaling and data compression of inputs are important in statistical machine learning. For a more extensive description of linear mixed models, see Chap. 5 (10.1007/978-3-030-89010-0_5).
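
The chapter's own R examples are not reproduced on this page; the following is a minimal sketch of the computations named above, assuming a data frame dat with columns y, geno, and env and a 0/1/2-coded marker matrix M (all object names are hypothetical):

library(lme4)

# Genotype as a fixed effect yields BLUEs; environment is treated as random
fit_blue <- lmer(y ~ 0 + geno + (1 | env), data = dat)
blues <- fixef(fit_blue)                     # BLUEs of genotype means

# Genotype as a random effect yields BLUPs
fit_blup <- lmer(y ~ 1 + (1 | geno) + (1 | env), data = dat)
blups <- ranef(fit_blup)$geno                # BLUPs of genotype effects

# One common method (VanRaden) for the genomic relationship matrix
p <- colMeans(M) / 2                         # allele frequency per marker
W <- scale(M, center = 2 * p, scale = FALSE) # centered marker matrix
G <- tcrossprod(W) / (2 * sum(p * (1 - p)))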


Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract: Overfitting happens when a statistical machine learning model learns the noise as well as the signal present in the training data. Underfitting, on the other hand, occurs when the model includes too few predictors and therefore represents the structure of the data pattern poorly; it also arises when the training data set is too small, so that an underfitted model fits the training data badly and predicts new data points unsatisfactorily. This chapter describes the importance of the trade-off between prediction accuracy and model interpretability, as well as the difference between explanatory and predictive modeling: explanatory modeling minimizes bias, whereas predictive modeling seeks to minimize the combination of bias and estimation variance. We assess the importance and different methods of cross-validation, as well as the importance and strategies of tuning, which are key to the successful use of some statistical machine learning methods. We explain the most important metrics for evaluating prediction performance for continuous, binary, categorical, and count response variables.
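
As an illustration of the cross-validation idea discussed above, here is a minimal base-R sketch of k-fold cross-validation for a linear model, assuming a predictor matrix X and a continuous response y (both hypothetical):

set.seed(1)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(X)))   # random fold assignment
mse <- numeric(k)
for (i in 1:k) {
  train <- folds != i
  fit <- lm(y[train] ~ X[train, ])                # fit on k-1 folds
  pred <- cbind(1, X[!train, ]) %*% coef(fit)     # predict the held-out fold
  mse[i] <- mean((y[!train] - pred)^2)
}
mean(mse)   # cross-validated estimate of the prediction error on unseen data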


Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract: Nowadays, huge quantities of data are collected and analyzed to deliver deep insights into biological processes and human behavior. This chapter assesses the use of big data for prediction and estimation through statistical machine learning and its applications in agriculture and genetics in general and, specifically, in genome-based prediction and selection. First, we point out the importance of data and how its use is reshaping our way of living. We also provide the key elements of genomic selection and its potential for plant improvement. In addition, we analyze elements of modeling with machine learning methods applied to genomic selection and stress their importance as a predictive methodology. Two cultures of model building are analyzed and discussed, prediction and inference; by understanding model building, researchers will be able to select the best model or method for each circumstance. Within this context, we explain the differences between nonparametric models (in which the predictor is constructed from information derived from the data) and parametric models (in which all predictors take predetermined functional forms with the response), as well as their types of effects: fixed, random, and mixed. Basic elements of linear algebra are provided to facilitate understanding of the contents of the book. The chapter also contains examples of the different types of data, using supervised, unsupervised, and semi-supervised learning methods.
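
To make the parametric/nonparametric distinction concrete, here is a minimal base-R sketch on simulated data (not taken from the chapter):

set.seed(2)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
fit_par  <- lm(y ~ x)        # parametric: the linear form is fixed in advance
fit_npar <- loess(y ~ x)     # nonparametric: the form is learned from the data
mean(residuals(fit_par)^2)   # the rigid linear form misses the sinusoidal signal
mean(residuals(fit_npar)^2)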


Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract: This chapter gives details of the linear multiple regression model, including its assumptions, some pros and cons, and maximum likelihood estimation. Gradient descent methods are described for learning the parameters of this model. Penalized linear multiple regression is derived under Ridge and Lasso penalties, with emphasis on estimating the regularization parameter, which is important for successful implementation. Examples are given for both penalties (Ridge and Lasso) and for non-penalized multiple regression, illustrating the circumstances in which the penalized versions should be preferred. Finally, the fundamentals of penalized and non-penalized logistic regression are provided under a gradient descent framework, with examples of logistic regression. Each example comes with the corresponding R code to facilitate quick understanding and use.
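
The chapter's R examples are not shown on this page; a minimal sketch of Ridge and Lasso fits with the glmnet package (which fits by coordinate descent rather than the gradient descent derivation described in the chapter), assuming a predictor matrix X, a continuous response y, and a binary response yb (all hypothetical), could look like this:

library(glmnet)
cv_ridge <- cv.glmnet(X, y, alpha = 0)   # alpha = 0 gives the Ridge penalty
cv_lasso <- cv.glmnet(X, y, alpha = 1)   # alpha = 1 gives the Lasso penalty
coef(cv_lasso, s = "lambda.min")         # coefficients at the tuned lambda
# Penalized logistic regression for a binary response
cv_logit <- cv.glmnet(X, yb, alpha = 1, family = "binomial")

Note that cv.glmnet chooses the regularization parameter lambda by cross-validation, which is exactly the estimation step the abstract highlights.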


Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract: We provide the fundamentals of convolutional neural networks (CNNs) and include several examples using the Keras library. We give a formal motivation for using CNNs that clearly shows the advantages of this topology over feedforward networks for processing images. Several practical examples with plant breeding data are provided using CNNs under two scenarios: (a) one-dimensional input data and (b) two-dimensional input data. The examples also illustrate how to tune the hyperparameters to increase the probability of a successful application. Finally, we comment on the advantages and disadvantages of deep neural networks in general compared with many other statistical machine learning methodologies.
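
A minimal sketch of the two-dimensional scenario with the keras R package, assuming image-like inputs x_train of shape c(n, 28, 28, 1) and a continuous trait y_train (shapes and names are illustrative, not the chapter's):

library(keras)
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 1)                   # linear output for a continuous trait
model %>% compile(optimizer = "adam", loss = "mse")
model %>% fit(x_train, y_train, epochs = 10, batch_size = 32,
              validation_split = 0.2)      # held-out split aids hyperparameter tuning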


Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract: This chapter describes the fundamentals of Reproducing Kernel Hilbert Spaces (RKHS) regression methods. We first point out the virtues of RKHS regression methods and why they are gaining wide acceptance in statistical machine learning. Key elements for the construction of RKHS regression methods are provided, the kernel trick is explained in some detail, and the main kernel functions for building kernels are given. The chapter explains some loss functions under a fixed model framework, with examples for Gaussian, binary, and categorical response variables. We illustrate the use of mixed models with kernels through examples with continuous response variables, and practical issues for tuning the kernels are illustrated. We extend the RKHS regression methods to a Bayesian framework, with practical examples applied to continuous and categorical response variables, including in the predictor the main effects of environments and genotypes and the genotype × environment interaction. We show examples of multi-trait RKHS regression methods for continuous response variables. Finally, some practical issues of kernel compression methods are discussed, which are important for reducing the computational cost of implementing conventional RKHS methods.
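
As a minimal base-R sketch of the core RKHS machinery described above, the following builds a Gaussian kernel from a marker matrix M and fits a kernel ridge regression for a continuous response y; this is a simple stand-in for the chapter's mixed-model and Bayesian formulations, and the bandwidth h and regularization lambda are assumed given, though in practice both would be tuned:

D2 <- as.matrix(dist(M))^2             # squared Euclidean distances between rows
K  <- exp(-D2 / (h * median(D2)))      # Gaussian kernel (the "kernel trick" input)
alpha <- solve(K + lambda * diag(nrow(K)), y)
yhat  <- K %*% alpha                   # fitted values; kernel evaluations against
                                       # new genotypes give predictions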

