Adding value to food chain information: using data on pig welfare and antimicrobial use on-farm to predict meat inspection outcomes

2021, Vol 7 (1)
Author(s): Joana Pessoa, Conor McAloon, Maria Rodrigues da Costa, Edgar García Manzanilla, Tomas Norton, ...

Abstract Background Using Food Chain Information data to objectively identify high-risk animals entering abattoirs can represent an important step towards improving on-farm animal welfare. We aimed to develop and evaluate the performance of classification models, using Gradient Boosting Machine algorithms, that use accurate longitudinal on-farm data on pig health and welfare to predict condemnations, pluck lesions and low cold carcass weight at slaughter. Results The accuracy of the models was assessed using the area under the receiver operating characteristic (ROC) curve (AUC). The AUC of the prediction models for pneumonia, dorsocaudal pleurisy, cranial pleurisy, pericarditis, partial and total condemnations, and low cold carcass weight ranged from 0.54 for pneumonia to 0.67 for low cold carcass weight. For dorsocaudal pleurisy, ear lesions assessed on pigs aged 12 weeks and antimicrobial treatments (AMT) were the most important prediction variables. Similarly, the most important variable for the prediction of cranial pleurisy was the number of AMT. In the case of pericarditis, ear lesions assessed at both week 12 and week 14 were the most important variables and accounted for 33% of the Bernoulli loss reduction. For predicting partial and total condemnations, the presence of hernias at week 18 and lameness at week 12 accounted for 27% and 14% of the Bernoulli loss reduction, respectively. Finally, AMT (37%) and ear lesions assessed at week 12 (15%) were the most important variables for predicting pigs with low cold carcass weight. Conclusions The findings from our study show that on-farm assessments of animal-based welfare outcomes and information on antimicrobial treatments have modest predictive power in relation to the different meat inspection outcomes assessed.
New research following the same group of pigs longitudinally from a larger number of farms supplying different slaughterhouses is required to confirm that on-farm assessments can add value to Food Chain Information reports.
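The approach described above can be sketched as follows. This is not the authors' code: the data is synthetic, and the two predictor names (ear-lesion score at week 12 and number of antimicrobial treatments) are chosen for illustration from the variables the abstract mentions.

```python
# Illustrative sketch: a Gradient Boosting Machine classifier for a binary
# meat-inspection outcome, evaluated by AUC as in the study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical on-farm predictors: ear-lesion score at week 12 and
# number of antimicrobial treatments (AMT), both scaled to [0, 1).
X = rng.random((n, 2))
# Synthetic binary outcome (e.g. dorsocaudal pleurisy at slaughter),
# loosely driven by the two predictors plus noise.
y = (0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.3, n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")
# Relative variable importances, analogous to the shares of Bernoulli
# loss reduction reported in the abstract.
print(dict(zip(["ear_lesions_wk12", "n_AMT"], model.feature_importances_)))
```

The importances sum to one, so each predictor's share can be read directly as its contribution, much like the 33%, 27% and 14% figures in the abstract.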

2020, Vol 87 (S1), pp. 9-12
Author(s): Marta Brščić

Abstract This Research Reflection raises awareness of the need to broaden perspectives and the level of multidisciplinary and interdisciplinary collaboration when considering on-farm dairy cattle welfare. It starts with a brief overview of current animal welfare issues on dairy farms and how they are perceived by different stakeholders. Some divergences in points of view are discussed in more detail and the first steps in networking are mentioned. Particular emphasis is given to milk and dairy product waste in industrialized countries and the potential effects of its reduction on changes in the production system. The need to quantify this waste quota and to involve retailers is also analyzed, from the perspective that on-farm animal welfare is directly linked to the amount of milk that might be removed from the food chain by the adoption of welfare-friendly management, such as cow-calf systems.


2019, Vol 21 (9), pp. 662-669
Author(s): Junnan Zhao, Lu Zhu, Weineng Zhou, Lingfeng Yin, Yuchen Wang, ...

Background: Thrombin is the central protease of the vertebrate blood coagulation cascade and is closely related to cardiovascular diseases. The inhibitory constant Ki is the most significant property of thrombin inhibitors. Method: This study was carried out to predict the Ki values of thrombin inhibitors from a large data set using machine learning methods. Because machine learning can find non-intuitive regularities in high-dimensional datasets, it can be used to build effective predictive models. A total of 6554 descriptors for each compound were collected, and an efficient descriptor selection method was applied to find the appropriate descriptors. Four different methods, including multiple linear regression (MLR), K-Nearest Neighbors (KNN), Gradient Boosting Regression Tree (GBRT) and Support Vector Machine (SVM), were implemented to build prediction models with these selected descriptors. Results: The SVM model was the best among these methods, with R2=0.84, MSE=0.55 for the training set and R2=0.83, MSE=0.56 for the test set. Several validation methods, such as the y-randomization test and applicability domain evaluation, were adopted to assess the robustness and generalization ability of the model. The final model shows excellent stability and predictive ability and can be employed for the rapid estimation of the inhibitory constant, which is helpful for designing novel thrombin inhibitors.
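A minimal sketch of this pipeline, under stated assumptions: the descriptors and Ki surrogate below are synthetic, the descriptor filter is a simple variance threshold standing in for whatever selection method the authors used, and the SVM hyperparameters are illustrative.

```python
# Sketch: descriptor filtering followed by an SVM regressor for a Ki
# surrogate, scored with R2 and MSE as in the paper.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                       # 10 hypothetical molecular descriptors
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 300)  # synthetic Ki surrogate

X = VarianceThreshold(0.1).fit_transform(X)          # crude descriptor selection step
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
svm = SVR(kernel="rbf", C=10).fit(X_tr, y_tr)
pred = svm.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, MSE = {mean_squared_error(y_te, pred):.2f}")
```

Swapping `SVR` for `KNeighborsRegressor`, `GradientBoostingRegressor` or `LinearRegression` reproduces the four-way comparison the abstract describes.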


2017, Vol 46 (5), pp. 390-396
Author(s): Rakesh Malhotra, Xia Tao, Yuedong Wang, Yuqi Chen, Rebecca H. Apruzzese, ...

Background: The surprise question (SQ) (“Would you be surprised if this patient were still alive in 6 or 12 months?”) is used as a mortality prognostication tool in hemodialysis (HD) patients. We compared the performance of the SQ with that of prediction models (PMs) for 6- and 12-month mortality prediction. Methods: Demographic, clinical, laboratory, and dialysis treatment indicators were used to model 6- and 12-month mortality probability in a training cohort of HD patients (n = 6,633) using generalized linear models (GLMs). A total of 10 nephrologists from 5 HD clinics responded to the SQ in 215 patients followed prospectively for 12 months. The performance of the PM was evaluated in the validation (n = 6,634) and SQ cohorts (n = 215) using the area under the receiver operating characteristic curve. We compared the sensitivities and specificities of the PM and SQ. Results: The PM and SQ cohorts comprised 13,267 (mean age 61 years, 55% men, 54% whites) and 215 (mean age 62 years, 59% men, 50% whites) patients, respectively. During the 12-month follow-up, 1,313 patients died in the prediction model cohort and 22 in the SQ cohort. For 6-month mortality prediction, the GLM had areas under the curve of 0.77 in the validation cohort and 0.77 in the SQ cohort. As for 12-month mortality, areas under the curve were 0.77 and 0.80 in the validation and SQ cohorts, respectively. The 6- and 12-month PMs had sensitivities of 0.62 (95% CI 0.35–0.88) and 0.75 (95% CI 0.56–0.94), respectively. The 6- and 12-month SQ sensitivities were 0.23 (95% CI 0.002–0.46) and 0.35 (95% CI 0.14–0.56), respectively. Conclusion: PMs exhibit superior sensitivity compared to the SQ for mortality prognostication in HD patients.
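The GLM side of this comparison can be sketched as below. The data and features are synthetic stand-ins (the study used demographic, clinical, laboratory and dialysis indicators), and the 0.5 probability threshold for sensitivity/specificity is an assumption, not necessarily the cut-off the authors chose.

```python
# Sketch: logistic regression (a GLM) for a mortality-style binary outcome,
# reporting AUC plus sensitivity and specificity from a thresholded prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 4))  # e.g. age, albumin, vintage, comorbidity (hypothetical)
# Latent risk plus noise; thresholding makes the event the rarer class,
# like death within follow-up.
latent = 0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(0, 1, n)
y = (latent > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
glm = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, glm.predict_proba(X_te)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_te, glm.predict(X_te)).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(f"AUC={auc:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```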


Author(s): Yusuf Durachman

Current advancements in cellular technologies and computing have provided the basis for the unparalleled exponential growth of mobile networking and of the availability and quality of mobile software. Using wireless technologies and mobile ad-hoc networks, such systems and technologies interact and collect information. The growing concern with wireless network performance and the availability of mobile users, together with the need to meet Quality of Service (QoS) criteria, will support a significant rise in wireless applications. Predicting the mobility of wireless users and systems plays an important role in the effective strategic decision making of wireless network bandwidth service providers. Furthermore, new architectural problems arise from the defect-proneness, self-organization, and mobility of such networks. This paper proposes to predict and simulate the mobility of specific nodes in a mobile ad-hoc network using gradient boosting models defined for the system. The proposed model not only outperforms previous mobility prediction models on simulated and real-world mobility instances, but improves predictive accuracy by a large margin. The accuracy obtained helps the suggested mobility indicator to raise the average level of performance in Mobile Ad-hoc Networks.
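One plausible reading of this setup, sketched below under explicit assumptions: the mobility trace is a synthetic one-dimensional random walk, and framing the task as predicting a node's next displacement from its last three displacements is our illustration, not the paper's stated feature set.

```python
# Sketch: a gradient boosting regressor predicting a node's next
# displacement from its recent movement history (synthetic MANET trace).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
steps = rng.normal(0.5, 0.2, 400)  # per-interval node displacements (synthetic)
# Features: the last 3 displacements; target: the next displacement.
X = np.column_stack([steps[i:i - 3] for i in range(3)])
y = steps[3:]
split = 300  # train on the first 300 intervals, test on the rest
gbr = GradientBoostingRegressor(random_state=3).fit(X[:split], y[:split])
err = np.abs(gbr.predict(X[split:]) - y[split:]).mean()
print(f"mean absolute prediction error = {err:.3f}")
```

Predicting displacements rather than absolute positions keeps the target stationary, which matters because tree ensembles cannot extrapolate beyond the value range seen in training.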


2019, Vol 15 (2), pp. 201-214
Author(s): Mahmoud Elish

Purpose Effective and efficient software security inspection is crucial, as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to empirically evaluate the potential application of Stochastic Gradient Boosting Trees (SGBT) as a novel model for enhanced prediction of vulnerable Web components compared to common, popular and recent machine learning models. Design/methodology/approach An empirical study was conducted in which the SGBT and 16 other prediction models were trained, optimized and cross-validated using vulnerability data sets from multiple versions of two open-source Web applications written in PHP. The prediction performance of these models has been evaluated and compared based on accuracy, precision, recall and F-measure. Findings The results indicate that the SGBT models offer improved prediction over the other 16 models and thus are more effective and reliable in predicting vulnerable Web components. Originality/value This paper proposed a novel application of SGBT for enhanced prediction of vulnerable Web components and showed its effectiveness.
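The evaluation protocol can be sketched as follows, on synthetic data in place of the PHP vulnerability sets. In scikit-learn, setting `subsample` below 1 is what makes gradient boosting "stochastic" in Friedman's sense; the other hyperparameters here are defaults, not the paper's optimized values.

```python
# Sketch: cross-validated accuracy, precision, recall and F-measure for a
# stochastic gradient boosted tree model, as in the paper's comparison.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

# Synthetic stand-in for a component-level vulnerability data set.
X, y = make_classification(n_samples=400, n_features=10, random_state=4)
sgbt = GradientBoostingClassifier(subsample=0.5, random_state=4)
scores = cross_validate(sgbt, X, y, cv=5,
                        scoring=["accuracy", "precision", "recall", "f1"])
for m in ["accuracy", "precision", "recall", "f1"]:
    print(m, round(scores[f"test_{m}"].mean(), 3))
```

Repeating the loop over a dictionary of 17 estimators would reproduce the paper's side-by-side comparison.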


2020, Vol 20 (1)
Author(s): Matthijs Blankers, Louk F. M. van der Post, Jack J. M. Dekker

Abstract Background Accurate prediction models for whether patients on the verge of a psychiatric crisis need hospitalization are lacking, and machine learning methods may help improve the accuracy of psychiatric hospitalization prediction models. In this paper we evaluate the accuracy of ten machine learning algorithms, including the generalized linear model (GLM/logistic regression), in predicting psychiatric hospitalization in the first 12 months after a psychiatric crisis care contact. We also evaluate an ensemble model to optimize the accuracy, and we explore individual predictors of hospitalization. Methods Data from 2084 patients included in the longitudinal Amsterdam Study of Acute Psychiatry with at least one reported psychiatric crisis care contact were included. The target variable for the prediction models was whether the patient was hospitalized in the 12 months following inclusion. The predictive power of 39 variables related to patients’ socio-demographics, clinical characteristics and previous mental health care contacts was evaluated. The accuracy and area under the receiver operating characteristic curve (AUC) of the machine learning algorithms were compared, and we also estimated the relative importance of each predictor variable. The best and least performing algorithms were compared with GLM/logistic regression using net reclassification improvement analysis, and the five best performing algorithms were combined in an ensemble model using stacking. Results All models performed above chance level. We found Gradient Boosting to be the best performing algorithm (AUC = 0.774) and K-Nearest Neighbors to be the least performing (AUC = 0.702). The performance of GLM/logistic regression (AUC = 0.76) was slightly above average among the tested algorithms. In a net reclassification improvement analysis, Gradient Boosting outperformed GLM/logistic regression by 2.9% and K-Nearest Neighbors by 11.3%. GLM/logistic regression outperformed K-Nearest Neighbors by 8.7%.
Nine of the top 10 most important predictor variables were related to previous mental health care use. Conclusions Gradient Boosting led to the highest predictive accuracy and AUC, while GLM/logistic regression performed about average among the tested algorithms. Although statistically significant, the magnitude of the differences between the machine learning algorithms was in most cases modest. The results show that a predictive accuracy similar to that of the best performing model can be achieved by combining multiple algorithms in an ensemble model.
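The stacking step can be sketched as below. This is a generic illustration on synthetic data: it stacks three of the algorithm families the abstract names (gradient boosting, K-nearest neighbors, GLM/logistic regression) rather than the study's five best performers, and the meta-learner choice is an assumption.

```python
# Sketch: stacking several base learners with a logistic meta-learner,
# scored by AUC, as in the study's ensemble model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=8, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)
stack = StackingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=5)),
                ("knn", KNeighborsClassifier()),
                ("glm", LogisticRegression())],
    final_estimator=LogisticRegression())  # meta-learner over base predictions
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"ensemble AUC = {auc:.3f}")
```

`StackingClassifier` fits the meta-learner on out-of-fold predictions of the base models, which is what lets the ensemble approach the best single model without simply memorizing its training-set outputs.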


2021, Vol 42 (Supplement_1)
Author(s): M Lewis, J Figueroa

Abstract Recent health reforms have created incentives for cardiologists and accountable care organizations to participate in value-based care models for heart failure (HF). Accurate risk stratification of HF patients is critical to efficiently deploy interventions aimed at reducing preventable utilization. The goal of this paper was to compare deep learning approaches with traditional logistic regression (LR) in predicting preventable utilization among HF patients. We conducted a prognostic study using data on 93,260 HF patients continuously enrolled for 2 years in a large U.S. commercial insurer to develop and validate prediction models for three outcomes of interest: preventable hospitalizations, preventable emergency department (ED) visits, and preventable costs. Patients were split into training, validation, and testing samples. Outcomes were modeled using traditional and enhanced LR and compared to gradient boosting models and deep learning models using sequential and non-sequential inputs. Evaluation metrics included precision (positive predictive value) at k, cost capture, and area under the receiver operating characteristic curve (AUROC). Deep learning models consistently outperformed LR for all three outcomes with respect to the chosen evaluation metrics. Precision at 1% for preventable hospitalizations was 43% for deep learning compared to 30% for enhanced LR. Precision at 1% for preventable ED visits was 39% for deep learning compared to 33% for enhanced LR. For preventable cost, cost capture at 1% was 30% for sequential deep learning, compared to 18% for enhanced LR. The highest AUROCs for deep learning were 0.778, 0.681 and 0.727, respectively. These results offer a promising approach to identifying patients for targeted interventions. Funding Acknowledgement Type of funding sources: Private company. Main funding source(s): internally funded by Diagnostic Robotics Inc.
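The "precision at 1%" metric used above is worth making concrete: rank patients by predicted risk, keep the top 1%, and measure the share of true positives among them. The sketch below uses synthetic scores and outcomes; the `precision_at` helper is ours, not from the paper.

```python
# Sketch of the precision-at-k metric: the positive rate among the
# top-k-fraction of patients ranked by predicted risk.
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
risk_score = rng.random(n)                   # a model's predicted risk (synthetic)
outcome = rng.random(n) < risk_score ** 3    # truth correlated with the score

def precision_at(frac, score, y):
    """Positive rate among the top `frac` fraction of cases by score."""
    k = max(1, int(frac * len(score)))
    top = np.argsort(score)[::-1][:k]        # indices of the k highest scores
    return y[top].mean()

print(f"base rate      = {outcome.mean():.2f}")
print(f"precision@1%   = {precision_at(0.01, risk_score, outcome):.2f}")
```

Unlike AUROC, precision at k directly answers the operational question here: if an intervention team can only reach the riskiest 1% of patients, what fraction of those contacts will be true positives?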


2020
Author(s): Zhanyou Xu, Andreomar Kurek, Steven B. Cannon, Williams D. Beavis

Abstract Selection of markers linked to alleles at quantitative trait loci (QTL) for tolerance to Iron Deficiency Chlorosis (IDC) has not been successful. Genomic selection has been advocated for continuous numeric traits such as yield and plant height. For ordinal data types such as IDC, genomic prediction models have not been systematically compared. The objectives of the research reported in this manuscript were to evaluate the most commonly used genomic prediction method, ridge regression, and its equivalent logistic ridge regression method against algorithmic modeling methods including random forest, gradient boosting, support vector machine, K-nearest neighbors, Naïve Bayes, and artificial neural networks, using the usual comparator metric of prediction accuracy. In addition, we compared the methods using metrics of greater importance for decisions about selecting and culling lines in variety development and genetic improvement projects. These metrics include specificity, sensitivity, precision, decision accuracy, and area under the receiver operating characteristic curve. We found that the Support Vector Machine provided the best specificity for culling IDC-susceptible lines, while Random Forest GP models provided the best combined set of decision metrics for retaining IDC-tolerant and culling IDC-susceptible lines.
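The decision metrics in this comparison all fall out of a single confusion matrix, sketched below with a support vector machine on synthetic data standing in for marker genotypes and binary IDC tolerance classes.

```python
# Sketch: sensitivity, specificity and precision for an SVM classifying
# lines as tolerant (1) vs susceptible (0), from the confusion matrix.
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for marker data and ordinal-to-binary IDC scores.
X, y = make_classification(n_samples=300, n_features=8, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)
svm = SVC().fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, svm.predict(X_te)).ravel()
sens = tp / (tp + fn)   # tolerant lines correctly retained
spec = tn / (tn + fp)   # susceptible lines correctly culled
prec = tp / (tp + fp)   # retained lines that are truly tolerant
print(f"sensitivity={sens:.2f} specificity={spec:.2f} precision={prec:.2f}")
```

Reading specificity as "susceptible lines correctly culled" is why the abstract can rank one model best for culling while another wins on the combined set of decision metrics.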

