The role of technical elements of digitalization in the development of modern animal farming

2021
Vol 7 (Special)
pp. 10-10
Author(s):
Alexey Shemetov,
Andrey Ivanov

It is estimated that farmers will need to increase production by 70% over the next 50 years to meet the growing global demand for meat and animal products [1]. Because land and other natural resources are limited, meeting this demand will require more efficient ways of raising more animals per hectare of land. Today, most animal husbandry methods still require manual intervention at some level, which limits productivity. Digital technologies were previously too expensive to apply at scale; today, sensors, big data, and machine learning algorithms offer significant cost advantages over these older, manual detection methods. However, the sensors currently on the market remain significantly limited in supporting reliable forecasting and disease management in animal husbandry through continuous, automated, real-time monitoring. In addition, there are open technical questions, such as where the sensors should be located, what the sampling rate should be, and how the data will be transmitted. All of these considerations affect the accuracy of the algorithms, as well as the scalability and practicality of a solution that can ultimately be used on a livestock farm. In real-time systems, large feature sets can be problematic because of their computational complexity and higher storage requirements. In light of the ongoing pandemic, when restrictions prevent veterinarians and producers from visiting farms, cowsheds and feed mills, yet 24/7 real-time information on activities, consumption and production is still needed, the introduction of digital technologies is currently the only practical solution. Keywords: LIVESTOCK, SMART FARMING, SENSORS, MACHINE LEARNING
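To make the point about compact feature sets and real-time constraints concrete, the following sketch (not taken from the article) shows how a small set of statistical features could be computed from a hypothetical activity-sensor stream and fed to a classifier; the column names, sampling setup, and random data are illustrative assumptions only.

```python
# Minimal sketch (not from the article): extracting a compact feature set from
# a livestock activity-sensor stream and flagging animals at risk with a
# classifier. Feature names and the sampling setup are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def window_features(activity: np.ndarray) -> dict:
    """Summarise one monitoring window with a few statistics, keeping
    real-time computation and storage requirements low."""
    return {
        "mean": activity.mean(),
        "std": activity.std(),
        "max": activity.max(),
        "low_activity_frac": (activity < activity.mean() * 0.5).mean(),
    }

# Hypothetical per-animal windows of activity counts sampled at a fixed rate
# (288 samples = one 5-minute sample per day slot), with a health label.
windows = [np.random.rand(288) for _ in range(200)]
labels = np.random.randint(0, 2, size=200)  # 1 = disease event recorded

X = pd.DataFrame([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X.head(3)))  # flag windows for follow-up by farm staff
```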

2021
Vol 28 (1)
pp. e100251
Author(s):
Ian Scott,
Stacey Carter,
Enrico Coiera

Machine learning algorithms are being used to screen and diagnose disease, prognosticate and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, but which do not expect clinicians, as non-experts, to demonstrate mastery over what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. This checklist aims to assist clinicians in assessing algorithm readiness for routine care and in identifying situations where further refinement and evaluation are required prior to large-scale use.
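As a purely illustrative aid (not part of the published checklist), the sketch below captures the 10 questions as a simple Python structure that an evaluation team could use to track which items have a documented answer; the reporting helper is an assumption, not something proposed by the authors.

```python
# Illustrative sketch only: the 10-question checklist as a data structure,
# with a small helper that lists questions still lacking a documented answer.
CHECKLIST = [
    "What is the purpose and context of the algorithm?",
    "How good were the data used to train the algorithm?",
    "Were there sufficient data to train the algorithm?",
    "How well does the algorithm perform?",
    "Is the algorithm transferable to new clinical settings?",
    "Are the outputs of the algorithm clinically intelligible?",
    "How will this algorithm fit into and complement current workflows?",
    "Has use of the algorithm been shown to improve patient care and outcomes?",
    "Could the algorithm cause patient harm?",
    "Does use of the algorithm raise ethical, legal or social concerns?",
]

def open_questions(answers: dict) -> list:
    """Return the checklist items (numbered 1-10) without a recorded answer."""
    return [f"Q{i + 1}: {q}" for i, q in enumerate(CHECKLIST)
            if not answers.get(i + 1)]

# Example: only questions 1-4 have documented answers so far.
print(open_questions({i: "documented" for i in range(1, 5)}))
```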


2020
Vol 8 (Suppl 3)
pp. A62-A62
Author(s):
Dattatreya Mellacheruvu,
Rachel Pyke,
Charles Abbott,
Nick Phillips,
Sejal Desai,
...

Background: Accurately identified neoantigens can be effective therapeutic agents in both adjuvant and neoadjuvant settings. A key challenge for neoantigen discovery has been the availability of accurate prediction models for MHC peptide presentation. We have shown previously that our proprietary model based on (i) large-scale, in-house mono-allelic data, (ii) custom features that model antigen processing, and (iii) advanced machine learning algorithms has strong performance. We have extended this work by systematically integrating large quantities of high-quality, publicly available data, implementing new modelling algorithms, and rigorously testing our models. These extensions lead to substantial improvements in performance and generalizability. Our algorithm, named Systematic HLA Epitope Ranking Pan Algorithm (SHERPA™), is integrated into the ImmunoID NeXT Platform®, our immuno-genomics and transcriptomics platform specifically designed to enable the development of immunotherapies.
Methods: In-house immunopeptidomic data were generated using stably transfected HLA-null K562 cell lines that express a single HLA allele of interest, followed by immunoprecipitation using the W6/32 antibody and LC-MS/MS. Public immunopeptidomics data were downloaded from repositories such as MassIVE and processed uniformly using in-house pipelines to generate peptide lists filtered at a 1% false discovery rate. Other metrics (features) were either extracted from source data or generated internally by re-processing samples on the ImmunoID NeXT Platform.
Results: We generated large-scale, high-quality immunopeptidomics data from approximately 60 mono-allelic cell lines that unambiguously assign peptides to their presenting alleles, and used these data to create our primary models. Briefly, our primary 'binding' model captures MHC-peptide binding using the peptide and the MHC binding pockets, while our primary 'presentation' model uses additional features to model antigen processing and presentation. Both primary models have significantly higher precision across all recall values in multiple test data sets, including mono-allelic cell lines and multi-allelic tissue samples. To further improve performance, we expanded the diversity of our training set using high-quality, publicly available mono-allelic immunopeptidomics data. Furthermore, multi-allelic data were integrated by resolving peptide-to-allele mappings using our primary models. We then trained a new model using the expanded training data and a new composite machine learning architecture. The resulting secondary model further improves performance and generalizability across several tissue samples.
Conclusions: Improving technologies for neoantigen discovery is critical for many therapeutic applications, including personalized neoantigen vaccines and neoantigen-based biomarkers for immunotherapies. Our new and improved algorithm (SHERPA) has significantly higher performance than a state-of-the-art public algorithm and furthers this objective.
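The following sketch is not the SHERPA implementation; it only illustrates, under simplified assumptions, how a toy "presentation" classifier might be trained from a mono-allelic peptide list using a one-hot peptide encoding, synthetic decoys, and an off-the-shelf learner.

```python
# Toy sketch (not SHERPA): train a peptide "presentation" classifier from
# mono-allelic immunopeptidomics-style data. Encoding, features, and the
# randomly generated peptides are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide: str, length: int = 9) -> np.ndarray:
    """One-hot encode a 9-mer peptide into a flat feature vector."""
    onehot = np.zeros((length, len(AMINO_ACIDS)))
    for pos, aa in enumerate(peptide[:length]):
        onehot[pos, AMINO_ACIDS.index(aa)] = 1.0
    return onehot.ravel()

# Hypothetical data: peptides observed on a single HLA allele (hits) versus
# random decoy peptides assumed not to be presented.
rng = np.random.default_rng(0)
hits = ["".join(rng.choice(list(AMINO_ACIDS), 9)) for _ in range(500)]
decoys = ["".join(rng.choice(list(AMINO_ACIDS), 9)) for _ in range(500)]

X = np.array([encode(p) for p in hits + decoys])
y = np.array([1] * len(hits) + [0] * len(decoys))
model = GradientBoostingClassifier().fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # presentation scores for 3 peptides
```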


2021
Author(s):
Arturo Magana-Mora,
Mohammad AlJubran,
Jothibasu Ramasamy,
Mohammed AlBassam,
Chinthaka Gooneratne,
...

Objective/Scope. Lost circulation events (LCEs) are among the top causes of drilling nonproductive time (NPT). The presence of natural fractures and vugular formations causes loss of drilling fluid circulation, and drilling depleted zones with incorrect mud weights can also lead to drilling-induced losses. LCEs can develop into additional drilling hazards, such as stuck pipe incidents, kicks, and blowouts. Traditionally, an LCE is diagnosed only after it has occurred, through a reduction of mud volume in the mud pits in the case of moderate losses or a drop of the mud column in the annulus in the case of total losses. Using machine learning (ML) to predict the presence of a loss zone and estimate fracture parameters ahead of time is therefore very beneficial, as it can immediately alert the drilling crew so that they can take the required actions to mitigate or cure LCEs.
Methods, Procedures, Process. Although different computational methods have been proposed for the prediction of LCEs, the models still need to be improved and the number of false alarms reduced. Robust and generalizable ML models require a sufficiently large amount of data that captures the different parameters and scenarios representing an LCE. For this, we derived a framework that automatically searches through historical data, locates LCEs, and extracts the surface drilling and rheology parameters surrounding such events.
Results, Observations, and Conclusions. We built ML models with various algorithms and evaluated them using a data split at the level of wells to find the most suitable model for LCE prediction. In this comparison, the random forest classifier achieved the best results and successfully predicted LCEs before they occurred. The developed LCE model is designed to be deployed in the real-time drilling portal as an aid to drilling engineers and the rig crew to minimize or avoid NPT.
Novel/Additive Information. The main contribution of this study is the analysis of real-time surface drilling parameters and sensor data to predict LCEs from a statistically representative number of wells. A large-scale analysis of wells that appropriately describe the different conditions before an LCE is critical for avoiding model undertraining or lack of generalization. Finally, we formulated LCE prediction as a time-series problem and considered parameter trends to accurately determine the early signs of LCEs.
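A minimal sketch of the evaluation idea described above, not the authors' code: surface-drilling features are split at the level of wells, so that no well contributes to both the training and the test set, before a random forest classifier is fitted. The column names and data are hypothetical.

```python
# Illustrative sketch: well-level data split plus a random forest classifier
# for lost-circulation-event (LCE) prediction from surface drilling parameters.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical dataset: one row per time window, labelled 1 if an LCE follows.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "well_id": rng.integers(0, 20, 2000),
    "flow_out_trend": rng.normal(size=2000),
    "pit_volume_trend": rng.normal(size=2000),
    "standpipe_pressure": rng.normal(size=2000),
    "lce_within_next_hour": rng.integers(0, 2, 2000),
})

features = ["flow_out_trend", "pit_volume_trend", "standpipe_pressure"]

# Split at the level of wells so no well appears in both train and test.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["well_id"]))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(df.loc[train_idx, features], df.loc[train_idx, "lce_within_next_hour"])
print(clf.score(df.loc[test_idx, features],
                df.loc[test_idx, "lce_within_next_hour"]))
```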


2019
Vol 9 (6)
pp. 1154
Author(s):
Ganjar Alfian,
Muhammad Syafrudin,
Bohan Yoon,
Jongtae Rhee

Radio frequency identification (RFID) is an automated identification technology that can be used to monitor product movements within a supply chain in real time. However, one problem that occurs during RFID data capture is false positives, i.e., tags that are accidentally detected by the reader but are not of interest to the business process. This paper investigates the use of machine learning algorithms to filter such false positives. Raw RFID data were collected from various tagged product movements, and statistical features were extracted from the received signal strength derived from the raw data. Because abnormal RFID readings (outliers) may arise in real deployments, we applied outlier detection models to remove them. The experimental results showed that the machine learning-based models classified RFID readings with high accuracy, and that integrating outlier detection with the machine learning models improved classification accuracy further. We demonstrated that the proposed classification model can be applied to real-time monitoring, ensuring that false positives are filtered out and hence not stored in the database. The proposed model is expected to improve warehouse management systems by monitoring products delivered to other supply chain partners.
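The pipeline described above could look roughly like the following sketch (not the paper's implementation): statistical features are derived from the RSSI samples of each read session, an outlier detector removes abnormal sessions, and a classifier separates true movements from false-positive reads. The feature set and the synthetic data are assumptions.

```python
# Illustrative sketch: RSSI statistics -> outlier removal -> classifier that
# separates true product movements from false-positive RFID reads.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def rssi_features(rssi: np.ndarray) -> list:
    """Simple per-tag statistics over the RSSI values of one read session."""
    return [rssi.mean(), rssi.std(), rssi.min(), rssi.max(), len(rssi)]

# Hypothetical read sessions: arrays of RSSI samples plus a label
# (1 = tag of interest moved through the reader zone, 0 = stray read).
rng = np.random.default_rng(2)
sessions = [rng.normal(-60, 5, rng.integers(5, 50)) for _ in range(400)]
labels = rng.integers(0, 2, 400)

X = np.array([rssi_features(s) for s in sessions])

# Remove abnormal sessions (outliers) before training the classifier.
mask = IsolationForest(random_state=0).fit_predict(X) == 1
clf = RandomForestClassifier(random_state=0).fit(X[mask], labels[mask])
print(clf.predict(X[:5]))  # keep only reads classified as true movements
```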


Energies
2022
Vol 15 (2)
pp. 582
Author(s):
Holger Behrends,
Dietmar Millinger,
Werner Weihs-Sedivy,
Anže Javornik,
Gerold Roolfs,
...

Faults and unintended conditions in grid-connected photovoltaic systems often cause a change in the residual current. This article describes a novel machine learning-based approach to detecting anomalies in the residual current of a photovoltaic system. It can be used to detect faults or critical states at an early stage and extends conventional threshold-based detection methods. For this study, a power-hardware-in-the-loop setup was used, in which typical faults were injected under ideal and realistic operating conditions. The investigation shows that faults in a photovoltaic converter system cause a distinctive behaviour of the residual current, and that these fault patterns can be detected and identified using pattern recognition and variational autoencoder machine learning algorithms. In this context, it was found that the residual current is affected not only by malfunctions of the system but also by volatile external influences. One of the main challenges is therefore to separate the regular residual currents caused by these interferences from those caused by faults. Compared to conventional methods, which respond only to absolute changes in the residual current, the two machine learning models also detect faults that do not affect the absolute value of the residual current.
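As a simplified stand-in for the variational autoencoder used in the study, the sketch below trains a plain autoencoder-style reconstruction model on residual-current windows recorded during normal operation and scores new windows by reconstruction error; the window length and data are assumed, and the model is deliberately not variational.

```python
# Simplified sketch: reconstruction-based anomaly scoring of residual-current
# windows. The study uses a variational autoencoder; here an MLPRegressor is
# trained as a plain autoencoder (input == target) as a lightweight stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
window = 64  # samples per residual-current window (assumed)

# Train only on windows recorded during normal operation.
normal = rng.normal(0.0, 1.0, (500, window))
autoencoder = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=2000,
                           random_state=0).fit(normal, normal)

def anomaly_score(x: np.ndarray) -> float:
    """Mean squared reconstruction error of one window."""
    return float(np.mean((autoencoder.predict(x.reshape(1, -1)) - x) ** 2))

# A synthetic fault-like window should reconstruct worse than a normal one.
fault_like = rng.normal(0.0, 1.0, window) + 3.0 * np.sin(np.linspace(0, 6, window))
print(anomaly_score(normal[0]), anomaly_score(fault_like))
```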


2021
Vol 11 (4)
pp. 251-264
Author(s):
Radhika Bhagwat,
Yogesh Dandawate

Plant diseases cause major yield and economic losses. To detect plant disease at an early stage, selecting appropriate techniques is imperative, as the choice affects cost, diagnosis time, and accuracy. This research gives a comprehensive review of plant disease detection methods, organized by the images used and the processing algorithms applied. It systematically analyzes the traditional machine learning and deep learning algorithms used for processing visible and spectral-range images, and comparatively evaluates the work reported in the literature in terms of the datasets used, the image processing techniques employed, the models utilized, and the efficiency achieved. The study discusses the benefits and limitations of each method, along with the challenges that must be addressed for rapid and accurate plant disease detection. The results show that, for plant disease detection, deep learning outperforms traditional machine learning algorithms, while visible-range images are more widely used than spectral images.
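To illustrate the kind of deep learning model the review finds most effective for visible-range images, here is a minimal CNN classifier sketch; the image size, number of classes, and stand-in data are assumptions, not taken from any surveyed paper.

```python
# Minimal sketch: a small CNN for classifying visible-range leaf images into
# healthy/diseased classes. Input size, class count, and data are assumed.
import numpy as np
import tensorflow as tf

num_classes = 4  # e.g. healthy plus three disease classes (hypothetical)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in data; in practice this would be a labelled leaf-image dataset.
images = np.random.rand(32, 128, 128, 3).astype("float32")
labels = np.random.randint(0, num_classes, 32)
model.fit(images, labels, epochs=1, verbose=0)
```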

