Learn to Train: Improving Training Data for a Neural Network to Detect Pecking Injuries in Turkeys

Animals, 2021, Vol. 11 (9), pp. 2655
Author(s): Nina Volkmann, Johannes Brünger, Jenny Stracke, Claudius Zelenka, Reinhard Koch, ...

This study aimed to develop a camera-based system using artificial intelligence for the automated detection of pecking injuries in turkeys. Videos were recorded and split into individual images for further processing. Using specifically developed software, the injuries visible in these images were marked by humans, and a neural network was trained with these annotations. Because the agreement between the annotations of humans and the network was unacceptable, several work steps were initiated to improve the training data. First, a costly work step was used to create high-quality annotations (HQA), for which multiple observers evaluated already-annotated injuries: each labeled detection had to be validated by three observers before it was saved as “finished”, and for each image, all detections had to be verified three times. Then, a network was trained with these HQA to assist observers in annotating more data. Finally, the benefit of the HQA work step was tested, and it was shown that the agreement between the annotations of humans and the network could be doubled. Although the system is not yet capable of ensuring adequate detection of pecking injuries, the study demonstrates the importance of such validation steps for obtaining good training data.
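For illustration, the triple-validation bookkeeping behind such an HQA step could look like the following minimal Python sketch; the class and field names are assumptions for this example, not taken from the study's software.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One annotated injury; 'finished' only after three distinct observers confirm it."""
    bbox: tuple                               # (x, y, w, h) of the marked injury
    confirmed_by: set = field(default_factory=set)

    def validate(self, observer_id: str) -> None:
        self.confirmed_by.add(observer_id)

    @property
    def finished(self) -> bool:
        return len(self.confirmed_by) >= 3

def image_is_verified(detections: list[Detection]) -> bool:
    # An image counts as verified once every detection on it is finished.
    return all(d.finished for d in detections)
```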

Mathematics, 2020, Vol. 8 (7), pp. 1104
Author(s): Alexey Alexeev, Georgy Kukharev, Yuri Matveev, Anton Matveev

We investigate a neural network–based solution to the Automatic Meter Reading detection problem, applied to analog dial gauges. We employ a convolutional neural network with a non-linear Network in Network kernel. There is currently significant interest in systems for the automatic reading of analog dial gauges, particularly in the energy and household sectors, but the problem is not yet sufficiently addressed in research. Our method is a universal three-level model that takes an image as input and outputs circular bounding areas, object classes, grids of reference points for all symbols on the front panel of the device, and the positions of display pointers. Since all analog pointer meters share a common nature, this multi-cascade model can serve various types of devices if its capacity is sufficient. The model uses global regression for symbol locations, which provides resilient results even for low image quality and overlapping symbols. In this work, we do not focus on pointer location detection, since it depends heavily on the shape of the pointer. We prepare the training data and benchmark the algorithm with our own framework, a3net, without relying on third-party neural network solutions. The experimental results demonstrate the versatility of the proposed methods, their high accuracy, and the resilience of reference-point detection.
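As a rough illustration of such a multi-output design, the sketch below wires one shared feature map into three prediction heads for circular bounding areas, object classes, and reference-point grids; all layer names and dimensions are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MeterDetectionHeads(nn.Module):
    """Illustrative multi-task head layout: one shared feature map feeding
    separate predictors, loosely following the three-level idea above.
    Channel counts and head shapes are assumptions, not from the paper."""
    def __init__(self, in_ch: int = 256, n_classes: int = 10, n_refpoints: int = 32):
        super().__init__()
        self.circle_head = nn.Conv2d(in_ch, 3, 1)                   # (cx, cy, radius) per cell
        self.class_head = nn.Conv2d(in_ch, n_classes, 1)            # symbol/device class scores
        self.refpoint_head = nn.Conv2d(in_ch, 2 * n_refpoints, 1)   # (x, y) grid of reference points

    def forward(self, features: torch.Tensor):
        return (self.circle_head(features),
                self.class_head(features),
                self.refpoint_head(features))
```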


Author(s): Nan Cao, Xin Yan, Yang Shi, Chaoran Chen

Sketch drawings have played an important role in assisting humans in communication and creative design since ancient times. This has motivated the development of artificial intelligence (AI) techniques for automatically generating sketches from user input. Sketch-RNN, a sequence-to-sequence variational autoencoder (VAE) model, was developed for this purpose and is regarded as a state-of-the-art technique. However, it suffers from limitations, including the generation of low-quality results and its inability to support multi-class generation. To address these issues, we introduce AI-Sketcher, a deep generative model for generating high-quality multi-class sketches. Our model improves drawing quality by employing a CNN-based autoencoder to capture the positional information of each stroke at the pixel level. It also introduces an influence layer to more precisely guide the generation of each stroke by directly referring to the training data. To support multi-class sketch generation, we provide a conditional vector that helps differentiate sketches of various classes. The proposed technique was evaluated on two large-scale sketch datasets, and the results demonstrate its power in generating high-quality sketches.
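The conditional vector can be pictured as a one-hot class label appended to the latent code before decoding, as in the common conditional-VAE pattern sketched below; the function name and shapes are illustrative assumptions, not AI-Sketcher's exact wiring.

```python
import torch
import torch.nn.functional as F

def condition_latent(z: torch.Tensor, class_idx: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Append a one-hot class vector to the latent code so the decoder can
    distinguish sketch classes (a generic conditional-VAE pattern; the
    paper's exact conditioning mechanism may differ)."""
    one_hot = F.one_hot(class_idx, n_classes).float()   # (batch, n_classes)
    return torch.cat([z, one_hot], dim=-1)

# e.g. a latent z of shape (batch, 128) becomes (batch, 128 + n_classes)
```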


2020
Author(s): Zhaoping Xiong, Ziqiang Cheng, Chi Xu, Xinyuan Lin, Xiaohong Liu, ...

Abstract Artificial intelligence (AI) models usually require large amounts of high-quality training data, which stands in striking contrast to the small and biased data faced by current drug discovery pipelines. The concept of federated learning has been proposed to utilize distributed data from different sources without leaking sensitive information from these data. This emerging decentralized machine learning paradigm is expected to dramatically improve the success of AI-powered drug discovery. Here, we simulate the federated learning process with 7 aqueous solubility datasets from different sources, among which there are overlapping molecules with high or low biases in the recorded values. Beyond the benefit of gaining more data, we also demonstrate that federated training has a regularization effect, making it superior to centralized training on pooled datasets with high biases. Furthermore, two additional cases are studied to test the usability of federated learning in drug discovery. Our work not only demonstrates the application of federated learning in predicting drug-related properties but also highlights its promising role in addressing the small-data and biased-data dilemma in drug discovery.
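A generic FedAvg-style round over several such datasets might look like the following sketch; the function name, local-update schedule, and equal client weighting are assumptions for illustration, not the paper's exact protocol.

```python
import copy
import torch
import torch.nn.functional as F

def federated_round(global_model, client_loaders, local_steps=1, lr=1e-3):
    """One FedAvg-style round: each site trains locally on its own solubility
    data, then parameters are averaged. Assumes a model whose state_dict holds
    only float tensors (e.g., an MLP regressor); a sketch, not the paper's setup."""
    client_states = []
    for loader in client_loaders:                  # one loader per data source
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_steps):
            for x, y in loader:
                opt.zero_grad()
                loss = F.mse_loss(local(x), y)     # solubility regression loss
                loss.backward()
                opt.step()
        client_states.append(local.state_dict())
    # Average parameters across clients (equal weighting for simplicity).
    avg = {k: torch.stack([s[k] for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```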


2021, Vol. 11 (2), pp. 16-24
Author(s): Furkan Kayım, Atınç Yılmaz

In ancient times, trade was carried out by barter. With the use of money and similar means, the concept of financial instruments emerged. Financial instruments are tools and documents used in the economy; they include foreign exchange rates, securities, cryptocurrencies, indices, and funds. Many methods are used in financial instrument forecasting, including technical analysis, fundamental analysis, forecasts based on variables and formulas, time-series algorithms, and artificial intelligence algorithms. Within the scope of this study, the importance of artificial intelligence algorithms in financial instrument forecasting is examined. Since financial instruments are used as a means of investment and trade by all sections of society, namely individuals, families, institutions, and states, it is highly important to anticipate their future behavior. Financial instrument forecasting can bring benefits such as increased income and welfare, more economical adjustment of maturities, creation of large pools of finance, minimization of risk, broader distribution of ownership, and a more balanced income distribution. In this study, financial instrument forecasting is carried out with the Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and Autoregressive Integrated Moving Average (ARIMA) algorithms, together with an ensemble classification boosting method. A forecasting network is built by combining the LSTM and RNN algorithms, with an LSTM layer and an RNN output layer. At the conclusion of the study, the forecasts of the alternative algorithms were compared against one another and the most successful algorithm was identified; the success rate was further assessed across different time intervals and training data sets. In addition, a new method developed with the ensemble classification boosting approach yielded a more successful result than the best individual algorithm.
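The LSTM-plus-RNN network described here can be sketched as an LSTM layer feeding a simple RNN output layer, as below; the layer sizes and single-step forecast head are illustrative assumptions, not the study's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMRNNForecaster(nn.Module):
    """Sketch of an LSTM layer feeding an RNN output layer whose last hidden
    state is projected to a next-step forecast. Sizes are assumptions."""
    def __init__(self, n_features: int = 1, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.rnn = nn.RNN(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)    # next-step price forecast

    def forward(self, x):                  # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        h, _ = self.rnn(h)
        return self.out(h[:, -1])          # predict from the final time step
```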


Author(s): Caroline Bivik Stadler, Martin Lindvall, Claes Lundström, Anna Bodén, Karin Lindman, ...

Abstract Artificial intelligence (AI) holds much promise for enabling highly desired improvements in imaging diagnostics. One of the most limiting bottlenecks for the development of useful clinical-grade AI models is the lack of training data: one aspect is the large number of cases needed, another the necessity of high-quality ground-truth annotation. The aim of the project was to establish and describe the construction of a database with substantial amounts of detail-annotated oncology imaging data from pathology and radiology. A specific objective was to be proactive, that is, to support undefined subsequent AI training across a wide range of tasks, such as detection, quantification, segmentation, and classification, which puts particular focus on the quality and generality of the annotations. The main outcome of this project was the database as such, with a collection of labeled image data from breast, ovary, skin, colon, skeleton, and liver. In addition, this effort served as an exploration of best practices for further scalability of high-quality image collections, and a main contribution of the study was the generic lessons learned regarding how to successfully organize efforts to construct medical imaging databases for AI training, summarized as eight guiding principles covering team, process, and execution aspects.


Author(s): Johannes Rueckel, Christian Huemmer, Andreas Fieselmann, Florin-Cristian Ghesu, Awais Mansoor, ...

Abstract
Objectives: Diagnostic accuracy of artificial intelligence (AI) pneumothorax (PTX) detection in chest radiographs (CXR) is limited by the noisy annotation quality of public training data and by confounding thoracic tubes (TT). We hypothesize that in-image annotations of the dehiscent visceral pleura in algorithm training boost the algorithm's performance and suppress confounders.
Methods: Our single-center evaluation cohort of 3062 supine CXRs includes 760 PTX-positive cases with radiological annotations of PTX size and inserted TTs. Three step-by-step improved algorithms (differing in algorithm architecture, training data from public datasets/clinical sites, and in-image annotations included in algorithm training) were characterized by the area under the receiver operating characteristic curve (AUROC) in detailed subgroup analyses and referenced to the well-established “CheXNet” algorithm.
Results: Performances of established algorithms trained exclusively on publicly available data without in-image annotations are limited to AUROCs of 0.778 and are strongly biased towards TTs, which can completely eliminate the algorithm's discriminative power in individual subgroups. In contrast, our final “algorithm 2”, trained on fewer images but additionally with in-image annotations of the dehiscent pleura, achieved an overall AUROC of 0.877 for unilateral PTX detection with a significantly reduced TT-related confounding bias.
Conclusions: We demonstrated strong limitations of an established PTX-detecting AI algorithm that can be significantly reduced by designing an AI system capable of learning to both classify and localize PTX. Our results are aimed at drawing attention to the necessity of high-quality in-image localization in training data to reduce the risk of unintentionally biasing the training process of pathology-detecting AI algorithms.
Key Points:
• Established pneumothorax-detecting artificial intelligence algorithms trained on public training data are strongly limited and biased by confounding thoracic tubes.
• We used high-quality in-image annotated training data to effectively boost algorithm performance and suppress the impact of confounding thoracic tubes.
• Based on our results, we hypothesize that even hidden confounders might be effectively addressed by in-image annotations of pathology-related image features.
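A system that learns to both classify and localize PTX typically optimizes a joint objective; the sketch below combines an image-level classification loss with a pixel-level loss on the annotated pleura. The loss choice and weighting are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def classify_and_localize_loss(cls_logit, seg_logits, label, pleura_mask, seg_weight=1.0):
    """Joint objective sketch: image-level pneumothorax classification plus
    pixel-level supervision from in-image annotations of the dehiscent pleura.
    'label' and 'pleura_mask' are float tensors matching the logits' shapes."""
    cls_loss = F.binary_cross_entropy_with_logits(cls_logit, label)        # PTX present?
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, pleura_mask) # where is it?
    return cls_loss + seg_weight * seg_loss
```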


2020
Author(s): Nils Wagner, Fynn Beuttenmueller, Nils Norlin, Jakob Gierten, Juan Carlos Boffi, ...

Light-field microscopy (LFM) has emerged as a powerful tool for fast volumetric image acquisition in biology, but its effective throughput and widespread use have been hampered by a computationally demanding and artefact-prone image reconstruction process. Here, we present a novel framework consisting of a hybrid light-field light-sheet microscope and deep learning-based volume reconstruction, in which single light-sheet acquisitions continuously serve as training data and validation for the convolutional neural network reconstructing the LFM volume. Our network delivers high-quality reconstructions at video-rate throughput, and we demonstrate the capabilities of our approach by imaging medaka heart dynamics and zebrafish neural activity.
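Conceptually, each co-acquired light-sheet volume supervises the network's reconstruction of the corresponding light-field acquisition, as in the schematic training step below; the loss and function signature are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def train_step(net, optimizer, lf_image, ls_volume):
    """One hybrid-microscope training step: the network reconstructs a volume
    from the raw light-field image, supervised by the co-acquired light-sheet
    volume. A schematic sketch; the actual network and loss may differ."""
    optimizer.zero_grad()
    pred_volume = net(lf_image)            # (batch, z, y, x) reconstruction
    loss = F.mse_loss(pred_volume, ls_volume)
    loss.backward()
    optimizer.step()
    return loss.item()
```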


2018, Vol. 28 (03), pp. 1850011
Author(s): Peizhi Yan, Yi Feng

Gomoku is an ancient board game. The traditional approach to solving Gomoku is to apply tree search to the game tree. Although the rules of Gomoku are straightforward, the game tree complexity is enormous. Unlike many other board games such as chess and Shogi, the Gomoku board state is more visually intuitive; that is to say, analyzing the visual patterns on the game board is fundamental to playing the game. In this paper, we designed a deep convolutional neural network model to help the machine learn from training data collected from human players. Based on this original neural network model, we made some changes and derived two variant neural networks, and we compared the performance of the original network with its variants in our experiments. Our original neural network model achieved 69% accuracy on the training data and 38% accuracy on the testing data. Because the decisions made by the neural network are intuition-based, we also designed a hard-coded convolution-based Gomoku evaluation function to assist the neural network in making decisions. This hybrid Gomoku artificial intelligence (AI) further improved the performance of the pure neural network-based Gomoku AI.
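A hard-coded convolution-based evaluation can be pictured as fixed line-detecting kernels applied in four directions, as in the sketch below; the kernels and scoring rule are illustrative assumptions, not the paper's exact evaluation function.

```python
import numpy as np
from scipy.signal import convolve2d

# Hard-coded kernels scoring horizontal, vertical, and diagonal lines of
# stones -- an illustrative take on a convolution-based Gomoku evaluation.
KERNELS = [
    np.ones((1, 5)),          # horizontal five-in-a-row window
    np.ones((5, 1)),          # vertical
    np.eye(5),                # main diagonal
    np.fliplr(np.eye(5)),     # anti-diagonal
]

def evaluate(board: np.ndarray) -> float:
    """board: 15x15 array with +1 (our stones), -1 (opponent), 0 (empty).
    Window sums approach +5 for our near-wins and -5 for opponent threats."""
    score = 0.0
    for k in KERNELS:
        windows = convolve2d(board, k, mode="valid")
        score += np.sum(windows ** 3)   # cubing keeps sign, emphasizes longer runs
    return score
```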

