EnsemblePigDet: Ensemble Deep Learning for Accurate Pig Detection

2021 ◽  
Vol 11 (12) ◽  
pp. 5577
Author(s):  
Hanse Ahn ◽  
Seungwook Son ◽  
Heegon Kim ◽  
Sungju Lee ◽  
Yongwha Chung ◽  
...  

Automated pig monitoring is important for smart pig farms; thus, several deep-learning-based pig monitoring techniques have been proposed recently. In applying automated pig monitoring to real farms, however, practical issues must be considered, such as detecting pigs in regions overexposed by strong sunlight through a window. Another practical issue in applying deep-learning-based techniques to a specific pig monitoring application is the annotation cost of pig data. In this study, we propose a method that addresses both issues. Using annotated data obtained from training images without overexposed regions, we first generated augmented data to reduce the effect of overexposure. We then trained YOLOv4 with both the annotated and augmented data and combined the test results of the two YOLOv4 models at the bounding-box level to further improve detection accuracy. We also propose accuracy metrics for pig detection in a closed pig pen that evaluate detection accuracy without box-level annotation. Our experimental results with 216,000 “unseen” test images from overexposed regions in the same pig pen show that the proposed ensemble method can significantly improve the detection accuracy of the baseline YOLOv4, from 79.93% to 94.33%, at the cost of additional execution time.
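The abstract describes combining two YOLOv4 models' detections at the bounding-box level. A minimal sketch of one plausible box-level ensemble rule (a simple IoU-based union; this is illustrative and not necessarily the authors' exact merging rule, and all function names are my own):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ensemble_boxes(boxes_a, boxes_b, iou_thr=0.5):
    """Union of detections from two models, each a list of
    ((x1, y1, x2, y2), confidence) tuples. Overlapping detections are
    merged (keeping the higher-confidence one); non-overlapping boxes are
    kept, so pigs missed by one model can still be recovered by the other."""
    merged = list(boxes_a)
    for box, conf in boxes_b:
        match = next((i for i, (m, _) in enumerate(merged)
                      if iou(m, box) >= iou_thr), None)
        if match is None:
            merged.append((box, conf))
        elif conf > merged[match][1]:
            merged[match] = (box, conf)
    return merged
```

Keeping non-overlapping boxes from either model is what lets the ensemble raise recall in overexposed regions where one model fails.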

2021 ◽  
Vol 133 (1029) ◽  
pp. 115001
Author(s):  
Ming Zhou ◽  
Guanru Lv ◽  
Jian Li ◽  
Zengxiang Zhou ◽  
Zhigang Liu ◽  
...  

Abstract The double revolving fiber positioning unit (FPU) is one of the key technologies of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). The positioning accuracy of the computer-controlled FPU depends on robot accuracy as well as on the initial parameters of the FPU. These initial parameters may deteriorate over time when the FPU runs unsupervised, which would lead to poor fiber positioning accuracy and, in turn, efficiency degradation in subsequent surveys. In this paper, we present a deep-learning-based algorithm to detect the FPU's initial angle from the front-illuminated image of the LAMOST focal plane. Preliminary test results show that the detection accuracy of the FPU initial angle is better than 2.5°, which is good enough to distinguish obviously bad FPUs. Our results are further verified by direct measurement of fiber positions from the back-illuminated image and by correlation analysis of the spectral flux in LAMOST survey data.
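Flagging FPUs whose detected initial angle drifts past the 2.5° tolerance reduces to a wrap-around angle comparison. A minimal sketch under that assumption (function names are illustrative, not from the paper):

```python
def angle_error(detected_deg, reference_deg):
    """Smallest absolute difference between two angles, in degrees,
    accounting for wrap-around at 360."""
    d = abs(detected_deg - reference_deg) % 360.0
    return min(d, 360.0 - d)

def flag_bad_fpus(detections, references, tol_deg=2.5):
    """Return indices of FPUs whose detected initial angle deviates
    from the reference by more than the tolerance."""
    return [i for i, (d, r) in enumerate(zip(detections, references))
            if angle_error(d, r) > tol_deg]
```

The modular arithmetic matters near 0°/360°: a detection of 359° against a reference of 1° is a 2° error, not 358°.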


Repositor ◽  
2020 ◽  
Vol 2 (6) ◽  
pp. 795
Author(s):  
Mochamad Arifin ◽  
Wahyu Andhyka Kusuma ◽  
Syaifuddin Syaifuddin

Abstract Running is an accelerated stride frequency, so the body tends to be momentarily airborne: while running there are moments when neither foot touches the ground. With increasingly rapid and advanced technological development, acceleration while running can be measured using accelerometer technology. An accelerometer can serve as a human aid with several advantages, especially for checking acceleration and distance traveled. In addition, an accelerometer can measure acceleration, detect vibration, and also measure gravitational acceleration. Motion detection is based on three axes: right-left, up-down, and front-back. In this study, the acceleration magnitudes on the x, y, and z axes of the accelerometer sensor were recorded over test distances of 5 meters, 10 meters, 15 meters, and 20 meters. From the test results obtained from 5 respondents, it can be concluded that the test data taken manually and with the application differ. The 5-meter test distance yielded a percentage error of 7.96%; the 10-meter distance, 6.4%; the 15-meter distance, 13.68%; and the 20-meter distance, 11%. Testing was done using a monitoring application installed on a smartphone placed in the respondent's trouser pocket, so that data values on the x, y, and z axes were obtained in the application and then converted into a sine-wave graph and manual calculations of distance and percentage error.


2021 ◽  
Author(s):  
Hye-Won Hwang ◽  
Jun-Ho Moon ◽  
Min-Gyu Kim ◽  
Richard E. Donatelli ◽  
Shin-Jae Lee

ABSTRACT Objectives To compare an automated cephalometric analysis based on the latest deep learning method for automatically identifying cephalometric landmarks (AI) with previously published AIs, following the test style of the worldwide AI challenges at the International Symposium on Biomedical Imaging conferences held by the Institute of Electrical and Electronics Engineers (IEEE ISBI). Materials and Methods The latest AI was developed using a total of 1983 cephalograms as training data. In the training procedure, a modification of a contemporary deep learning method, the YOLO version 3 algorithm, was applied. Test data consisted of 200 cephalograms. To follow the test style of the AI challenges at IEEE ISBI, a human examiner manually identified the IEEE ISBI-designated 19 cephalometric landmarks in both the training and test data sets, and these annotations served as the reference for comparison. Then, the latest AI and another human examiner independently detected the same landmarks in the test data set. The test results were compared using the measures reported at IEEE ISBI: the success detection rate (SDR) and the success classification rate (SCR). Results The SDR of the latest AI within the 2-mm range was 75.5% and its SCR was 81.5%; both were higher than those of any previous AI. Compared with the human examiners, the AI showed a superior success classification rate on some cephalometric measures. Conclusions This latest AI appears to outperform previous AI methods and to provide cephalometric analysis comparable to that of human examiners.
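The 2-mm SDR used above is simply the fraction of landmarks whose predicted position falls within a fixed radius of the reference annotation. A minimal sketch of that metric (illustrative code, not from the paper):

```python
import math

def success_detection_rate(preds, refs, radius_mm=2.0):
    """Percentage of landmarks whose predicted (x, y) position, in mm,
    lies within radius_mm of the reference position."""
    hits = sum(1 for p, r in zip(preds, refs)
               if math.dist(p, r) <= radius_mm)
    return 100.0 * hits / len(preds)
```

In practice pixel coordinates must first be converted to millimeters using the cephalogram's known pixel spacing before applying the 2-mm threshold.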


2015 ◽  
Vol 4 (3) ◽  
Author(s):  
Seruni Seruni ◽  
Nurul Hikmah

The purpose of this study is to find and analyze the effect of feedback on learning outcomes in mathematics and on interest in a basic statistics course. The accessible population in this study was Information Technology students of Indraprasta PGRI University, South Jakarta, in semester II of the 2012/2013 academic year. The study sample was obtained through random sampling. This study used an experimental method, with analysis using the MANOVA test. The study has three variables: one independent variable, the provision of feedback (immediate or delayed), and two dependent variables, mathematics learning outcomes and interest in the basic statistics course. Data were collected using a test for mathematics learning outcomes and a questionnaire for interest in the basic statistics course. Before the data were analyzed, descriptive statistical analysis and tests of the analysis requirements (normality of the data and homogeneity of covariance matrices) were performed. The results show that mathematics learning outcomes and interest in the basic statistics course were higher for students given immediate feedback than for students given delayed feedback.


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to quickly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training and a testing set at a ratio of 8:2. On the test dataset, the diagnostic performance in diagnosing COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external testing dataset extracted from embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.
RESULTS Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the FCONet models based on VGG16, Xception, and InceptionV3.
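The sensitivity, specificity, and accuracy figures reported above all derive from the same confusion-matrix counts. A minimal sketch of those definitions (illustrative helper, not part of FCONet):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy (as percentages) computed
    from confusion-matrix counts: true/false positives and negatives."""
    sensitivity = 100.0 * tp / (tp + fn)          # recall on diseased cases
    specificity = 100.0 * tn / (tn + fp)          # recall on healthy cases
    accuracy = 100.0 * (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

Reporting sensitivity and specificity separately, as the paper does, guards against a model that scores high accuracy simply by favoring the majority class.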


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4736
Author(s):  
Sk. Tanzir Mehedi ◽  
Adnan Anwar ◽  
Ziaur Rahman ◽  
Kawsar Ahmed

The Controller Area Network (CAN) bus serves as an important protocol in real-time In-Vehicle Network (IVN) systems for its simple, suitable, and robust architecture. IVN devices nevertheless remain insecure and vulnerable because complex, data-intensive architectures greatly increase accessibility to unauthorized networks and the possibility of various types of cyberattacks. Therefore, detecting cyberattacks in IVN devices has become a growing research interest. With the rapid development of IVNs and evolving threat types, traditional machine-learning-based IDSs must be updated to cope with the security requirements of the current environment. The recent progress of deep learning and deep transfer learning, and their impactful outcomes in several areas, points to an effective solution for network intrusion detection. This manuscript proposes a deep-transfer-learning-based IDS model for IVNs with improved performance compared with several existing models. The unique contributions include effective attribute selection best suited to identifying malicious CAN messages and accurately detecting normal and abnormal activities, the design of a deep-transfer-learning-based LeNet model, and evaluation on real-world data. To this end, an extensive experimental performance evaluation was conducted. The architecture, together with the empirical analyses, shows that the proposed IDS greatly improves detection accuracy over mainstream machine learning, deep learning, and benchmark deep transfer learning models and demonstrates better performance for real-time IVN security.
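Before a CAN message can feed a LeNet-style network, its fields must be flattened into a fixed-length numeric vector. A minimal sketch of one plausible encoding (the paper's actual attribute selection is not specified here; the function and scaling choices are my own assumptions):

```python
def can_frame_features(can_id, data):
    """Encode a CAN frame as a fixed-length feature vector: the 11-bit
    arbitration ID plus 8 payload bytes (zero-padded when the frame is
    shorter), each scaled to [0, 1] so frames can be stacked into the
    2D grid a LeNet-style model expects."""
    payload = list(data[:8]) + [0] * (8 - min(len(data), 8))
    return [can_id / 0x7FF] + [b / 255.0 for b in payload]
```

Stacking consecutive frame vectors into a window then yields an image-like input, which is what makes transfer from image-trained convolutional models plausible.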


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1127
Author(s):  
Ji Hyung Nam ◽  
Dong Jun Oh ◽  
Sumin Lee ◽  
Hyun Joo Song ◽  
Yun Jeong Lim

Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify the algorithm’s performance. A 5-point scoring system based on clarity of mucosal visualization was used to develop the deep learning algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and average cleansing scores (1.0 to 5.0) calculated using the algorithm were compared to clinical grades (A to C) assigned by clinicians. Test results obtained using 120,000 frames exhibited 93% accuracy. The separate CE case exhibited substantial agreement between the deep learning algorithm scores and clinicians’ assessments (Cohen’s kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively, p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective and automated cleansing score for evaluating SB preparation for CE. The results of this study will serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
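Applying the study's ROC-derived cut-off of 2.95 to a case's mean cleansing score is straightforward. A minimal sketch (illustrative helpers; only the 1-5 scale and the 2.95 threshold come from the abstract):

```python
def case_score(frame_scores):
    """Average per-frame cleansing scores (1-5) into a case-level score."""
    return sum(frame_scores) / len(frame_scores)

def adequacy(score, cutoff=2.95):
    """Classify a mean cleansing score as clinically adequate or
    inadequate small-bowel preparation using the ROC-derived cut-off."""
    return "adequate" if score >= cutoff else "inadequate"
```

Under this rule the reported grade-level means behave as expected: grades A (3.9) and B (3.2) fall above the cut-off, grade C (2.5) below it.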


2021 ◽  
Vol 13 (14) ◽  
pp. 2822
Author(s):  
Zhe Lin ◽  
Wenxuan Guo

An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods in stand assessment are labor-intensive and time-consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage in unmanned aerial system (UAS) images. These models were trained on two datasets containing 400 and 900 images with variations in plant size and soil background brightness. The performance of these models was assessed with two testing datasets of different dimensions, testing dataset 1 at 300 by 400 pixels and testing dataset 2 at 250 by 1200 pixels. The model validation results showed that the mean average precision (mAP) and average recall (AR) were 79% and 73% for the CenterNet model, and 86% and 72% for the MobileNet model with 900 training images. The accuracy of cotton plant detection and counting was higher with testing dataset 1 for both the CenterNet and MobileNet models. The results showed that the CenterNet model had a better overall performance for cotton plant detection and counting with 900 training images. The results also indicated that more training images are required when applying object detection models to images with dimensions different from those of the training datasets. The mean absolute percentage error (MAPE), coefficient of determination (R2), and root mean squared error (RMSE) of the cotton plant counts were 0.07%, 0.98, and 0.37, respectively, with testing dataset 1 for the CenterNet model with 900 training images. Both the MobileNet and CenterNet models have the potential to detect and count cotton plants accurately and promptly from high-resolution UAS images at the seedling stage.
This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.
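The three counting-error measures quoted above (MAPE, R2, RMSE) have standard definitions that can be computed directly from per-plot predicted and observed counts. A minimal sketch (illustrative code, not the authors' evaluation script):

```python
def count_metrics(predicted, observed):
    """MAPE (%), coefficient of determination R^2, and RMSE for a set of
    predicted vs. manually observed plant counts."""
    n = len(predicted)
    mape = 100.0 * sum(abs(p - o) / o for p, o in zip(predicted, observed)) / n
    mean_o = sum(observed) / n
    ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    rmse = (ss_res / n) ** 0.5
    return mape, r2, rmse
```

MAPE is scale-free (useful across plots of different density), while RMSE stays in plant-count units, which is why papers commonly report both.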


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in remote sensing owing to its all-day, all-weather advantage. Monitoring ships in national territorial waters supports maritime law enforcement, maritime traffic control, and national maritime security, so ship detection has been a research hot spot and focus. As detection methods have evolved from traditional approaches to those combined with deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while transplanted optical-image detectors have ignored the low signal-to-noise ratio, low resolution, single-channel input, and other characteristics imposed by the SAR imaging principle. By constantly pursuing detection accuracy while ignoring detection speed and practical deployment, almost all such algorithms rely on powerful clustered desktop GPUs, which cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and of the network's ability to extract features; the architecture and training are based on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time deployment, significantly reducing the model size, detection time, number of computational parameters, and memory consumption, and the network was refined for three-channel images to compensate for the accuracy loss due to light-weighting.
The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value for maritime safety monitoring and emergency rescue.
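Since SAR images are single-channel while YOLO-style backbones expect three channels, a multi-channel fusion step must synthesize complementary views of the same scene. A minimal sketch of one such fusion (the specific channels here, raw amplitude, log-scaled amplitude, and a 3x3 box-filtered copy, are my illustrative choices, not necessarily the paper's):

```python
import numpy as np

def sar_to_three_channel(img):
    """Build a 3-channel input from a single-channel SAR amplitude image:
    raw amplitude, a log-scaled copy (compresses the speckle-heavy dynamic
    range), and a 3x3 box-filtered copy as a lightly denoised view."""
    img = img.astype(np.float32)
    log_img = np.log1p(img)
    # 3x3 box filter via edge padding and shifted sums
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    box = sum(p[i:i + h, j:j + w]
              for i in range(3) for j in range(3)) / 9.0
    return np.stack([img, log_img, box], axis=-1)
```

Feeding genuinely different transforms into the three channels gives the first convolutional layer more usable information than simply replicating the grayscale image three times.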

