Underwater Fish Detection and Counting Using Mask Regional Convolutional Neural Network

Water ◽  
2022 ◽  
Vol 14 (2) ◽  
pp. 222
Author(s):  
Teh Hong Khai ◽  
Siti Norul Huda Sheikh Abdullah ◽  
Mohammad Kamrul Hasan ◽  
Ahmad Tarmizi

Fish production has become a roadblock to the development of fish farming, and one of the issues encountered throughout the hatching process is the counting procedure. Previous research has depended mainly on non-machine-learning-based and machine-learning-based counting methods and so was unable to provide precise results. In this work, we used a robotic eye camera to capture shrimp photos on a shrimp farm to train the model. The image data were classified into three categories based on the density of shrimp: low density, medium density, and high density. We used a parameter calibration strategy to discover the appropriate parameters and propose an improved Mask Regional Convolutional Neural Network (Mask R-CNN) model. As a result, the enhanced Mask R-CNN model reaches an accuracy rate of up to 97.48%.
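As an illustrative sketch (not the authors' code) of the counting step such a pipeline implies, the snippet below post-processes hypothetical Mask R-CNN detection scores into a shrimp count and one of the three density categories; the confidence and density thresholds are assumptions for illustration only.

```python
def count_shrimp(scores, conf_threshold=0.5):
    """Count detected instances whose confidence clears the threshold."""
    return sum(1 for s in scores if s >= conf_threshold)

def density_category(count, low_max=50, medium_max=150):
    """Bin a per-image shrimp count into the paper's three density
    categories (cutoffs here are illustrative assumptions)."""
    if count <= low_max:
        return "low"
    if count <= medium_max:
        return "medium"
    return "high"
```

In a real pipeline the `scores` would come from the Mask R-CNN's per-instance confidence outputs; only the thresholding and binning logic is shown here.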

2020 ◽  
pp. 808-817
Author(s):  
Vinh Pham ◽  
◽  
Eunil Seo ◽  
Tai-Myoung Chung

Identifying threats contained within encrypted network traffic poses a great challenge to Intrusion Detection Systems (IDS). Because traditional approaches such as deep packet inspection cannot operate on encrypted network traffic, machine-learning-based IDS is a promising solution. However, a machine-learning-based IDS requires enormous amounts of statistical data on network traffic flows as input, demands high computing power for processing, and is slow in detecting intrusions. We propose a lightweight IDS that transforms raw network traffic into representation images. We begin by inspecting the characteristics of malicious network traffic in the CSE-CIC-IDS2018 dataset. We then adapt methods for effectively representing those characteristics as image data. A Convolutional Neural Network (CNN) based detection model is used to identify malicious traffic hidden within the image data. To demonstrate the feasibility of the proposed lightweight IDS, we conduct three simulations on two datasets that contain encrypted traffic with current network attack scenarios. The experimental results show that our proposed IDS achieves 95% accuracy with a reasonable detection time while requiring relatively little training data.
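A minimal sketch of the kind of traffic-to-image transformation described above, assuming raw flow bytes are padded or truncated into a fixed-size grayscale image; the 28x28 size and byte-level encoding are illustrative assumptions, not the paper's exact representation.

```python
import numpy as np

def flow_to_image(payload: bytes, side: int = 28) -> np.ndarray:
    """Encode raw flow bytes as a side x side grayscale image in [0, 1].

    Bytes beyond side*side are truncated; shorter flows are zero-padded,
    so every flow maps to a fixed-size CNN input.
    """
    buf = np.frombuffer(payload, dtype=np.uint8)[: side * side]
    img = np.zeros(side * side, dtype=np.uint8)
    img[: buf.size] = buf
    return img.reshape(side, side) / 255.0
```

Such fixed-size images can then be batched and fed directly to a standard image-classification CNN.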


2021 ◽  
Author(s):  
Wael Alnahari

Abstract In this paper, I propose an iris recognition system using deep learning via a convolutional neural network (CNN). Although CNNs are typically trained for machine learning tasks, recognition here is achieved by building a non-trained CNN with multiple layers. The main objective is to identify each test picture's category (i.e., the person's name) with a high accuracy rate after extracting enough features from training pictures of the same category, which are obtained from a database that I added to the code. I used the IITD iris database, which includes 10 iris pictures for each of 223 people.


2020 ◽  
Vol 32 ◽  
pp. 01012
Author(s):  
Mayank Shete ◽  
Saahil Sabnis ◽  
Srijan Rai ◽  
Gajanan Birajdar

Diabetic Retinopathy is one of the most prominent eye diseases and is the leading cause of blindness amongst adults. Automatic detection of Diabetic Retinopathy is important to prevent irreversible damage to eyesight. Existing feature learning methods have lower accuracy in computer-aided diagnostics; this paper proposes a method to further increase the accuracy. Machine learning can be used effectively for the diagnosis of this disease. A CNN with transfer learning is used for severity classification and achieves an accuracy of 73.9 percent. The use of an XGBoost classifier yields an accuracy of 76.5 percent.
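A hedged sketch of the two-stage idea implied above (CNN-derived features fed to a boosted-tree classifier), using synthetic feature vectors in place of real CNN activations and scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost; all dimensions and labels below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in for CNN-extracted retinal features: 200 samples, 16 dims,
# labeled with 5 severity grades (0 = no DR ... 4 = proliferative DR).
features = rng.normal(size=(200, 16))
labels = rng.integers(0, 5, size=200)

# Boosted trees on top of the frozen CNN features.
clf = GradientBoostingClassifier(n_estimators=20, random_state=0)
clf.fit(features, labels)
preds = clf.predict(features)
```

In practice the `features` array would hold penultimate-layer activations of the transfer-learned CNN, and a held-out split would be used to measure accuracy.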


2021 ◽  
Vol 6 (1) ◽  
pp. 18
Author(s):  
Diny Melsye Nurul Fajri

Kenaf fiber is mainly used in industrial products that substitute for forest wood, so it can be promoted as the main component of environmentally friendly goods. Unfortunately, several kenaf plantations have been stricken by diseases, causing a loss of yield. Advances in technology can help kenaf farmers detect quickly and accurately which pests or diseases have attacked their crops. This paper discusses the application of a machine learning method, the Convolutional Neural Network (CNN), which turns input leaf images into provisional diagnoses. The data used are 838 images across 4 classes. The averaged results show that CNN achieves an accuracy of 73% for the detection of diseases and plant pests in kenaf plants.


2021 ◽  
Vol 19 ◽  
pp. 16-27
Author(s):  
Gil Gabornes Dialogo ◽  
Larmie Santos Feliscuzo ◽  
Elmer Asilo Maravillas

This study presents an application that employs a machine-learning algorithm to identify fish species found in Leyte Gulf. It aims to help students and marine scientists with their identification and data collection. The application supports 467 fish species, for which 6,918 fish images are used for training, validating, and testing the generated model. The model is trained for a total of 4,000 epochs. Using the convolutional neural network (CNN) algorithm, the best model during training is observed at epoch 3,661, with an accuracy rate of 96.49% and a loss value of 0.1359. It obtains 82.81% accuracy with a loss value of 1.868 during validation and 80.58% precision during testing. The results show that the model performs well in predicting the Malatindok and Sapsap species, obtaining the highest precision of 100%. However, Hangit is sometimes misclassified by the model, attaining a 55% accuracy rate in the testing results because of its feature similarity to other species.


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 816
Author(s):  
Pingping Liu ◽  
Xiaokang Yang ◽  
Baixin Jin ◽  
Qiuzhan Zhou

Diabetic retinopathy (DR) is a common complication of diabetes mellitus (DM), and it is necessary to diagnose DR in the early stages of treatment. With the rapid development of convolutional neural networks in the field of image processing, deep learning methods have achieved great success in medical image processing. Various medical lesion detection systems have been proposed to detect fundus lesions. At present, the image classification process for diabetic retinopathy ignores the fine-grained properties of the diseased image, and most retinopathy image datasets suffer from seriously uneven class distributions, which greatly limits the network's ability to predict the classification of lesions. We propose a new non-homologous bilinear pooling convolutional neural network model and combine it with an attention mechanism to further improve the network's ability to extract specific features of the image. The experimental results show that, compared with the most popular fundus image classification models, our proposed network model greatly improves prediction accuracy while maintaining computational efficiency.
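A minimal sketch of the bilinear pooling operation named above, assuming two (possibly non-homologous) convolutional feature maps whose outer products are averaged over spatial locations; the shapes and NumPy formulation are illustrative, not the authors' implementation.

```python
import numpy as np

def bilinear_pool(fa: np.ndarray, fb: np.ndarray) -> np.ndarray:
    """Bilinear pooling of two feature maps.

    fa: C1 x H x W feature map, fb: C2 x H x W feature map (same H, W).
    Returns the C1 x C2 second-order descriptor obtained by averaging
    the outer product of the two channel vectors over all locations.
    """
    c1, h, w = fa.shape
    c2 = fb.shape[0]
    a = fa.reshape(c1, h * w)
    b = fb.reshape(c2, h * w)
    return (a @ b.T) / (h * w)
```

When `fa` and `fb` come from two different branches (non-homologous streams), the resulting descriptor captures pairwise channel interactions, which is what makes bilinear pooling useful for fine-grained lesion classification.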


Author(s):  
Satoru Tsuiki ◽  
Takuya Nagaoka ◽  
Tatsuya Fukuda ◽  
Yuki Sakamoto ◽  
Fernanda R. Almeida ◽  
...  

Abstract Purpose In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx in comparison with non-OSA patients. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA based on 2-dimensional images. Methods A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea hypopnea index > 30 events/h sleep) or non-OSA (n = 522; apnea hypopnea index < 5 events/h sleep) at a single center for sleep disorders. Three kinds of data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison. Results The sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating characteristic curve was highest for the main region (0.92), followed by the full image (0.89), manual cephalometric analysis (0.75), and head only (0.70). Conclusions A deep convolutional neural network identified individuals with severe OSA with high accuracy. These findings may encourage future research on using AI and 2-dimensional images for the triage of OSA.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Abstract Machine learning has greatly facilitated the analysis of medical data, while its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions dependent on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
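The grader-versus-algorithm variability above rests on Hamming distances between segmentations; a minimal sketch, assuming equal-shaped label masks and reporting disagreement as a fraction of pixels (the exact normalization used in the paper is an assumption here):

```python
import numpy as np

def hamming_disagreement(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of pixels where two segmentation label maps disagree,
    i.e. the normalized Hamming distance between the masks."""
    return float(np.mean(mask_a != mask_b))
```

Averaging this quantity over all grader pairs (and over grader-algorithm pairs) yields the kind of percentage variability quoted in the abstract.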


Plants ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 31
Author(s):  
Jia-Rong Xiao ◽  
Pei-Che Chung ◽  
Hung-Yi Wu ◽  
Quoc-Hung Phan ◽  
Jer-Liang Andrew Yeh ◽  
...  

The strawberry (Fragaria × ananassa Duch.) is a high-value crop with an annual cultivated area of ~500 ha in Taiwan. Over 90% of strawberry cultivation is in Miaoli County. Unfortunately, various diseases significantly decrease strawberry production. The leaf and fruit disease became an epidemic in 1986. From 2010 to 2016, anthracnose crown rot caused the loss of 30–40% of seedlings and ~20% of plants after transplanting. The automation of agriculture and image recognition techniques are indispensable for detecting strawberry diseases. We developed an image recognition technique for the detection of strawberry diseases using a convolutional neural network (CNN) model. CNN is a powerful deep learning approach that has been used to enhance image recognition. In the proposed technique, two different datasets containing the original and feature images are used for detecting the following strawberry diseases: leaf blight, gray mold, and powdery mildew. Specifically, leaf blight may affect the crown, leaf, and fruit, showing different symptoms in each. By using the ResNet50 model with a training period of 20 epochs for 1306 feature images, the proposed CNN model achieves a classification accuracy rate of 100% for leaf blight cases affecting the crown, leaf, and fruit; 98% for gray mold cases; and 98% for powdery mildew cases. In 20 epochs, the accuracy rate obtained from the feature image dataset (99.60%) was 1.53% higher than that obtained from the original one. The proposed model provides a simple, reliable, and cost-effective technique for detecting strawberry diseases.


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset, the 2b EEG dataset from "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
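As a hedged sketch of the input to a spectrogram-based CNN like model (2), the snippet below converts a synthetic single-channel EEG trial into a time-frequency image; the 250 Hz sampling rate, trial length, and window parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250  # assumed EEG sampling rate in Hz
t = np.arange(0, 4, 1 / fs)           # one 4 s motor-imagery trial
eeg = np.sin(2 * np.pi * 10 * t)      # synthetic 10 Hz mu-band rhythm

# Time-frequency image that a spectrogram-based CNN would consume:
# rows are frequency bins, columns are time windows.
freqs, times, sxx = spectrogram(eeg, fs=fs, nperseg=128, noverlap=64)
```

Stacking one such image per EEG channel yields a multi-channel "picture" of the trial, so standard image-classification CNNs can be applied without hand-crafted features.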

