Identifying the species of harvested tuna and billfish using deep convolutional neural networks

2019 ◽  
Vol 77 (4) ◽  
pp. 1318-1329 ◽  
Author(s):  
Yi-Chin Lu ◽  
Chen Tung ◽  
Yan-Fu Kuo

Abstract Fish catch species provide essential information for marine resource management. Some international organizations require fishing vessels to report species statistics for their catch. Conventionally, these statistics are recorded manually by observers or fishermen. Their accuracy is, however, questionable due to the possibility of underreporting or misreporting. This paper proposes to automatically identify the species of common tuna and billfish using machine vision. The species include albacore (Thunnus alalunga), bigeye tuna (Thunnus obesus), yellowfin tuna (Thunnus albacares), blue marlin (Makaira nigricans), Indo-Pacific sailfish (Istiophorus platypterus), and swordfish (Xiphias gladius). In this approach, images of the fish catch are acquired on the decks of fishing vessels. Deep convolutional neural network models are then developed to identify the species from the images. The proposed approach achieves an accuracy of at least 96.24%.
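To make the classification pipeline concrete, the following is a minimal illustrative sketch of a CNN-style forward pass over a catch image ending in a softmax over the six species. The layer sizes, single filter, and random weights are placeholders for illustration only; the paper's actual deep CNN architecture is not specified here.

```python
import numpy as np

# Toy forward pass: conv -> ReLU -> global average pool -> dense -> softmax.
# All weights are random placeholders, NOT the trained model from the paper.

SPECIES = ["albacore", "bigeye tuna", "yellowfin tuna",
           "blue marlin", "Indo-Pacific sailfish", "swordfish"]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, conv_w, fc_w):
    """image: (H, W) grayscale; conv_w: (k, k) filter; fc_w: (1, 6) dense layer."""
    k = conv_w.shape[0]
    H, W = image.shape
    # valid 2-D convolution followed by ReLU
    fmap = np.zeros((H - k + 1, W - k + 1))
    for i in range(fmap.shape[0]):
        for j in range(fmap.shape[1]):
            fmap[i, j] = np.sum(image[i:i+k, j:j+k] * conv_w)
    fmap = np.maximum(fmap, 0)
    # global average pooling -> dense -> softmax over the 6 species
    pooled = np.array([fmap.mean()])
    return softmax(pooled @ fc_w)

rng = np.random.default_rng(0)
probs = classify(rng.random((32, 32)),
                 rng.standard_normal((3, 3)),
                 rng.standard_normal((1, 6)))
print(SPECIES[int(np.argmax(probs))], float(probs.sum()))
```

A real model would stack many such convolutional layers with learned filters; the sketch only shows how an image is reduced to a probability distribution over the six species.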

<em>Abstract.</em>—The Cooperative Tagging Center (CTC) of the National Marine Fisheries Service’s Southeast Fisheries Science Center operates one of the largest and oldest fish tagging programs of its type in the world. Since 1954, more than 35,000 recreational and commercial fishing constituents have voluntarily participated in the CTC, resulting in the tagging of more than 245,000 fish of 123 species. Although some tagging activities have been conducted by scientists, most tag release and recovery activities were carried out by recreational and commercial fishery constituents. Five large highly migratory species have historically represented the Program’s primary target species: Atlantic bluefin tuna <em>Thunnus thynnus</em>, blue marlin <em>Makaira nigricans</em>, white marlin <em>Tetrapturus albidus</em>, sailfish <em>Istiophorus platypterus</em>, and broadbill swordfish <em>Xiphias gladius</em>. Tagging equipment and procedures for catching, tagging, and resuscitating species too large to be brought aboard fishing vessels have evolved and improved considerably over the years. This paper presents a review of the development of the most efficient tagging, handling, and dehooking techniques used on a variety of large, highly migratory species in the CTC. In addition, the results of a comparative tag retention study on billfish are presented, comparing the stainless steel dart tags used for nearly 30 years with a hygroscopic nylon double-barb dart tag recently developed in conjunction with The Billfish Foundation. Recommendations are made on the best techniques, procedures, and equipment for in-water tagging of large, highly migratory species.


2020 ◽  
Vol 36 (12) ◽  
pp. 3693-3702 ◽  
Author(s):  
Dandan Zheng ◽  
Guansong Pang ◽  
Bo Liu ◽  
Lihong Chen ◽  
Jian Yang

Abstract Motivation Identification of virulence factors (VFs) is critical to the elucidation of bacterial pathogenesis and prevention of related infectious diseases. Current computational methods for VF prediction focus on binary classification or involve only several class(es) of VFs with sufficient samples. However, thousands of VF classes are present in real-world scenarios, and many of them only have a very limited number of samples available. Results We first construct a large VF dataset, covering 3446 VF classes with 160 495 sequences, and then propose deep convolutional neural network models for VF classification. We show that (i) for common VF classes with sufficient samples, our models can achieve state-of-the-art performance with an overall accuracy of 0.9831 and an F1-score of 0.9803; (ii) for uncommon VF classes with limited samples, our models can learn transferable features from auxiliary data and achieve good performance with accuracy ranging from 0.9277 to 0.9512 and F1-score ranging from 0.9168 to 0.9446 when combined with different predefined features, outperforming traditional classifiers by 1–13% in accuracy and by 1–16% in F1-score. Availability and implementation All of our datasets are made publicly available at http://www.mgc.ac.cn/VFNet/, and the source code of our models is publicly available at https://github.com/zhengdd0422/VFNet. Supplementary information Supplementary data are available at Bioinformatics online.
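As background to how protein sequences are fed to convolutional models like those described above, here is an illustrative sketch (not the VFNet code) of one-hot encoding an amino-acid sequence into the fixed-size (length × 20) matrix commonly used as 1-D CNN input.

```python
import numpy as np

# Illustrative sketch only: one-hot encoding of a protein sequence for a
# 1-D CNN. The max_len value is an arbitrary placeholder, not VFNet's.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq, max_len=50):
    """Encode a protein sequence as a (max_len, 20) matrix,
    zero-padding short sequences and truncating long ones."""
    x = np.zeros((max_len, len(AMINO_ACIDS)))
    for i, aa in enumerate(seq[:max_len]):
        x[i, AA_INDEX[aa]] = 1.0
    return x

x = one_hot("MKTAYIAKQR")
print(x.shape)       # (50, 20)
print(int(x.sum()))  # 10 — one non-zero entry per encoded residue
```

Convolutional filters then slide along the length axis of this matrix, which is what lets the same model handle VF classes with very different sequence content.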


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Anil Johny ◽  
K. N. Madhusoodanan

Diagnosis of different breast cancer stages using histopathology whole slide images (WSI) is the gold standard in determining the grade of tissue metastasis. Computer-aided diagnosis (CAD) assists medical experts as a second-opinion tool in early detection to prevent further proliferation. The field of pathology has advanced so rapidly that it is possible to obtain high-quality images from glass slides. Patches from regions of interest in the histopathology images are extracted and used to train artificial neural network models. The trained model analyzes histology images and predicts whether they belong to the benign or malignant class. Classification of medical images focuses on training models with layers of abstraction to distinguish between these two classes with low false-positive rates. The learning rate is a crucial hyperparameter used during the training of deep convolutional neural networks (DCNN) to improve model accuracy. This work emphasizes the relevance of a dynamic learning rate over a fixed learning rate during network training. The dynamic learning rate varies, under preset conditions, between lower and upper boundaries, repeating the cycle at different iterations. The performance of the model thus improves, attaining comparatively high accuracy in fewer iterations.
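The dynamic schedule described above can be sketched as a triangular cyclical learning rate: the rate climbs linearly from a lower to an upper boundary and back, repeating across iterations. The boundary and step-size values below are arbitrary placeholders, not the paper's settings.

```python
# Illustrative sketch of a triangular cyclical learning rate schedule.

def cyclical_lr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=100):
    """Return the learning rate for a given training iteration.

    The rate rises linearly from base_lr to max_lr over step_size
    iterations, then falls back, repeating every 2 * step_size.
    """
    cycle_pos = iteration % (2 * step_size)
    if cycle_pos < step_size:                      # rising half of the cycle
        frac = cycle_pos / step_size
    else:                                          # falling half of the cycle
        frac = 1 - (cycle_pos - step_size) / step_size
    return base_lr + (max_lr - base_lr) * frac

print(cyclical_lr(0))    # 0.0001 (lower boundary)
print(cyclical_lr(100))  # 0.01   (upper boundary)
print(cyclical_lr(200))  # 0.0001 (cycle repeats)
```

At each iteration the optimizer would simply be handed `cyclical_lr(iteration)` instead of a constant value.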


2021 ◽  
Author(s):  
Vivian Kimie Isuyama ◽  
Bruno De Carvalho Albertini

In recent years, mobile devices have become an important part of our daily lives, and deep convolutional neural networks have been performing well in the task of image classification. Some considerations have to be made when running a neural network on a mobile device, such as computational complexity and storage size. In this paper, common architectures for image classification were analyzed to retrieve their accuracy rate, model complexity, memory usage, and inference time. Comparing these values makes it possible to show which architecture to choose under mobile constraints.
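The kind of trade-off comparison the paper describes can be sketched as a constrained selection: given measured accuracy, model size, and inference time per architecture, pick the most accurate model that fits the device's storage and latency budget. The numbers below are rough placeholders for illustration, not the paper's measurements.

```python
# Hypothetical sketch: choosing an image-classification architecture
# under mobile storage and latency constraints. Metric values are
# illustrative placeholders, not results from the paper.

CANDIDATES = {
    # name: (top-1 accuracy, model size in MB, inference time in ms)
    "MobileNetV2": (0.72, 14, 30),
    "ResNet-50":   (0.76, 98, 90),
    "VGG16":       (0.71, 528, 160),
}

def pick_architecture(max_size_mb, max_latency_ms):
    """Return the most accurate architecture within both budgets, or None."""
    feasible = {name: acc for name, (acc, size, ms) in CANDIDATES.items()
                if size <= max_size_mb and ms <= max_latency_ms}
    return max(feasible, key=feasible.get) if feasible else None

print(pick_architecture(max_size_mb=50, max_latency_ms=50))    # MobileNetV2
print(pick_architecture(max_size_mb=500, max_latency_ms=100))  # ResNet-50
```

Tightening either budget shrinks the feasible set, which is why a slightly less accurate but much smaller model often wins on mobile.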


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Namgyu Ho ◽  
Yoon-Chul Kim

Abstract In computer-aided analysis of cardiac MRI data, segmentations of the left ventricle (LV) and myocardium are performed to quantify LV ejection fraction and LV mass. These segmentations are performed after identification of the short axis slice coverage, where automatic classification of the slice range of interest is preferable. Standard cardiac image post-processing guidelines indicate the importance of correctly identifying the short axis slice range for accurate quantification. We investigated the feasibility of applying transfer learning of deep convolutional neural networks (CNNs) to automatically classify the short axis slice range, as transfer learning is well suited to medical image data, where labeled data is scarce and expensive to obtain. The short axis slice images were classified into out-of-apical, apical-to-basal, and out-of-basal, on the basis of short axis slice location in the LV. We developed a custom user interface to conveniently label image slices into one of the three categories for the generation of training data, and we evaluated the performance of transfer learning in nine popular deep CNNs. Evaluation with unseen test data indicated that, among the CNNs, the fine-tuned VGG16 produced the highest values in all evaluation categories considered and appeared to be the most appropriate choice for cardiac slice range classification.
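The three-way labeling scheme described above can be sketched as a simple rule over slice position; the apical and basal indices below are hypothetical, and in the paper this labeling is produced by hand via the custom interface and then learned by the fine-tuned CNN from the images themselves.

```python
# Illustrative sketch of the out-of-apical / apical-to-basal / out-of-basal
# labeling of short axis slices, with hypothetical boundary indices.

def label_slice(slice_index, apical_index, basal_index):
    """Assumes slices are ordered from apex (low index) to base (high index)."""
    if slice_index < apical_index:
        return "out-of-apical"
    if slice_index > basal_index:
        return "out-of-basal"
    return "apical-to-basal"

labels = [label_slice(i, apical_index=3, basal_index=10) for i in range(14)]
print(labels[0], labels[5], labels[12])
# out-of-apical apical-to-basal out-of-basal
```

The classifier's job is to recover these labels from image content alone, so that the apical-to-basal range used for quantification can be selected automatically.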


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  

A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilizing the chip formation process during cutting and for diagnosing cutting tool wear are developed.

Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice


Author(s):  
Ann-Sophie Barwich

How much does stimulus input shape perception? The common-sense view is that our perceptions are representations of objects and their features and that the stimulus structures the perceptual object. The problem for this view concerns perceptual biases as responsible for distortions and the subjectivity of perceptual experience. These biases are increasingly studied as constitutive factors of brain processes in recent neuroscience. In neural network models the brain is said to cope with the plethora of sensory information by predicting stimulus regularities on the basis of previous experiences. Drawing on this development, this chapter analyses perceptions as processes. Looking at olfaction as a model system, it argues for the need to abandon a stimulus-centred perspective, where smells are thought of as stable percepts, computationally linked to external objects such as odorous molecules. Perception here is presented as a measure of changing signal ratios in an environment informed by expectancy effects from top-down processes.

