Automatic, operational, high-resolution monitoring of fish length and catch numbers from landings using deep learning

2022 · Vol 246 · pp. 106166
Author(s): Miquel Palmer, Amaya Álvarez-Ellacuría, Vicenç Moltó, Ignacio A. Catalán

2021 · Vol 13 (12) · pp. 2326
Author(s): Xiaoyong Li, Xueru Bai, Feng Zhou

A deep-learning architecture, dubbed the 2D-ADMM-Net (2D-ADN), is proposed in this article. By combining model-based sparse reconstruction with data-driven deep learning, it provides effective high-resolution 2D inverse synthetic aperture radar (ISAR) imaging under low SNR and incomplete data. First, the mapping from ISAR images to their corresponding echoes in the wavenumber domain is derived. Then, a 2D alternating direction method of multipliers (ADMM) is unrolled and generalized into a deep network, in which all adjustable parameters of the reconstruction layers, nonlinear transform layers, and multiplier update layers are learned end-to-end through back-propagation. Because the optimal parameters of each layer are learned separately, 2D-ADN exhibits greater representational flexibility and better reconstruction performance than model-driven methods. At the same time, owing to its simple structure and small number of adjustable parameters, it handles ISAR imaging with limited training samples better than purely data-driven methods. Additionally, benefiting from the good performance of 2D-ADN, a random phase-error estimation method is proposed, through which well-focused images can be acquired. Experiments demonstrate that, although trained on only a few simulated images, 2D-ADN adapts well to measured data and yields favorable imaging results with a clear background in a short time.
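The unrolling pattern the abstract describes can be illustrated with a minimal sketch. The code below is not the authors' 2D-ADN (which operates on 2D wavenumber-domain echoes and uses learned nonlinear transform layers); it is a generic unrolled ADMM for the sparse recovery problem min_x ½‖Ax − y‖² + λ‖x‖₁, where each unrolled iteration carries its own learnable penalty weight, threshold, and dual step size, trained end-to-end by back-propagation. All names, shapes, and layer counts are illustrative assumptions.

```python
import torch
import torch.nn as nn


def soft_threshold(v, tau):
    # Proximal operator of the L1 norm (the "nonlinear transform" step).
    return torch.sign(v) * torch.relu(v.abs() - tau)


class ADMMLayer(nn.Module):
    """One unrolled ADMM iteration with its own learnable parameters."""

    def __init__(self):
        super().__init__()
        # In practice rho would be constrained positive (e.g. via softplus);
        # kept raw here for brevity.
        self.rho = nn.Parameter(torch.tensor(1.0))  # reconstruction-layer penalty
        self.tau = nn.Parameter(torch.tensor(0.1))  # transform-layer threshold (~ lambda/rho)
        self.eta = nn.Parameter(torch.tensor(1.0))  # multiplier-update step size

    def forward(self, x, z, u, A, y):
        # Reconstruction layer: solve (A^T A + rho I) x = A^T y + rho (z - u).
        AtA = A.transpose(-1, -2) @ A
        rhs = A.transpose(-1, -2) @ y + self.rho * (z - u)
        I = torch.eye(AtA.shape[-1], device=A.device)
        x = torch.linalg.solve(AtA + self.rho * I, rhs)
        # Nonlinear transform layer: sparsify the split variable.
        z = soft_threshold(x + u, self.tau)
        # Multiplier update layer: dual ascent on the constraint x = z.
        u = u + self.eta * (x - z)
        return x, z, u


class UnrolledADMMNet(nn.Module):
    """A fixed number of ADMM iterations stacked as network layers."""

    def __init__(self, num_layers=8):
        super().__init__()
        self.layers = nn.ModuleList(ADMMLayer() for _ in range(num_layers))

    def forward(self, A, y):
        n = A.shape[-1]
        x = torch.zeros(*y.shape[:-2], n, 1, device=y.device)
        z, u = torch.zeros_like(x), torch.zeros_like(x)
        for layer in self.layers:
            x, z, u = layer(x, z, u, A, y)
        return x


# Usage: recover a sparse x from undersampled measurements y = A x.
net = UnrolledADMMNet(num_layers=6)
A = torch.randn(2, 32, 64)                    # batched measurement operators
x_true = torch.zeros(2, 64, 1)
x_true[:, ::8] = 1.0                          # sparse ground truth
x_hat = net(A, A @ x_true)                    # shape (2, 64, 1)
```

Training would minimize a reconstruction loss (e.g., MSE against ground-truth images) over the whole stack, which is what gives each layer its separately learned parameters, the property the abstract credits for the method's flexibility.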


2021
Author(s): H. Chen, J.H. Gao, Z.Q. Gao, S.A. Shen, Z.Q. Wang, ...

2018 · Vol 2018 · pp. 1-9
Author(s): Yinghao Chu, Chen Huang, Xiaodan Xie, Bohai Tan, Shyam Kamal, ...

This study proposes a multilayer hybrid deep-learning system (MHS) that automatically sorts waste disposed of by individuals in urban public areas. The system deploys a high-resolution camera to capture waste images and sensors to detect other useful feature information. The MHS uses a CNN-based algorithm to extract image features and a multilayer perceptron (MLP) to consolidate the image features with the other feature information and classify the waste as recyclable or other. Trained and validated against manually labelled items, the MHS achieves an overall classification accuracy higher than 90% under two different testing scenarios, significantly outperforming a reference CNN-based method that relies on image inputs alone.
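As a concrete illustration of the fusion step, here is a minimal sketch of the idea: a CNN backbone extracts image features, which are concatenated with a vector of sensor readings and passed through an MLP for the binary recyclable-vs-other decision. The backbone choice (ResNet-18), the layer widths, and the assumption of 8 numeric sensor features are ours, not the paper's.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class HybridWasteClassifier(nn.Module):
    """Sketch of a multilayer hybrid classifier: CNN image features
    fused with sensor features by an MLP. Illustrative only."""

    def __init__(self, num_sensor_features=8):
        super().__init__()
        # CNN feature extractor; weights=None for a self-contained example
        # (in practice one would start from pretrained weights).
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop FC head
        cnn_dim = backbone.fc.in_features  # 512 for ResNet-18
        # MLP that consolidates image features with sensor readings.
        self.mlp = nn.Sequential(
            nn.Linear(cnn_dim + num_sensor_features, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # logits: recyclable vs. other
        )

    def forward(self, image, sensors):
        feats = self.cnn(image).flatten(1)           # (B, 512)
        fused = torch.cat([feats, sensors], dim=1)   # concatenate modalities
        return self.mlp(fused)


# Usage: a batch of 4 camera frames plus 8 sensor readings each.
model = HybridWasteClassifier(num_sensor_features=8)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 2])
```

The design choice mirrored here is the one the abstract reports: letting the MLP see both modalities is what allows the hybrid system to outperform a reference CNN that classifies from the image alone.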

