Development of a Fully Convolutional Neural Network to Derive Surf-Zone Bathymetry from Close-Range Imagery of Waves in Duck, NC

2021 · Vol 13 (23) · pp. 4907
Author(s):  
Adam M. Collins ◽  
Matthew P. Geheran ◽  
Tyler J. Hesser ◽  
Andrew Spicer Bak ◽  
Katherine L. Brodie ◽  
et al.

Timely observations of nearshore water depths are important for a variety of coastal research and management topics, yet this information is expensive to collect using in situ survey methods. Remote methods to estimate bathymetry from imagery include either ratios of multi-spectral reflectance bands or inversions based on wave processes. Multi-spectral methods work best in waters with low turbidity, and wave-speed-based methods work best when wave breaking is minimal. In this work, we build on wave-based inversion approaches by exploring the use of a fully convolutional neural network (FCNN) to infer nearshore bathymetry from imagery of the sea surface and local wave statistics. We apply transfer learning to adapt a CNN originally trained on synthetic imagery generated from a Boussinesq numerical wave model to utilize tower-based imagery collected in Duck, North Carolina, at the U.S. Army Engineer Research and Development Center’s Field Research Facility. We train the model on sea-surface imagery, wave conditions, and associated surveyed bathymetry using three years of observations, including times with significant wave breaking in the surf zone. This is the first time, to the authors’ knowledge, that an FCNN has been successfully applied to infer bathymetry from surf-zone sea-surface imagery. Model results from a separate one-year test period generally show good agreement with survey-derived bathymetry (0.37 m root-mean-squared error, with a maximum depth of 6.7 m) under diverse wave conditions with wave heights up to 3.5 m. The bathymetry results quantify nearshore bathymetric evolution, including bar migration and transitions between single- and double-barred morphologies. We observe that bathymetry estimates are most accurate when time-averaged input images feature visible wave breaking and/or individual images display wave crests.
An investigation of activation maps, which show neuron activity on a layer-by-layer basis, suggests that the model is responsive to visible coherent wave structures in the input images.
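The wave-speed-based inversion that the FCNN builds on ultimately rests on the linear dispersion relation, which ties the phase speed of a surface wave of known period to the local water depth. As a minimal illustration of that physical inverse problem (this is the classical relation only, not the paper's network or its processing chain), depth can be recovered in closed form once phase speed and period are observed:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(depth, period):
    """Solve the linear dispersion relation w^2 = g*k*tanh(k*h) for k by bisection."""
    omega = 2.0 * math.pi / period
    lo, hi = 1e-9, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if G * mid * math.tanh(mid * depth) < omega ** 2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def celerity(depth, period):
    """Phase speed c = w/k for a wave of the given period over the given depth."""
    return (2.0 * math.pi / period) / wavenumber(depth, period)

def depth_from_celerity(c, period):
    """Invert the dispersion relation: given observed phase speed and period, recover depth."""
    omega = 2.0 * math.pi / period
    k = omega / c
    return math.atanh(omega ** 2 / (G * k)) / k
```

In practice, estimating phase speed from imagery is itself the hard part, particularly under wave breaking, which is where the learned FCNN approach comes in.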

Genes · 2019 · Vol 10 (11) · pp. 862
Author(s):  
Tong Liu ◽  
Zheng Wang

We present a deep-learning package named HiCNN2 that learns the mapping between low-resolution and high-resolution Hi-C data (Hi-C is a technique for capturing genome-wide chromatin interactions) and can thereby enhance the resolution of Hi-C interaction matrices. The HiCNN2 package includes three methods, each with a different deep-learning architecture: HiCNN2-1 is based on a single convolutional neural network (ConvNet); HiCNN2-2 is an ensemble of two different ConvNets; and HiCNN2-3 is an ensemble of three different ConvNets. Our evaluation results indicate that HiCNN2-enhanced high-resolution Hi-C data achieve smaller mean squared error and higher Pearson’s correlation coefficients with experimental high-resolution Hi-C data than the existing methods HiCPlus and HiCNN. Moreover, all three HiCNN2 methods recover more of the significant interactions detected by Fit-Hi-C than HiCPlus and HiCNN do. Based on these results, we recommend HiCNN2-1 and HiCNN2-3 when recovering more significant interactions from Hi-C data is of interest, and HiCNN2-2 and HiCNN when the goal is to achieve higher reproducibility scores between the enhanced Hi-C matrix and the real high-resolution Hi-C matrix.
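The two evaluation criteria named here, mean squared error and Pearson's correlation against the experimental matrix, are standard and easy to sketch. The snippet below also shows element-wise averaging as one simple way an ensemble could combine member outputs; the abstract does not state how HiCNN2-2/-3 actually fuse their ConvNets, so that part is an assumption, not the package's code:

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized, flattened Hi-C matrices."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def pearson(a, b):
    """Pearson's correlation coefficient between two flattened matrices."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

def ensemble(*predictions):
    """Element-wise mean of several models' outputs (assumed combination rule)."""
    return [sum(vals) / len(vals) for vals in zip(*predictions)]
```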


Sensors · 2021 · Vol 21 (6) · pp. 1963
Author(s):  
Tomasz Hachaj ◽  
Łukasz Bibrzycki ◽  
Marcin Piekarczyk

In this paper, we describe a convolutional neural network (CNN)-based approach to the problems of categorization and artefact reduction for cosmic ray images obtained from CMOS sensors used in mobile phones. By artefacts, we mean all images that cannot be attributed to a particle’s passage through the sensor but rather result from deficiencies of the registration procedure. The proposed deep neural network is composed of a pretrained CNN and a neural-network-based approximator, which models the uncertainty of image class assignment. The network was trained using a transfer learning approach with a mean squared error loss function. We evaluated our approach on a dataset containing 2350 images labelled by five judges. The most accurate results were obtained using the VGG16 CNN architecture: the recognition rate (RR) was 85.79% ± 2.24%, with a mean squared error (MSE) of 0.03 ± 0.00. After applying the proposed threshold scheme to eliminate less probable class assignments, we obtained an RR of 96.95% ± 1.38% for a threshold of 0.9, which retained about 62.60% ± 2.88% of the overall data. Importantly, the research and results presented in this paper are part of the pioneering field of applying citizen science to the recognition of cosmic rays and, to the best of our knowledge, this analysis is performed on the largest freely available cosmic ray hit dataset.
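The threshold scheme trades coverage for accuracy: only samples whose top class probability clears the threshold are kept, and the recognition rate is then computed on the kept subset. The sketch below illustrates that general mechanism, not the paper's exact implementation:

```python
def argmax(p):
    """Index of the largest probability in a class-probability vector."""
    return max(range(len(p)), key=p.__getitem__)

def threshold_filter(probabilities, labels, threshold):
    """Keep only samples whose top class probability reaches the threshold;
    return (recognition rate on kept samples, fraction of data kept)."""
    kept = [(p, y) for p, y in zip(probabilities, labels) if max(p) >= threshold]
    if not kept:
        return 0.0, 0.0
    correct = sum(1 for p, y in kept if argmax(p) == y)
    return correct / len(kept), len(kept) / len(probabilities)
```

Raising the threshold discards uncertain assignments, so the recognition rate on the retained data rises while the retained fraction falls, mirroring the 96.95% RR at 62.60% coverage reported for a 0.9 threshold.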


2021 · Vol 13 (14) · pp. 2681
Author(s):  
Xiuyi Zhao ◽  
Ying Yang ◽  
Kun-Shan Chen

Conventional direction-of-arrival (DOA) estimation methods are primarily used in point-source scenarios and are based on array signal processing. However, due to local scattering caused by the sea surface, the signal observed at a radar antenna cannot be regarded as a point source but rather as a spatially dispersed source. Moreover, with the advantages of flexibility and comparatively low cost, synthetic aperture radar (SAR) represents the present and future trend in space-based systems. This paper proposes a novel DOA estimation approach for SAR systems using simulated radar measurements of the sea surface at different operating frequencies and wind speeds. The forward model is an advanced integral equation model (AIEM) that calculates the electromagnetic scattering from the sea surface. To solve the DOA estimation problem, we introduce a convolutional neural network (CNN) framework to estimate the transmitter’s incident angle and incident azimuth angle. Results demonstrate that the CNN achieves good performance in DOA estimation over a wide range of frequencies and sea wind speeds.
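The overall framing is an inverse problem: a physics-based forward model (AIEM) maps angles to scattering measurements, and a learned model maps measurements back to angles. That framing can be sketched with a toy forward model and a nearest-neighbour lookup standing in for the CNN; both pieces below are placeholders invented for illustration, not AIEM or the paper's network:

```python
import math

def toy_forward(theta):
    """Placeholder forward model: maps an incident angle (radians) to a fake
    two-component 'scattering signature'. Stands in for AIEM."""
    return (math.cos(theta), 0.5 * math.sin(2.0 * theta))

# Simulated training set: signatures computed over a grid of candidate angles.
GRID = [i * 0.05 for i in range(21)]            # 0 .. 1.0 rad
TABLE = [(toy_forward(t), t) for t in GRID]

def estimate_angle(signature):
    """Nearest-neighbour inverse map: return the grid angle whose simulated
    signature is closest to the observed one (a stand-in for CNN regression)."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(s, signature))
    return min(TABLE, key=lambda item: dist(item[0]))[1]
```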


2021
Author(s):  
Amit Kumar ◽  
Nagabhushana Rao Vadlamani

In this paper, we compare the efficacy of two neural-network-based models, a convolutional neural network (CNN) and a deep neural network (DNN), for inverse design of airfoil shapes. Given the pressure distribution over the airfoil in pictorial (for the CNN) or numerical form (for the DNN), the trained networks predict the airfoil shape. During the training phase, the critical hyper-parameters of both models, namely the learning rate, number of epochs, and batch size, are tuned to reduce the mean squared error (MSE) and increase the prediction accuracy. The number of trainable parameters in the DNN is an order of magnitude lower than in the CNN, and hence the DNN model is found to be ≈ 7× faster than the CNN. In addition, the accuracy of the DNN is observed to be superior to that of the CNN. After processing the raw airfoil shapes, the smoothed airfoils are shown to yield the target pressure distribution, thereby validating the framework.
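Tuning learning rate, epoch count, and batch size against a validation MSE is a generic search over combinations. The abstract does not say which tuning strategy was used, so the exhaustive grid search below is just one common choice, with the training-and-scoring routine left as a caller-supplied stand-in:

```python
from itertools import product

def grid_search(learning_rates, epoch_counts, batch_sizes, evaluate):
    """Evaluate every hyper-parameter combination and return the one with the
    lowest validation MSE. `evaluate(lr, epochs, batch)` is whatever routine
    trains the model and returns its MSE; here it is caller-supplied."""
    return min(product(learning_rates, epoch_counts, batch_sizes),
               key=lambda c: evaluate(*c))
```

In practice `evaluate` would train the CNN or DNN and score it on held-out pressure-distribution/shape pairs; random or Bayesian search would be drop-in alternatives when the grid grows large.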


2019 · Vol 40 (11) · pp. 2240-2253
Author(s):  
Jia Guo ◽  
Enhao Gong ◽  
Audrey P Fan ◽  
Maged Goubran ◽  
Mohammad M Khalighi ◽  
et al.

To improve the quality of MRI-based cerebral blood flow (CBF) measurements, a deep convolutional neural network (dCNN) was trained to combine single- and multi-delay arterial spin labeling (ASL) and structural images to predict gold-standard 15O-water PET CBF images obtained on a simultaneous PET/MRI scanner. The dCNN was trained and tested on 64 scans in 16 healthy controls (HC) and 16 cerebrovascular disease patients (PT) with 4-fold cross-validation. Fidelity to the PET CBF images and the effects of bias due to training on different cohorts were examined. The dCNN significantly improved CBF image quality compared with ASL alone (mean ± standard deviation): structural similarity index (0.854 ± 0.036 vs. 0.743 ± 0.045 [single-delay] and 0.732 ± 0.041 [multi-delay], P < 0.0001); normalized root mean squared error (0.209 ± 0.039 vs. 0.326 ± 0.050 [single-delay] and 0.344 ± 0.055 [multi-delay], P < 0.0001). The dCNN also yielded mean CBF with reduced estimation error in both HC and PT (P < 0.001), and demonstrated better correlation with PET. The dCNN trained with the mixed HC and PT cohort performed the best. The results also suggested that models should be trained on cases representative of the target population.
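The normalized root mean squared error reported above can be defined in several ways; the sketch below normalizes by the range of the reference values, which is one common convention and may not match the study's exact definition:

```python
import math

def nrmse(pred, ref):
    """Root-mean-squared error between predicted and reference CBF values,
    normalized by the range of the reference (one common convention)."""
    rmse = math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))
    return rmse / (max(ref) - min(ref))
```

Lower is better, so the drop from 0.326/0.344 (ASL alone) to 0.209 (dCNN) indicates predictions substantially closer to the PET reference.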


2021 · Vol 2021 · pp. 1-13
Author(s):  
Binglin Niu

High-resolution remote sensing images usually contain complex semantic information and confusing targets, so their semantic segmentation is an important and challenging task. To address the inadequate use of multilayer features in existing methods, a semantic segmentation method for remote sensing images based on a convolutional neural network and mask generation is proposed. In this method, the bounding box is used as the initial foreground segmentation profile, and the edge information of the foreground object is obtained using the multilayer features of the convolutional neural network. To obtain a rough object segmentation mask, the general shape and position of the foreground object are estimated using the high-level features in a layer-by-layer iteration. Then, starting from this rough mask, the mask is updated layer by layer using the network features to obtain a more accurate result. To overcome the difficulty of training deep neural networks and the degradation problem that appears as networks deepen, a framework based on residual learning was adopted, which simplifies the training of very deep networks and improves their accuracy. For comparison with other advanced algorithms, the proposed algorithm was tested on the Potsdam and Vaihingen datasets. Experimental results show that, compared with other algorithms, the proposed algorithm effectively improves the overall precision of semantic segmentation of high-resolution remote sensing images and shortens the overall training and segmentation time.
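Residual learning reformulates each block to learn a correction F(x) added to an identity shortcut, y = x + F(x), which is what eases the training of very deep networks. A minimal numeric sketch of that idea (not the paper's architecture):

```python
def residual_block(x, transform):
    """y = x + F(x): the block learns the residual F while the identity
    shortcut carries the input through unchanged."""
    return [xi + fi for xi, fi in zip(x, transform(x))]
```

When the learned residual F is driven toward zero, the block reduces to the identity map, so stacking more blocks cannot easily degrade the representation; this is the intuition behind why residual frameworks counter the degradation problem in deep networks.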

