Fast Gaussian Process Estimation for Large-Scale In Situ Inference using Convolutional Neural Networks

Author(s):  
Divya Banesh ◽  
Nishant Panda ◽  
Ayan Biswas ◽  
Luke Van Roekel ◽  
Diane Oyen ◽  
...


Author(s):
Y. A. Lumban-Gaol ◽  
K. A. Ohori ◽  
R. Y. Peters

Abstract. Satellite-Derived Bathymetry (SDB) has been used in many applications related to coastal management. SDB can efficiently fill gaps in data obtained from traditional echo-sounding measurements. However, it still requires a large amount of training data, which is not available in many areas. Furthermore, accuracy problems arise because a linear model cannot capture the non-linear relationship between reflectance and depth caused by bottom variations and noise. Convolutional Neural Networks (CNNs) offer the ability to capture both the connection between neighbouring pixels and this non-linear relationship. These characteristics make CNNs compelling for shallow-water depth extraction. We investigate the accuracy of different architectures using different window sizes and band combinations. We use Sentinel-2 Level 2A images to provide reflectance values, and Lidar and Multi Beam Echo Sounder (MBES) datasets as depth references to train and test the model. A set of paired Sentinel-2 and in-situ depth sub-images is extracted to perform CNN training. The model is compared to the linear transform and applied to two other study areas. The resulting accuracy ranges from 1.3 m to 1.94 m, and the coefficient of determination reaches 0.94. The SDB model generated using a 9×9 window shows good agreement with the reference depths, especially in areas deeper than 15 m. Adding both short-wave infrared bands to the four visible bands during training improves the overall accuracy of SDB. Applying the pre-trained model to other study areas yields similar results, depending on the water conditions.
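The abstract fixes the 9×9 window and the six-band input (four visible plus two short-wave infrared) but does not describe the network itself. The following is a minimal illustrative sketch, not the authors' architecture: a small Keras CNN that regresses one depth value per reflectance patch. Layer widths, optimizer, and loss are assumptions.

```python
# Hypothetical patch-based CNN depth regressor: 9x9 Sentinel-2 patches with
# 6 bands in, depth in metres out. Only the input shape comes from the
# abstract; everything else is an illustrative assumption.
import tensorflow as tf
from tensorflow.keras import layers

def build_sdb_model(window=9, bands=6):
    """CNN mapping a (window, window, bands) reflectance patch to one depth."""
    return tf.keras.Sequential([
        layers.Input(shape=(window, window, bands)),
        layers.Conv2D(32, 3, activation="relu"),  # 9x9 -> 7x7 feature maps
        layers.Conv2D(64, 3, activation="relu"),  # 7x7 -> 5x5 feature maps
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                          # predicted depth in metres
    ])

model = build_sdb_model()
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Training would pair reflectance patches with Lidar/MBES reference depths:
# model.fit(patches, depths, validation_split=0.2, epochs=50)
```

Framing the task as regression over small patches is what lets the CNN exploit neighbouring-pixel context that a per-pixel linear transform cannot.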


Weed Science ◽  
2018 ◽  
Vol 67 (2) ◽  
pp. 239-245 ◽  
Author(s):  
Shaun M. Sharpe ◽  
Arnold W. Schumann ◽  
Nathan S. Boyd

Abstract. Weed interference during crop establishment is a serious concern for Florida strawberry [Fragaria ×ananassa (Weston) Duchesne ex Rozier (pro sp.) [chiloensis × virginiana]] producers. In situ remote detection for precision herbicide application reduces both the risk of crop injury and herbicide inputs. Carolina geranium (Geranium carolinianum L.) is a widespread broadleaf weed within Florida strawberry production with sensitivity to clopyralid, the only available POST broadleaf herbicide. Geranium carolinianum leaf structure is distinct from that of the strawberry plant, which makes it an ideal candidate for pattern recognition in digital images via convolutional neural networks (CNNs). The study objective was to assess the precision of three CNNs in detecting G. carolinianum. Images of G. carolinianum growing in competition with strawberry were gathered at four sites in Hillsborough County, FL. Three CNNs were compared: object detection–based DetectNet and image classification–based VGGNet and GoogLeNet. Two DetectNet networks were trained to detect either leaves or canopies of G. carolinianum. Image classification using GoogLeNet and VGGNet was largely unsuccessful during validation with whole images (F-score < 0.02). CNN training using cropped images increased G. carolinianum detection during validation for VGGNet (F-score = 0.77) and GoogLeNet (F-score = 0.62). The G. carolinianum leaf–trained DetectNet achieved the highest F-score (0.94) for plant detection during validation. Leaf-based detection led to more consistent detection of G. carolinianum within the strawberry canopy and reduced the recall-related errors encountered in canopy-based training. The smaller target of leaf-based DetectNet did increase false positives, but such errors can be overcome with additional training images for network desensitization. DetectNet was the most viable CNN tested for image-based remote sensing of G. carolinianum in competition with strawberry. Future research will identify the optimal approach for in situ detection and integrate the detection technology with a precision sprayer.
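The study trained DetectNet, VGGNet, and GoogLeNet; as a rough illustration of the cropped-image classification branch only (not the authors' DetectNet pipeline), here is a sketch of fine-tuning an ImageNet-pretrained VGG16 as a binary G. carolinianum classifier. The directory layout, input size, and hyperparameters are hypothetical.

```python
# Illustrative transfer-learning sketch: frozen ImageNet VGG16 features plus
# a binary head, trained on cropped sub-images as the abstract describes.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for initial training

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # P(G. carolinianum in the crop)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical layout: crops/geranium/*.jpg and crops/other/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "crops", image_size=(224, 224), batch_size=32, label_mode="binary")
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.vgg16.preprocess_input(x), y))
# model.fit(train_ds, epochs=10)
```

Training on crops rather than whole images mirrors the step that lifted the VGGNet F-score from below 0.02 to 0.77 in the study: small targets dominate a cropped frame, so the classifier sees a far stronger signal.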


BMC Genomics ◽  
2019 ◽  
Vol 20 (S9) ◽  
Author(s):  
Yang-Ming Lin ◽  
Ching-Tai Chen ◽  
Jia-Ming Chang

Abstract
Background: Tandem mass spectrometry allows biologists to identify and quantify protein samples in the form of digested peptide sequences. When performing peptide identification, spectral library search is more sensitive than traditional database search but is limited to peptides that have been previously identified. An accurate tandem mass spectrum prediction tool is thus crucial for expanding the peptide space and increasing the coverage of spectral library search.
Results: We propose MS2CNN, a non-linear regression model based on deep convolutional neural networks. The features for our model are amino acid composition, predicted secondary structure, and physicochemical features such as isoelectric point, aromaticity, helicity, hydrophobicity, and basicity. MS2CNN was trained with five-fold cross-validation on a three-way data split of the large-scale human HCD MS2 dataset of Orbitrap LC-MS/MS downloaded from the National Institute of Standards and Technology. It was then evaluated on a publicly available independent test dataset of human HeLa cell lysate from LC-MS experiments. On average, our model shows better cosine similarity and Pearson correlation coefficient (0.690 and 0.632) than MS2PIP (0.647 and 0.601) and is comparable with pDeep (0.692 and 0.642). Notably, for the more complex MS2 spectra of 3+ peptides, MS2CNN is significantly better than both MS2PIP and pDeep.
Conclusions: We showed that MS2CNN outperforms MS2PIP for 2+ and 3+ peptides and pDeep for 3+ peptides. This implies that MS2CNN, the proposed convolutional neural network model, generates highly accurate MS2 spectra for LC-MS/MS experiments on Orbitrap machines, which can be of great help in protein and peptide identification. The results suggest that incorporating more data into the deep learning model may further improve performance.
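The abstract scores predictions by cosine similarity and Pearson correlation between predicted and observed spectra. A minimal sketch of those two metrics over binned fragment-intensity vectors follows; the toy vectors and the binning assumption are illustrative, not MS2CNN's exact preprocessing.

```python
# Cosine similarity and Pearson correlation between a predicted and an
# observed MS2 intensity vector (assumed already binned and aligned).
import numpy as np

def cosine_similarity(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(pred @ obs / (np.linalg.norm(pred) * np.linalg.norm(obs)))

def pearson_correlation(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.corrcoef(pred, obs)[0, 1])

predicted = [0.0, 0.4, 0.9, 0.1]   # toy intensity vectors
observed  = [0.0, 0.5, 1.0, 0.0]
print(cosine_similarity(predicted, observed))    # ~0.99
print(pearson_correlation(predicted, observed))  # ~0.99
```

Cosine similarity ignores the mean intensity level while Pearson correlation centres it out, which is why the paper reports both: together they separate peak-position agreement from intensity-scale agreement.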

