Convolutional Neural Network for the Prediction of Cochlodinium polykrikoides Bloom in the South Sea of Korea

2021 ◽  
Vol 10 (1) ◽  
pp. 31
Author(s):  
Youngjin Choi ◽  
Youngmin Park ◽  
Weol-Ae Lim ◽  
Seung-Hwan Min ◽  
Joon-Soo Lee

In this study, the occurrence of Cochlodinium polykrikoides bloom was predicted based on spatial information. The South Sea of Korea (SSK), where C. polykrikoides blooms occur every year, was divided into three areas of concentrated occurrence. For each domain, the optimal model configuration was determined through a verification experiment with 1–3 convolutional neural network (CNN) layers and 50–300 training iterations. Finally, we predicted the occurrence of C. polykrikoides bloom with the configuration that showed the best results: 3 CNN layers and 300 training iterations. Across the three areas, the average pixel accuracy was 96.22%, the mean accuracy was 91.55%, the mean IU was 81.5%, and the frequency weighted IU was 84.57%; all four metrics exceeded 80%, indicating adequate predictive performance. Our results show that the occurrence of C. polykrikoides bloom can be derived from atmosphere and ocean forecast information.
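For reference, the four segmentation scores quoted above (pixel accuracy, mean accuracy, mean IU, and frequency weighted IU) can all be computed from a confusion matrix. The following is a minimal NumPy sketch, not the authors' code, with a toy two-class bloom/no-bloom example.

```python
# Minimal sketch (not the authors' code): the four segmentation metrics
# reported above, computed from a confusion matrix with NumPy.
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """pred, target: integer class maps of the same shape; assumes every class
    appears at least once in the target."""
    # Build the confusion matrix: rows = true class, cols = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        cm[t, p] += 1

    tp = np.diag(cm).astype(float)         # correctly labelled pixels per class
    true_pixels = cm.sum(axis=1)           # pixels of each true class
    union = cm.sum(axis=1) + cm.sum(axis=0) - tp

    pixel_acc = tp.sum() / cm.sum()
    mean_acc = np.mean(tp / true_pixels)
    iu = tp / union                        # intersection over union per class
    mean_iu = np.mean(iu)
    freq = true_pixels / cm.sum()
    fw_iu = (freq * iu).sum()              # frequency weighted IU
    return pixel_acc, mean_acc, mean_iu, fw_iu

# Toy 2-class maps (0 = no bloom, 1 = bloom).
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(segmentation_metrics(pred, target, num_classes=2))
```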

2020 ◽  
Vol 9 (2) ◽  
pp. 74
Author(s):  
Eric Hsueh-Chan Lu ◽  
Jing-Mei Ciou

With the rapid development of surveying and spatial information technologies, more and more attention has been given to positioning. In outdoor environments, people can easily obtain positioning services through global navigation satellite systems (GNSS). In indoor environments, the GNSS signal is often lost, while other positioning techniques, such as dead reckoning and wireless signals, suffer from accumulated errors and signal interference. Therefore, this research uses images to realize a positioning service. The main concept of this work is to build a model linking indoor scene images to their coordinate information and to estimate position by matching image features. Based on the architecture of PoseNet, images of various sizes are fed into a 23-layer convolutional neural network to train an end-to-end location identification task that regresses the three-dimensional position vector of the camera. The experimental data are taken from an underground parking lot and the Palace Museum. The preliminary experimental results show that the proposed method can effectively improve the accuracy of indoor positioning by about 20% to 30%. In addition, this paper also discusses other architectures, field sizes, camera parameters, and error corrections for this neural network system. The preliminary results also show that the proposed angle error correction method can improve positioning accuracy by about 20%.
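As an illustration of the regression setup described above (not the authors' exact 23-layer network), here is a hedged PyTorch sketch of a CNN that maps an input image to the camera's 3-D position and is trained with an L2 position loss.

```python
# Hedged sketch (PyTorch), not the paper's architecture: a small CNN backbone
# whose head regresses the 3-D camera position from a single image.
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional backbone standing in for the deeper PoseNet-style model.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc_xyz = nn.Linear(64, 3)   # regress (x, y, z) camera position

    def forward(self, img):
        h = self.features(img).flatten(1)
        return self.fc_xyz(h)

model = PoseRegressor()
img = torch.randn(1, 3, 224, 224)                            # dummy input image
pred_xyz = model(img)
loss = nn.functional.mse_loss(pred_xyz, torch.zeros(1, 3))   # L2 position loss
```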


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tianjun Liu ◽  
Deling Yang

Abstract Motor imagery is a classical paradigm of brain-computer interaction, in which electroencephalogram (EEG) signal features evoked by imagined body movements are recognized and the relevant information is extracted. Recently, various deep learning methods have focused on finding an easy-to-use EEG representation that preserves both temporal and spatial information. To further utilize the spatial and temporal features of EEG signals, we propose a 3D representation of EEG and an end-to-end three-branch 3D convolutional neural network. To address the class imbalance problem (the dataset shows an unequal distribution among its classes), we propose a class-balanced cropping strategy. The experiments also revealed that different classes pose different levels of classification difficulty in motor-stage classification tasks, so we introduce the focal loss to address this problem of 'easy' versus 'hard' examples. When trained with the focal loss, the three-branch 3D-CNN achieves good performance (relatively more balanced accuracy across the binary classifications) on the WAY-EEG-GAL dataset. The experimental results show that the proposed method improves the classification of different motor stages.
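The focal loss mentioned above has a standard closed form, L = -α(1 - p_t)^γ log p_t. The PyTorch sketch below (our own, with illustrative γ and α values rather than the paper's settings) shows how it down-weights easy examples.

```python
# Minimal focal-loss sketch (PyTorch); gamma and alpha are illustrative.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """logits: (N, C) raw scores; targets: (N,) integer class labels."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log prob of true class
    pt = log_pt.exp()
    # (1 - pt)^gamma shrinks the loss of well-classified ("easy") examples.
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()

logits = torch.randn(8, 2)                  # e.g. binary motor-stage scores
targets = torch.randint(0, 2, (8,))
print(focal_loss(logits, targets))
```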


2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Hideaki Hirashima ◽  
Mitsuhiro Nakamura ◽  
Pascal Baillehache ◽  
Yusuke Fujimoto ◽  
Shota Nakagawa ◽  
...  

Abstract Background This study aimed to (1) develop fully residual deep convolutional neural network (CNN)-based segmentation software for computed tomography images of the male pelvic region and (2) demonstrate its efficiency in that region. Methods A total of 470 prostate cancer patients who had undergone intensity-modulated radiotherapy or volumetric-modulated arc therapy were enrolled. Our model was based on FusionNet, a fully residual deep CNN developed to semantically segment biological images. To develop the segmentation software, 450 patients were randomly selected and separated into training, validation, and testing groups (270, 90, and 90 patients, respectively). In Experiment 1, to determine the optimal model, we first assessed the segmentation accuracy according to the size of the training dataset (90, 180, and 270 patients). In Experiment 2, the effect of varying the number of training labels on segmentation accuracy was evaluated. After determining the optimal model, in Experiment 3, the developed software was applied to the remaining 20 datasets to assess its segmentation accuracy. The volumetric Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (95%HD) were calculated to evaluate the segmentation accuracy for each organ in Experiment 3. Results In Experiment 1, the median DSC for the prostate was 0.61 for dataset 1 (90 patients), 0.86 for dataset 2 (180 patients), and 0.86 for dataset 3 (270 patients). The median DSCs for all organs increased significantly when the number of training cases increased from 90 to 180 but did not improve upon a further increase from 180 to 270. The number of labels applied during training had little effect on the DSCs in Experiment 2. The optimal model was therefore built with 270 patients and four organ labels. In Experiment 3, the median DSC and 95%HD values were 0.82 and 3.23 mm for the prostate, 0.71 and 3.82 mm for the seminal vesicles, 0.89 and 2.65 mm for the rectum, and 0.95 and 4.18 mm for the bladder, respectively. Conclusions We have developed CNN-based segmentation software for the male pelvic region and demonstrated its efficiency.
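The two evaluation metrics above are standard. The sketch below (our own, not the study's software) computes the volumetric Dice coefficient and a 95th-percentile Hausdorff distance between two binary masks, using all foreground voxels rather than extracted surfaces for brevity.

```python
# Illustrative sketch: volumetric Dice and a 95th-percentile Hausdorff
# distance between two binary masks, using NumPy and SciPy.
import numpy as np
from scipy.spatial import cKDTree

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """95th-percentile symmetric distance on foreground voxel coordinates
    (in mm if `spacing` is the isotropic voxel size); assumes non-empty masks."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d_ab = cKDTree(pb).query(pa)[0]   # each point of A to its nearest point of B
    d_ba = cKDTree(pa).query(pb)[0]
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

pred = np.zeros((32, 32, 32), dtype=bool); pred[8:20, 8:20, 8:20] = True
gt   = np.zeros((32, 32, 32), dtype=bool); gt[10:22, 10:22, 10:22] = True
print(dice(pred, gt), hd95(pred, gt, spacing=1.0))
```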


Author(s):  
Sachin B. Jadhav

Plant pathologists desire soft computing technology for accurate and reliable diagnosis of plant diseases. In this study, we propose an efficient soybean disease identification method based on a transfer learning approach using pre-trained convolutional neural networks (CNNs), namely AlexNet, GoogLeNet, VGG16, ResNet101, and DenseNet201. The networks were trained on a dataset of 1200 PlantVillage images of diseased and healthy soybean leaves to identify three soybean diseases and distinguish them from healthy leaves. Pre-trained CNNs were used, as both feature extractors and classifiers, to enable fast and easy system implementation in practice. We used a five-fold cross-validation strategy to analyze the performance of the networks. The experiments with the pre-trained AlexNet, GoogLeNet, VGG16, ResNet101, and DenseNet201 networks achieved accuracies of 95%, 96.4%, 96.4%, 92.1%, and 93.6%, respectively, with GoogLeNet and VGG16 attaining the highest accuracy for soybean disease identification.
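As a hedged illustration of the transfer-learning recipe described above (the framework and class count are our assumptions, not necessarily the authors' setup), the following PyTorch/torchvision sketch freezes a pre-trained ResNet101 backbone and retrains only a new four-class head (three diseases plus healthy).

```python
# Hedged transfer-learning sketch with torchvision: reuse an ImageNet
# pre-trained ResNet101 and train only a new classification head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet101(weights="IMAGENET1K_V1")    # downloads ImageNet weights on first use
for p in model.parameters():
    p.requires_grad = False                          # freeze backbone = feature extractor
model.fc = nn.Linear(model.fc.in_features, 4)        # new head: 3 diseases + healthy

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of leaf images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```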


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Longzhi Zhang ◽  
Dongmei Wu

Grasp detection based on convolutional neural networks has achieved promising results. However, multilayer convolutional neural networks still tend to overfit, which leads to poor detection precision. To achieve high detection accuracy, a single-target grasp detection network based on a convolutional neural network, which generalizes the fitting of grasp angle and position, is put forward here. The proposed network takes an image as input and outputs the grasping parameters, including angle and position, in an end-to-end manner. In particular, the dataset is preprocessed so that it fully covers the model's input space, and transfer learning is applied to avoid overfitting of the network. A series of experiments indicates that, for single-object grasping, our network achieves good detection results and high accuracy, which demonstrates that the proposed network generalizes well across grasp directions and object categories.
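To make the end-to-end input/output mapping concrete, here is a minimal PyTorch sketch (the architecture and the sin/cos angle encoding are our illustrative choices, not the paper's): a CNN that regresses the grasp centre and angle directly from an image.

```python
# Minimal sketch: an end-to-end CNN mapping an image to grasp parameters,
# here the grasp centre (x, y) and the angle encoded as (sin, cos).
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 4)   # (x, y, sin theta, cos theta)

    def forward(self, img):
        return self.head(self.backbone(img))

net = GraspNet()
out = net(torch.randn(1, 3, 224, 224))
x, y = out[0, 0], out[0, 1]
angle = torch.atan2(out[0, 2], out[0, 3])   # recover grasp angle from sin/cos
```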


2020 ◽  
Author(s):  
Florian Dupuy ◽  
Olivier Mestre ◽  
Léo Pfitzner

Cloud cover is crucial information for many applications, such as planning land observation missions from space. However, cloud cover remains a challenging variable to forecast, and Numerical Weather Prediction (NWP) models suffer from significant biases, hence justifying the use of statistical post-processing techniques. In our application, the ground truth is a gridded cloud cover product derived from satellite observations over Europe, and the predictors are spatial fields of various variables produced by ARPEGE (Météo-France's global NWP model) at the corresponding lead time.

In this study, ARPEGE cloud cover is post-processed using a convolutional neural network (CNN). CNNs are the most popular machine learning tool for images; in our case, the CNN integrates the spatial information contained in the NWP outputs. We show that a simple U-Net architecture produces significant improvements over Europe. Compared to the raw ARPEGE forecasts, the MAE drops from 25.1% to 17.8% and the RMSE decreases from 37.0% to 31.6%. Considering the specific needs of Earth observation, special interest was put on forecasts with low cloud cover conditions (< 10%). For this particular nebulosity class, we show that the hit rate jumps from 40.6 to 70.7 (the order of magnitude of what can be achieved using classical machine learning algorithms such as random forests) while the false alarm score decreases from 38.2 to 29.9. This is an excellent result, since improving hit rates by means of random forests usually also results in a slight increase in false alarms.
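For reference, the verification scores quoted for the low-cloud class can be obtained from a 2x2 contingency table. The sketch below (our own, on toy data, computing the false alarm ratio; the study's exact definition may differ) shows the calculation.

```python
# Sketch of the verification scores for the "< 10 % cloud cover" class:
# hit rate and false alarm ratio from a contingency table, in NumPy.
import numpy as np

def hit_and_false_alarm(forecast_cc, observed_cc, threshold=10.0):
    f = forecast_cc < threshold          # forecast says "low cloud cover"
    o = observed_cc < threshold          # observation is "low cloud cover"
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    hit_rate = 100.0 * hits / (hits + misses)
    far = 100.0 * false_alarms / (hits + false_alarms)   # false alarm ratio
    return hit_rate, far

rng = np.random.default_rng(0)
forecast = rng.uniform(0, 100, size=10000)                        # toy cloud cover in %
observed = np.clip(forecast + rng.normal(0, 20, size=10000), 0, 100)
print(hit_and_false_alarm(forecast, observed))
```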


Author(s):  
Liyang Xiao ◽  
Wei Li ◽  
Ju Huyan ◽  
Zhaoyun Sun ◽  
Susan Tighe

This paper aims to develop a method of crack grid detection based on a convolutional neural network. First, an image denoising operation is conducted to improve image quality. Next, the processed images are divided into grids of different sizes, and each grid is fed into a convolutional neural network for detection. Grid cells containing cracks are marked and then mapped back to the original images. Finally, on the basis of the detection results, threshold segmentation is performed only on the marked grids. Information about the crack parameters is obtained via pixel scanning and calculation, which realises complete crack detection. The experimental results show that 30×30 grids perform best, with an accuracy of 97.33%. The advantage of automatic crack grid detection is that it can avoid discontinuities in crack identification and ensure the integrity of the detected cracks.
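A hedged sketch of the grid step described above (the patch classifier here is an untrained stand-in, not the paper's network): split the image into fixed-size cells, classify each cell, and mark the cracked ones for later threshold segmentation.

```python
# Illustrative sketch of per-grid crack classification with a stand-in CNN.
import numpy as np
import torch
import torch.nn as nn

grid = 30                                            # 30x30 grid cells performed best
classifier = nn.Sequential(                          # stand-in crack/no-crack CNN
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)

image = np.random.rand(300, 300).astype(np.float32)  # dummy grayscale pavement image
mask = np.zeros_like(image, dtype=bool)
for r in range(0, image.shape[0], grid):
    for c in range(0, image.shape[1], grid):
        patch = torch.from_numpy(image[r:r+grid, c:c+grid])[None, None]
        if classifier(patch).argmax(1).item() == 1:  # class 1 = "contains crack"
            mask[r:r+grid, c:c+grid] = True          # mark grid for later thresholding
```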


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Yu Wang ◽  
Xiaofei Wang ◽  
Junfan Jian

Landslides are a frequent and widespread type of natural disaster, so it is of great significance to extract landslide location information in a timely manner. At present, most studies still select a single band or the RGB bands as features for landslide recognition. To improve the efficiency of landslide recognition, this study proposes a remote sensing recognition method based on a convolutional neural network with mixed spectral characteristics. Firstly, the NDVI (normalized difference vegetation index) and the near-infrared (NIR) band are added to enhance the features. Then, remote sensing images (pre-disaster and post-disaster) covering the same area but at different times are taken directly from the GF-1 satellite as input images. By combining the 4 bands (red + green + blue + near-infrared) of the pre-landslide remote sensing images with the 4 bands of the post-landslide images and the NDVI image, 9-band images were obtained, and the band values reflecting the changing characteristics of the landslide were determined. Finally, a deep convolutional neural network (CNN) was introduced to solve the recognition problem. The proposed method was tested and verified with remote sensing data from the 2015 large-scale landslide event in Shanxi, China, and the 2016 large-scale landslide event in Fujian, China. The results showed that the accuracy of the method was high; compared with traditional methods, the recognition efficiency was improved, proving the effectiveness and feasibility of the method.
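To make the band-stacking step concrete, here is a minimal NumPy sketch (the array layout and the choice of the post-event NDVI as the ninth band are our assumptions, not taken from the paper).

```python
# Sketch: compute NDVI from the red and NIR bands and stack pre- and
# post-event images plus NDVI into a 9-band CNN input.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    # NDVI = (NIR - Red) / (NIR + Red); eps avoids division by zero.
    return (nir - red) / (nir + red + eps)

h, w = 256, 256
pre  = np.random.rand(4, h, w)   # pre-landslide GF-1 bands: R, G, B, NIR
post = np.random.rand(4, h, w)   # post-landslide GF-1 bands: R, G, B, NIR

ninth = ndvi(post[3], post[0])   # assumed ninth band: post-event NDVI
stacked = np.concatenate([pre, post, ninth[None]], axis=0)
print(stacked.shape)             # (9, 256, 256) -> 9-band input image
```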


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Bo Liu ◽  
Qilin Wu ◽  
Yiwen Zhang ◽  
Qian Cao

Pruning is a method of compressing the size of a neural network model, which can affect both the accuracy and the computing time of the model's predictions. In this paper, we put forward the hypothesis that the pruning proportion is positively correlated with the compression of the model but not with the prediction accuracy or the calculation time. To test the hypothesis, a group of experiments is designed, and MNIST is used as the dataset to train a neural network model based on TensorFlow. Based on this model, pruning experiments are carried out to investigate the relationship between the pruning proportion and the compression effect. For comparison, six different pruning proportions are set, and the experimental results confirm the above hypothesis.
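A hedged sketch of this kind of experiment (our own, not the paper's exact procedure): magnitude-based weight pruning of a small TensorFlow/Keras model trained on MNIST, evaluated at several pruning proportions.

```python
# Magnitude-based weight pruning sketch: zero out the smallest weights of each
# dense layer at a given proportion, then compare test accuracy.
import numpy as np
import tensorflow as tf

def prune_weights(model, proportion):
    for layer in model.layers:
        if not isinstance(layer, tf.keras.layers.Dense):
            continue
        w, b = layer.get_weights()
        threshold = np.quantile(np.abs(w), proportion)   # e.g. 0.5 -> zero half
        w[np.abs(w) < threshold] = 0.0
        layer.set_weights([w, b])

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

baseline = model.get_weights()                           # keep unpruned weights
for p in (0.3, 0.5, 0.7):                                # a few illustrative proportions
    model.set_weights(baseline)
    prune_weights(model, p)
    print(p, model.evaluate(x_test, y_test, verbose=0)[1])
```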


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Leilei Kong ◽  
Zhongyuan Han ◽  
Yong Han ◽  
Haoliang Qi

Paraphrase identification is central to many natural language applications. Based on the insight that a successful paraphrase identification model needs to adequately capture the semantics of the language objects as well as their interactions, we present a deep paraphrase identification model interacting semantics with syntax (DPIM-ISS). DPIM-ISS introduces linguistic features manifested as syntactic features to produce more explicit structures and encodes the semantic representation of sentences over different syntactic structures by interacting semantics with syntax. DPIM-ISS then learns paraphrase patterns from this representation by exploiting a convolutional neural network with a convolution-pooling structure. Experiments are conducted on the Microsoft Research Paraphrase (MSRP) corpus and the PAN 2010 and PAN 2012 corpora for paraphrase plagiarism detection. The experimental results demonstrate that DPIM-ISS outperforms classical word-matching approaches, syntax-similarity approaches, convolutional neural network-based models, and some other deep paraphrase identification models.
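For orientation only, the sketch below shows a generic convolution-pooling sentence encoder pair for paraphrase classification in PyTorch; it is not DPIM-ISS and omits the syntactic interaction features that are the paper's contribution.

```python
# Generic convolution-pooling sentence encoder for paraphrase classification.
import torch
import torch.nn as nn

class ConvPoolEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=1)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)         # -> (batch, emb_dim, seq_len)
        return torch.relu(self.conv(x)).max(dim=2).values   # max-pool over time

encoder = ConvPoolEncoder()
classifier = nn.Linear(2 * 64, 2)                    # paraphrase / not paraphrase
s1 = torch.randint(0, 10000, (4, 20))                # token ids of sentence pairs
s2 = torch.randint(0, 10000, (4, 20))
logits = classifier(torch.cat([encoder(s1), encoder(s2)], dim=1))
```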

