Integration of Convolutional Neural Network and Error Correction for Indoor Positioning

2020, Vol 9 (2), pp. 74
Author(s): Eric Hsueh-Chan Lu, Jing-Mei Ciou

With the rapid development of surveying and spatial information technologies, more and more attention has been given to positioning. In outdoor environments, people can easily obtain positioning services through global navigation satellite systems (GNSS). In indoor environments, the GNSS signal is often lost, while other positioning techniques, such as dead reckoning and wireless signals, suffer from accumulated errors and signal interference. Therefore, this research uses images to realize a positioning service. The main concept of this work is to build a model linking indoor field images with their coordinate information and to estimate position by image feature matching. Based on the architecture of PoseNet, images of various sizes are input into a 23-layer convolutional neural network to train an end-to-end location recognition task, regressing the three-dimensional position vector of the camera. The experimental data are taken from an underground parking lot and the Palace Museum. Preliminary experimental results show that the proposed method can improve the accuracy of indoor positioning by about 20% to 30%. In addition, this paper also discusses other architectures, field sizes, camera parameters, and error corrections for this neural network system. Preliminary results show that the proposed angle error correction method can improve positioning by about 20%.
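The regressed 3D camera position is typically scored against ground truth by the mean Euclidean distance. A minimal numpy sketch of that standard metric (the function name is ours, not the paper's):

```python
import numpy as np

def mean_position_error(pred, true):
    """Mean Euclidean distance between predicted and ground-truth 3D camera positions."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.linalg.norm(pred - true, axis=1).mean())
```

A roughly 20-30% reduction in this error is what the abstract reports for the proposed method.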

2021, Vol 10 (1), pp. 31
Author(s): Youngjin Choi, Youngmin Park, Weol-Ae Lim, Seung-Hwan Min, Joon-Soo Lee

In this study, the occurrence of Cochlodinium polykrikoides blooms was predicted based on spatial information. The South Sea of Korea (SSK), where C. polykrikoides blooms occur every year, was divided into three concentrated areas. For each domain, the optimal model configuration was determined through verification experiments with 1–3 convolutional neural network (CNN) layers and 50–300 training epochs. Finally, we predicted the occurrence of C. polykrikoides blooms using the configuration with 3 CNN layers and 300 training epochs, which showed the best results. The experimental results for the three areas showed an average pixel accuracy of 96.22%, a mean accuracy of 91.55%, a mean IU of 81.5%, and a frequency-weighted IU of 84.57%, all above 80% prediction accuracy, indicating appropriate performance. Our results show that the occurrence of C. polykrikoides blooms can be derived from atmosphere and ocean forecast information.
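The four reported scores (pixel accuracy, mean accuracy, mean IU, frequency-weighted IU) are the standard semantic-segmentation metrics, all computable from a class confusion matrix. A minimal numpy sketch, assuming rows index the true class and columns the predicted class:

```python
import numpy as np

def segmentation_metrics(conf):
    """Standard segmentation scores from a confusion matrix (rows: true, cols: predicted)."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                                   # per-class true positives
    per_class_acc = tp / conf.sum(axis=1)                # recall per class
    iu = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp) # intersection over union per class
    freq = conf.sum(axis=1) / conf.sum()                 # class pixel frequencies
    return {
        "pixel_accuracy": tp.sum() / conf.sum(),
        "mean_accuracy": per_class_acc.mean(),
        "mean_iu": iu.mean(),
        "fw_iu": (freq * iu).sum(),
    }
```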


2021, Vol 13 (3), pp. 535
Author(s): Weisheng Li, Xuesong Liang, Meilin Dong

With the rapid development of deep neural networks in the field of remote sensing image fusion, pan-sharpening methods based on convolutional neural networks have achieved remarkable effects. However, because remote sensing images contain complex features, existing methods cannot fully extract spatial features while maintaining spectral quality, resulting in insufficient reconstruction capability. To produce high-quality pan-sharpened images, a multiscale perception dense coding convolutional neural network (MDECNN) is proposed. The network is based on dual-stream input, with multiscale blocks designed to separately extract the rich spatial information contained in panchromatic (PAN) images, feature enhancement blocks and dense coding structures designed to fully learn the feature mapping relationship, and a comprehensive loss function proposed to constrain training. Spectral mapping is used to maintain spectral quality and obtain high-quality fused images. Experiments on different satellite datasets show that this method is superior to existing methods in both subjective and objective evaluations.
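The multiscale-block idea, filtering the same input at several receptive-field sizes and stacking the responses, can be sketched in 1D with fixed box filters standing in for learned kernels (an illustrative assumption; the paper's blocks are learned 2D convolutions):

```python
import numpy as np

def multiscale_features(signal, kernel_sizes=(3, 5, 7)):
    """Filter the input at several kernel sizes and stack the responses (same length as input)."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                       # box filter as a stand-in for a learned kernel
        feats.append(np.convolve(signal, kernel, mode="same"))
    return np.stack(feats)                            # shape: (num_scales, len(signal))
```

Stacking responses at several scales lets later layers combine fine and coarse spatial context, which is the motivation the abstract gives for the multiscale blocks.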


2021, Vol 11 (1)
Author(s): Tianjun Liu, Deling Yang

Abstract Motor imagery is a classical paradigm in brain-computer interaction, in which electroencephalogram (EEG) signal features evoked by imagined body movements are recognized and relevant information is extracted. Recently, various deep learning methods have focused on finding an easy-to-use EEG representation that preserves both temporal and spatial information. To further exploit the spatial and temporal features of EEG signals, we propose a 3D representation of EEG and an end-to-end three-branch 3D convolutional neural network. To address the class imbalance problem (the dataset shows an unequal distribution among its classes), we propose a class-balanced cropping strategy. Experimental results also indicate that different classes differ in classification difficulty in motor-stage classification tasks; we therefore introduce focal loss to address the problem of 'easy' versus 'hard' examples. When trained with the focal loss, the three-branch 3D-CNN achieves good performance (relatively more balanced accuracy across binary classifications) on the WAY-EEG-GAL dataset. Experimental results show that the proposed method improves the classification of different motor stages.
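The focal loss mentioned here down-weights easy examples so that hard ones dominate the gradient. A minimal numpy sketch of the standard binary form (with the usual gamma and alpha defaults; the paper's exact settings are not stated in the abstract):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: the (1 - p_t)**gamma factor shrinks the loss of well-classified examples."""
    p = np.asarray(p, dtype=float)       # predicted probability of the positive class
    y = np.asarray(y, dtype=float)       # binary labels (0 or 1)
    p_t = np.where(y == 1, p, 1 - p)     # probability assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

An easy example (true class predicted at 0.9) contributes far less loss than a hard one (true class predicted at 0.1), which is what rebalances training across classes of different difficulty.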


2021, Vol 22 (1)
Author(s): Sangmin Jeon, Kyungmin Clara Lee

Abstract
Objective: The rapid development of artificial intelligence technologies for medical imaging has recently enabled automatic identification of anatomical landmarks on radiographs. The purpose of this study was to compare the results of an automatic cephalometric analysis using a convolutional neural network with those obtained by a conventional cephalometric approach.
Material and methods: Cephalometric measurements of lateral cephalograms from 35 patients were obtained using an automatic program and a conventional program. Fifteen skeletal, nine dental, and two soft tissue cephalometric measurements obtained by the two methods were compared using the paired t test and Bland-Altman plots.
Results: The paired t test comparison between the automatic and conventional cephalometric analyses confirmed that the saddle angle and the linear measurements of the maxillary incisor to NA line and the mandibular incisor to NB line showed statistically significant differences. All measurements were within the limits of agreement based on the Bland-Altman plots. The limits of agreement were wider for dental measurements than for skeletal measurements.
Conclusions: Automatic cephalometric analyses based on a convolutional neural network may offer clinically acceptable diagnostic performance. Careful consideration and additional manual adjustment are needed for dental measurements involving tooth structures for higher accuracy and better performance.
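The Bland-Altman comparison used here reduces to the mean difference between the two methods (the bias) and the 95% limits of agreement, bias ± 1.96 standard deviations. A minimal numpy sketch of that standard computation:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between paired measurements from two methods."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)                    # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A measurement pair "within the limits of agreement" means its difference falls between the lower and upper values returned here; wider limits (as reported for the dental measurements) indicate less agreement between the two programs.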


Entropy, 2021, Vol 23 (7), pp. 816
Author(s): Pingping Liu, Xiaokang Yang, Baixin Jin, Qiuzhan Zhou

Diabetic retinopathy (DR) is a common complication of diabetes mellitus (DM), and early diagnosis of DR is essential for treatment. With the rapid development of convolutional neural networks in the field of image processing, deep learning methods have achieved great success in medical image processing, and various lesion detection systems have been proposed to detect fundus lesions. At present, image classification of diabetic retinopathy ignores the fine-grained properties of diseased images, and most retinopathy image datasets suffer from seriously uneven class distributions, which largely limits the network's ability to predict lesion classes. We propose a new non-homologous bilinear pooling convolutional neural network model combined with an attention mechanism to further improve the network's ability to extract specific image features. The experimental results show that, compared with the most popular fundus image classification models, the proposed model greatly improves prediction accuracy while maintaining computational efficiency.
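Bilinear pooling for fine-grained classification combines two feature streams by averaging their outer products over spatial locations; "non-homologous" indicates the two streams come from different extractors. A minimal numpy sketch, including the signed square-root and L2 normalisation commonly applied in bilinear CNNs (the paper's exact formulation may differ):

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Pool two feature maps (channels x locations) into one fine-grained descriptor."""
    z = fa @ fb.T / fa.shape[1]              # average outer product over locations -> (Ca, Cb)
    z = np.sign(z) * np.sqrt(np.abs(z))      # signed square-root
    return (z / np.linalg.norm(z)).ravel()   # L2-normalised vector descriptor
```

The pooled descriptor captures pairwise channel interactions between the two streams, which is what makes bilinear models sensitive to subtle, fine-grained lesion differences.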


Author(s): Sachin B. Jadhav

Plant pathologists desire soft computing technology for accurate and reliable diagnosis of plant diseases. In this study, we propose an efficient soybean disease identification method based on a transfer learning approach using pre-trained convolutional neural networks (CNNs) such as AlexNet, GoogLeNet, VGG16, ResNet101, and DenseNet201. The proposed networks were trained on a 1200-image PlantVillage dataset of diseased and healthy soybean leaves to distinguish three soybean diseases from healthy leaves. Pre-trained CNNs enable fast and easy system implementation in practice. We used a five-fold cross-validation strategy to analyze the performance of the networks, employing the pre-trained CNNs as both feature extractors and classifiers. The experimental results based on the pre-trained AlexNet, GoogLeNet, VGG16, ResNet101, and DenseNet201 networks achieve accuracies of 95%, 96.4%, 96.4%, 92.1%, and 93.6%, respectively, indicating that the proposed network models achieve high accuracy in identifying soybean diseases.
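The five-fold cross-validation used to score each network amounts to shuffling the sample indices and splitting them into five disjoint folds, training on four and testing on the fifth in turn. A minimal numpy sketch of the splitting step (function name and seed are illustrative):

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, k)            # list of k index arrays covering 0..n-1
```

Each fold serves once as the test set, so every one of the 1200 images contributes to exactly one test evaluation, and the reported accuracy is the average over the five runs.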


2021, Vol 2021, pp. 1-12
Author(s): Longzhi Zhang, Dongmei Wu

Grasp detection based on convolutional neural networks has achieved some success. However, overfitting of multilayer convolutional neural networks still exists and leads to poor detection precision. To acquire high detection accuracy, a single-target grasp detection network, based on a convolutional neural network, that generalizes the fitting of angle and position is proposed here. The network takes an image as input and outputs grasp parameters, including angle and position, in an end-to-end manner. In particular, the dataset is preprocessed to fully cover the model's input space, and transfer learning is used to avoid overfitting. A series of experimental results indicate that, for single-object grasping, our network achieves good detection results and high accuracy, which demonstrates that the proposed network generalizes well across orientations and object categories.
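A grasp regression head must output an angle alongside the position; a common trick (our illustrative assumption, not necessarily this paper's encoding) is to regress sin(2θ) and cos(2θ) so the 180°-symmetric gripper angle stays continuous, then decode it:

```python
import numpy as np

def decode_grasp(output):
    """Decode a regression output [x, y, sin(2θ), cos(2θ)] into a grasp position and angle."""
    x, y, s, c = output
    theta = 0.5 * np.arctan2(s, c)           # recover the angle in (-π/2, π/2]
    return (x, y, theta)
```

Regressing the doubled-angle sine and cosine avoids the discontinuity at ±90° that a raw angle target would introduce, which helps the network "generalize the fitting of angle".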


2020
Author(s): Florian Dupuy, Olivier Mestre, Léo Pfitzner

Cloud cover is crucial information for many applications, such as planning land observation missions from space. However, cloud cover remains a challenging variable to forecast, and Numerical Weather Prediction (NWP) models suffer from significant biases, hence justifying the use of statistical post-processing techniques. In our application, the ground truth is a gridded cloud cover product derived from satellite observations over Europe, and the predictors are spatial fields of various variables produced by ARPEGE (Météo-France's global NWP model) at the corresponding lead time.

In this study, ARPEGE cloud cover is post-processed using a convolutional neural network (CNN), the most popular machine learning tool for images. In our case, the CNN integrates the spatial information contained in NWP outputs. We show that a simple U-Net architecture produces significant improvements over Europe. Compared to the raw ARPEGE forecasts, MAE drops from 25.1% to 17.8% and RMSE decreases from 37.0% to 31.6%. Considering the specific needs of Earth observation, special attention was paid to forecasts of low cloud cover conditions (< 10%). For this nebulosity class, the hit rate jumps from 40.6 to 70.7 (the order of magnitude achievable with classical machine learning algorithms such as random forests), while the false alarm rate decreases from 38.2 to 29.9. This is an excellent result, since improving hit rates by means of random forests usually also slightly increases false alarms.
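The hit rate and false alarm figures come from a 2x2 forecast contingency table. A minimal sketch assuming the standard verification definitions (hit rate = hits / (hits + misses); false alarm ratio = false alarms / (false alarms + hits)); the study's exact convention is not stated in the abstract:

```python
def hit_and_false_alarm(hits, misses, false_alarms, correct_negatives):
    """Hit rate and false alarm ratio (%) from a 2x2 forecast contingency table."""
    hit_rate = 100.0 * hits / (hits + misses)            # fraction of observed events forecast
    far = 100.0 * false_alarms / (false_alarms + hits)   # fraction of forecasts that were wrong
    return hit_rate, far
```

Improving the first number while lowering the second, as the U-Net does for the < 10% class, is the hard part: naive tuning usually trades one against the other.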


Author(s): Liyang Xiao, Wei Li, Ju Huyan, Zhaoyun Sun, Susan Tighe

This paper aims to develop a method of crack grid detection based on a convolutional neural network. First, an image denoising operation is conducted to improve image quality. Next, the processed images are divided into grids of different sizes, and each grid cell is fed into a convolutional neural network for detection. The grid cells containing cracks are marked and then mapped back to the original images. Finally, on the basis of the detection results, threshold segmentation is performed only on the marked cells. Information about the crack parameters is obtained via pixel scanning and calculation, which realises complete crack detection. The experimental results show that 30×30 grids perform best, with an accuracy of 97.33%. The advantage of automatic crack grid detection is that it avoids fracture phenomena in crack identification and ensures the integrity of detected cracks.
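The grid-marking step can be sketched as tiling the image into fixed-size cells and running a per-cell classifier; here a trivial mean-intensity threshold stands in for the paper's CNN (an illustrative assumption):

```python
import numpy as np

def mark_crack_grids(image, grid=30, classifier=lambda patch: patch.mean() > 0.5):
    """Split an image into grid x grid cells and return a boolean mask of cells flagged as cracked."""
    h, w = image.shape
    rows, cols = h // grid, w // grid
    mask = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            patch = image[i * grid:(i + 1) * grid, j * grid:(j + 1) * grid]
            mask[i, j] = classifier(patch)   # per-cell crack decision
    return mask
```

Only the flagged cells are then passed to threshold segmentation, which is what keeps a long crack intact across cell boundaries instead of fragmenting it.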

