Deep Learned Quantization-Based Codec for 3D Airborne LiDAR Point Cloud Images

2021 ◽  
Vol 8 ◽  
Author(s):  
A. Christoper Tamilmathi ◽  
P. L. Chithra

This paper introduces a novel deep learned quantization-based coding for 3D Airborne LiDAR (Light detection and ranging) point cloud (pcd) images (DLQCPCD). The raw pcd signals are sampled and transformed by applying the Nyquist signal sampling and min-max signal transformation techniques, respectively, to improve the efficiency of the training process. The transformed signals are then fed into the deep learned quantization module for compressing the data. To the best of our knowledge, the proposed DLQCPCD is the first deep learning-based model for 3D airborne LiDAR pcd compression. The Mean Squared Error loss function and Stochastic Gradient Descent optimizer enhance the quality of the decompressed image by 67.01 percent on average compared with other functions. The model's efficiency has been validated against well-known compression techniques, namely 7-Zip, WinRAR, and the tensor Tucker decomposition algorithm, on three dissimilar airborne datasets. The experimental results show that the proposed model compresses every pcd image into a constant-size representation of 16 neurons and decompresses the image with a PSNR of approximately 160 dB, an execution time of 174.46 s, and an execution speed of 0.6 s per instruction, outperforming the other existing algorithms in both space and time.
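As a concrete illustration of the pre-processing step named above, here is a minimal sketch of min-max signal transformation applied to a point cloud tile; the function name, the [0, 1] target range, and the random stand-in data are assumptions, not the authors' implementation.

```python
import numpy as np

def min_max_transform(signal, lo=0.0, hi=1.0):
    """Min-max transformation: rescale raw pcd values into [lo, hi]."""
    s_min, s_max = signal.min(), signal.max()
    return lo + (signal - s_min) * (hi - lo) / (s_max - s_min)

# Hypothetical usage on an N x 3 array of (x, y, z) points:
points = np.random.rand(1000, 3) * 500.0  # stand-in for one raw pcd tile
normalized = min_max_transform(points)    # values now lie in [0, 1]
```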

2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement method is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three methods of image enhancement, GHE, LHE, and the proposed DSIHE, that improve the visual quality of images. Comparative calculations are carried out on the above-mentioned techniques to examine objective and subjective image quality parameters, e.g., Peak Signal-to-Noise Ratio (PSNR), entropy H, and Mean Squared Error (MSE), to measure the quality of gray-scale enhanced images. For handling gray-level images, conventional Histogram Equalization methods such as GHE and LHE tend to shift the mean brightness of an image to the middle of the gray-level range, limiting their appropriateness for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method overcomes this disadvantage, as it tends to preserve both brightness and contrast enhancement. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error than the global and local histogram-based equalization methods.
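The core DSIHE step admits a short sketch: split the histogram at the median and equalize each half into its own output range, which is what preserves mean brightness. A minimal numpy version, assuming 8-bit gray-scale input (the function name and bin handling are illustrative):

```python
import numpy as np

def dsihe(img):
    """Dualistic Sub-Image Histogram Equalization (sketch).

    Split the image at its median gray level and equalize the two
    sub-histograms independently, mapping them to the lower and upper
    halves of the output range so mean brightness is roughly preserved.
    """
    med = int(np.median(img))
    out = np.empty_like(img)
    for lo, hi, mask in [(0, med, img <= med), (med + 1, 255, img > med)]:
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / vals.size
        out[mask] = (lo + cdf[vals.astype(int) - lo] * (hi - lo)).astype(img.dtype)
    return out
```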


2021 ◽  
Author(s):  
Kun-Cheng Ke ◽  
Ming-Shyan Huang

Abstract Injection molding has been broadly used in the mass production of plastic parts and must meet the requirements of efficiency and quality consistency. Machine learning can effectively predict the quality of injection molded parts. However, the performance of machine learning models largely depends on the accuracy of the training. Hyperparameters such as activation functions, momentum, and learning rate are crucial to the accuracy and efficiency of model training. This research analyzed the influence of hyperparameters on testing accuracy, explored the corresponding optimal learning rate, and provided the optimal training model for predicting the quality of injection molded parts. In this study, stochastic gradient descent (SGD) and stochastic gradient descent with momentum were used to optimize the artificial neural network model. Through optimization of these training hyperparameters, the testing accuracy for the width of the injection molded part improved. The experimental results indicated that in the absence of momentum effects, all five activation functions can achieve more than 90% training accuracy with a learning rate of 0.1. Moreover, when optimized with SGD, the learning rate of the Sigmoid activation function was 0.1, and the testing accuracy reached 95.8%. Although momentum had the least influence on accuracy, it affected the convergence speed of the Sigmoid function, reducing the number of required learning iterations (an 82.4% reduction rate). Optimizing hyperparameter settings can improve the accuracy of model testing and markedly reduce training time.
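For readers unfamiliar with the momentum hyperparameter discussed above, here is a one-step sketch of the SGD-with-momentum update; the learning rate of 0.1 mirrors the paper's best-performing setting, while the momentum value of 0.9 is a common default, not the authors' choice.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update on weights w.

    velocity accumulates an exponentially decayed sum of past gradients,
    which speeds convergence for slow-saturating activations such as
    Sigmoid (with momentum=0, this reduces to plain SGD).
    """
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```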


2020 ◽  
Vol 10 (24) ◽  
pp. 8904
Author(s):  
Ana Isabel Montoya-Munoz ◽  
Oscar Mauricio Caicedo Rendon

Reliable data collection is essential in Smart Farming supported by the Internet of Things (IoT). Several IoT and Fog-based works consider the reliability concept, but they fall short of providing mechanisms at the network's edge for detecting and replacing outliers. Making decisions based on inaccurate data can diminish the quality of crops and, consequently, cause financial losses. This paper proposes an approach for providing reliable data collection, which focuses on outlier detection and treatment in IoT-based Smart Farming. Our proposal includes an architecture based on the IoT-Fog-Cloud continuum, which incorporates a mechanism based on Machine Learning to detect outliers and another based on interpolation for inferring data intended to replace outliers. We locate the data cleaning at the Fog so that Smart Farming applications running on the farm operate with reliable data. We evaluate our approach by carrying out a case study in a network based on the proposed architecture and deployed at a Colombian coffee Smart Farm. Results show our mechanisms achieve high Accuracy, Precision, and Recall as well as a low False Alarm Rate and Root Mean Squared Error when detecting and replacing outliers with inferred data. Considering the obtained results, we conclude that our approach provides reliable data collection in Smart Farming.
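A minimal sketch of the detect-then-replace idea described above; a z-score rule stands in for the paper's Machine Learning detector, and linear interpolation plays the role of the inference stage, so all names and thresholds are illustrative.

```python
import numpy as np

def clean_sensor_series(t, values, z_thresh=3.0):
    """Flag outliers in an IoT sensor series and replace them by interpolation."""
    z = np.abs(values - values.mean()) / values.std()  # stand-in detector
    outliers = z > z_thresh
    cleaned = values.copy()
    # Infer replacement values from the surrounding inlier readings:
    cleaned[outliers] = np.interp(t[outliers], t[~outliers], values[~outliers])
    return cleaned, outliers
```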


Entropy ◽  
2018 ◽  
Vol 20 (12) ◽  
pp. 968
Author(s):  
Baobin Wang ◽  
Ting Hu

The minimum error entropy (MEE) principle is an alternative to classical least squares owing to its robustness to non-Gaussian noise. This paper studies the gradient descent algorithm for MEE in a semi-supervised, distributed setting, and shows that using the additional information in unlabeled data can enhance the learning ability of the distributed MEE algorithm. Our result proves that the mean squared error of the distributed gradient descent MEE algorithm can be minimax optimal for regression if the number of local machines increases polynomially with the total data size.
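To make the MEE objective concrete: minimizing the error entropy of the residuals e_i = y_i - x_i · w is equivalent to maximizing the empirical information potential V(w) = (1/n²) Σ_{i,j} G_σ(e_i - e_j) with a Gaussian kernel G_σ. A single-machine gradient-ascent sketch for linear regression (the paper's distributed and semi-supervised machinery is not reproduced; step size and bandwidth are illustrative):

```python
import numpy as np

def mee_gradient_step(w, X, y, lr=0.05, sigma=1.0):
    """One gradient-ascent step on the empirical information potential V(w)."""
    n = len(y)
    e = y - X @ w                           # residuals
    d = e[:, None] - e[None, :]             # pairwise differences e_i - e_j
    c = np.exp(-d**2 / (2 * sigma**2)) * d  # kernel-weighted differences
    # dV/dw = (1/(n^2 sigma^2)) * sum_ij c_ij (x_i - x_j); by antisymmetry
    # of c this collapses to the row-sum contraction below.
    grad_V = (2.0 / (n**2 * sigma**2)) * (c.sum(axis=1) @ X)
    return w + lr * grad_V                  # ascend V = descend error entropy
```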


2014 ◽  
Vol 571-572 ◽  
pp. 717-720
Author(s):  
De Kun Hu ◽  
Yong Hong Liu ◽  
Li Zhang ◽  
Gui Duo Duan

A deep neural network model comprising nine layers, including an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer, was trained to classify facial expressions in unconstrained images. To optimize the model, rectified linear units for the nonlinear transformation, weight sharing to reduce complexity, "mean" and "max" pooling for subsampling, and "dropout" for sparsity are applied in the forward pass. With a large number of hard training faces, the model was trained via backpropagation with stochastic gradient descent. The results show that the proposed model achieves excellent performance.
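A PyTorch sketch of the topology described above; the filter counts, input resolution (48×48 gray-scale faces), and the seven output classes are assumptions, not the authors' exact configuration.

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, padding=2),  # convolutional layer
    nn.ReLU(),                                   # rectified linear units
    nn.MaxPool2d(2),                             # "max" pooling subsample
    nn.Conv2d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AvgPool2d(2),                             # "mean" pooling subsample
    nn.Flatten(),
    nn.Linear(64 * 12 * 12, 256),                # fully connected layer
    nn.Dropout(0.5),                             # "dropout" for sparsity
    nn.Linear(256, 7),                           # output layer (7 expressions)
)

# Training would pair this with backpropagation and SGD, e.g.:
# torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```

Weight sharing is implicit in the convolutional layers, which reuse one small kernel across the whole image.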


2007 ◽  
Vol 89 (3) ◽  
pp. 135-153 ◽  
Author(s):  
JINLIANG WANG

Summary: Knowledge of the genetic relatedness among individuals is essential in diverse research areas such as behavioural ecology, conservation biology, quantitative genetics and forensics. How to estimate relatedness accurately from genetic marker information has been explored recently by many methodological studies. In this investigation I propose a new likelihood method that uses the genotypes of a triad of individuals in estimating pairwise relatedness (r). The idea is to use a third individual as a control (reference) in estimating the r between two other individuals, thus reducing the chance of genes identical in state being mistakenly inferred as identical by descent. The new method allows for inbreeding and accounts for genotype errors in data. Analyses of both simulated and human microsatellite and SNP datasets show that the quality of r estimates (measured by the root mean squared error, RMSE) is generally improved substantially by the new triadic likelihood method (TL) over the dyadic likelihood method and five moment estimators. Simulations also show that genotyping errors/mutations, when ignored, result in underestimates of r for related dyads, and that incorporating a model of typing errors in the TL method improves r estimates for highly related dyads but impairs those for loosely related or unrelated dyads. The effects of inbreeding were also investigated through simulations. It is concluded that, because most dyads in a natural population are unrelated or only loosely related, the overall performance of the new triadic likelihood method is the best, offering r estimates with an RMSE that is substantially smaller than that of the five commonly used moment estimators and the dyadic likelihood method.
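For reference, the RMSE used to score the estimators has the standard form below, where N is the number of simulated dyads, r the true relatedness, and r̂_k the estimate for the k-th dyad (notation assumed, not taken from the paper):

```latex
\mathrm{RMSE}(\hat{r}) = \sqrt{\frac{1}{N} \sum_{k=1}^{N} \left( \hat{r}_k - r \right)^{2}}
```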


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7058
Author(s):  
Heesang Eom ◽  
Jongryun Roh ◽  
Yuli Sun Hariyani ◽  
Suwhan Baek ◽  
Sukho Lee ◽  
...  

Wearable technologies are known to improve our quality of life. Among the various wearable devices, shoes are non-intrusive, lightweight, and can be used for outdoor activities. In this study, we estimated energy consumption and heart rate during treadmill running using smart shoes equipped with a triaxial accelerometer, a triaxial gyroscope, and four-point pressure sensors. The proposed model uses a recent deep learning architecture that does not require any separate preprocessing. Moreover, it is possible to select the optimal sensor using a channel-wise attention mechanism that weighs the sensors depending on their contributions to the estimation of energy expenditure (EE) and heart rate (HR). The performance of the proposed model was evaluated using the root mean squared error (RMSE), mean absolute error (MAE), and coefficient of determination (R²). The RMSE was 1.05 ± 0.15, the MAE 0.83 ± 0.12, and the R² 0.922 ± 0.005 for EE estimation, while the RMSE was 7.87 ± 1.12, the MAE 6.21 ± 0.86, and the R² 0.897 ± 0.017 for HR estimation. In both estimations, the most effective sensors were the z-axes of the accelerometer and gyroscope. These results demonstrate that the proposed model improves the performance of both EE and HR estimation by effectively selecting the optimal sensors during the active movements of participants.
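A sketch of a channel-wise attention block of the kind described above, in the squeeze-and-excitation style; the channel count (10 = 3 accelerometer + 3 gyroscope + 4 pressure) matches the sensors listed, but the layer sizes and gating are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Weigh each sensor channel by a learned score (sketch)."""
    def __init__(self, channels=10, reduction=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):           # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))  # squeeze over time, one score per channel
        return x * w.unsqueeze(2)   # re-weight each sensor channel

# Hypothetical usage on a batch of 2-second windows sampled at 100 Hz:
att = ChannelAttention()
weighted = att(torch.randn(8, 10, 200))
```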


Author(s):  
Z. Hui ◽  
P. Cheng ◽  
L. Wang ◽  
Y. Xia ◽  
H. Hu ◽  
...  

Abstract. Denoising is a key pre-processing step for many airborne LiDAR point cloud applications. However, previous algorithms have a number of problems that affect the quality of point cloud post-processing, such as DTM generation. In this paper, a novel automated denoising algorithm based on empirical mode decomposition is proposed to remove outliers from airborne LiDAR point clouds. Compared with traditional point cloud denoising algorithms, the proposed method detects outliers from a signal processing perspective. Firstly, airborne LiDAR point clouds are decomposed into a series of intrinsic mode functions with the help of morphological operations, which significantly decreases the computational complexity. By applying the Otsu algorithm to these intrinsic mode functions, noise-dominant components can be detected and filtered. Finally, outliers are detected automatically by comparing observed elevations with reconstructed elevations. Three datasets located in three different cities in China were used to verify the validity and robustness of the proposed method. The experimental results demonstrate that the proposed method removes both high and low outliers effectively across various terrain features while preserving useful ground details.
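The final comparison step lends itself to a short sketch: once noise-dominant intrinsic mode functions have been removed, outliers are the points whose observed elevation departs from the reconstruction by more than an automatically chosen threshold. A minimal version, with Otsu's method supplying that threshold (the decomposition itself is not reproduced, and z_reconstructed is a stand-in input):

```python
import numpy as np
from skimage.filters import threshold_otsu

def detect_elevation_outliers(z_observed, z_reconstructed):
    """Flag points whose elevation residual exceeds an Otsu threshold."""
    residual = np.abs(z_observed - z_reconstructed)
    return residual > threshold_otsu(residual)
```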


Author(s):  
Calvin Omind Munna

Currently, there is a growing volume of data produced and stored in clinical domains. Therefore, to deal effectively with massive data sets, a fusion methodology needs to be analyzed with its algorithmic complexity in mind. To minimize the redundancy of image content, and hence the capacity needed to store and communicate data in optimal form, image processing methodology has to be involved. In this research, two compression methodologies, lossy compression and lossless compression, were utilized to compress images while maintaining image quality. In addition, a number of sophisticated approaches to enhance the quality of the fused images have been applied. The methodologies have been assessed and various fusion findings are presented. Lastly, performance parameters were obtained and evaluated against the sophisticated approaches. The Structural Similarity Index Metric (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) were the metrics utilized for the sample clinical images. Critical analysis of the measurement parameters shows higher efficiency compared with numerous image processing methods. This research provides insight into these approaches and enables scientists to choose effective methodologies for a particular application.
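A minimal sketch of two of the quoted metrics for an 8-bit image pair; SSIM is omitted here since a standard implementation exists as skimage.metrics.structural_similarity.

```python
import numpy as np

def mse_psnr(original, compressed, peak=255.0):
    """MSE and PSNR between an original and a compressed 8-bit image."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    psnr = float('inf') if mse == 0 else 10 * np.log10(peak**2 / mse)
    return mse, psnr
```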


Author(s):  
Derek Driggs ◽  
Matthias J. Ehrhardt ◽  
Carola-Bibiane Schönlieb

Abstract Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have yet been shown to directly benefit from Nesterov’s acceleration techniques to match the convergence rates of accelerated gradient methods. Such approaches rely on “negative momentum”, a technique for further variance reduction that is generally specific to the SVRG gradient estimator. In this work, we show for the first time that negative momentum is unnecessary for acceleration and develop a universal acceleration framework that allows all popular variance-reduced methods to achieve accelerated convergence rates. The constants appearing in these rates, including their dependence on the number of functions n, scale with the mean-squared-error and bias of the gradient estimator. In a series of numerical experiments, we demonstrate that versions of SAGA, SVRG, SARAH, and SARGE using our framework significantly outperform non-accelerated versions and compare favourably with algorithms using negative momentum.
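For concreteness, here is plain (non-accelerated) SVRG, whose gradient estimator is the one most variance-reduction methods build on; grad_i and full_grad are user-supplied callables, and the step sizes and loop lengths are illustrative, not the framework of the paper.

```python
import numpy as np

def svrg(grad_i, full_grad, w0, n, lr=0.1, epochs=10, inner=None):
    """Plain SVRG: the estimate g is unbiased and its variance vanishes
    as w approaches the snapshot w_s, the property acceleration exploits."""
    w = w0.copy()
    inner = inner or n
    for _ in range(epochs):
        w_s, mu = w.copy(), full_grad(w)  # snapshot and its full gradient
        for _ in range(inner):
            i = np.random.randint(n)
            g = grad_i(w, i) - grad_i(w_s, i) + mu  # variance-reduced estimate
            w -= lr * g
    return w
```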

