normalized cross correlation
Recently Published Documents


TOTAL DOCUMENTS

199
(FIVE YEARS 49)

H-INDEX

20
(FIVE YEARS 2)

Author(s):  
Ismael Lopez Sanchez ◽  
Miguel Angel Rosas Galaviz ◽  
Damian Gomez Herrera ◽  
Luis Rizo Dominguez

2021 ◽  
Vol 2107 (1) ◽  
pp. 012036
Author(s):  
Sharifah Nurul Husna Syed Hanapi ◽  
S A A Shukor ◽  
Jalal Johari

Abstract Tree crown detection and counting from remote sensing data, such as images from an Unmanned Aerial Vehicle (UAV), plays a significant role in modern vegetation monitoring. Since the processing depends on the raw data available, in this case RGB imagery, a suitable method such as template matching is presented. Normalized cross correlation is widely used as an effective measure of the similarity between a template image and the source or testing images. This paper focuses on the six steps involved in the overall process: (1) image acquisition, (2) template optimisation, (3) normalized cross correlation, (4) sliding window, (5) matched image and counting, and (6) accuracy assessment. The normalized cross correlation and sliding window techniques proposed for this work achieved F-measure values of 80% to 89%. This result indicates that UAV image data, combined with appropriate image processing methods, can provide vital information for oil palm tree counting, which would benefit plantation management in estimating yield and productivity. However, there is still room for improvement to achieve better results.
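The sliding-window NCC matching the abstract describes can be sketched as follows. This is an illustrative minimal implementation, not the authors' code; the image, template, and threshold values are made up for the example.

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized 2-D patches."""
    fp = [v for row in patch for v in row]
    ft = [v for row in template for v in row]
    mp, mt = sum(fp) / len(fp), sum(ft) / len(ft)
    num = sum((a - mp) * (b - mt) for a, b in zip(fp, ft))
    den = math.sqrt(sum((a - mp) ** 2 for a in fp) *
                    sum((b - mt) ** 2 for b in ft))
    return num / den if den else 0.0  # flat patches carry no signal

def match_template(image, template, threshold=0.9):
    """Slide the template over the image; return offsets where NCC >= threshold."""
    th, tw = len(template), len(template[0])
    hits = []
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            if ncc(patch, template) >= threshold:
                hits.append((y, x))
    return hits

# Toy 4x5 "image" with the template embedded at row 1, column 2.
image = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 1, 0],
    [0, 0, 1, 9, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 1], [1, 9]]
hits = match_template(image, template, threshold=0.9)
# hits → [(1, 2)]: the template's true location
```

In a real pipeline each hit would mark one tree crown, and counting the hits (after suppressing overlapping detections) gives the tree count.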


2021 ◽  
Vol 5 (2) ◽  
pp. 510-514
Author(s):  
Helmy Mukti Wijaya ◽  
Teguh Hariyanto ◽  
Hepi Hapsari Handayani

The interior orientation is a set of parameters determined to transform the camera's photo coordinates, that is, pixel coordinates, into image coordinates. These parameters are used to calibrate the camera before use so that precise measurements can be produced from an aerial photograph. They consist of the calibrated and equivalent camera focal length, lens distortion, principal point, fiducial mark locations, camera resolution, and flatness of the focal plane. All of these parameters are attached to the camera sensor, and their values can usually be obtained from the camera's report page. In this work, the authors obtain the pixel coordinates of the fiducial marks in the base image (search window) automatically: a fiducial mark template, cropped from a piece of the photo image frame, is used to determine the fiducial mark coordinates in the base image. The approach rests on photogrammetric concepts, using image matching techniques. The image matching process was implemented on the C++ programming platform in order to speed up computation. Among the available image matching techniques, this study uses normalized cross-correlation image matching. In statistics, normalized cross-correlation between two random variables measures how closely the two variables vary together; similarly, in image matching it measures the degree of similarity between two images. This level of similarity is determined by the normalized cross-correlation (NCC) coefficient. The least squares image matching method is then used to increase the accuracy of the coordinates of the conjugate points.
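The abstract refines the matched coordinates with least squares image matching. As a much simpler illustration of the idea of pushing an integer NCC peak to subpixel precision (not the authors' method), one can fit a parabola through the NCC scores at the peak and its two neighbours:

```python
def subpixel_peak(s_left, s_mid, s_right):
    """Fit a parabola through three correlation scores around the integer
    peak and return the fractional offset of the maximum, in [-0.5, 0.5]."""
    denom = s_left - 2.0 * s_mid + s_right
    if denom == 0.0:  # flat neighbourhood: no refinement possible
        return 0.0
    return 0.5 * (s_left - s_right) / denom

# Scores sampled from the parabola s(x) = 1 - (x - 0.2)^2 at x = -1, 0, 1:
offset = subpixel_peak(-0.44, 0.96, 0.36)
# offset → 0.2, recovering the true subpixel peak position
```

Applied along rows and columns independently, this gives a subpixel fiducial mark location; least squares matching achieves the same goal with a full geometric and radiometric model.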


2021 ◽  
Author(s):  
Mirko Salaris ◽  
Andrea Damiani ◽  
Edoardo Putti ◽  
Luca Stornaiuolo

2021 ◽  
Vol 5 ◽  
pp. 93-103
Author(s):  
Telman Aliev ◽  
◽  
Nailya Musaeva ◽  
◽  

It is shown that when noisy signals are formed, the condition of no correlation between the useful signal and the noise is often violated. This causes certain errors in the correlation analysis of these signals, making the results obtained inadequate. In addition, existing correlation analysis technologies do not allow using the noise as a carrier of valuable information. Therefore, the full use of the colossal information potential of noisy signals requires new technologies that exclude the loss of valuable information, both when the known classical conditions are met and when they are not. Algorithms are developed for estimating the correlation coefficient between the useful signal and the noise, neither of which can be measured directly or isolated from the noisy signal. For this purpose, the normalized cross-correlation function between the useful signal and the noise is used. An algorithm for calculating estimates of this normalized cross-correlation function is developed using estimates of the relay correlation function of the noisy signal. It is shown that the value of this estimate at zero time shift is an estimate of the correlation coefficient between the useful signal and the noise. A technology for conducting computational experiments is proposed, a comparative analysis is carried out, and the reliability of the proposed algorithms and technologies is confirmed. It is shown that under the normal technical condition of the object, the estimates of the relay cross-correlation function and of the correlation coefficient between the useful signal and the noise are close to zero. With the emergence of various defects preceding malfunctions in the object, these estimates change depending on the degree of damage. Therefore, it is the estimates of the cross-correlation function and of the correlation coefficient between the useful signal and the noise that should be used in monitoring and control systems as informative attributes for signaling and tracking the onset of changes in the technical condition of objects and the dynamics of their malfunctions. The use of these new, effective informative attributes makes it possible to increase the accuracy and reliability of operation of modern information systems.
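A minimal sketch of estimating a normalized cross-correlation function over time shifts is shown below. It assumes both signals are directly observable, unlike the paper's setting, where the useful signal and the noise must be estimated indirectly via the relay correlation function of the noisy signal; the example data are made up.

```python
import math

def norm_xcorr(x, y, max_lag):
    """Estimate the normalized cross-correlation function R_xy(lag)
    for lags in [-max_lag, max_lag]; returns a dict {lag: value}."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:  # ignore samples shifted outside the record
                s += (x[i] - mx) * (y[j] - my)
        out[lag] = s / (sx * sy) if sx and sy else 0.0
    return out

x = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0]
r = norm_xcorr(x, x, 2)
# r[0] → 1.0: at zero shift a signal is perfectly correlated with itself
```

The value at zero lag is the correlation coefficient between the two signals, which is exactly the quantity the paper proposes as an informative attribute for condition monitoring.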


2021 ◽  
pp. 2784-2795
Author(s):  
Esraa Abd Alsalam ◽  
Shaymaa Ahmed Razoqi ◽  
Eman Fathi Ahmed

Compression of speech signals is an essential field in signal processing. Speech compression is very important in today's world due to limited transmission bandwidth and storage capacity. This paper explores a Contourlet-transform-based methodology for compressing the speech signal. In this methodology, the speech signal is analysed using Contourlet transform coefficients with statistic-based threshold values, such as the interquartile filter (IQR), average absolute deviation (AAD), median absolute deviation (MAD), and standard deviation (STD), followed by run-length encoding. These are applied to speech recorded over different durations (5, 30, and 120 seconds). A comparative study of the performance of the different thresholds is made in terms of signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and the compression ratio (CR). The best stable result of the algorithm is obtained at level 1 with AAD or MAD, implemented in Matlab 2013a.
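The thresholding-plus-run-length-encoding stage can be sketched as below, using the standard deviation (STD) variant on a made-up coefficient list; the Contourlet decomposition itself, and the other threshold statistics, are omitted.

```python
import math

def std_threshold(coeffs):
    """Standard deviation of the coefficients, used as the keep/zero threshold."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    return math.sqrt(sum((c - mean) ** 2 for c in coeffs) / n)

def run_length_encode(values):
    """Run-length encoding: collapse repeats into (value, count) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def compress(coeffs):
    """Zero coefficients whose magnitude falls below the threshold, then RLE."""
    t = std_threshold(coeffs)
    thresholded = [c if abs(c) >= t else 0 for c in coeffs]
    return run_length_encode(thresholded)

coeffs = [0.1, 4.0, 0.2, 0.1, 5.0, 0.1, 0.2, 0.1]
compress(coeffs)
# → [(0, 1), (4.0, 1), (0, 2), (5.0, 1), (0, 3)]
```

Because thresholding produces long runs of zeros, run-length encoding stores them compactly; the kept coefficients are what the decoder uses to reconstruct the signal, and NCC between the original and reconstructed signals then measures the quality.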

