MATCHING OF URBAN PATHWAYS IN A MULTI-SCALE DATABASE USING FUZZY REASONING

2017 ◽  
Vol 43 (3) ◽  
pp. 92-104 ◽  
Author(s):  
Ali DEHGHANI ◽  
Alireza CHEHREGHAN ◽  
Rahim ALI ABBASPOUR

One of the main steps in acquiring and handling data in a multi-scale database is the automatic generation of links between corresponding objects at different scales, which is achieved by matching them across the datasets. The core of this process is detecting and measuring the spatial similarity between objects, a measure that differs from one application to another and depends largely on the intrinsic properties of the input data. The spatial similarity index, which is a function of criteria such as geometric, topological, and semantic ones, is therefore to some extent uncertain. The present study provides a matching algorithm based on fuzzy reasoning that takes human spatial cognition into account. The proposed algorithm is run on two road datasets of Yazd city in Iran, at scales of 1:5000 and 1:25000. The evaluation results show that the matching rate and correctness of the algorithm are 92.7% and 88%, respectively, which confirms that the proposed algorithm performs matching appropriately.
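As an illustration of how per-criterion similarities might be combined under fuzzy reasoning, the following Python sketch aggregates geometric, topological, and semantic similarity values with triangular membership functions and fixed weights; the membership parameters, weights, and function names are illustrative assumptions, not the authors' rule base.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_match_score(geometric_sim, topological_sim, semantic_sim,
                      weights=(0.5, 0.3, 0.2)):
    """Aggregate per-criterion similarities (each in [0, 1]) into one fuzzy matching score."""
    # Map each crisp similarity to a "high similarity" membership degree.
    memberships = [triangular_membership(s, 0.2, 1.0, 1.8)
                   for s in (geometric_sim, topological_sim, semantic_sim)]
    # Weighted aggregation; a full rule base with defuzzification could replace this step.
    return float(np.dot(weights, memberships))

# Candidate pairs whose score exceeds a chosen threshold would be accepted as matches.
score = fuzzy_match_score(0.85, 0.7, 0.9)
print(f"fuzzy match score: {score:.2f}")
```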

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1269
Author(s):  
Jiabin Luo ◽  
Wentai Lei ◽  
Feifei Hou ◽  
Chenghao Wang ◽  
Qiang Ren ◽  
...  

Ground-penetrating radar (GPR), as a non-invasive instrument, has been widely used in civil engineering. In GPR B-scan images, random noise introduced by the environment and the equipment hardware can obscure the useful information. Many methods have been proposed to eliminate or suppress this random noise, but the existing methods perform unsatisfactorily when the image is severely contaminated. This paper proposes a multi-scale convolutional autoencoder (MCAE) to denoise GPR data. To address the insufficiency of the training dataset, we also designed a data augmentation strategy based on a Wasserstein generative adversarial network (WGAN) to enlarge the training set of the MCAE. Experiments conducted on simulated, generated, and field datasets demonstrate that the proposed scheme has promising denoising performance. In terms of three indexes, the peak signal-to-noise ratio (PSNR), the time cost, and the structural similarity index (SSIM), the proposed scheme achieves better random noise suppression than state-of-the-art competing methods (e.g., CAE, BM3D, WNNM).
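The abstract does not give the MCAE architecture, but a minimal PyTorch sketch of a multi-scale convolutional autoencoder for single-channel B-scan images could look as follows; the branch kernel sizes, channel counts, and class name are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class MultiScaleConvAutoencoder(nn.Module):
    """Toy multi-scale convolutional autoencoder for single-channel B-scan images."""
    def __init__(self):
        super().__init__()
        # Parallel branches with different kernel sizes capture features at several scales.
        self.branch3 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(1, 16, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(1, 16, kernel_size=7, padding=3)
        self.encoder = nn.Sequential(
            nn.ReLU(), nn.Conv2d(48, 32, 3, stride=2, padding=1), nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        feats = torch.cat([self.branch3(x), self.branch5(x), self.branch7(x)], dim=1)
        return self.decoder(self.encoder(feats))

model = MultiScaleConvAutoencoder()
noisy = torch.randn(4, 1, 64, 64)   # a batch of noisy B-scans
denoised = model(noisy)             # would be trained with an MSE loss against clean targets
print(denoised.shape)               # torch.Size([4, 1, 64, 64])
```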


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 319
Author(s):  
Yi Wang ◽  
Xiao Song ◽  
Guanghong Gong ◽  
Ni Li

With the rapid development of deep learning and artificial intelligence techniques, denoising via neural networks has drawn great attention for its flexibility and excellent performance. However, in most convolutional denoising networks the convolution kernel is only one layer deep, and features at distinct scales are neglected. Moreover, in the convolution operation all channels are treated equally, and the relationships between channels are not considered. In this paper, we propose a multi-scale feature extraction-based normalized attention neural network (MFENANN) for image denoising. In MFENANN, we define a multi-scale feature extraction block to extract and combine features at distinct scales of the noisy image. In addition, we propose a normalized attention network (NAN) to learn the relationships between channels, which smooths the optimization landscape and speeds up convergence when training an attention model. We then introduce the NAN into convolutional denoising, where each channel receives its own gain so that channels can play different roles in the subsequent convolutions. To verify the effectiveness of the proposed MFENANN, we conducted experiments on both grayscale and color image sets with noise levels ranging from 0 to 75. The experimental results show that, compared with several state-of-the-art denoising methods, the images restored by MFENANN have higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values and a better overall appearance.
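As a rough illustration of channel attention with normalization (a stand-in for the paper's NAN, whose exact design is not given in the abstract), the following PyTorch sketch computes a per-channel gain from globally pooled features; the layer sizes and names are assumed.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative channel-attention block: per-channel gains from global pooling,
    with a normalization of the attention logits (a stand-in for the paper's NAN)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.norm = nn.LayerNorm(channels)  # keeps the attention logits well scaled

    def forward(self, x):
        b, c, _, _ = x.shape
        pooled = x.mean(dim=(2, 3))               # global average pooling -> (b, c)
        gains = torch.sigmoid(self.norm(self.fc(pooled)))
        return x * gains.view(b, c, 1, 1)         # each channel gets its own gain

feat = torch.randn(2, 32, 48, 48)
out = ChannelAttention(32)(feat)
print(out.shape)  # torch.Size([2, 32, 48, 48])
```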


Author(s):  
Haowen Yan ◽  
Liming Zhang ◽  
Zhonghui Wang ◽  
Weifang Yang ◽  
Tao Liu ◽  
...  

2021 ◽  
Vol 2089 (1) ◽  
pp. 012008
Author(s):  
B Padmaja ◽  
P Naga Shyam Bhargav ◽  
H Ganga Sagar ◽  
B Diwakar Nayak ◽  
M Bhushan Rao

Visually impaired people and senior citizens find it difficult to distinguish between banknotes, driving the need for an automated system to recognize currency notes. This study proposes recognizing Indian currency notes of various denominations using deep learning with a CNN model. Recognizing currency notes is one issue; identifying fake notes is another major one. Currency counterfeiting is the illegal imitation of currency intended to deceive its recipient. Existing methodologies for identifying a counterfeit note rely on hardware. A method entirely free of dedicated hardware, which relies on specific security features to distinguish a legitimate currency note from an illegitimate one, is much needed. These features are extracted using a bounding-box region of interest (ROI) and Canny edge detection in OpenCV, implemented in Python, and a multi-scale template matching algorithm is applied to match the security features and differentiate fake notes from legitimate ones.
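A minimal OpenCV sketch of the described pipeline, Canny edge detection followed by multi-scale template matching of a security-feature template, might look like this; the thresholds, scale range, and file names are illustrative assumptions, not the authors' exact settings.

```python
import cv2
import numpy as np

def multi_scale_match(note_gray, template_gray, scales=np.linspace(0.6, 1.4, 9)):
    """Search for a security-feature template in a banknote image over several scales.
    Returns the best normalized correlation score, its location, and the matching scale."""
    note_edges = cv2.Canny(note_gray, 50, 150)
    best = (-1.0, None, None)  # (score, location, scale)
    for s in scales:
        tmpl = cv2.resize(template_gray, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
        if tmpl.shape[0] > note_edges.shape[0] or tmpl.shape[1] > note_edges.shape[1]:
            continue
        tmpl_edges = cv2.Canny(tmpl, 50, 150)
        result = cv2.matchTemplate(note_edges, tmpl_edges, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[0]:
            best = (max_val, max_loc, s)
    return best

# A low best score across all security-feature templates would flag a suspect note.
# score, loc, scale = multi_scale_match(cv2.imread("note.jpg", 0), cv2.imread("feature.jpg", 0))
```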


2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed widely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution, and multi-scale methods. Each fusion approach captures only a particular type of feature (i.e., the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet as a multi-resolution approach and the ridgelet as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm has been employed in both the ridgelet and wavelet domains to minimise redundancies. Simulations have been performed on different sets of MR and CT-scan images taken from 'The Whole Brain Atlas'. Performance evaluation has been carried out using image quality parameters such as Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM), and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off between the retrieval of information content and of morphological details in the final fused image in the wavelet and ridgelet domains.
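A simplified sketch of PCA-based fusion in the wavelet domain is shown below, using PyWavelets as the transform; the ridgelet variant would swap the transform step. The wavelet choice and the reuse of the approximation-band weights for the detail sub-bands are assumptions, not the paper's exact algorithm.

```python
import numpy as np
import pywt

def pca_weights(a, b):
    """Weights from the principal eigenvector of the 2x2 covariance of two coefficient sets."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def wavelet_pca_fuse(img1, img2, wavelet="db2"):
    """Fuse two co-registered grayscale images (e.g., MR and CT) in the wavelet domain."""
    c1 = pywt.dwt2(img1, wavelet)   # (approx, (horizontal, vertical, diagonal))
    c2 = pywt.dwt2(img2, wavelet)
    w = pca_weights(c1[0], c2[0])   # PCA weights from the approximation sub-bands
    fused_approx = w[0] * c1[0] + w[1] * c2[0]
    fused_details = tuple(w[0] * d1 + w[1] * d2 for d1, d2 in zip(c1[1], c2[1]))
    return pywt.idwt2((fused_approx, fused_details), wavelet)

mr = np.random.rand(128, 128)   # stand-ins for registered MR and CT slices
ct = np.random.rand(128, 128)
fused = wavelet_pca_fuse(mr, ct)
print(fused.shape)
```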


Author(s):  
Shenghan Mei ◽  
Xiaochun Liu ◽  
Shuli Mei

Locust slice images exhibit features such as strong self-similarity, piecewise smoothness, and nonlinear texture structure. The multi-scale interpolation operator is an effective tool for describing such structures, but it cannot overcome the influence of noise on the images. Therefore, this research designs a Shannon–Cosine wavelet that possesses the desirable properties of interpolation, smoothness, compact support, and normalization, and then constructs a multi-scale wavelet interpolation operator that can decompose and reconstruct images adaptively. Combining this operator with local filter operators (mean and median), a multi-scale Shannon–Cosine wavelet denoising algorithm based on cell filtering is constructed. The algorithm overcomes the disadvantage of the multi-scale interpolation wavelet, which is only suitable for describing smooth signals, and achieves multi-scale noise reduction for locust slice images. The experimental results show that the proposed method preserves the various texture structures in the locust slice images. In the experiments, locust slice images contaminated with mixed Gaussian and salt-and-pepper noise are taken as examples to compare the performance of the proposed method with other typical denoising methods. The results show that the Peak Signal-to-Noise Ratio (PSNR) of the images denoised by the proposed method is greater by 27.3%, 24.6%, 2.94%, and 22.9% than that of the Wiener filter, the wavelet transform method, median filtering, and average filtering, respectively; and the Structural Similarity Index (SSIM) is greater by 31.1%, 31.3%, 15.5%, and 10.2% than that of the same four methods, respectively. As the variance of the Gaussian white noise increases from 0.02 to 0.1, the PSNR and SSIM values obtained by the proposed method decrease by only 11.94% and 13.33%, respectively, a much smaller drop than for the other four methods. This shows that the proposed method possesses stronger adaptability.
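For reference, the kind of evaluation described here, mixed Gaussian and salt-and-pepper noise followed by PSNR/SSIM measurement, can be sketched with scikit-image and SciPy as follows; the median filter stands in as a simple baseline and is not the authors' Shannon–Cosine wavelet scheme.

```python
import numpy as np
from skimage import data, util
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from scipy.ndimage import median_filter

clean = util.img_as_float(data.camera())                   # stand-in for a locust slice image
noisy = util.random_noise(clean, mode="gaussian", var=0.02)
noisy = util.random_noise(noisy, mode="s&p", amount=0.05)  # mixed Gaussian + salt-and-pepper

denoised = median_filter(noisy, size=3)                    # baseline filter for comparison

psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
ssim = structural_similarity(clean, denoised, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```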


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 4904-4915 ◽  
Author(s):  
Jianhua Wu ◽  
Yangyang Wan ◽  
Yao-Yi Chiang ◽  
Zhongliang Fu ◽  
Min Deng

2021 ◽  
Author(s):  
Elizabeth Ing-Simmons ◽  
Nick Machnik ◽  
Juan M Vaquerizas

We previously presented Comparison of Hi-C Experiments using Structural Similarity (CHESS), an approach that applies the concept of the structural similarity index (SSIM) to Hi-C matrices, and demonstrated that it could be used to identify both regions with similar 3D chromatin conformation across species, and regions with different chromatin conformation in different conditions. In contrast to the claim of Lee et al. that the SSIM output of CHESS is independent of the input data, here we confirm that SSIM depends on both local and global properties of the input Hi-C matrices. We provide two approaches for using CHESS to highlight regions of differential genome organisation for further investigation, and expanded guidelines for choosing appropriate parameters and controls for these analyses.
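A much-simplified illustration of applying SSIM to Hi-C matrices in a CHESS-like sliding-window fashion (not the CHESS implementation or its API) could look like this; the window size, step, and data-range handling are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def region_ssim(hic_a, hic_b, window=100, step=50):
    """Slide a window along the diagonal of two Hi-C matrices and record SSIM per region.
    Low-SSIM regions are candidates for differential genome organisation."""
    scores = []
    n = min(hic_a.shape[0], hic_b.shape[0])
    for start in range(0, n - window + 1, step):
        sub_a = hic_a[start:start + window, start:start + window]
        sub_b = hic_b[start:start + window, start:start + window]
        rng = max(sub_a.max() - sub_a.min(), sub_b.max() - sub_b.min(), 1e-9)
        scores.append((start, structural_similarity(sub_a, sub_b, data_range=rng)))
    return scores

a = np.random.rand(500, 500)
a = (a + a.T) / 2            # toy symmetric "Hi-C" matrices
b = np.random.rand(500, 500)
b = (b + b.T) / 2
print(region_ssim(a, b)[:3])
```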

