central pixel
Recently Published Documents


TOTAL DOCUMENTS

47
(FIVE YEARS 18)

H-INDEX

8
(FIVE YEARS 2)

2021 ◽  
Vol 13 (23) ◽  
pp. 4927
Author(s):  
Zhao Wang ◽  
Fenlong Jiang ◽  
Tongfei Liu ◽  
Fei Xie ◽  
Peng Li

Joint analysis of spatial and spectral features has always been an important approach to change detection in hyperspectral images. However, many existing methods cannot extract effective spatial features from the data itself. Moreover, when combining spatial and spectral features, a rough, globally uniform combination ratio is usually required. To address these problems, this paper proposes a novel attention-based spatial and spectral network with a PCA-guided self-supervised feature extraction mechanism to detect changes in hyperspectral images. The framework is divided into two steps. First, a self-supervised mapping is established from each patch of the difference map to the principal components of the patch's central pixel; a multi-layer convolutional neural network then extracts the main spatial features of the differences. In the second step, an attention mechanism is introduced: the weighting factor between the spatial and spectral features of each pixel is adaptively calculated from the concatenated spatial and spectral features, and the calculated factor is applied proportionally to the corresponding features. Finally, joint analysis of the weighted spatial and spectral features yields the change status of pixels at different positions. Experimental results on several real hyperspectral change detection data sets demonstrate the effectiveness and advantages of the proposed method.
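The per-pixel adaptive weighting step can be sketched as follows. This is a minimal NumPy illustration, not the paper's network: the sigmoid gate and the given projection vector `w` are assumptions standing in for the learned attention layers.

```python
import numpy as np

def attention_fuse(spatial, spectral, w, b):
    """Adaptively weight spatial vs. spectral features per pixel.

    spatial, spectral: (n_pixels, d) feature arrays.
    w: (2*d,) projection vector, b: scalar bias (learned in the paper;
    given constants here). Returns the gate and the fused features.
    """
    concat = np.concatenate([spatial, spectral], axis=1)   # (n, 2d)
    alpha = 1.0 / (1.0 + np.exp(-(concat @ w + b)))        # sigmoid gate in (0, 1)
    # Apply the factor proportionally: alpha to spatial, (1 - alpha) to spectral.
    fused = np.concatenate([alpha[:, None] * spatial,
                            (1.0 - alpha)[:, None] * spectral], axis=1)
    return alpha, fused
```

With a zero projection the gate is neutral (0.5), i.e. spatial and spectral features contribute equally; training would move the gate per pixel.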


2021 ◽  
Vol 12 (1) ◽  
pp. 9-21
Author(s):  
Xiang-Song Zhang ◽  
Wei-Xin Gao ◽  
Shi-Ling Zhu

To eliminate mixed salt-and-pepper and Gaussian noise in X-ray weld images, the extreme-value characteristics of salt-and-pepper noise are used to separate the mixed noise, and a non-local means filtering algorithm is used for denoising. Because the exponential weighting kernel is too smooth and tends to blur image details, a weighted Gaussian kernel with a cosine coefficient is adopted instead, yielding an improved non-local means denoising algorithm. Experimental results show that the new algorithm reduces noise while retaining the details of the original image, increasing the peak signal-to-noise ratio by 1.5 dB. An adaptive salt-and-pepper noise removal algorithm is also proposed, which automatically adjusts the filtering window and identifies the noise probability. First, a median filter is applied to the image, and the result is compared with the unfiltered image to locate noise points. The image noise probability is then estimated from the weighted average of the middle three groups of data in each filtering window. Before filtering, obvious noise points are removed by thresholding, and the central pixel is then estimated from its neighbours, each weighted by the reciprocal square of its distance to the window centre. Finally, the output estimates of the different models are fused according to Takagi-Sugeno (T-S) fuzzy rules using the noise probability. Experimental results show that the algorithm performs automatic noise estimation and adaptive window adjustment; after filtering, the standard mean square deviation is reduced by more than 20% and the speed more than doubles. For enhancement, a nonlinear image enhancement method is proposed that adjusts its parameters adaptively and enhances the weld area automatically rather than the background, achieving good subjective visual quality. Compared with the traditional method, the enhancement effect is better and more in line with the needs of the industrial field.
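The inverse-square-distance estimate of a noisy central pixel described above can be sketched as follows. This is a minimal NumPy illustration; the noise mask input and the exclusion of the centre itself are assumptions about details the abstract leaves open.

```python
import numpy as np

def estimate_center(window, noise_mask):
    """Estimate a noisy central pixel from its window neighbours,
    weighting each clean pixel by the reciprocal square of its
    distance to the window centre.

    window: (k, k) array; noise_mask: (k, k) bool, True = noisy pixel.
    """
    k = window.shape[0]
    c = k // 2
    ys, xs = np.mgrid[0:k, 0:k]
    d2 = (ys - c) ** 2 + (xs - c) ** 2          # squared distance to centre
    valid = (~noise_mask) & (d2 > 0)            # exclude centre and noisy pixels
    w = 1.0 / d2[valid]                         # reciprocal-square weights
    return float(np.sum(w * window[valid]) / np.sum(w))
```

For a 3 x 3 window whose centre is an impulse, the estimate is simply the weighted mean of the eight clean neighbours.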


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 607
Author(s):  
Igor Pušnik ◽  
Gregor Geršak

In numerous applications, including the current body temperature monitoring in viral pandemic management, thermal imaging cameras are used for quantitative measurements. These require determination of the measurement accuracy (error) and its traceability (measurement uncertainty). Within error estimation, the size-of-source effect (SSE) is an important error source. The SSE is the relation between the physical size of a target and the instrument's nominal target size. This study presents a direct evaluation of the error due to the SSE. A stable and uniform temperature, generated by blackbodies, was measured by a high-quality thermal imager. To limit the generated radiation, custom-made blocking tiles with different apertures were used. The effects of aperture shape and position, camera-target distance and temperature level on the error were investigated. The findings suggest that, due to the SSE, the measured temperatures are too low, especially at longer camera-target distances. The SSE error depends on the number of pixels available and included in the region of interest over which the measurement is performed. For an accurate temperature measurement, an array of at least 10 × 10 pixels should be exposed to the observed target radiation, while a central area of 3 × 3 pixels should be included in the temperature calculation.
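The 10 × 10 exposure / 3 × 3 readout rule can be expressed as a small helper. The function and its arguments are hypothetical names for illustration only; the thresholds follow the study's recommendation.

```python
import numpy as np

def target_temperature(img, top, left, size):
    """Temperature readout following the 10 x 10 / 3 x 3 rule:
    the target must span at least 10 x 10 pixels, and the reported
    value is the mean of the central 3 x 3 pixel area.

    img: 2-D temperature map; (top, left, size) locate a square target.
    """
    if size < 10:
        raise ValueError("target smaller than 10 x 10 pixels: SSE error likely")
    roi = img[top:top + size, left:left + size]
    c = size // 2
    return float(roi[c - 1:c + 2, c - 1:c + 2].mean())
```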


2020 ◽  
Author(s):  
Xiang-Song Zhang ◽  
Wei-Xin Gao ◽  
Shi-Ling Zhu



2020 ◽  
Vol 12 (21) ◽  
pp. 3673
Author(s):  
Mengxue Liu ◽  
Xiangnan Liu ◽  
Xiaobin Dong ◽  
Bingyu Zhao ◽  
Xinyu Zou ◽  
...  

The use of spatiotemporal data fusion as an effective data interpolation method has received extensive attention in remote sensing (RS) academia. The enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM) is one of the best-known spatiotemporal data fusion methods and is widely used to generate synthetic data. However, the ESTARFM algorithm uses moving windows of a fixed size to gather information around the central pixel, which hampers the efficiency and precision of spatiotemporal data fusion. In this paper, a modified ESTARFM data fusion algorithm that integrates surface spatial information via a statistical method was developed. In the modified algorithm, the local variance of the pixels around the central one is used as an index to adaptively determine the window size. Satellite images from two regions were processed with both ESTARFM and the modified algorithm. The results showed that the images predicted by the modified algorithm retained more detail than those of ESTARFM: the proportion of pixels whose absolute difference between the observed and predicted mean reflectance over six bands fell between 0 and 0.04 was 78% for ESTARFM and 85% for the modified algorithm. In addition, the efficiency of the modified algorithm improved, and a verification test showed its robustness. These promising results demonstrate the superiority of the modified algorithm over ESTARFM for providing synthetic images. Our research enriches the spatiotemporal data fusion method, and the automatic moving-window selection strategy lays the foundation for automatic, large-scale processing of spatiotemporal data fusion.
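The variance-guided window selection can be sketched as below. The candidate sizes and the variance threshold are illustrative assumptions, not values from the paper: the window grows while the neighbourhood stays homogeneous.

```python
import numpy as np

def adaptive_window(img, y, x, sizes=(3, 5, 7, 9), var_thresh=25.0):
    """Pick a moving-window size around the central pixel (y, x) from
    local variance: keep enlarging the window while the local variance
    stays below the threshold, and stop once it becomes heterogeneous."""
    best = sizes[0]
    for s in sizes:
        h = s // 2
        win = img[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
        if win.var() > var_thresh:
            break                      # heterogeneous neighbourhood: stop growing
        best = s
    return best
```

A homogeneous region therefore gets the largest candidate window, while a textured region keeps the smallest.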


2020 ◽  
Vol 12 (21) ◽  
pp. 3577
Author(s):  
Siyong Chen ◽  
Xiaoyan Wang ◽  
Hui Guo ◽  
Peiyao Xie ◽  
Jian Wang ◽  
...  

Seasonal snow cover is closely related to regional climate and hydrological processes. In this study, Moderate Resolution Imaging Spectroradiometer (MODIS) daily snow cover products from 2001 to 2018 were applied to analyze snow cover variation in northern Xinjiang, China. As cloud obscuration causes significant spatiotemporal discontinuities in the binary snow cover extent (SCE), we propose a conditional probability interpolation method based on a space-time cube (STCPI) to remove clouds completely after combining Terra and Aqua data. First, the conditional probability that the central pixel and each neighboring pixel in a 5 × 5 × 5 space-time cube share the same snow condition is counted. Then the snow probability of each cloud pixel is calculated based on its space-time cube. Finally, the snow condition of the cloud pixels is recovered from the snow probability. Validation experiments under the cloud assumption indicate that STCPI can remove clouds completely and achieves an overall accuracy of 97.44% under different cloud fractions. The generated daily cloud-free MODIS SCE products show high agreement with a Landsat-8 OLI image, with an overall accuracy of 90.34%. Snow cover variation in northern Xinjiang, China, from 2001 to 2018 was investigated based on snow cover area (SCA) and snow cover days (SCD). The results show that the interannual change of SCA gradually decreases as elevation increases, and that SCD and elevation are positively correlated. Furthermore, the interannual SCD variation shows that the area of increase exceeded the area of decrease over the 18 years.
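The cube-based recovery step can be sketched as follows. The class codes and the use of a precomputed per-neighbour agreement-probability array are simplifying assumptions about the published procedure.

```python
import numpy as np

# Codes in the daily SCE cube: 0 = land, 1 = snow, 2 = cloud.
def snow_probability(cube, agree_prob):
    """Snow probability of the cloudy central pixel of a 5 x 5 x 5
    space-time cube (time, row, col): each clear neighbour votes for
    its own class, weighted by the precomputed probability that it
    agrees with the centre (agree_prob, same shape as the cube)."""
    clear = cube != 2
    clear[2, 2, 2] = False                      # exclude the centre itself
    w = agree_prob[clear]
    snow_votes = np.sum(w * (cube[clear] == 1))
    return float(snow_votes / np.sum(w))
```

Thresholding the returned probability (e.g. at 0.5) then reclassifies the cloud pixel as snow or land.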


Author(s):  
John Hoang ◽  
Tarek Hassan ◽  
Luis Angel Tejedor ◽  
Juan Abel Barrio ◽  
Marcos López ◽  
...  

2020 ◽  
Vol 13 (6) ◽  
Author(s):  
Giovanni Donato Aquaro ◽  
Chrysanthos Grigoratos ◽  
Antonio Bracco ◽  
Alberto Proclemer ◽  
Giancarlo Todiere ◽  
...  

Background: Late gadolinium enhancement (LGE) is an important prognostic marker in hypertrophic cardiomyopathy, and an extent >15% is associated with a high risk of sudden cardiac death. We proposed a novel method, LGE-dispersion mapping, to assess the heterogeneity of scar, and evaluated its prognostic role in patients with hypertrophic cardiomyopathy. Methods: One hundred eighty-three patients with hypertrophic cardiomyopathy and a low or intermediate 5-year risk of sudden cardiac death underwent cardiac magnetic resonance imaging. A parametric map was generated from each LGE image. A score from 0 to 8 was assigned to every pixel of these maps, indicating the number of surrounding pixels having a different quality (non-enhancement, mild enhancement, or hyper-enhancement) from the central pixel. The Global Dispersion Score (GDS) was calculated as the average score of all pixels of the images. Results: During a median follow-up of 6 (25th–75th percentile, 4–10) years, 22 patients had hard cardiac events (sudden cardiac death, appropriate implantable cardioverter-defibrillator therapy, resuscitated cardiac arrest, or sustained ventricular tachycardia). Kaplan-Meier analysis showed that patients with GDS >0.86 had a worse prognosis than those with lower GDS (P<0.0001). GDS >0.86 was the only independent predictor of cardiac events (hazard ratio, 9.9 [95% CI, 2.9–34.6], P=0.0003). Compared with LGE extent >15%, GDS improved risk classification in these patients (net reclassification improvement, 0.39 [95% CI, 0.11–0.72], P<0.019). Conclusions: LGE-dispersion mapping is a marker of scar heterogeneity and provides better risk stratification than LGE presence and extent in patients with hypertrophic cardiomyopathy and a low-to-intermediate 5-year risk of sudden cardiac death.
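The 0-8 pixel scoring behind the GDS can be sketched directly. The class coding and the border handling are assumptions here; the abstract only fixes the three tissue classes and the 8-neighbour count.

```python
import numpy as np

def global_dispersion_score(classes):
    """Global Dispersion Score of a parametric LGE map.

    classes: 2-D int array of pixel classes (e.g. 0 = non-enhancement,
    1 = mild enhancement, 2 = hyper-enhancement). Each pixel gets a
    0-8 score: how many of its 8 neighbours differ from it; GDS is the
    mean score. Wrapped borders are excluded as a simplification."""
    scores = np.zeros_like(classes, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(classes, dy, axis=0), dx, axis=1)
            scores += (shifted != classes)      # neighbour differs -> +1
    return float(scores[1:-1, 1:-1].mean())     # ignore wrapped borders
```

A uniform map scores 0; an isolated pixel of a different class raises both its own score (8) and each neighbour's score (1), which is exactly the heterogeneity the GDS captures.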


2020 ◽  
Vol 15 (3) ◽  
pp. 204-211 ◽  
Author(s):  
Muhammad Tahir ◽  
Adnan Idris

Background: Knowledge of the subcellular location of proteins is essential to the comprehension of numerous protein functions. Objective: Accurate, computationally efficient, and reliable automated analysis of protein localization imagery depends greatly on the features calculated from these images. Methods: In the current work, a novel method termed MD-LBP is proposed for feature extraction from fluorescence microscopy protein images. For a given neighborhood, the value of the central pixel is computed as the difference of the global and local means of the input image, which is further used as a threshold to generate a binary pattern for that neighborhood. Results: The performance of our method is assessed on the 2D HeLa dataset using a 5-fold cross-validation protocol. The performance of the MD-LBP method with RBF-SVM as the base classifier is superior to that of the standard LBP algorithm, Threshold Adjacency Statistics, and Haralick texture features. Conclusion: The development of specialized systems for different kinds of medical imagery will pave the way for effective drug discovery in the pharmaceutical industry. Furthermore, biological and bioinformatics-based procedures can be simplified to facilitate drug design in the pharmaceutical industry.
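The thresholding idea can be sketched for a single 3 × 3 neighbourhood. The sign convention of the mean difference and the bit ordering are assumptions, since the abstract does not fix them.

```python
import numpy as np

def md_lbp_code(patch, global_mean):
    """MD-LBP code for one 3 x 3 neighbourhood (sketch): the central
    value is replaced by the difference of the image's global mean and
    the neighbourhood's local mean, and the 8 surrounding pixels are
    thresholded against it to form an 8-bit binary pattern."""
    t = global_mean - patch.mean()                   # surrogate central value
    ring = patch.ravel()[[0, 1, 2, 5, 8, 7, 6, 3]]   # clockwise 8 neighbours
    bits = (ring >= t).astype(int)
    return int(sum(b << i for i, b in enumerate(bits)))
```

The histogram of these codes over all neighbourhoods would then form the image's MD-LBP feature vector.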


2020 ◽  
Vol 12 (5) ◽  
pp. 798
Author(s):  
Honghan Zheng ◽  
Zhipeng Gui ◽  
Huayi Wu ◽  
Aihong Song

Exploring the relationship between nighttime light and land use is of great significance for understanding human nighttime activities and studying socioeconomic phenomena. Models have been studied to explain these relationships, but existing studies seldom consider the spatial autocorrelation of nighttime light data, which leads to large regression residuals and an inaccurate regression relationship between nighttime light and land use. In this paper, two non-negative spatial autoregressive models are proposed, for the spatial lag model and the spatial error model respectively, which use a spatial adjacency matrix to calculate the spatial autocorrelation effect of light in adjacent pixels on the central pixel. The application scenarios of the two models are analyzed, and the contributions of various land use types to nighttime light in different study areas are further discussed. Experiments in Berlin, Massachusetts and Shenzhen showed that the proposed methods correlate better with the reference data than the non-negative least-squares method, better reflecting the luminosity of different land use types at night. Furthermore, the proposed models and the obtained relationship between nighttime light and land use types can be utilized in other applications of nighttime light imagery, such as population, GDP and carbon emission estimation, to better explore the relationship between nighttime remote sensing brightness and socioeconomic activities.
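The spatial lag variant can be sketched as below. Treating the autoregressive coefficient rho as known and fitting the non-negative land-use coefficients by projected gradient descent are simplifications of the paper's estimation procedure; all names are illustrative.

```python
import numpy as np

def spatial_lag_nnls(X, y, W, rho, iters=2000, lr=None):
    """Non-negative regression for the spatial lag model
    y = rho * W y + X beta + eps, with beta >= 0 (sketch).

    X: (n, k) land-use fractions per pixel; y: (n,) nighttime light;
    W: (n, n) row-standardised spatial adjacency matrix. beta is fit
    by projected gradient descent as a stand-in for a full NNLS solver."""
    y_adj = y - rho * (W @ y)                      # remove the spatial lag term
    if lr is None:
        lr = 1.0 / np.linalg.norm(X.T @ X, 2)      # step from the spectral norm
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ beta - y_adj)
        beta = np.maximum(0.0, beta - lr * grad)   # project onto beta >= 0
    return beta
```

The non-negativity constraint keeps each land-use type's estimated light contribution physically interpretable (no negative brightness).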

