smoothing effects
Recently Published Documents

TOTAL DOCUMENTS: 127 (FIVE YEARS: 21)
H-INDEX: 21 (FIVE YEARS: 2)

2021 ◽  
pp. 1-10
Author(s):  
K. Seethalakshmi ◽  
S. Valli

Deep learning combined with fuzzy logic is highly modular and more accurate. An Adaptive Fuzzy Anisotropic Diffusion Filter (FADF) is used to remove noise from the image while preserving edges and lines and improving smoothing effects. Edge and noise information is detected through pre-edge detection using fuzzy contrast enhancement, post-edge detection using a fuzzy morphological gradient filter, and a noise detection technique. The Convolutional Neural Network (CNN) ResNet-164 architecture is used for automatic feature extraction, and the resulting feature vectors are classified using ANFIS deep learning. The top-1 error rate is reduced from 21.43% to 18.8%, and the top-5 error rate is reduced to 2.68%. The proposed method achieves a high accuracy rate at low computational cost: a recognition rate of 99.18% and an accuracy of 98.24% on standard datasets. Compared with existing techniques, the proposed method performs better in all aspects, with experimental results surpassing existing techniques on the FACES 94, FERET, Yale-B, CMU-PIE, JAFFE, and other state-of-the-art datasets.
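The FADF described above builds fuzzy logic on top of classical anisotropic diffusion. As a point of reference, here is a minimal sketch of plain Perona-Malik anisotropic diffusion (without the fuzzy pre/post-edge components the paper adds), showing how the edge-stopping conduction term preserves edges while smoothing flat regions; all parameter values are illustrative assumptions.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
    """Classical Perona-Malik anisotropic diffusion: smooths flat
    regions while preserving edges (large gradients diffuse less)."""
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours
        dn = np.roll(img, -1, axis=0) - img   # north
        ds = np.roll(img, 1, axis=0) - img    # south
        de = np.roll(img, -1, axis=1) - img   # east
        dw = np.roll(img, 1, axis=1) - img    # west
        # Edge-stopping conduction coefficients (exponential variant):
        # near-zero where the local gradient is large, so edges survive
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img

# Noisy step edge: diffusion should reduce noise but keep the edge.
rng = np.random.default_rng(0)
step = np.zeros((32, 32))
step[:, 16:] = 100.0
noisy = step + rng.normal(0, 5, step.shape)
smoothed = anisotropic_diffusion(noisy)
```

The FADF replaces the fixed conduction function with fuzzy membership rules driven by the detected edge and noise maps.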


2021 ◽  
Vol 14 (11) ◽  
pp. 7355-7368
Author(s):  
Sheng Li ◽  
Ke Du

Abstract. Optical remote sensing (ORS) combined with the computerized tomography (CT) technique is a powerful tool for retrieving a two-dimensional concentration map over an area under investigation. Whereas medical CT typically uses hundreds of thousands of beams, ORS-CT typically uses only dozens, severely limiting the spatial resolution and the quality of the reconstructed map. A priori smoothness information is therefore crucial for ORS-CT. Algorithms that produce smooth reconstructions include smooth basis function minimization, grid translation and multiple grid (GT-MG), and low third derivative (LTD), among which the LTD algorithm is promising because of its speed. However, its theoretical basis must be clarified to better understand the characteristics of its smoothness constraints, and its computational efficiency and reconstruction quality need to be improved for practical applications. This paper first treats the LTD algorithm as a special case of Tikhonov regularization that uses an approximation of the third-order derivative as the regularization term. Then, to seek more flexible smoothness constraints, we incorporate the smoothness seminorm used in variational interpolation theory into the reconstruction problem. The smoothing effects can thus be well understood through the close relationship between the variational approach and spline functions, and other algorithms can be formulated by using different seminorms. On the basis of this idea, we propose a new minimum curvature (MC) algorithm using a seminorm that approximates the sum of the squares of the curvature, which reduces the number of linear equations to half that of the LTD algorithm. The MC algorithm was compared with the non-negative least squares (NNLS), GT-MG, and LTD algorithms on multiple test maps.
Compared with the LTD algorithm, the MC algorithm shows similar reconstruction quality but requires only approximately 65 % of the computation time. It is also simpler to implement than the GT-MG algorithm because it uses high-resolution grids directly during the reconstruction process. Compared with the traditional NNLS algorithm, it performs better in three respects: (1) the nearness of the reconstructed maps is improved by more than 50 %; (2) the peak location accuracy is improved by 1-2 m; and (3) the exposure error is improved by a factor of 2 to 5. The test results indicate the effectiveness of the new algorithm derived from the variational approach, and more specific algorithms could be formulated and evaluated in the same way. This study promotes the practical application of ORS-CT mapping of atmospheric chemicals.
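The abstract frames the LTD algorithm as Tikhonov regularization with a derivative-based penalty. The core idea can be sketched in a 1-D toy problem (the beam geometry, grid size, and the use of a second-difference operator here are illustrative assumptions; the LTD algorithm penalizes third derivatives and the MC algorithm curvature-like terms on a 2-D grid):

```python
import numpy as np

def tikhonov_reconstruct(A, b, D, lam):
    """Solve min ||A x - b||^2 + lam * ||D x||^2 via the normal
    equations (A^T A + lam D^T D) x = A^T b."""
    lhs = A.T @ A + lam * (D.T @ D)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# Toy setup: a few "beams" (random averaging rows) observe a smooth
# concentration profile on a fine grid. The system is severely
# underdetermined, so the smoothness penalty is what makes the
# reconstruction well behaved -- the situation ORS-CT faces.
rng = np.random.default_rng(1)
n = 50                                              # grid cells
x_true = np.exp(-((np.arange(n) - 25) / 6.0) ** 2)  # smooth peak
A = rng.random((12, n))                             # 12 beams, toy geometry
b = A @ x_true
# Second-difference operator as the smoothness seminorm
D = np.diff(np.eye(n), n=2, axis=0)
x_rec = tikhonov_reconstruct(A, b, D, lam=1e-3)
# Unregularized minimum-norm solution, for comparison
x_min_norm = np.linalg.lstsq(A, b, rcond=None)[0]
```

The regularized solution is far smoother than the raw minimum-norm least-squares solution, which is the qualitative effect the smoothness a priori information provides.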


Author(s):  
Anh Tuan Nguyen ◽  
Tomás Caraballo ◽  
Nguyen Huy Tuan

In this study, we investigate the initial value problem (IVP) for a time-fractional fourth-order equation with nonlinear source terms. More specifically, we consider the time-fractional biharmonic equation with exponential nonlinearity and the time-fractional Cahn–Hilliard equation. Using the Fourier transform, we prove a generalized formula for the mild solution as well as the smoothing effects of the resolvent operators. For the IVP associated with the first equation, using the Orlicz space with the function $\Xi (z)={\textrm {e}}^{|z|^{p}}-1$ and some embeddings between it and the usual Lebesgue spaces, we prove that, for regular initial data, the solution either exists globally in time or blows up in finite time. In the case of singular initial data, local-in-time/global-in-time existence and uniqueness are derived, and the regularity of the mild solution is investigated. For the IVP associated with the second equation, some modifications to the generalized formula are made to handle the nonlinear term. We also establish some important estimates for the derivatives of the resolvent operators, which form the basis for using a Picard sequence to prove local-in-time existence of the solution.
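The abstract does not display the equations themselves; under the standard conventions for this class of problems (an assumption on our part, not taken from the abstract), the first IVP takes a form like

```latex
\begin{cases}
\partial_t^{\alpha} u + \Delta^2 u = f(u), & x \in \mathbb{R}^d,\ t>0,\\
u(x,0) = u_0(x), & x \in \mathbb{R}^d,
\end{cases}
\qquad 0<\alpha<1,
```

where $\partial_t^{\alpha}$ denotes a Caputo-type fractional derivative and $f$ grows exponentially, consistent with the Orlicz function $\Xi(z)=\mathrm{e}^{|z|^{p}}-1$ used in the analysis; in the time-fractional Cahn–Hilliard case the source term instead involves a nonlinearity of the form $\Delta(u^{3}-u)$.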


2021 ◽  
pp. 1-18
Author(s):  
Tiejun Yang ◽  
Lu Tang ◽  
Qi Tang ◽  
Lei Li

OBJECTIVE: To address blurred structural details and over-smoothing effects in sparse-representation dictionary-learning reconstruction algorithms, this study tests a sparse-angle CT reconstruction method using a weighted dictionary-learning algorithm based on adaptive group-sparsity regularization (AGSR-SART). METHODS: First, a new similarity measure is defined in which covariance is introduced into the Euclidean distance, and non-local image patches are adaptively divided into groups of different sizes as the basic unit of sparse representation. Second, the weight factor of the regularization terms is designed from the residuals of the dictionary representation, so that the algorithm applies different degrees of smoothing to different regions of the image during the iterative process. The sparse reconstructed image is modified according to the difference between the estimated value and the intermediate image. Last, the Split Bregman Iteration (SBI) algorithm is used to solve the objective function. An abdominal image, a pelvic image, and a thoracic image are used to evaluate the performance of the proposed method. RESULTS: In quantitative evaluations, experimental results show that the new algorithm yields a PSNR of 48.20, a maximum SSIM of 99.06%, and a minimum MAE of 0.0028. CONCLUSIONS: This study demonstrates that the new algorithm better preserves structural details in reconstructed CT images. It eliminates the effect of excessive smoothing in sparse-angle reconstruction and enhances the sparseness and non-local self-similarity of the image, making it superior to several existing reconstruction algorithms.
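The quantitative evaluation above relies on PSNR, SSIM, and MAE. As a minimal numpy sketch (the phantom and noise level are illustrative; SSIM is omitted since it is usually computed with a library such as scikit-image's structural_similarity), the two simpler metrics are:

```python
import numpy as np

def psnr(ref, rec, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mae(ref, rec):
    """Mean absolute error between reference and reconstruction."""
    return float(np.mean(np.abs(ref - rec)))

# Toy stand-in for a CT slice and a near-perfect reconstruction
rng = np.random.default_rng(2)
phantom = rng.random((64, 64))
recon = phantom + rng.normal(0, 0.01, phantom.shape)
p, m = psnr(phantom, recon), mae(phantom, recon)
```

Higher PSNR and lower MAE indicate a reconstruction closer to the reference, which is the sense in which the reported 48.20 PSNR and 0.0028 MAE are read.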


Author(s):  
Gabriele Grillo ◽  
Giulia Meglioli ◽  
Fabio Punzo

Abstract. We consider the porous medium equation with a power-like reaction term, posed on Riemannian manifolds. Under certain assumptions on p and m in (1.1), and for small enough nonnegative initial data, we prove existence of global-in-time solutions, provided that the Sobolev inequality holds on the manifold. Furthermore, when both the Sobolev and the Poincaré inequalities hold, similar results hold under weaker assumptions on the forcing term. By the same functional-analytic methods, we investigate global existence for solutions to the porous medium equation with source term and variable density in $\mathbb{R}^n$.
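Equation (1.1) is not reproduced in the abstract; for this class of results the problem is typically of the form (our assumption, following the standard setting)

```latex
\begin{cases}
\partial_t u = \Delta u^{m} + u^{p} & \text{on } M \times (0,T),\\
u = u_0 \ge 0 & \text{on } M \times \{0\},
\end{cases}
```

with $m>1$ (slow diffusion) and $p>1$, the variable-density case on $\mathbb{R}^n$ weighting the time derivative by a density $\rho(x)$.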


2021 ◽  
Author(s):  
Saioa A. Campuzano ◽  
Angelo De Santis ◽  
Martina Orlando ◽  
F. Javier Pavón-Carrasco ◽  
Alberto Molina-Cardín

The Shannon information, or information content, is a statistical measure of a system that characterises its degree of organisation (maximum value) or disorder (minimum value). Once the scalar potential of the geomagnetic field is introduced in terms of a spherical harmonic expansion, it is straightforward to define the Shannon information by an expression involving the Gauss coefficients [De Santis et al., EPSL, 2004]. Some recent models of the past geomagnetic field, covering the two most recent excursions, i.e. the Laschamp (~41 ka) and Mono Lake (~34 ka) events, allow us to calculate the Shannon information over the periods of those events and to compare the results. It is expected that, as an excursion approaches, the Shannon information decreases, i.e. the disorder of the system increases. From the behaviour in time of the Shannon information calculated from the Gauss coefficients of three geomagnetic field reconstructions that span the last excursions, i.e. IMOLE, GGF100k, and LSMOD2, a decrease in the Shannon information is observed that appears to anticipate the impending excursions some time in advance. This result must be taken with caution because the reconstructions used are based on sedimentary data, which could present smoothing effects related to the magnetisation acquisition mechanism.
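The idea of an entropy defined from Gauss coefficients can be sketched as follows. This is an illustrative assumption about the construction (the exact normalisation in De Santis et al. (2004) may differ): build a probability distribution from the Lowes-Mauersberger power per harmonic degree and take its (negative) Shannon entropy, so that a dipole-dominated (organised) field scores higher than a flat-spectrum (disordered) one.

```python
import numpy as np

def shannon_information(gauss_coeffs):
    """Illustrative Shannon information of a geomagnetic field model:
    negative entropy of the normalised power spectrum built from the
    Gauss coefficients.  gauss_coeffs maps degree n -> array of the
    g_n^m, h_n^m values for that degree."""
    power = np.array([(n + 1) * np.sum(np.square(c))
                      for n, c in sorted(gauss_coeffs.items())])
    p = power / power.sum()            # probabilities per degree
    p = p[p > 0]
    # sum(p log p) = -entropy: closer to 0 means more organised
    return float(np.sum(p * np.log(p)))

# Hypothetical coefficients (nT): a dipole-dominated, ordered field
# versus a flat, disordered spectrum of the kind seen near excursions.
dipole_like = {1: np.array([30000.0, 2000.0, 5000.0]),
               2: np.array([1000.0, 500.0, 300.0, 200.0, 100.0])}
flat = {1: np.array([1000.0] * 3), 2: np.array([1000.0] * 5)}
```

A drop of this quantity toward more negative values is the signature the abstract associates with an impending excursion.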


2021 ◽  
Author(s):  
Hu Liu ◽  
Wenzhi Zhao ◽  
Yang Yu ◽  
Li Guo ◽  
Jintao Liu

A preferential flow (PF)-dominated soil structure is often considered a unique system consisting of micropores and macropores and is thus expected to exert dual-pore filtering effects on hydrological signals, whereby smoothing effects are stronger for matrix flow and weaker for PF via macropores. Using time series of hydrological signals (precipitation, canopy interception, throughfall, soil moisture, evapotranspiration, water storage in soil and groundwater, and catchment discharge) propagating through the Shale Hills Catchment and representative soil series, the filtering effects of the catchment and soil profiles were tested through wavelet analysis. The hypothesized dual-pore-style filtering effects of the soil profile were confirmed through the coherence spectra and phase differences, making them applicable as possible "fingerprints" of PF for inferring subsurface flow features. We found that PF dominates the catchment's discharge response at scales from three to twelve days, contributing to the catchment discharge mainly as subsurface lateral flow in the upper and middle soil horizons. Through subsurface PF pathways, even the hilltop is likely hydrologically connected to the valley floor, building connections with, and making contributions to, the catchment discharge. This work highlights the potential of wavelet analysis for retrieving and characterizing subsurface flow processes based on the revealed dual-pore filtering effects of the soil system.
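The filtering idea above can be sketched with a toy synthetic example. This is not the study's wavelet method: as a stand-in, it uses ordinary spectral coherence (scipy.signal.coherence) between a synthetic rainfall forcing and a matrix-flow-like response, modelled as a strong exponential smoother plus noise (all signals and parameters are invented for illustration). A heavily smoothed (matrix-flow-like) response stays coherent with the forcing only at low frequencies, whereas a weakly filtered (PF-like) response would stay coherent across the spectrum.

```python
import numpy as np
from scipy.signal import coherence, lfilter

rng = np.random.default_rng(3)
precip = rng.random(4096)        # synthetic rainfall forcing (white)
# Matrix flow as a strong exponential smoother (slow soil response)
# plus measurement noise; preferential flow via macropores would be
# a much weaker filter.
alpha = 0.05
matrix_flow = (lfilter([alpha], [1, -(1 - alpha)], precip)
               + 0.02 * rng.normal(size=precip.size))
# Magnitude-squared coherence between forcing and response
f, Cxy = coherence(precip, matrix_flow, nperseg=256)
```

High coherence at low frequencies and low coherence at high frequencies is the spectral fingerprint of a strong smoother, analogous to the dual-pore filtering contrast the study exploits.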


Author(s):  
Fiaz Hussain ◽  
Ray-Shyan Wu

Hydraulic conductivity is a key and one of the most uncertain parameters in groundwater modeling. Grid-based numerical simulation requires the spatial distribution of hydraulic conductivity, interpolated from sampled to unsampled locations across the study area. This spatial interpolation has routinely been performed using variogram-based models (two-point geostatistical methods). These traditional techniques fail to capture complex geological structures, produce smoothing effects, and ignore the higher-order moments of subsurface heterogeneity. In this work, a multiple-point geostatistics (MPS) method is applied to interpolate hydraulic conductivity data, which will then be used in the WASH123D numerical groundwater simulation model for regional smart groundwater management. MPS requires training images (TIs) as a key input. A TI is a conceptual model of subsurface geological heterogeneity, developed here using the concept of geological ages, topographic slope as an index criterion, and expert geological knowledge. After considering the full physics of the study area, an example demonstrates the advantages of multiple-point geostatistics over traditional two-point geostatistical methods (such as kriging) for interpolating hydraulic conductivity data in a complex geological formation.
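The two-point baseline that MPS is compared against can be sketched directly. Below is a minimal 1-D ordinary kriging implementation with an exponential variogram (the sample locations, log-conductivity values, and variogram parameters are hypothetical); its weighted-average form is exactly why kriged conductivity fields show the smoothing effects criticized above.

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, sill=1.0, rng_len=10.0):
    """Ordinary kriging with exponential variogram
    gamma(h) = sill * (1 - exp(-h / rng_len)).  Two-point
    geostatistics: the estimate is a weighted average of samples,
    with weights solved from the kriging system."""
    def gamma(h):
        return sill * (1.0 - np.exp(-h / rng_len))
    n = len(xs)
    # Kriging system with a Lagrange multiplier enforcing sum(w) = 1
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = gamma(np.abs(xs[:, None] - xs[None, :]))
    K[n, :n] = K[:n, n] = 1.0
    rhs = np.append(gamma(np.abs(xs - x0)), 1.0)
    w = np.linalg.solve(K, rhs)[:n]
    return float(w @ zs)

# Hypothetical log10-conductivity samples along a 1-D transect [m]
xs = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
zs = np.array([-4.0, -3.5, -5.0, -2.0, -3.0])
z_est = ordinary_kriging(xs, zs, x0=12.0)
```

Kriging honours the data exactly at sample points but smooths between them; MPS instead reproduces patterns learned from the training image, preserving connected geological structures that a variogram cannot encode.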


2020 ◽  
Author(s):  
Yashar Zeighami ◽  
Alan C. Evans

Abstract. Association and prediction studies of the brain target the biological consequences of aging and their impact on brain function. Such studies are conducted using different smoothing levels and parcellations at the preprocessing stage, on which their results depend. However, the impact of these parameters on the relationship between association values and prediction accuracy has not been established. In this study, we used cortical thickness and its relationship with age to investigate how different smoothing and parcellation levels affect the detection of age-related brain correlates as well as brain-age prediction accuracy. Our main measures were resel (resolution element) numbers and the age-related variance explained. Using these common measures enabled us to directly compare parcellation and smoothing effects in both association and prediction studies. In our sample of N=608 participants aged 18-88, we evaluated age-related cortical thickness changes as well as brain-age prediction. We found a negative relationship between prediction performance and correlation values for both parameters. Our results also quantify the relationship between delta-age estimates obtained with different processing parameters. Furthermore, by directly comparing the two approaches, we highlight the importance of the correct choice of smoothing and parcellation parameters for each task, and how they can affect the results of the analysis in opposite directions.
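Why smoothing changes an age association can be illustrated with a toy simulation (not the study's data or pipeline; the vertex count, effect size, and noise level are invented): vertex-wise "thickness" carries a localized age effect plus independent noise, and Gaussian smoothing along the vertex axis averages the noise away faster than it blurs the effect, raising the observed correlation with age.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)
n_subj, n_vert = 200, 100
age = rng.uniform(18, 88, n_subj)
# Hypothetical age effect: thinning concentrated around vertex 50
effect = 0.01 * np.exp(-((np.arange(n_vert) - 50) / 10.0) ** 2)
thickness = (3.0 - np.outer(age, effect)
             + rng.normal(0, 0.5, (n_subj, n_vert)))
# 1-D stand-in for surface-based smoothing on the cortical mesh
smoothed = gaussian_filter1d(thickness, sigma=3.0, axis=1)

def age_corr(data, v):
    """Absolute Pearson correlation between age and thickness at v."""
    return abs(np.corrcoef(age, data[:, v])[0, 1])

r_raw, r_smooth = age_corr(thickness, 50), age_corr(smoothed, 50)
```

Smoothing boosts the association here, but the same averaging discards fine-grained features a predictive model could exploit, which is one way the association/prediction trade-off described above can arise.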

