Gaussian smoothing and modified histogram normalization methods to improve neural-biomarker interpretations for dyslexia classification mechanism

PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0245579
Author(s):  
Opeyemi Lateef Usman ◽  
Ravie Chandren Muniyandi ◽  
Khairuddin Omar ◽  
Mazlyfarina Mohamad

Achieving biologically interpretable neural-biomarkers and features from neuroimaging datasets is a challenging task in an MRI-based dyslexia study. This challenge becomes more pronounced when the needed MRI datasets are collected from multiple heterogeneous sources with inconsistent scanner settings. This study presents a method of improving the biological interpretation of dyslexia's neural-biomarkers from MRI datasets sourced from publicly available open databases. The proposed system utilized a modified histogram normalization (MHN) method to improve dyslexia neural-biomarker interpretations by mapping the pixel intensities of low-quality input neuroimages to the range between the low-intensity region of interest (ROIlow) and the high-intensity region of interest (ROIhigh) of a high-quality image. This was done after initial image smoothing using the Gaussian filter method with an isotropic kernel of size 4 mm. The performance of the proposed smoothing and normalization methods was evaluated in three image post-processing experiments: ROI segmentation, gray matter (GM) tissue volume estimation, and deep learning (DL) classification, using the Computational Anatomy Toolbox (CAT12) and pre-trained models in a MATLAB working environment. The three experiments were preceded by pre-processing tasks such as image resizing, labelling, patching, and non-rigid registration. Our results showed that the best smoothing was achieved at a scale value of σ = 1.25, with a 0.9% increase in the peak signal-to-noise ratio (PSNR). Results from the three post-processing experiments confirmed the efficacy of the proposed methods. Evidence from our analysis showed that the proposed MHN and Gaussian smoothing methods can improve the comparability of image features and neural-biomarkers of dyslexia, with a statistically significant, high Dice similarity coefficient (DSC) index, a low mean square error (MSE), and improved tissue volume estimations.
After ten repetitions of 10-fold cross-validation, the highest accuracy achieved by the DL models was 94.7% at a 95% confidence interval (CI). Finally, our findings confirmed that the proposed MHN method significantly outperformed the state-of-the-art histogram-matching normalization method.
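The two pre-processing steps described above can be sketched in a few lines. This is a minimal illustration only: it assumes MHN reduces to a simple linear mapping of the input intensity range onto [ROIlow, ROIhigh] taken from a high-quality reference image, whereas the paper's full MHN method is more involved.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(image, sigma=1.25):
    """Separable Gaussian smoothing: filter rows, then columns."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def histogram_normalize(image, roi_low, roi_high):
    """Map the input intensity range linearly onto [roi_low, roi_high],
    the low/high-intensity ROI levels taken from a high-quality image."""
    lo, hi = float(image.min()), float(image.max())
    return roi_low + (image - lo) * (roi_high - roi_low) / (hi - lo)
```

After this mapping, every pixel of the low-quality image lies inside the reference intensity band, which is what makes intensity-based features comparable across scanners.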

2018 ◽  
Vol 7 (4.33) ◽  
pp. 487
Author(s):  
Mohamad Haniff Harun ◽  
Mohd Shahrieel Mohd Aras ◽  
Mohd Firdaus Mohd Ab Halim ◽  
Khalil Azha Mohd Annuar ◽  
Arman Hadi Azahar ◽  
...  

This investigation focuses on adapting a vision-system algorithm to classify processes and to regulate the decision making related to task and defect recognition. The idea centres on a new vision algorithm that uses shape-matching properties to classify defects occurring on a product. The problem faced previously was that the system had to process a large amount of data acquired from the object, which reduced speed and efficiency. The proposed defect-detection approach, combining region of interest (ROI) selection, Gaussian smoothing, correlation, and template matching, is introduced. This approach provides high computational savings and achieves a recognition rate of about 95.14%. Each detected defect is described by its height (corresponding to the z-coordinate), length (the y-coordinate), and width (the x-coordinate). These data are gathered by the proposed system using dual cameras to perform the three-dimensional transformation.
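The correlation and template-matching stage can be illustrated with a small normalized cross-correlation search over an ROI. This is a generic sketch, not the authors' implementation, and the function names are hypothetical:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation score between two equal-size arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return (p * t).sum() / denom if denom else 0.0

def match_template(roi, template):
    """Slide the template over the ROI; return the best (row, col) and score."""
    th, tw = template.shape
    best, pos = -1.0, (0, 0)
    for r in range(roi.shape[0] - th + 1):
        for c in range(roi.shape[1] - tw + 1):
            s = ncc(roi[r:r + th, c:c + tw], template)
            if s > best:
                best, pos = s, (r, c)
    return pos, best
```

Restricting the search to an ROI, as the abstract describes, is what keeps this brute-force scan computationally affordable.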


2021 ◽  
pp. 23-41
Author(s):  
Subhagata Chattopadhyay

The study proposes a novel approach to automate the classification of chest X-ray (CXR) images of COVID-19-positive patients. All acquired images were pre-processed with a Simple Median Filter (SMF) and a Gaussian Filter (GF) with kernel size (5, 5). The better filter was then identified by comparing the Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) of the denoised images. Canny edge detection was applied to find the Region of Interest (ROI) in the denoised images. Eigenvalues in [-2, 2] of the Hessian matrix (5 × 5) of the ROIs were then extracted, constituting the 'input' dataset for the Feed-Forward Neural Network (FFNN) classifier developed in this study. Eighty percent of the data was used for training the said network after 10-fold cross-validation, and the performance of the network was tested with the remaining 20% of the data. Finally, validation was performed on another set of 'raw' normal and abnormal CXRs. The precision, recall, accuracy, and computational time complexity (Big-O) of the classifier were then estimated to examine its performance.
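The Hessian-based feature extraction can be sketched as follows, assuming a per-pixel 2 × 2 Hessian estimated with finite differences and its eigenvalues obtained in closed form. The abstract's exact 5 × 5 construction is not specified, so this is an illustrative approximation only:

```python
import numpy as np

def hessian_eigenvalues(image):
    """Per-pixel eigenvalues of the 2x2 image Hessian, estimated with
    finite differences via np.gradient; returns the two eigenvalue maps."""
    gy, gx = np.gradient(image.astype(float))
    hyy, hyx = np.gradient(gy)   # second derivatives of d/dy
    hxy, hxx = np.gradient(gx)   # second derivatives of d/dx
    # Closed-form eigenvalues of the symmetric matrix [[hxx, hxy], [hxy, hyy]].
    tr = hxx + hyy
    det = hxx * hyy - hxy * hyx
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    return tr / 2.0 + disc, tr / 2.0 - disc
```

On the quadratic surface x² + y², both eigenvalues are 2 at interior pixels, which is a quick sanity check for the finite-difference estimate.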


Author(s):  
Ming Luo ◽  
Liang Ge ◽  
Zhibo Xue ◽  
Jiawei Zhang ◽  
Yanjun LI ◽  
...  

The measurement of downhole engineering parameters is greatly disturbed by the working environment. Effective de-noising methods are therefore required for processing logging-while-drilling (LWD) acquisition signals, in order to obtain downhole engineering parameters accurately and efficiently. In this paper, a new de-noising method for measured downhole engineering parameters is presented, based on a feedback method and a wavelet-transform threshold function. Firstly, in view of the mutability and density of downhole engineering data, an improved wavelet threshold function is proposed to de-noise the signal, overcoming the data oscillation and deviation caused by the traditional threshold function. Secondly, because the true value is unknown, a traditional single de-noising evaluation metric cannot adequately assess quality. The root mean square error (RMSE), signal-to-noise ratio (SNR), smoothness (R), and a fusion index (F) are therefore used as evaluation parameters of the de-noising effect, which determine the optimal wavelet decomposition scale and the best wavelet basis. Finally, the proposed method was verified on measured downhole data. The experimental results showed that the improved wavelet de-noising method can reduce the various interferences in the LWD signal, providing reliable measurements for analyzing the working status of the drill bit.
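The classical soft-threshold rule that such improved functions build on, together with common forms of the RMSE, SNR, and smoothness (R) evaluation metrics, can be sketched as below. The paper's improved threshold function and fusion index (F) are not given in the abstract, so these are textbook definitions, not the authors':

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Classical soft-threshold rule: shrink wavelet coefficients toward zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def rmse(a, b):
    """Root mean square error between two signals."""
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def snr_db(denoised, noisy):
    """SNR in dB, taking the residual (noisy - denoised) as the noise."""
    d, n = np.asarray(denoised), np.asarray(noisy)
    resid = n - d
    return 10.0 * np.log10(np.sum(d ** 2) / np.sum(resid ** 2))

def smoothness(denoised, noisy):
    """Smoothness R: ratio of first-difference energies (smaller = smoother)."""
    return np.sum(np.diff(denoised) ** 2) / np.sum(np.diff(noisy) ** 2)
```

Evaluating RMSE, SNR, and R together, as the abstract does, avoids picking a decomposition scale that scores well on one criterion but distorts the signal on another.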


2019 ◽  
Vol 829 ◽  
pp. 252-257
Author(s):  
Azhari ◽  
Yohanes Hutasoit ◽  
Freddy Haryanto

CBCT is a modern technology for producing radiographic images in dentistry. Image quality is very important for clinicians interpreting the image, so that the resulting diagnosis becomes more accurate and appropriate, thus minimizing working time. This research aimed to assess image quality using a blank acrylic polymethylmethacrylate (PMMA, (C5H8O2)n) phantom with a density of 1.185 g/cm3, evaluating the homogeneity and uniformity of the image produced. The acrylic phantom was supported by a tripod and laid on the chin rest of the CBCT device; the phantom was then fixed, with its edge touching the bite block. The phantom was then exposed to X-rays at various kVp and mA settings: tube voltage from 80 to 90 kVp in steps of 5 kV, and tube current of 3, 5, and 7 mA respectively. The exposure time was kept constant at 25 seconds. Samples were taken from the CBCT acrylic images, and five ROIs (regions of interest) were chosen for analysis. The ROIs were analyzed using the ImageJ® software to determine the influence of kVp and mAs on image uniformity, noise, and SNR. The lowest kVp and mAs yielded uniformity, homogeneity, and signal-to-noise ratio values of 11.22, 40.35, and 5.96 respectively, while the highest kVp and mAs yielded 16.96, 26.20, and 5.95 respectively. There were significant differences in image uniformity and homogeneity between the lowest and the highest kVp and mAs, as shown by ANOVA followed by a Student's t post-hoc test with α = 0.05. However, there was no significant difference in SNR by ANOVA. The use of higher kVp and mAs improved image homogeneity and uniformity compared to lower kVp and mAs.
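The per-ROI statistics used in this kind of evaluation can be sketched as follows. This is a generic illustration: the abstract does not give the exact uniformity formula, so the definition below (centre-versus-peripheral ROI mean differences) is an assumption:

```python
import numpy as np

def roi_stats(image, r0, c0, size=10):
    """Mean and standard deviation of a square ROI with top-left (r0, c0)."""
    roi = image[r0:r0 + size, c0:c0 + size].astype(float)
    return roi.mean(), roi.std()

def roi_snr(image, r0, c0, size=10):
    """Signal-to-noise ratio of one ROI: mean divided by standard deviation."""
    m, s = roi_stats(image, r0, c0, size)
    return m / s if s > 0 else float("inf")

def uniformity(image, centre, edges, size=10):
    """Uniformity: mean absolute difference between the centre ROI mean
    and each peripheral ROI mean (one common definition; an assumption)."""
    mc, _ = roi_stats(image, *centre, size)
    return float(np.mean([abs(mc - roi_stats(image, r, c, size)[0])
                          for r, c in edges]))
```

A perfectly homogeneous phantom image would give a uniformity of zero; larger values indicate intensity drift between the centre and the periphery.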


2019 ◽  
Vol 18 (2) ◽  
pp. 283-293 ◽  
Author(s):  
Mark L.C.M. Bruurmijn ◽  
Wouter Schellekens ◽  
Mathijs A.H. Raemaekers ◽  
Nick F. Ramsey

For some experimental approaches in brain imaging, the existing normalization techniques are not always sufficient. This may be the case if the anatomical shape of the region of interest varies substantially across subjects, or if one needs to compare the left and right hemisphere in the same subject. Here we propose a new standard representation, building upon existing normalization methods: Cgrid (Cartesian geometric representation with isometric dimensions). Cgrid is based on imposing a Cartesian grid over a cortical region of interest that is bounded by anatomical (atlas-based) landmarks. We applied this new representation to the sensorimotor cortex and we evaluated its performance by studying the similarity of activation patterns for hand, foot and tongue movements between subjects, and similarity between hemispheres within subjects. The Cgrid similarities were benchmarked against the similarities of activation patterns when transformed into standard MNI space using SPM, and to similarities from FreeSurfer’s surface-based normalization. For both between-subject and between-hemisphere comparisons, similarity scores in Cgrid were high, similar to those from FreeSurfer normalization and higher than similarity scores from SPM’s MNI normalization. This indicates that Cgrid allows for a straightforward way of representing and comparing sensorimotor activity patterns across subjects and between hemispheres of the same subjects.


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 274 ◽  
Author(s):  
Shengying Yang ◽  
Huibin Qin ◽  
Xiaolin Liang ◽  
Thomas Gulliver

Unmanned aerial vehicles (UAVs) are now readily available worldwide, and users can easily fly them remotely using smart controllers. This has created the problem of keeping unauthorized UAVs away from private or sensitive areas, where they can be a personal or public threat. This paper proposes an improved radio frequency (RF)-based method to detect UAVs. The clutter (interference) is eliminated using a background filtering method. Then singular value decomposition (SVD) and average filtering are used to reduce the noise and improve the signal-to-noise ratio (SNR). Spectrum accumulation (SA) and statistical fingerprint analysis (SFA) are employed to provide two frequency estimates. These estimates are used to determine whether a UAV is present in the detection environment. The data size is reduced using a region of interest (ROI), which improves both system efficiency and azimuth estimation accuracy. Detection results obtained from real UAV RF signals collected experimentally show that the proposed method is more effective than other well-known detection algorithms. The recognition rate with this method is close to 100% within a distance of 2.4 km and greater than 90% within a distance of 3 km. Further, multiple UAVs can be detected accurately using the proposed method.
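The SVD-based noise-reduction step can be illustrated with a rank-truncation sketch: keep only the largest singular values, which concentrate the structured signal energy, and discard the rest as noise. This is a generic low-rank approximation, not the authors' exact pipeline:

```python
import numpy as np

def svd_denoise(matrix, rank):
    """Low-rank approximation of a data matrix: keep the `rank` largest
    singular values and zero out the remainder."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s = s.copy()
    s[rank:] = 0.0
    return (u * s) @ vt
```

For RF snapshots arranged as a matrix, the truncation rank is typically chosen from the singular-value spectrum (e.g. where it drops sharply), trading residual noise against signal distortion.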


2020 ◽  
Vol 62 (6) ◽  
pp. 352-356
Author(s):  
E Yahaghi ◽  
M E Hosseini-Ashrafi

Weld quality inspection using industrial radiography is considered to be one of the most important processes in critical industries such as aeronautical manufacturing. The quality of radiographic images of welded industrial parts may suffer from poor signal-to-noise ratio (SNR), the main cause of which is the unavoidable detection of scattered X-rays. Image processing methods may be used to enhance image contrast and achieve improved defect detection. In this study, the outcomes from three different image contrast enhancement spatial domain transform algorithms are analysed and compared. The three algorithms used are normalised convolution (NC), interpolated convolution (IC) and recursive filtering (RF). Based on the results of qualitative operator perception, the study shows that the application of all three methods results in improved image contrast, enabling enhanced visualisation of image detail. Subtle differences in performance between the outputs from the different algorithms are noted, especially around the edges of image features. Furthermore, it is found that RF is approximately two orders of magnitude quicker than the other algorithms, making it more suitable for online weld inspection lines.
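Normalised convolution (NC) can be sketched in a few lines for the 1-D case: the signal is weighted by a certainty map, filtered, and re-normalised by the filtered certainty, which lets the filter interpolate across low-certainty (e.g. noisy or missing) samples. This is a generic illustration of the technique, not the authors' implementation:

```python
import numpy as np

def normalized_convolution(signal, certainty, kernel):
    """Normalized convolution (1-D): filter the certainty-weighted signal
    and divide by the filtered certainty map."""
    num = np.convolve(signal * certainty, kernel, mode="same")
    den = np.convolve(certainty, kernel, mode="same")
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```

With a uniform certainty map this reduces to ordinary smoothing (with correct boundary handling); where certainty is zero, the output is rebuilt entirely from the reliable neighbours.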


Author(s):  
QIAO-YU SUN ◽  
YUE LU

Locating text regions in a natural scene image is significantly helpful for better understanding the semantic meaning of the image, which plays an important role in many applications such as image retrieval, image categorization, social media processing, etc. Traditional approaches rely on low-level image features to progressively locate candidate text regions. However, these approaches often suffer in cases of cluttered background, since the adopted low-level image features are fairly simple and may not reliably distinguish text regions from clutter. Motivated by recent research on attention models, saliency detection is revisited in this paper. For the case of text detection in natural scene images, the saliency map is further analyzed and adjusted accordingly. Using the adjusted saliency map, the candidate text regions detected by common low-level features are further verified. Moreover, an efficient low-level text feature, the Histogram of Edge-direction (HOE), is adopted in this paper, which statistically describes the edge-direction information of a region of interest in the image. Encouraging experimental results have been obtained on natural scene images containing text in various languages.
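A Histogram of Edge-direction (HOE) feature of the kind described can be sketched as a magnitude-weighted histogram of gradient orientations over an ROI. This is a plausible reading of the abstract, not necessarily the authors' exact definition:

```python
import numpy as np

def edge_direction_histogram(roi, bins=8):
    """Histogram of gradient orientations over an ROI, weighted by edge
    magnitude and normalized to sum to 1."""
    gy, gx = np.gradient(roi.astype(float))
    mag = np.hypot(gx, gy)                 # edge strength per pixel
    ang = np.arctan2(gy, gx)               # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Text regions tend to produce characteristic multi-directional edge distributions, whereas clutter often concentrates in one or two orientation bins, which is what makes such a descriptor useful for verification.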

