Image Filtering
Recently Published Documents

TOTAL DOCUMENTS: 883 (FIVE YEARS: 161)
H-INDEX: 40 (FIVE YEARS: 3)

2021 ◽  
Vol 13 (23) ◽  
pp. 13475
Author(s):  
Boce Chu ◽  
Feng Gao ◽  
Yingte Chai ◽  
Yu Liu ◽  
Chen Yao ◽  
...  

Remote sensing is the main technical means by which urban researchers and planners observe targeted urban areas. A single image rarely covers a whole urban area, and one image cannot support urban planning tasks that require spatial statistical analysis of an entire city. Consequently, analysts have traditionally assembled, by hand, multiple images with complementary coverage of an urban area while meeting basic requirements for resolution, cloudiness, and timeliness. With the rapid growth of remote sensing satellites and data in recent years, however, such time-consuming, low-quality manual filtering has become increasingly unacceptable. Efficiently and automatically selecting an optimal image collection from massive image archives to meet individual demands for whole-urban-area observation is therefore an urgent problem. To solve it, this paper proposes a large-area full-coverage remote sensing image collection filtering algorithm for individual demands (LFCF-ID). The algorithm establishes a new image filtering mode and addresses the difficulty of selecting a full-coverage remote sensing image collection from a vast amount of data. Additionally, this is the first study to achieve full-coverage image filtering that considers user preferences concerning spatial resolution, timeliness, and cloud percentage. The algorithm first quantitatively models demand indicators, such as cloudiness, timeliness, resolution, and coverage, and then coarsely filters the image collection by ranking model scores to meet the needs of different users. Then, relying on map gridding, a genetic algorithm (GA) optimizes the collection for the individual demand, quickly removing redundant images according to a fitness score to produce the final filtering result.
The proposed method is compared with manual filtering and greedy retrieval to verify its computing speed and filtering effect. Experiments show that the proposed method is substantially faster than traditional methods and surpasses manual filtering in filtering quality.
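The GA selection stage described above can be illustrated with a minimal sketch: each candidate image covers a set of map-grid cells, and the GA searches for a subset that covers every cell with as few images as possible. All data, parameters, and the fitness function here are hypothetical stand-ins, not taken from the paper.

```python
import random

# Toy data: image id -> grid cells it covers (hypothetical).
IMAGES = {
    "A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6},
    "D": {1, 2, 3, 4}, "E": {5, 6, 7},
}
IDS = sorted(IMAGES)
ALL_CELLS = set().union(*IMAGES.values())

def covered(mask):
    """Union of cells covered by the selected images."""
    sets = [IMAGES[i] for i, m in zip(IDS, mask) if m]
    return set().union(*sets) if sets else set()

def fitness(mask):
    c = covered(mask)
    if c == ALL_CELLS:            # full coverage: fewer images is better
        return 100 - sum(mask)
    return len(c)                 # partial coverage: reward coverage

def evolve(pop_size=30, generations=40, p_mut=0.1):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in IDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(IDS))
            child = a[:cut] + b[cut:]          # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
selection = [i for i, m in zip(IDS, best) if m]
```

The fitness function mirrors the paper's idea at a high level: full-coverage collections are ranked by size, so redundant images are driven out of the selection.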


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3101
Author(s):  
Ahsan Bin Tufail ◽  
Yong-Kui Ma ◽  
Mohammed K. A. Kaabar ◽  
Ateeq Ur Rehman ◽  
Rahim Khan ◽  
...  

Alzheimer’s disease (AD) is a leading health concern affecting the elderly population worldwide. It is defined by amyloid plaques, neurofibrillary tangles, and neuronal loss. Neuroimaging modalities such as positron emission tomography (PET) and magnetic resonance imaging are routinely used in clinical settings to monitor the alterations in the brain during the course of progression of AD. Deep learning techniques such as convolutional neural networks (CNNs) have found numerous applications in healthcare and other technologies. Together with neuroimaging modalities, they can be deployed in clinical settings to learn effective representations of data for different tasks such as classification, segmentation, and detection. Image filtering methods prepare images for subsequent processing operations and are widely used across image-processing tasks. In this work, we deployed 3D-CNNs to learn effective representations of PET modality data and to quantify the impact of different image filtering approaches. We used box filtering, median filtering, Gaussian filtering, and modified Gaussian filtering to preprocess the images, which were then classified using a 3D-CNN architecture. Our findings suggest that the four approaches perform similarly overall, with no filter holding a distinct advantage across all tasks. For the multiclass classification task between normal control (NC), mild cognitive impairment (MCI), and AD classes, the 3D-CNN architecture trained using Gaussian-filtered data performed the best. For binary classification between NC and MCI classes, the 3D-CNN architecture trained using median-filtered data performed the best, while, for binary classification between AD and MCI classes, the 3D-CNN architecture trained using modified Gaussian-filtered data performed the best. Finally, for binary classification between AD and NC classes, the 3D-CNN architecture trained using box-filtered data performed the best.
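The preprocessing stage compared in this study can be sketched with standard `scipy.ndimage` filters applied to a 3D volume (the 3D-CNN itself is omitted, and the "modified Gaussian" variant is not specified in the abstract, so only the three standard filters are shown; the synthetic volume is a stand-in for PET data):

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter, gaussian_filter

# Stand-in for a PET volume: smooth signal plus noise (hypothetical data).
rng = np.random.default_rng(0)
volume = rng.normal(0.5, 0.1, size=(16, 32, 32))

# The three standard filtering approaches compared in the study.
filtered = {
    "box": uniform_filter(volume, size=3),       # box (mean) filtering
    "median": median_filter(volume, size=3),     # median filtering
    "gaussian": gaussian_filter(volume, sigma=1.0),  # Gaussian filtering
}
```

Each filtered volume would then be fed to the 3D-CNN in place of the raw volume, making the filter choice a controlled experimental variable.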


2021 ◽  
Author(s):  
Emma L Brown ◽  
Thierry L Lefebvre ◽  
Paul W Sweeney ◽  
Bernadette Stolz ◽  
Janek Gröhl ◽  
...  

Mesoscopic photoacoustic imaging (PAI) enables non-invasive visualisation of tumour vasculature and has the potential to assess prognosis and therapeutic response. Currently, evaluating vasculature using mesoscopic PAI involves visual or semi-quantitative 2D measurements, which fail to capture 3D vessel network complexity and lack robust ground truths for assessing segmentation accuracy. Here, we developed an in silico, phantom, in vivo, and ex vivo-validated end-to-end framework to quantify 3D vascular networks captured using mesoscopic PAI. We applied our framework to evaluate the capacity of rule-based and machine learning-based segmentation methods, with or without vesselness image filtering, to preserve blood volume and network structure by employing topological data analysis. We first assessed segmentation performance against ground truth data of in silico synthetic vasculatures and a photoacoustic string phantom. Our results indicate that learning-based segmentation best preserves vessel diameter and blood volume at depth, while rule-based segmentation with vesselness image filtering accurately preserves network structure in superficial vessels. Next, we applied our framework to breast cancer patient-derived xenografts (PDXs), with corresponding ex vivo immunohistochemistry. We demonstrated that the above segmentation methods can reliably delineate the vasculature of two breast PDX models from mesoscopic PA images. Our results underscore the importance of evaluating the choice of segmentation method when applying mesoscopic PAI as a tool to evaluate vascular networks in vivo.
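The vesselness image filtering mentioned above is typically a Hessian-eigenvalue (Frangi-style) filter that enhances tube-like structures. The following is a minimal 2D sketch of the idea, not the paper's actual 3D pipeline; the parameters and the synthetic "vessel" are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigma=2.0, beta=0.5):
    """Frangi-style vesselness for bright, tube-like structures (2D sketch)."""
    # Second-order Gaussian derivatives = smoothed Hessian components.
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian at every pixel.
    tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    mu1 = (Hxx + Hyy) / 2 + tmp
    mu2 = (Hxx + Hyy) / 2 - tmp
    swap = np.abs(mu1) > np.abs(mu2)
    lam1 = np.where(swap, mu2, mu1)   # smaller |eigenvalue|
    lam2 = np.where(swap, mu1, mu2)   # larger |eigenvalue|
    rb = lam1 / (lam2 + 1e-12)        # blob-vs-tube ratio
    s2 = lam1 ** 2 + lam2 ** 2        # second-order "structureness"
    c = 0.5 * np.sqrt(s2).max() + 1e-12
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    v[lam2 > 0] = 0                   # keep bright-on-dark ridges only
    return v

# Synthetic bright "vessel" on a dark background.
img = np.zeros((64, 64))
img[30:33, :] = 1.0
v = vesselness_2d(img)
```

In a pipeline like the one described, the vesselness map would be computed before segmentation so that faint, elongated vessels survive thresholding.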


2021 ◽  
Author(s):  
Erhan Coşkun ◽  
Torran Elson ◽  
Sean Lim ◽  
James Mathews ◽  
Gruff Morris ◽  
...  

CrowdEmotion produces software that measures a person's emotions from analysis of micro-facial expressions, using a machine learning algorithm to recognize which features correspond to which emotions. The features are derived by applying a bank of Gabor filters to a set of frames. CrowdEmotion needed to improve the accuracy, processing speed, and cost-efficiency of the tool; in particular, they wanted to know whether a subset of the bank of Gabor filters would suffice, and whether the image filtering stage could be implemented on a GPU. A framework for choosing the optimal set of Gabor filters was established, and ways of reducing its dimensionality were investigated. Taking a subset of Local Binary Patterns was found to be fully justified. The choice of gridding pattern, meanwhile, is open to interpretation; some suggestions were made about how this choice might be improved.
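A Gabor filter bank of the kind described can be sketched as follows; the kernel parameters and the stripe test image are illustrative, not CrowdEmotion's actual configuration. Filters whose orientation matches the image structure respond with much higher energy, which is the basis for selecting a smaller subset of the bank.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, sigma=3.0, wavelength=8.0, gamma=0.5, size=15):
    """Real (even) Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# Vertical stripes whose period matches the filter wavelength.
xs = np.arange(64)
img = np.tile(np.cos(2 * np.pi * xs / 8.0), (64, 1))

# Small bank of three orientations; response energy per filter.
bank = [0.0, np.pi / 4, np.pi / 2]
energy = [np.mean(convolve(img, gabor_kernel(t)) ** 2) for t in bank]
```

Because each kernel is a separable-looking dense convolution applied per frame, the filtering stage maps naturally onto a GPU, which is the implementation question the study group examined.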


2021 ◽  
Vol 11 (21) ◽  
pp. 10358
Author(s):  
Chun He ◽  
Ke Guo ◽  
Huayue Chen

In recent years, image filtering has been an active research direction in image processing. Many noise removal methods have been proposed, achieving quite good denoising results; however, most address a single noise type, such as Gaussian noise, salt-and-pepper noise, or multiplicative noise. For mixed noise, such as salt-and-pepper plus Gaussian noise, the available methods remain far from ideal, leaving considerable room for improvement. To solve this problem, this paper proposes a filtering algorithm for mixed salt-and-pepper plus Gaussian noise that combines an improved median filtering algorithm, an improved wavelet threshold denoising algorithm, and an improved Non-Local Means (NLM) algorithm. The method exploits the strength of the median filter in removing salt-and-pepper noise and the good performance of wavelet thresholding and NLM in filtering Gaussian noise. We first improved the three algorithms individually and then combined them in a fixed pipeline to obtain a new mixed-noise removal method. Specifically, we adjusted the window size of the median filter and improved its method of detecting noise points; we improved the threshold function of the wavelet threshold algorithm, analyzed its mathematical properties, and derived an adaptive threshold; and for the NLM algorithm we improved its Euclidean distance function and the corresponding distance weight function.
To test the denoising effect of this method, salt-and-pepper plus Gaussian noise at different levels was added to the test images, and several state-of-the-art denoising algorithms were selected for comparison: K-Singular Value Decomposition (KSVD), Non-locally Centralized Sparse Representation (NCSR), Structured Overcomplete Sparsifying Transform Model with Block Cosparsity (OCTOBOS), Trilateral Weighted Sparse Coding (TWSC), Block Matching and 3D Filtering (BM3D), and Weighted Nuclear Norm Minimization (WNNM). Experimental results show that our proposed algorithm achieves a Peak Signal-to-Noise Ratio (PSNR) about 2–7 dB higher than the above algorithms and also performs better on Root Mean Square Error (RMSE), Structural Similarity (SSIM), and Feature Similarity (FSIM). In general, our algorithm offers better denoising performance, better restoration of image details and edge information, and stronger robustness than the above-mentioned algorithms.
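The staged structure of such a pipeline can be sketched as below. This is a simplification, not the paper's algorithm: impulse pixels are detected and replaced via a median filter, and the wavelet-threshold and NLM stages for the Gaussian component are stood in for by a plain Gaussian filter to keep the sketch dependency-free. The test image and noise levels are arbitrary.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def denoise_mixed(img):
    """Two-stage sketch: impulse removal, then Gaussian-noise smoothing."""
    med = median_filter(img, size=3)
    impulses = (img <= 0.0) | (img >= 1.0)   # detected salt/pepper pixels
    cleaned = np.where(impulses, med, img)   # replace only the noisy pixels
    return gaussian_filter(cleaned, sigma=1.0)

# Smooth synthetic image corrupted with Gaussian + salt-and-pepper noise.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.25, 0.75, 64), (64, 1))
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
noisy[rng.random(clean.shape) < 0.05] = 1.0   # salt
noisy[rng.random(clean.shape) < 0.05] = 0.0   # pepper
denoised = denoise_mixed(noisy)

mse = lambda a, b: np.mean((a - b) ** 2)
```

The key design point carried over from the paper is selective replacement: only pixels flagged as impulses take the median value, so image detail elsewhere is not blurred by the impulse stage.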


2021 ◽  
Vol 2090 (1) ◽  
pp. 012132
Author(s):  
Arman S Kussainov ◽  
Maxim Em ◽  
Yernar Myrzabek ◽  
Maksat Mukhatay

Abstract. We have implemented the basic steps of the FDK backprojection algorithm for computed tomography. The application works from a set of preloaded projections and uses OpenCV libraries for FFT, convolution, frequency-space image filtering, and manipulation of image brightness, contrast, and quality. Compared to the desktop implementation, the calculation-intensive part of the application was moved to an asynchronous background task hosted by an Android fragment. This allows the task to survive the application's configuration changes and to keep running in the background even if the main activity is destroyed. A minimalistic interface exposing all main backprojection parameters was implemented. The reconstruction result is saved as an image in the phone's download folder, and the user also has control over the location of the reconstructed slice along the Z axis.
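The frequency-space filtering step at the heart of FDK-style reconstruction can be sketched as a ramp filter applied to each detector row. The abstract's implementation uses OpenCV's FFT on Android; plain NumPy is used here, and the toy detector row is illustrative only.

```python
import numpy as np

def ramp_filter(row):
    """Multiply a detector row by |f| in the frequency domain.

    This is the filtering step that precedes backprojection; it removes the
    low-frequency blur that plain (unfiltered) backprojection would produce.
    """
    freqs = np.fft.fftfreq(row.shape[-1])
    return np.real(np.fft.ifft(np.fft.fft(row) * np.abs(freqs)))

projection = np.cos(np.linspace(0, np.pi, 256)) + 1.0  # toy detector row
filtered = ramp_filter(projection)
```

Because the ramp is zero at f = 0, the filtered row has no DC component, which is why reconstructed slices need the subsequent brightness/contrast adjustment the abstract mentions.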


Biomolecules ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1523
Author(s):  
Juliette de Noiron ◽  
Marion Hoareau ◽  
Jessie Colin ◽  
Isabelle Guénal

Apoptosis is associated with numerous phenotypic characteristics and is thus studied with many tools. In this study, we compared two widely used apoptotic assays: TUNEL and staining with an antibody targeting the activated form of an effector caspase. To compare them, we developed a protocol based on commonly used tools such as image filtering, z-projection, and thresholding. Even though it is ubiquitous in image-processing protocols, thresholding remains a recurring problem. Here, we analyzed the impact of processing parameters and readout choice on the accuracy of apoptotic signal quantification. Our results show that TUNEL is quite robust, even if image processing parameters may not always allow subtle differences in the apoptotic rate to be detected. By contrast, images from anti-cleaved-caspase staining are more sensitive to handle and must be processed more carefully. We then developed an open-source Fiji macro automating most steps of the image processing and quantification protocol. Notably, the macro's field of application is wider than apoptosis, and it can be used to process and quantify other kinds of images.
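The filter/project/threshold sequence described above can be sketched as follows. The synthetic stack and blob positions are invented for illustration, and Otsu's method stands in for the thresholding step whose parameter sensitivity the study examines (the actual protocol runs as a Fiji macro, not Python).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def otsu_threshold(img, nbins=256):
    """Between-class-variance-maximising threshold (Otsu's method)."""
    hist, edges = np.histogram(img, bins=nbins)
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist).astype(float)        # background pixel counts
    w1 = w0[-1] - w0                          # foreground pixel counts
    m0 = np.cumsum(hist * mids)
    mu0 = m0 / np.maximum(w0, 1)              # class means (guarded)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return mids[np.argmax(between)]

# Synthetic 5-slice stack with two bright "apoptotic" spots in one slice.
stack = np.zeros((5, 64, 64))
stack[2, 14:19, 14:19] = 1.0
stack[2, 45:50, 45:50] = 1.0

proj = stack.max(axis=0)                     # maximum-intensity z-projection
smooth = gaussian_filter(proj, sigma=1.0)    # image filtering step
binary = smooth > otsu_threshold(smooth)     # thresholding step
labels, n_objects = label(binary)            # count segmented objects
```

The study's point survives even in this toy: the counted object number depends directly on the threshold value, which is why the choice of thresholding strategy and readout must be validated per staining.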


Author(s):  
A.N. Grekov ◽  
Y.E. Shishkin ◽  
S.S. Peliushenko ◽  
A.S. Mavrin ◽  
...  

An algorithm and a program for detecting the boundaries of water bodies for the autopilot module of a surface robot are proposed. Water bodies are detected on satellite maps by finding a target color in the HSV color space, followed by erosion and dilation, standard digital image filtering operations. Three operators for constructing contours on the image are investigated: Sobel, Roberts, and Prewitt; the one that detects boundaries most accurately is selected for this module. An algorithm for calculating the GPS coordinates of the contours is created. The proposed algorithm saves the result in a format suitable for the surface robot's autopilot module.
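The detection chain described (HSV color match, morphological cleanup, edge operator) can be sketched as below. The original work uses OpenCV; this sketch uses `scipy.ndimage`, picks Sobel as the example edge operator, and the hue range, synthetic map, and thresholds are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import binary_opening, sobel

def rgb_to_hue(img):
    """Hue channel in [0, 1) for an RGB float image with values in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    diff = np.where(mx > mn, mx - mn, 1.0)     # guard division by zero
    h = np.select(
        [mx == r, mx == g],
        [((g - b) / diff) % 6, (b - r) / diff + 2],
        default=(r - g) / diff + 4,
    )
    return np.where(mx > mn, h / 6.0, 0.0)

# Synthetic map: green "land" with a blue "lake" (hypothetical data).
img = np.zeros((64, 64, 3))
img[..., 1] = 0.6                     # green background
img[20:44, 20:44] = (0.1, 0.2, 0.8)   # blue water region

hue = rgb_to_hue(img)
sat_ok = img.max(axis=-1) > 0.2                    # ignore near-black pixels
water = (hue > 0.5) & (hue < 0.75) & sat_ok        # blue-cyan hue band
water = binary_opening(water, iterations=2)        # erosion + dilation cleanup

# Sobel gradient magnitude marks the water-body contour.
wf = water.astype(float)
edge = np.hypot(sobel(wf, 0), sobel(wf, 1))
```

In the actual system, each contour pixel would then be mapped to GPS coordinates via the satellite map's georeferencing before being handed to the autopilot module.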

