Laplacian Filter
Recently Published Documents


TOTAL DOCUMENTS: 48 (FIVE YEARS: 19)
H-INDEX: 5 (FIVE YEARS: 1)

2022, Vol 2022, pp. 1-12
Author(s): Muhammad Hameed Siddiqi, Amjad Alsirhani

Most medical images are low in contrast, so details that may be vital for decision-making are not visible to the naked eye. Because of this low contrast, such images are also difficult to segment: there is no significant change between neighbouring pixel values, the gradient is very small, and hence the contour cannot converge on the edges of the object. In this work, we propose an ensembled spatial method for image enhancement. In this approach, we first employ the Laplacian filter, which highlights areas of rapid intensity variation, recovers fine image details, and enhances features with sharp discontinuities. Then the gradient of the image is computed using a weighted convolution over the surrounding pixels, which reduces noise. The gradient kernel, however, contains a negative weight: the intensity of the centre pixel is subtracted from that of its neighbours in order to enlarge the difference between adjacent pixels when the gradients, which may be calculated in eight directions, are computed. This is one reason the gradient filter alone is not entirely sufficient. Therefore, an averaging filter is also applied, which is effective for image enhancement: it does not rely on values that differ strongly from their surroundings and thus retains the details of the image. The proposed approach showed the best performance on various images collected in dynamic environments.
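The sketch below illustrates one way such an ensembled pipeline could be assembled with OpenCV and NumPy; the kernel sizes, the weighting of the Laplacian term, and the way the three filter outputs are combined are assumptions for illustration, not the authors' exact implementation.

# A minimal sketch of an ensembled spatial enhancement pipeline
# (Laplacian + gradient + averaging), under assumed parameters.
import cv2
import numpy as np

def ensemble_enhance(gray):
    img = gray.astype(np.float64)

    # 1) Laplacian: highlights regions of rapid intensity variation.
    lap = cv2.Laplacian(img, cv2.CV_64F, ksize=3)

    # 2) Gradient magnitude via Sobel: a weighted convolution whose negative
    #    centre weight subtracts the middle pixel from its neighbours,
    #    enlarging local differences.
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)

    # 3) Averaging (box) filter on the gradient to suppress noise while
    #    retaining image detail.
    smooth_grad = cv2.blur(grad, (5, 5))

    # Combine: sharpen with the Laplacian, modulated by the smoothed gradient
    # (normalised to [0, 1]) so that flat regions are left mostly untouched.
    mask = smooth_grad / (smooth_grad.max() + 1e-12)
    enhanced = img - 0.5 * lap * mask
    return np.clip(enhanced, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    image = cv2.imread("low_contrast_scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    cv2.imwrite("enhanced.png", ensemble_enhance(image))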


2021, Vol 2129 (1), pp. 012015
Author(s): N Imran, S Hameed, Z Hafeez, Z Faheem, M Waseem, ...

Abstract: With the growth of information technologies, e-industry security has recently become a shared concern of academia and business. Digital image watermarking is a technique for securing multimedia data: it protects and authenticates digital images, video, and audio by embedding a watermark. Watermarking applies small modifications to the host content, where the added signal carries the embedded information. In the past, researchers developed many simple watermarking techniques; today the race is to find regions where the watermark is imperceptible while maintaining a high payload. In this paper, an invisible image watermarking technique based on the least significant bit (LSB) and the Laplacian filter is proposed. The original image is divided into blocks and the Laplacian filter is applied to each block. The Laplacian is a derivative filter that uses the second derivative to find areas of rapid change in the image, while LSB embedding inserts the watermark into the lowest bit positions. The watermark is embedded in these high-detail regions, which is favourable for achieving the desired properties. The technique shows strong robustness against image-processing and geometric attacks. In comparison with state-of-the-art methods, the proposed technique shows satisfactory progress.
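A minimal sketch of block-based, Laplacian-guided LSB embedding in the spirit described above, assuming an 8x8 block size, a mean-absolute-Laplacian detail score, and one payload bit per selected block; these choices are illustrative and not taken from the paper.

# Laplacian-guided LSB watermark embedding (illustrative sketch).
import cv2
import numpy as np

BLOCK = 8  # assumed block size

def embed_lsb_laplacian(host, bits):
    marked = host.copy()
    h, w = host.shape
    scores = []
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            block = host[y:y + BLOCK, x:x + BLOCK].astype(np.float64)
            # Mean absolute Laplacian response = "detail" score of the block.
            score = np.abs(cv2.Laplacian(block, cv2.CV_64F)).mean()
            scores.append((score, y, x))
    # Embed in the most detailed blocks first, where changes are least visible.
    idx = 0
    for score, y, x in sorted(scores, reverse=True):
        if idx >= len(bits):
            break
        # Write one watermark bit into the LSB of the block's top-left pixel.
        marked[y, x] = (marked[y, x] & 0xFE) | bits[idx]
        idx += 1
    return marked

if __name__ == "__main__":
    host = cv2.imread("host.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    watermark_bits = [1, 0, 1, 1, 0, 0, 1, 0]             # toy payload
    cv2.imwrite("watermarked.png", embed_lsb_laplacian(host, watermark_bits))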


Author(s): Isidora Stankovic, Milos Brajovic, Ljubisa Stankovic, Milos Dakovic

2021
Author(s): Yuanhao Gong, Wenming Tang, Lebin Zhou, Lantao Yu, Guoping Qiu

Author(s): M. G. G. Silva, D. J. Silva, P. D. Costa, R. C. Silva, T. E. B. Cassimiro, ...

Abstract: Given the increased risk of water scarcity and the presence of polluting agents in water resources, this paper presents a computational tool capable of assessing water quality using digital image processing techniques applied to satellite images. Initially, a database was created for Brazilian regions, consisting of satellite images of hydrographic basins associated with the Water Quality Index (WQI), according to the criteria established by the National Water Agency (ANA). To date, the database consists of 85 images, of which 61 are used in the training stage and 24 in the testing stage. In both stages, the images are subjected to thresholding using Otsu's method, binarization, linear expansion of the saturation channel, application of a Laplacian filter, feature extraction using co-occurrence matrices, and classification by a Bayes discriminant. These techniques were implemented on a computational platform in the MATLAB® environment, which provides the interface between the system and its users. The proposed system achieved an approximately 70% success rate in the classification of WQIs, which can be improved as more information becomes available to enlarge the database.
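A minimal sketch of the kind of per-image processing chain described above, using OpenCV and scikit-image, with a Gaussian naive Bayes model standing in for the Bayes discriminant; the parameter values and the exact feature set are assumptions, not the authors' MATLAB implementation.

# Otsu thresholding + saturation stretch + Laplacian + GLCM features (sketch).
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB

def extract_features(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding -> binary mask of the water body.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Linear expansion (stretching) of the saturation channel.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sat = cv2.normalize(hsv[:, :, 1], None, 0, 255, cv2.NORM_MINMAX)
    # Laplacian filter to emphasise texture inside the masked region.
    lap = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_64F))
    region = cv2.bitwise_and(lap, lap, mask=mask)
    # Co-occurrence matrix (GLCM) texture descriptors.
    glcm = graycomatrix(region, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([
        graycoprops(glcm, "contrast")[0, 0],
        graycoprops(glcm, "homogeneity")[0, 0],
        graycoprops(glcm, "energy")[0, 0],
        float(sat.mean()),
    ])

# Training on the 61 training images (hypothetical paths and WQI labels):
# X_train = np.stack([extract_features(cv2.imread(p)) for p in train_paths])
# clf = GaussianNB().fit(X_train, y_train)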


2021
Author(s): Rahul Kumar, Rohan Bhansali

Abstract: Despite ACL and meniscus tears being among the most common movement-induced injuries, they are often the most difficult to diagnose because of the variable severity with which they occur. Typically, magnetic resonance imaging (MRI) scans are used to diagnose ligament tears, but performing and analyzing these scans is time-consuming and expensive because a radiologist or orthopedic specialist is required. Consequently, we developed a custom three-stream convolutional neural network (CNN) architecture with multiple channels to automate the diagnosis of ACL and meniscus tears from MRI scans. Our algorithm uses the sagittal, coronal, and axial slices to maximize feature extraction. Furthermore, we apply the Laplace operator to the MRI scan images to evaluate and compare its usefulness across medical imaging modalities. The algorithm attained an accuracy of 92.80%, significantly higher than typical orthopedic diagnostic accuracy. Our results point towards the feasibility of shallow, multi-channel CNNs and the ability of the Laplace operator to improve performance metrics for MRI scan diagnosis.
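A minimal sketch of the Laplace-operator preprocessing that could precede such a three-stream CNN, assuming one volume per anatomical view and per-slice normalisation; the slice selection, normalisation scheme, and the network itself are assumptions, not the authors' architecture.

# Laplace-operator preprocessing of MRI views (illustrative sketch).
import numpy as np
from scipy.ndimage import laplace

def preprocess_view(volume, n_slices=16):
    """volume: (slices, H, W) array for one view (sagittal, coronal, or axial)."""
    # Take n_slices evenly spaced slices from the series.
    idx = np.linspace(0, volume.shape[0] - 1, n_slices).astype(int)
    slices = volume[idx].astype(np.float32)
    # The Laplace operator highlights rapid intensity changes (e.g. tear boundaries).
    filtered = np.stack([laplace(s) for s in slices])
    # Per-slice z-score normalisation before feeding the stream.
    mean = filtered.mean(axis=(1, 2), keepdims=True)
    std = filtered.std(axis=(1, 2), keepdims=True) + 1e-8
    return (filtered - mean) / std

# Each of the three CNN streams then receives its own preprocessed stack
# (hypothetical variable names):
# sag = preprocess_view(sagittal_volume)
# cor = preprocess_view(coronal_volume)
# axi = preprocess_view(axial_volume)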


Author(s): Ali Salim Rasheed, Rasool Hasan Finjan, Ahmed Abdulsahib Hashim, Mustafa Murtdha Al-Saeedi

Animation and virtual-reality movie-making technologies are still making significant progress. Building and animating virtual characters inside these applications is a central goal, and constructing a 3D face with specialised tools inside the virtual world is the most important part of defining a 3D character. The Keen Tools Face Builder add-on for Blender can create a 3D face of a famous figure, an artist, or a member of the general public from several 2D images added to the Blender software environment. The main problem facing these tools is that they require high-resolution, sharp pictures; when some of the input images are blurred, the resulting 3D face model contains design distortions and is unclear. In this paper, we build a dataset of 2D pictures of a specific character (an actor) at a resolution of 1920 x 1080 pixels. These images were captured by a camera and differ in sharpness and blurring (four types of blur). The Laplacian filter algorithm, together with the OpenCV library and the Python language, is used to separate blurry from sharp 2D images. The sharp images are then used to build a 3D face model that gives realistic results similar to the character in the pictures.
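A minimal sketch of a Laplacian-based sharpness check in OpenCV and Python (the common variance-of-Laplacian measure) for separating blurry frames from sharp ones; the threshold value is an assumption and would need tuning for the 1920 x 1080 dataset.

# Variance-of-Laplacian blur detection (illustrative sketch).
import cv2

BLUR_THRESHOLD = 100.0  # assumed cut-off on the Laplacian variance

def is_sharp(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Low variance of the Laplacian means few strong edges, i.e. a blurry image.
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= BLUR_THRESHOLD

if __name__ == "__main__":
    for name in ["frame_001.jpg", "frame_002.jpg"]:  # hypothetical files
        print(name, "sharp" if is_sharp(name) else "blurry")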


Author(s): Rania Salah El-Sayed, Mohamed Nour El-Sayed

This paper proposes an efficient model for recognizing and classifying vehicle types. The model localizes each object in an image and then identifies the vehicle type. Image features are extracted using the histogram of oriented gradients (HOG) and ant colony optimization (ACO). The vehicle type is determined using different classifiers, namely the k-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and Softmax classifiers. The model is implemented and evaluated on two datasets of vehicle images as test-beds. In the comparative study, the SVM outperforms the other adopted classifiers and performs better with HOG features than with ACO features. HOG is then modified by adding the Laplacian filter to select the most significant image features, and the accuracy of the SVM classifier using the modified HOG exceeds that using the traditional HOG. The proposed model is analyzed and discussed with regard to its invariance to local geometric and photometric transformations such as illumination variations.
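A minimal sketch of the modified-HOG idea, assuming the Laplacian filter is applied before HOG extraction and an SVM performs the classification; the HOG parameters, image size, and SVM kernel are illustrative, not the paper's settings.

# Laplacian-preprocessed HOG features with an SVM classifier (sketch).
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def modified_hog(bgr):
    gray = cv2.cvtColor(cv2.resize(bgr, (128, 64)), cv2.COLOR_BGR2GRAY)
    # Laplacian filtering emphasises edges before HOG feature extraction.
    lap = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_64F))
    return hog(lap, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Training and prediction (hypothetical paths and vehicle-type labels):
# X = np.stack([modified_hog(cv2.imread(p)) for p in image_paths])
# clf = SVC(kernel="rbf").fit(X, labels)
# pred = clf.predict(modified_hog(cv2.imread("test_car.jpg")).reshape(1, -1))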


Author(s): Douglas Santos, Daniel Zolett, Mateus Belli, Felipe Viel, Cesar Zeferino

Computer vision systems have several stages, and one of the operators used in these systems is the edge detection filter. High-performance computing is required in many applications and stages of computer vision systems, and many designs use FPGA technology to improve performance and decrease power consumption. In this context, this work presents an analysis of five edge detection filters synthesized to FPGA, including Laplacian, Roberts, Prewitt, Sobel, and Canny. In the experiments, we compared the hardware implementations with software versions to identify the impact of fixed-point representation on the quality of the output images. We have also assessed metrics regarding performance, silicon costs, and energy consumption. The results obtained show that the Laplacian filter has the lowest costs, while the Canny operator provides the best output image at the price of much higher silicon costs and energy consumption.
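A minimal software sketch of the kind of comparison reported above: the five edge detectors are run in floating point and in a crude 8-bit fixed-point approximation, and the outputs are compared with PSNR; the kernels follow the usual textbook operators, and the quantisation scheme is an assumption, not the authors' FPGA design.

# Software reference for the five edge filters plus a PSNR check (sketch).
import cv2
import numpy as np

ROBERTS_X = np.array([[1, 0], [0, -1]], dtype=np.float32)
ROBERTS_Y = np.array([[0, 1], [-1, 0]], dtype=np.float32)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
PREWITT_Y = PREWITT_X.T

def edge_filters(gray):
    g = gray.astype(np.float32)
    return {
        "laplacian": np.abs(cv2.Laplacian(g, cv2.CV_32F)),
        "roberts": np.abs(cv2.filter2D(g, -1, ROBERTS_X))
                   + np.abs(cv2.filter2D(g, -1, ROBERTS_Y)),
        "prewitt": np.abs(cv2.filter2D(g, -1, PREWITT_X))
                   + np.abs(cv2.filter2D(g, -1, PREWITT_Y)),
        "sobel": np.abs(cv2.Sobel(g, cv2.CV_32F, 1, 0))
                 + np.abs(cv2.Sobel(g, cv2.CV_32F, 0, 1)),
        "canny": cv2.Canny(gray, 100, 200).astype(np.float32),
    }

def psnr(ref, test):
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

if __name__ == "__main__":
    gray = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    for name, float_out in edge_filters(gray).items():
        # Crude 8-bit "fixed-point" version: clip to [0, 255] and round.
        ref = np.clip(float_out, 0, 255)
        fixed_out = np.round(ref).astype(np.float32)
        print(f"{name:10s} PSNR vs float: {psnr(ref, fixed_out):.2f} dB")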

