Faces and Eyes Detection in Digital Images Using Cascade Classifiers

2018 ◽  
Vol 7 (1) ◽  
pp. 57-66
Author(s):  
Hussein Ali Mezher Alhamzawi

In this article we present an implementation of face and eye detection in digital images based on Haar-like feature extraction and a cascade classifier. These techniques achieved detection rates of 100% for faces and 92% for eyes in the best cases, with low processing time. We used inexpensive equipment (an Acer TravelMate web camera), and the work was implemented with the OpenCV computer vision library and the Python language.
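A minimal sketch of the pipeline described above, using the Haar cascade XML files bundled with OpenCV; camera index 0 and the detection parameters are assumptions standing in for the authors' setup, not their exact code.

```python
import cv2

# Haar cascades shipped with OpenCV for frontal faces and eyes
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                      # built-in web camera (assumed index)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]           # search for eyes inside the face only
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("faces and eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```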

2020 ◽  
Vol 8 (1) ◽  
pp. 31-38
Author(s):  
Muhammad Koprawi

In general, a computer program executes instructions serially; these instructions run on the CPU, which is referred to as serial computing. But when the amount of computation is large, the time required by serial computing becomes very long. Therefore, another form of computation is needed that can streamline data processing time, such as parallel computing. Parallel computing can be done on GPUs (Graphics Processing Units) with the help of toolkits such as CUDA (Compute Unified Device Architecture) and OpenCL (Open Computing Language). CUDA can only be run on NVIDIA graphics cards, while OpenCL can be run on all types of graphics cards. This research compares parallel computing time between CUDA and OpenCL, tested on uncompressed digital images of several different sizes. The results of the study are expected to serve as a reference for digital image processing methods.
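For illustration, a minimal sketch of the CUDA side of such a timing comparison, written with PyCUDA; the paper does not specify its host framework, kernels, or image sizes, so the element-wise invert kernel, the 4096x4096 test array, and the launch configuration are all assumptions.

```python
import numpy as np
import pycuda.autoinit                    # creates a CUDA context on import
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void invert(unsigned char *img, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        img[i] = 255 - img[i];            // simple element-wise operation
}
""")
invert = mod.get_function("invert")

# Stand-in for an uncompressed 4096x4096 grayscale image
img = np.random.randint(0, 256, (4096, 4096), dtype=np.uint8)
n = np.int32(img.size)

gpu_img = cuda.mem_alloc(img.nbytes)
cuda.memcpy_htod(gpu_img, img)

start, end = cuda.Event(), cuda.Event()   # time only the kernel execution
start.record()
invert(gpu_img, n, block=(256, 1, 1), grid=((img.size + 255) // 256, 1))
end.record()
end.synchronize()
print("kernel time: %.3f ms" % start.time_till(end))

cuda.memcpy_dtoh(img, gpu_img)
```

An OpenCL measurement of the same kernel (e.g. via PyOpenCL) would follow the same pattern, so the two timings can be compared on identical input images.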


Author(s):  
Ricky Andri ◽  
Rivalri Kristianto Hondro ◽  
Kennedi Tampubolon

The development of information technology is very fast, creating many security holes that can be misused by irresponsible people to harm certain parties. Digital images are chosen as a container for inserting messages because they are large enough to hold a message and are commonly used in information exchange, so they do not invite suspicion from irresponsible parties. The information to be sent is hidden in a digital image, and the image is then transmitted as normal data, so that third parties do not suspect that confidential information is inside. The hidden information can later be extracted by the recipient of the message. Pixel Value Differencing (PVD) works on pairs of adjacent pixel values. The advantages of the PVD method are that the image generated after inserting a message can be smaller than its original size, the processing time is quite fast, and the image quality remains good after the message is inserted. However, the method also has a disadvantage: it is not resistant to manipulation.
Keywords: Steganography, Digital Image, Pixel Value Differencing
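A simplified sketch of PVD embedding on a flat grayscale pixel array, following the common Wu-Tsai style range table; the range table, bit packing, and overflow handling here are assumptions for illustration and not the authors' exact scheme.

```python
import numpy as np

# Common PVD range table: wider ranges (stronger edges) hold more bits
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def pvd_embed(pixels, bits):
    """Embed a bit string into consecutive pixel pairs; returns (stego, bits used)."""
    p = pixels.astype(int).copy()
    pos = 0
    for i in range(0, len(p) - 1, 2):
        if pos >= len(bits):
            break
        d = p[i + 1] - p[i]                              # adjacent-pixel difference
        lo, hi = next(r for r in RANGES if r[0] <= abs(d) <= r[1])
        t = int(np.log2(hi - lo + 1))                    # bits this pair can hold
        b = int(bits[pos:pos + t].ljust(t, "0"), 2)
        new_d = (lo + b) if d >= 0 else -(lo + b)        # encode bits in the difference
        m = new_d - d
        p[i] -= m // 2                                   # split the change between
        p[i + 1] += m - m // 2                           # the two pixels
        if 0 <= p[i] <= 255 and 0 <= p[i + 1] <= 255:
            pos += t
        else:                                            # overflow: revert and skip pair
            p[i] += m // 2
            p[i + 1] -= m - m // 2
    return p.astype(np.uint8), pos

stego, used = pvd_embed(np.random.randint(0, 256, 512, dtype=np.uint8),
                        "0100100001101001")              # "Hi" as bits
print("embedded", used, "bits")
```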


The easiest way to distinguish each person's identity is through the face. Face detection is an inevitable pre-processing step for face recognition. Face recognition itself has to deal with difficulties and challenges, because the conditions an automated system must handle can be quite different from those of human face recognition. The human face recognition process involves two stages, the first of which is face detection, a process that is very fast in humans. In the first phase, face images of a person are stored in a database from different angles. The stored face images are described by eigenvector values that depend on components such as face coordinates, face index, face angles, eyes, nose, lips, and mouth, at certain distances and positions relative to each other. Two types of methods are popular in currently developed face recognition systems: the Cascade Classifier method and the Eigenface Algorithm. The Eigenface method is based on a low-dimensional representation of the face space, using principal component analysis of facial features. The main idea of using cascade classifiers together with the Eigenface Algorithm for facial recognition is to find the eigenvectors corresponding to the largest eigenvalues of the facial images.
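A minimal sketch of the Eigenface step described above: project face images onto the eigenvectors with the largest eigenvalues (via PCA) and match a probe to the nearest stored face. The toy 32x32 random images and the number of components are assumptions; a real system would first crop faces with a cascade classifier.

```python
import numpy as np

def eigenfaces(faces, n_components=10):
    """faces: (n_samples, h*w) flattened grayscale face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]             # eigenvectors with largest eigenvalues
    weights = centered @ components.T          # projection of each stored face
    return mean, components, weights

def recognize(probe, mean, components, weights):
    """Return the index of the closest stored face in eigenface space."""
    w = (probe - mean) @ components.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))

# Toy data: 20 random "faces" of 32x32 pixels
train = np.random.rand(20, 32 * 32)
mean, comps, w = eigenfaces(train)
print("closest match:", recognize(train[3], mean, comps, w))   # -> 3
```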


Author(s):  
D. P. Gangwar ◽  
Anju Pathania

This work presents a robust analysis of digital images to detect signs of modification, morphing, or editing by using properties such as the image's EXIF metadata, thumbnail, camera traces, image markers, Huffman codec and markers, and compression signatures. The details of the whole methodology and the findings are described in the present work. The main advantage of the methodology is that the whole analysis has been done using software and tools that are easily available as open source.
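A minimal sketch of one of the checks mentioned above: reading EXIF metadata with Pillow and inspecting tags that editing tools commonly rewrite. The file name is an assumption, and this covers only the metadata step, not the authors' full pipeline.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path):
    """Return the image's EXIF tags as a name -> value dictionary."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = exif_summary("photo.jpg")
# A 'Software' entry naming an editor, or a DateTime that conflicts with the
# original capture tags, can indicate post-processing.
print(tags.get("Software"), tags.get("DateTime"), tags.get("Model"))
```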


Author(s):  
Lemcia Hutajulu ◽  
Hery Sunandar ◽  
Imam Saputra

Cryptography is used to protect the contents of information from anyone except those who have the authority or the secret key to open the encoded information. Along with the development of technology and computers, computer crime has also increased, especially image manipulation. There are many ways people manipulate images to the detriment of others. The originality of a digital image is the authenticity of the image in terms of colors, shapes, objects, and information, without the slightest change by another party. Nowadays many digital images circulating on the internet have been manipulated, and images have even been used for fraud in competitions, so a method is needed that can detect whether an image is genuine or fake. In this study, the authors used the MD4 and SHA-384 methods to detect the originality of digital images; with these methods, an image of doubtful authenticity can be verified as authentic or fake.
Keywords: Originality, Image, MD4 and SHA-384
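A minimal sketch of the hash-based originality check described above: compute a digest of the reference image and compare it with the digest of the image under test. SHA-384 is built into hashlib; MD4 is only available if the underlying OpenSSL build still provides it, and the file names are assumptions.

```python
import hashlib

def image_digest(path, algorithm="sha384"):
    """Hash an image file in chunks and return its hex digest."""
    h = hashlib.new(algorithm)              # "md4" works only on some OpenSSL builds
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

reference = image_digest("original.png")    # digest stored for the authentic image
suspect = image_digest("downloaded.png")    # image of doubtful authenticity
print("authentic" if reference == suspect else "modified")
```

Any single-bit change to the image file changes the digest, which is what makes the comparison usable as an originality check.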


2021 ◽  
Vol 10 (2) ◽  
pp. 85
Author(s):  
Juan Reinoso-Gordo ◽  
Antonio Gámiz-Gordo ◽  
Pedro Barrero-Ortega

Suitable graphic documentation is essential to ascertain and conserve architectural heritage. For the first time, accurate digital images are provided of a 16th-century wooden ceiling, composed of geometric interlacing patterns, in the Pinelo Palace in Seville. Today, this ceiling suffers from significant deformation. Although there are many publications on the digital documentation of architectural heritage, no graphic studies of this type of deformed ceiling have been presented. This study starts by providing data on the palace history concerning the design of geometric interlacing patterns in carpentry according to the 1633 book by López de Arenas, and on the ceiling consolidation in the 20th century. Images were then obtained using two complementary procedures: from a 3D laser scanner, which offers metric data on deformations; and from photogrammetry, which facilitates the visualisation of details. In this way, this type of heritage is documented in an innovative graphic approach, which is essential for its conservation and/or restoration with scientific foundations and also to disseminate a reliable digital image of the most beautiful ceiling of this Renaissance palace in southern Europe.


2020 ◽  
Vol 30 (1) ◽  
pp. 240-257
Author(s):  
Akula Suneetha ◽  
E. Srinivasa Reddy

In the data collection phase, digital images are captured using sensors that are often contaminated by noise (undesired random signal). In digital image processing, enhancing image quality and reducing noise are central tasks. A good denoising method preserves image edges to a high extent while smoothing flat regions. Several adaptive filters (median filter, Gaussian filter, fuzzy filter, etc.) have been utilized to improve the smoothness of digital images, but these filters fail to preserve image edges while removing noise. In this paper, a modified fuzzy set filter is proposed to eliminate noise and restore the digital image. Usually, in a fuzzy set filter, sixteen fuzzy rules are generated to find the noisy pixels in the digital image. In the modified fuzzy set filter, a set of twenty-four fuzzy rules is generated, with four additional pixel locations, for determining the noisy pixels in the digital image. The eight additional fuzzy rules ease the process of deciding whether an image pixel requires averaging or not. In this scenario, the input digital images were collected from an underwater photography fish dataset. The efficiency of the modified fuzzy set filter was evaluated under varying degrees of Gaussian noise (0.01, 0.03, and 0.1 noise levels). For performance evaluation, Structural Similarity (SSIM), Mean Structural Similarity (MSSIM), Mean Square Error (MSE), Normalized Mean Square Error (NMSE), Universal Image Quality Index (UIQI), Peak Signal to Noise Ratio (PSNR), and Visual Information Fidelity (VIF) were used. The experimental results showed that the modified fuzzy set filter improved the PSNR value by up to 2-3 dB, MSSIM by up to 0.12-0.03, and the NMSE value by up to 0.38-0.1 compared to traditional filtering techniques.
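A minimal sketch of the evaluation protocol described above: add Gaussian noise at the stated variances, denoise, and report PSNR and SSIM. The median filter here is only a simple baseline standing in for the authors' modified fuzzy set filter, and the scikit-image camera test image is an assumption in place of the underwater fish dataset.

```python
from scipy.ndimage import median_filter
from skimage import data, img_as_float
from skimage.util import random_noise
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = img_as_float(data.camera())            # placeholder grayscale image in [0, 1]

for var in (0.01, 0.03, 0.1):                  # noise levels reported in the paper
    noisy = random_noise(clean, mode="gaussian", var=var)
    denoised = median_filter(noisy, size=3)    # baseline filter, not the proposed one
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
    ssim = structural_similarity(clean, denoised, data_range=1.0)
    print(f"var={var}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```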

