International Journal of Image and Graphics
Latest Publications


TOTAL DOCUMENTS: 812 (FIVE YEARS: 210)

H-INDEX: 20 (FIVE YEARS: 2)

Published by World Scientific

ISSN: 1793-6756, 0219-4678

Author(s):  
K. Praveen Kumar ◽  
C. Venkata Narasimhulu ◽  
K. Satya Prasad

Restoring a degraded image during image analysis typically requires many iterations; these iterations cause long waiting times and slow scanning, resulting in inefficient image restoration. In fact, a small number of measurements is enough to recover an image in good condition. Owing to tree sparsity, a 2D wavelet tree reduces both the number of coefficients and the number of iterations needed to restore the degraded image. The wavelet coefficients are extracted, with overlap, into low and high sub-band spaces and ordered so that they decompose along a tree-ordered structured path. Several articles have addressed the problems of tree sparsity and total variation (TV), but few have exploited the benefits of tree sparsity. In this paper, a tree-order-based spatial variation regularization algorithm is implemented that adapts the window size and variation estimators to reduce the loss of image information and to address the image smoothing problem. The acceptance rate of the tree-structured path relies on local variation estimators to regularize the performance parameters and update them to restore the image. To this end, a Localized Total Variation (LTV) method is proposed and implemented on the 2D wavelet tree-ordered structured path, based on the proposed image smoothing adjustment scheme. Finally, a reordering algorithm is proposed to reorder the set of pixels and increase the reliability of the restored image. Simulation results show that the proposed method improves performance compared to existing image restoration methods.
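
A minimal sketch of local-variation-driven shrinkage on 2D wavelet detail coefficients, in the spirit of the LTV idea described above. The wavelet, window size and threshold rule (lam / (1 + sigma)) are illustrative assumptions, not the authors' tree-ordering algorithm.

```python
# Sketch: soft-threshold each wavelet detail band using a windowed
# local-variation estimate, shrinking less where local detail is strong.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_variation_shrink(img, wavelet="db2", level=2, window=5, lam=0.1):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                            # keep the approximation band
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:                      # horizontal, vertical, diagonal
            mean = uniform_filter(band, window)
            var = uniform_filter(band ** 2, window) - mean ** 2
            sigma = np.sqrt(np.maximum(var, 0))  # local variation estimator
            thresh = lam / (1.0 + sigma)         # weaker shrinkage where variation is high
            shrunk.append(np.sign(band) * np.maximum(np.abs(band) - thresh, 0))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```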


Author(s):  
Abdelali Elmoufidi ◽  
Ayoub Skouta ◽  
Said Jai-Andaloussi ◽  
Ouail Ouchetto

In the area of ophthalmology, glaucoma affects an increasing number of people and is a major cause of blindness. Early detection helps avoid severe ocular complications such as cystoid macular edema or proliferative diabetic retinopathy. Artificial intelligence has been shown to be beneficial for glaucoma assessment. In this paper, we describe an approach to automating glaucoma diagnosis using fundus images. The proposed framework proceeds as follows: the Bi-dimensional Empirical Mode Decomposition (BEMD) algorithm is applied to decompose the Regions of Interest (ROI) into components (BIMFs + residue); the CNN architecture VGG19 is used to extract features from the decomposed BEMD components; and the features of the same ROI are fused into a bag of features. Because these bags are very long, Principal Component Analysis (PCA) is used to reduce the feature dimensionality. The resulting bags of features are the inputs to a classifier based on the Support Vector Machine (SVM). To train the models, we used two public datasets, ACRIMA and REFUGE. For testing, we used parts of ACRIMA and REFUGE plus four other public datasets: RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF. Using the model trained on REFUGE, an overall precision of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% is obtained on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF datasets, respectively. Likewise, an accuracy of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% is obtained on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF datasets, respectively, using the model trained on ACRIMA. The experimental results obtained on different datasets demonstrate the efficiency and robustness of the proposed approach, and a comparison with recent work in the literature shows that our proposal represents a significant advance.
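
A minimal sketch of the feature-extraction, PCA and SVM stages, using torchvision's VGG19 as the backbone (assuming torchvision >= 0.13 for the weights argument); the BEMD decomposition, datasets and hyper-parameters are placeholders rather than the authors' configuration.

```python
# Sketch: VGG19 convolutional features per ROI component, then PCA + SVM.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import SVC

vgg = models.vgg19(weights="IMAGENET1K_V1")          # ImageNet weights as a stand-in
backbone = torch.nn.Sequential(vgg.features, torch.nn.Flatten())
backbone.eval()

def extract(batch):                                  # batch: (N, 3, 224, 224) tensor of ROIs
    with torch.no_grad():
        return backbone(batch).numpy()               # one feature vector per component

# X_train: stacked features of the BEMD components of each ROI (the "bag of features")
# y_train: glaucoma / healthy labels
# pca = PCA(n_components=128).fit(X_train)
# clf = SVC(kernel="rbf").fit(pca.transform(X_train), y_train)
```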


Author(s):  
Saorabh Kumar Mondal ◽  
Arpitam Chatterjee ◽  
Bipan Tudu

Image contrast enhancement (CE) is a frequent requirement in diverse applications. Histogram equalization (HE), in its conventional form and its many improved variants, is a popular contrast enhancement technique. However, the conventional as well as many later versions of HE often cause loss of original image characteristics, particularly the brightness distribution of the original image, which results in an artificial appearance and feature loss in the enhanced image. Discrete Cosine Transform (DCT) coefficient mapping is one of the more recent methods for minimizing such problems while enhancing contrast, and tuning of the DCT parameters plays a crucial role in avoiding saturation of pixel values. Optimization is a possible way to address this problem and generate a contrast-enhanced image that preserves the desired original image characteristics. Biologically inspired optimization techniques have shown remarkable improvement over conventional optimization techniques in many complex engineering problems, and gray wolf optimization (GWO) is a comparatively new algorithm in this domain that has shown promising potential. In this work, GWO is applied to tune the DCT parameters, and the objective function is formulated using different parameters so as to retain the original image characteristics. Objective evaluation against CEF, PCQI, FSIM, BRISQUE and NIQE, with test images from three standard databases (SIPI, TID and CSIQ), shows that the presented method can reach values of up to 1.4, 1.4, 0.94, 19 and 4.18, respectively, for the stated metrics, which is competitive with the reported conventional and improved techniques. This paper can be considered a first-time application of GWO to DCT-based image CE.
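
As a compact illustration, the sketch below uses a small GWO loop to tune a single DCT gain factor for contrast enhancement. The fitness is a simple global-contrast proxy (standard deviation of the enhanced image), not the paper's multi-term objective, and the population settings are arbitrary.

```python
# Sketch: grey wolf optimization of one DCT scaling parameter.
import numpy as np
from scipy.fftpack import dct, idct

def enhance(img, k):
    c = dct(dct(img.astype(float), axis=0, norm="ortho"), axis=1, norm="ortho")
    dc = c[0, 0]
    c = c * k                       # scale all DCT coefficients
    c[0, 0] = dc                    # restore the DC term to preserve mean brightness
    out = idct(idct(c, axis=1, norm="ortho"), axis=0, norm="ortho")
    return np.clip(out, 0, 255)

def fitness(img, k):
    return enhance(img, k).std()    # contrast proxy; a placeholder objective

def gwo_tune(img, n_wolves=8, iters=30, k_min=1.0, k_max=2.0):
    wolves = np.random.uniform(k_min, k_max, n_wolves)
    for t in range(iters):
        scores = np.array([fitness(img, w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(-scores)[:3]]   # three best wolves
        a = 2 - 2 * t / iters                                  # decreases from 2 to 0
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(2)
                A, C = 2 * a * r1 - a, 2 * r2
                cand.append(leader - A * abs(C * leader - wolves[i]))
            wolves[i] = np.clip(np.mean(cand), k_min, k_max)
    scores = np.array([fitness(img, w) for w in wolves])
    return wolves[np.argmax(scores)]                           # best gain factor found
```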


Author(s):  
T. Satish Kumar ◽  
S. Jothilakshmi ◽  
Batholomew C. James ◽  
M. Prakash ◽  
N. Arulkumar ◽  
...  

In the present digital era, with the widespread use of medical technologies and the massive generation of medical data from different imaging modalities, adequate storage, management, and transmission of biomedical images necessitate image compression techniques. Vector quantization (VQ) is an effective image compression approach, and the most widely employed VQ technique is Linde–Buzo–Gray (LBG), which generates locally optimal codebooks for image compression. Codebook construction is treated as an optimization problem solved using metaheuristic optimization techniques. In this view, this paper designs an effective biomedical image compression technique for the cloud computing (CC) environment using a Harris Hawks Optimization (HHO)-based LBG technique. The HHO-LBG algorithm achieves a smooth transition between exploration and exploitation. To investigate the performance of the HHO-LBG technique, an extensive set of simulations was carried out on benchmark biomedical images. The proposed HHO-LBG technique achieves promising results in terms of compression performance and reconstructed image quality.
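
For reference, a minimal LBG-style codebook refinement over 4x4 image blocks is sketched below; the HHO-driven codebook search is not reproduced, and the block size, codebook size and iteration count are arbitrary choices.

```python
# Sketch: baseline LBG (k-means-style) codebook construction for VQ compression.
import numpy as np

def lbg_codebook(img, block=4, size=64, iters=20):
    h, w = img.shape
    vecs = (img[:h - h % block, :w - w % block]
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2).reshape(-1, block * block).astype(float))
    book = vecs[np.random.choice(len(vecs), size, replace=False)]  # initial codewords
    for _ in range(iters):
        d = ((vecs[:, None, :] - book[None, :, :]) ** 2).sum(-1)   # block-to-codeword distances
        idx = d.argmin(1)                                          # nearest-codeword assignment
        for k in range(size):
            if np.any(idx == k):
                book[k] = vecs[idx == k].mean(0)                   # centroid update
    return book, idx                                               # codebook and block indices
```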


Author(s):  
M. S. Lohith ◽  
Yoga Suhas Kuruba Manjunath ◽  
M. N. Eshwarappa

Biometrics is an active area of research because of the increasing need for accurate person identification in numerous applications ranging from entertainment to security. Unimodal and multimodal methods are the two well-known categories of biometrics. Unimodal biometrics uses a single biometric modality for person identification; its performance is degraded by limitations such as intra-class variation and non-universality. Multimodal biometrics identifies a person using more than one biometric modality and has gained interest because of its resistance to spoofing attacks and its higher recognition rate. Conventional feature extraction methods have difficulty engineering features that are robust to variations such as illumination, pose and age. Feature extraction using a convolutional neural network (CNN) can overcome these difficulties, because a large dataset containing such variations can be used for training, allowing the CNN to learn them. In this paper, we propose multimodal biometrics with feature-level horizontal fusion of the face, ear and periocular region modalities, applying a deep CNN for feature representation, and we also propose a face, ear and periocular region dataset that is robust to intra-class variation. The system is evaluated on the proposed database. Accuracy, precision, recall and F1 score are calculated to evaluate the performance of the system, which shows remarkable improvement over existing biometric systems.
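
A minimal sketch of the feature-level (horizontal) fusion step: CNN embeddings of the face, ear and periocular crops are concatenated into a single vector before classification. The untrained ResNet-18 backbone and the input sizes are generic stand-ins, not the authors' network.

```python
# Sketch: concatenate per-modality CNN embeddings (horizontal fusion).
import torch
import torchvision.models as models

backbone = models.resnet18()                 # stand-in; trained weights would be loaded in practice
backbone.fc = torch.nn.Identity()            # expose the 512-d embedding
backbone.eval()

def fused_feature(face, ear, periocular):    # each: (1, 3, 224, 224) tensor
    with torch.no_grad():
        parts = [backbone(x) for x in (face, ear, periocular)]
    return torch.cat(parts, dim=1)           # horizontal concatenation -> (1, 1536)
```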


Author(s):  
Mummadi Gowthami Reddy ◽  
Palagiri Veera Narayana Reddy ◽  
Patil Ramana Reddy

In the current era of technological development, medical imaging plays an important role in many applications of medical diagnosis and therapy. In this regard, medical image fusion can be a powerful tool for combining multi-modal images using image processing techniques. However, conventional approaches fail to provide effective image quality assessment and robustness of the fused image. To overcome these drawbacks, this work proposes a three-stage multiscale decomposition (TSMSD) approach using pulse-coupled neural networks with adaptive arguments (PCNN-AA) for multi-modal medical image fusion. First, the nonsubsampled shearlet transform (NSST) is applied to the source images to decompose them into low-frequency and high-frequency bands. Then, the low-frequency bands of the two source images are fused using nonlinear anisotropic filtering with the discrete Karhunen–Loeve transform (NLAF-DKLT) methodology. Next, the high-frequency bands obtained from NSST are fused using the PCNN-AA approach. The fused low-frequency and high-frequency bands are then reconstructed using NSST reconstruction. Finally, a band fusion rule algorithm with pyramid reconstruction is applied to obtain the final fused medical image. Extensive simulations demonstrate the superiority of the proposed TSMSD with PCNN-AA approach over state-of-the-art medical image fusion methods in terms of fusion quality metrics such as entropy (E), mutual information (MI), mean (M), standard deviation (STD), correlation coefficient (CC) and computational complexity.
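
As a simplified skeleton of the band-wise fusion idea, the sketch below substitutes a wavelet decomposition for NSST, averages the low-frequency bands and keeps the maximum-absolute coefficient in the high-frequency bands; the paper's NLAF-DKLT and PCNN-AA fusion rules are not reproduced.

```python
# Sketch: multiscale decomposition, band-wise fusion rules, reconstruction.
import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]                        # low-frequency rule: average
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))     # high-frequency rule: max-absolute
    return pywt.waverec2(fused, wavelet)
```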


Author(s):  
Jahnavi Yeturu ◽  
Poongothai Elango ◽  
S. P. Raja ◽  
P. Nagendra Kumar

Genetics is the clinical study of congenital mutation; the principal benefit of analyzing human genetic mutations is the exploration, analysis, interpretation and description of the transmitted and inherited genetic effects of several diseases such as cancer, diabetes and heart disease. Cancer is the most troublesome of these, as the proportion of cancer sufferers is growing massively. Identifying and discriminating the mutations that contribute to tumor growth from neutral mutations is difficult, since the majority of cancer tumors harbor genetic mutations. Genetic mutations are systematized and categorized to classify the cancer by way of medical observations and clinical studies. At present, genetic mutations are annotated, and these interpretations are accomplished either manually or using existing primary algorithms. Evaluation and classification of each individual genetic mutation is predicated on evidence from documented content in the medical literature; consequently, classifying genetic mutations on the basis of clinical evidence remains a challenging task. In this work, one-hot encoding is used to derive features from genes and their variations, and TF-IDF is used to extract features from the clinical text data. To increase the classification accuracy, machine learning algorithms such as support vector machine, logistic regression and Naive Bayes are evaluated, and a stacking model classifier has been developed to increase the accuracy further. The proposed stacking model classifier obtains log losses of 0.8436 and 0.8572 on the cross-validation and test data sets, respectively. The experiments show that the proposed stacking model classifier outperforms the existing algorithms in terms of log loss; since a lower log loss indicates a better model, reducing the log loss to below 1 demonstrates its effectiveness. The performance of these algorithms is gauged using measures such as multi-class log loss.
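
A minimal scikit-learn sketch of the described pipeline: one-hot encoded gene/variation columns plus TF-IDF features of the clinical text feeding a stacking classifier. The column names, base learners and hyper-parameters are placeholders, not the exact configuration used in the paper.

```python
# Sketch: feature pipeline and stacking classifier for mutation classification.
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

features = ColumnTransformer([
    ("gene", OneHotEncoder(handle_unknown="ignore"), ["Gene", "Variation"]),  # hypothetical column names
    ("text", TfidfVectorizer(max_features=5000), "Text"),
])
stack = StackingClassifier(
    estimators=[("nb", MultinomialNB()), ("svm", LinearSVC())],
    final_estimator=LogisticRegression(max_iter=1000),
)
model = Pipeline([("features", features), ("stack", stack)])
# model.fit(train_df, train_df["Class"]); the multi-class log loss is then
# sklearn.metrics.log_loss(y_test, model.predict_proba(test_df)).
```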


Author(s):  
Joycy K. Antony ◽  
K. Kanagalakshmi

Images captured in dim light are rarely satisfactory, and increasing the ISO setting for a short exposure duration makes them noisy. Image restoration methods have a wide range of applications in medical imaging, computer vision, remote sensing, and graphic design. Although the use of flash improves the lighting, it changes the image tone and introduces unwanted highlights and shadows. These drawbacks are overcome using image restoration methods that recover a high-quality image from the degraded observation. The main challenge in image restoration is recovering a degraded image contaminated with noise. In this research, an effective algorithm, named the T2FRF filter, is developed for image restoration. Noisy pixels are identified in the input fingerprint image using a Deep Convolutional Neural Network (Deep CNN) trained on the neighboring pixels. The Rider Optimization Algorithm (ROA) is used to remove the noisy pixels, and pixel enhancement is performed using a type II fuzzy system. The developed T2FRF filter is evaluated using metrics such as the correlation coefficient and Peak Signal to Noise Ratio (PSNR). Compared with existing image restoration methods, the developed method obtains a maximum correlation coefficient of 0.7504 and a maximum PSNR of 28.2467 dB.
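
For reference, the two evaluation metrics named above, written as straightforward NumPy helpers; the peak value of 255 assumes 8-bit images.

```python
# Sketch: PSNR and correlation coefficient between a reference and a restored image.
import numpy as np

def psnr(ref, restored, peak=255.0):
    mse = np.mean((ref.astype(float) - restored.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def correlation_coefficient(ref, restored):
    return np.corrcoef(ref.ravel(), restored.ravel())[0, 1]
```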


Author(s):  
Marcos José Canêjo ◽  
Carlos Alexandre Barros de Mello

Edge detection is a major step in several computer vision applications; edges define the shapes of objects to be used, for example, in a recognition system. In this work, we introduce an approach to edge detection inspired by a challenge for artists: the Speed Drawing Challenge. In this challenge, a person is asked to draw the same figure within different time limits (such as 10 min, 1 min and 10 s); for each time limit, the artist draws a different level of detail, and in the shortest time only the major elements remain. This work proposes a new approach for producing images with different amounts of edges representing different levels of relevance. Our method uses superpixels to suppress image detail, followed by the Globalized Probability of Boundary (gPb) and Canny edge detection algorithms, to create images containing different numbers of edges. An edge analysis step then determines which edges are the most relevant to the scene. Results are presented for the BSDS500 dataset and compared quantitatively and qualitatively with other edge and contour detection algorithms, with very satisfactory results.
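
A minimal sketch of the detail-suppression idea: SLIC superpixels replace each region with its mean colour before Canny edge detection, so fewer superpixels yield coarser edge maps. The gPb stage and the edge-relevance analysis are omitted, and the segment count and Canny sigma are arbitrary.

```python
# Sketch: superpixel-based detail suppression followed by Canny edges.
import numpy as np
from skimage import img_as_float
from skimage.segmentation import slic
from skimage.color import rgb2gray
from skimage.feature import canny

def coarse_edges(img_rgb, n_segments=300, sigma=1.5):
    img = img_as_float(img_rgb)
    labels = slic(img, n_segments=n_segments)
    smoothed = np.zeros_like(img)
    for lab in np.unique(labels):
        smoothed[labels == lab] = img[labels == lab].mean(axis=0)  # mean colour per superpixel
    return canny(rgb2gray(smoothed), sigma=sigma)
```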


Author(s):  
Anchal Kumawat ◽  
Sucheta Panda

Often in practice, the acquired image is degraded during acquisition by factors such as noise, motion blur, camera mis-focus and atmospheric turbulence, rendering it unsuitable for further analysis or processing. To improve the quality of such degraded images, a double hybrid restoration filter is proposed that operates on two identical sets of input images and fuses the output images into a unified result, in combination with the concept of image fusion. The first image set is processed by applying deconvolution using the Wiener filter (DWF) twice and decomposing the output image using the Discrete Wavelet Transform (DWT). The second image set is processed simultaneously by applying deconvolution using the Lucy–Richardson filter (DLR) twice, followed by the same procedure. The proposed filter gives better performance than the DWF and DLR filters for both blurry and noisy images. It is compared with standard deconvolution algorithms and with state-of-the-art restoration filters using seven image quality assessment parameters. Simulation results confirm the success of the proposed algorithm, and the visual and quantitative results are very impressive.
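
An illustrative skeleton of the two deconvolution branches and a wavelet fusion step, using scikit-image's Wiener and Richardson-Lucy deconvolution with an assumed point spread function (psf) and float images in [0, 1]; the fusion rule and all parameters are simple stand-ins for the paper's exact settings.

```python
# Sketch: DWF branch, DLR branch (each applied twice), then DWT-domain fusion.
import numpy as np
import pywt
from skimage.restoration import wiener, richardson_lucy

def hybrid_restore(degraded, psf, balance=0.1, rl_iters=15):
    a = wiener(wiener(degraded, psf, balance), psf, balance)         # Wiener deconvolution, twice
    b = richardson_lucy(richardson_lucy(degraded, psf, rl_iters),
                        psf, rl_iters)                               # Lucy-Richardson, twice
    ca, cb = pywt.wavedec2(a, "db2", level=2), pywt.wavedec2(b, "db2", level=2)
    fused = [(ca[0] + cb[0]) / 2]                                    # average approximation bands
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))                 # max-absolute detail bands
    return pywt.waverec2(fused, "db2")
```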

