A Novel Approach for Colorization of a Grayscale Image using Soft Computing Techniques

Author(s):  
Abul Hasnat ◽  
Santanu Halder ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri

Colorization of a grayscale image is the process of converting a grayscale image into a color one. Few research works on this have been reported in the literature, and there is hardly any generalized method that successfully colorizes all types of grayscale images. This study proposes a novel grayscale image colorization method using a reference color image. It takes the grayscale image and the type of the query image as input. First, it selects a reference image from a color image database by matching the histogram index of the query image against the histogram indices of the luminance channels of color images of the respective type. Once the reference image is selected, four features are extracted for each pixel of the luminance channel of the reference image. These extracted features as input and the chrominance blue (Cb) value as target form the training dataset for the Cb channel; a training dataset for the chrominance red (Cr) channel is formed similarly. These features of the reference image and the associated chrominance values are used to train two artificial neural networks (ANNs), one for the Cb and one for the Cr channel. Then, for each pixel of the query image, the same four features are extracted and fed to the trained ANNs to predict the chrominance values of the query image. The predicted chrominance values, together with the original luminance values of the query image, are used to construct the colorized image. Experiments were conducted on images collected from standard image databases (FRAV2D, UCID.v2) and on images captured with a standard digital camera. These images were first converted into grayscale images, and the colorization method was then applied. For performance evaluation, the PSNR between the original color image and the newly colorized image was calculated; the results show that the proposed method colorizes better than recently reported methods in the literature.
Besides this, a "Colorization Turing test" was conducted, asking human subjects to choose the image closest to the original color image from among the images colorized with the proposed algorithm and with recently reported methods. In 80% of cases the image colorized with the proposed method was selected.

2018 ◽  
pp. 886-904
Author(s):  
Abul Hasnat ◽  
Santanu Halder ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri

The proposed work is a novel grayscale face image colorization approach using a reference color face image. It takes a reference color image that presumably contains color information semantically similar to the query grayscale image and colorizes the grayscale face image with the help of that reference. In this novel patch-based colorization, the system searches the reference color image for a suitable patch for each patch of the grayscale image to be colorized. An exhaustive patch search in the reference color image takes too much time, making the colorization process too slow for real-time applications, so Particle Swarm Optimization (PSO) is used to reduce the patch-search time. The proposed method was successfully applied to 150 male and female face images from the FRAV2D database. A "Colorization Turing test" was conducted, asking human subjects to choose the image closest to the original color image between the images colorized with the proposed algorithm and with recent methods; in most cases the image colorized with the proposed method was selected.
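The PSO patch search can be sketched as follows. This is a hedged illustration, not the paper's implementation: particles are (x, y) top-left corners of candidate patches in the reference luminance, fitness is the sum of squared differences against the query patch, and the inertia/cognitive/social weights (0.7, 1.5, 1.5) are conventional defaults rather than values reported by the authors.

```python
import numpy as np

def patch_cost(ref_lum, query_patch, x, y):
    """Sum of squared differences between the query patch and the reference
    luminance patch whose top-left corner is (x, y)."""
    p = query_patch.shape[0]
    cand = ref_lum[y:y + p, x:x + p]
    return float(((cand - query_patch) ** 2).sum())

def pso_patch_search(ref_lum, query_patch, n_particles=20, iters=40, seed=0):
    """Particle swarm search for a well-matching patch position, avoiding an
    exhaustive scan of every (x, y) in the reference image."""
    rng = np.random.default_rng(seed)
    p = query_patch.shape[0]
    hi = np.array([ref_lum.shape[1] - p, ref_lum.shape[0] - p], dtype=float)
    pos = rng.uniform(0, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([patch_cost(ref_lum, query_patch, int(x), int(y))
                           for x, y in pos])
    g = pbest[pbest_cost.argmin()].copy()
    g_cost = pbest_cost.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 2))
        # Standard PSO velocity update: inertia + pull toward personal/global best.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, hi)
        for i, (x, y) in enumerate(pos):
            c = patch_cost(ref_lum, query_patch, int(x), int(y))
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i], c
                if c < g_cost:
                    g, g_cost = pos[i].copy(), c
    return int(g[0]), int(g[1]), g_cost

ref = np.random.default_rng(2).random((64, 64))
query = ref[10:18, 30:38].copy()        # a patch known to exist in the reference
bx, by, cost = pso_patch_search(ref, query)
```

Each grayscale patch would then borrow the chrominance of the patch found at (bx, by), at a cost of n_particles × iters evaluations instead of one per reference position.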


2017 ◽  
Vol 1 (1) ◽  
pp. 72-89 ◽  
Author(s):  
Abul Hasnat ◽  
Santanu Halder ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri

The proposed work is a novel grayscale face image colorization approach using a reference color face image. It takes a reference color image that presumably contains color information semantically similar to the query grayscale image and colorizes the grayscale face image with the help of that reference. In this novel patch-based colorization, the system searches the reference color image for a suitable patch for each patch of the grayscale image to be colorized. An exhaustive patch search in the reference color image takes too much time, making the colorization process too slow for real-time applications, so Particle Swarm Optimization (PSO) is used to reduce the patch-search time. The proposed method was successfully applied to 150 male and female face images from the FRAV2D database. A "Colorization Turing test" was conducted, asking human subjects to choose the image closest to the original color image between the images colorized with the proposed algorithm and with recent methods; in most cases the image colorized with the proposed method was selected.


2021 ◽  
Author(s):  
Rudy Venguswamy ◽  
Mike Levy ◽  
Anirudh Koul ◽  
Satyarth Praveen ◽  
Tarun Narayanan ◽  
...  

Machine learning modeling for Earth events at NASA is often limited by the availability of labeled examples. For example, training classifiers for forest fires or oil spills from satellite imagery requires curating a massive and diverse dataset of example forest fires, a tedious multi-month effort requiring careful review of over 196.9 million square miles of data per day for 20 years. While such images might exist in abundance within 40 petabytes of unlabeled satellite data, finding these positive examples to include in a training dataset for a machine learning model is extremely time-consuming and requires researchers to "hunt" for positive examples, like finding a needle in a haystack.

We present a no-code open-source tool, Curator, whose goal is to minimize the amount of manual image labeling needed to achieve a state-of-the-art classifier. The pipeline, purpose-built to take advantage of the massive amount of unlabeled images, consists of (1) self-supervised training to convert unlabeled images into meaningful representations, (2) search-by-example to collect a seed set of images, and (3) human-in-the-loop active learning to iteratively ask for labels on uncertain examples and train on them.

In step 1, a model capable of representing unlabeled images meaningfully is trained with a self-supervised algorithm (such as SimCLR) on a random subset of the dataset that conforms to the researchers' specified "training budget." Since real-world datasets are often imbalanced, leading to suboptimal models, the initial model is used to generate embeddings on the entire dataset, and images with equidistant embeddings are then sampled. This iterative training and resampling strategy improves both the balance of the training data and the model at every iteration.
In step 2, researchers supply an example image of interest, and the embedding generated from this image is used to find other images whose embeddings are near the reference image's embedding in Euclidean space (hence images that look similar to the query image). These candidate images contain a higher density of positive examples and are annotated manually as a seed set. In step 3, the seed labels are used to train a classifier that identifies more candidate images for human inspection via active learning. In each classification training loop, candidate images for labeling are sampled from the larger unlabeled dataset based on the images the model is most uncertain about (p ≈ 0.5).

Curator is released as an open-source package built on PyTorch-Lightning. The pipeline uses GPU-based transforms from the NVIDIA DALI package for augmentation, leading to a 5-10x speed-up in self-supervised training, and is run from the command line.

By iteratively training a self-supervised model and a classifier in tandem with manual human annotation, this pipeline is able to unearth more positive examples from severely imbalanced datasets that were previously untrainable with self-supervision algorithms alone. In applications such as detecting wildfires or atmospheric dust, or turning outward with telescopic surveys, increasing the number of positive candidates presented to humans for manual inspection increases the efficacy of classifiers and multiplies the efficiency of researchers' data curation efforts.
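Steps 2 and 3 reduce to two small selection rules over embeddings and classifier probabilities, sketched below with synthetic stand-ins for the self-supervised embeddings and classifier outputs (the function names are illustrative, not Curator's API).

```python
import numpy as np

def search_by_example(embeddings, query_emb, k=5):
    """Step 2: rank the unlabeled set by Euclidean distance to the query
    embedding and return the indices of the k nearest images."""
    d = np.linalg.norm(embeddings - query_emb, axis=1)
    return np.argsort(d)[:k]

def uncertainty_sample(probs, k=5):
    """Step 3: pick the k images whose predicted positive-class probability
    is closest to 0.5, i.e. the ones the classifier is least sure about."""
    return np.argsort(np.abs(probs - 0.5))[:k]

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))   # stand-in for self-supervised embeddings
seed_idx = search_by_example(emb, emb[7], k=5)   # query with image 7 itself

probs = rng.random(100)            # stand-in classifier probabilities
to_label = uncertainty_sample(probs, k=5)        # sent to humans for labels
```

In the real pipeline, `seed_idx` would be annotated manually as the seed set, and `to_label` would be the batch sent for human labels in each active-learning loop.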


2019 ◽  
Vol 2019 (1) ◽  
pp. 95-98
Author(s):  
Hans Jakob Rivertz

In this paper we give a new method to find a grayscale image from a color image. The idea is that the structure tensors of the grayscale image and the color image should be as equal as possible, measured by the energy of the tensor differences. We deduce an Euler-Lagrange equation and a second variational inequality; the second variational inequality is remarkably simple in form. Our equation does not involve several steps, such as first finding a gradient and then integrating it. We show that even if a color image is at least twice continuously differentiable, the resulting grayscale image is not necessarily twice continuously differentiable.
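The quantity being minimized can be sketched numerically. As an assumption for illustration, the color image's structure tensor is taken as the channel-summed (Di Zenzo-style) tensor; the paper's exact formulation may differ.

```python
import numpy as np

def structure_tensor(channel):
    """2x2 structure tensor field grad(u) grad(u)^T of one channel, stored as
    the four components (gx*gx, gx*gy, gy*gx, gy*gy) in the last axis."""
    gy, gx = np.gradient(channel)
    return np.stack([gx * gx, gx * gy, gx * gy, gy * gy], axis=-1)

def tensor_energy(color, gray):
    """Energy of the difference between the color image's structure tensor
    (summed over channels -- an assumption here) and the grayscale image's
    tensor: the discrete analogue of the functional minimized above."""
    J_color = sum(structure_tensor(color[..., c]) for c in range(color.shape[-1]))
    J_gray = structure_tensor(gray)
    return float(((J_color - J_gray) ** 2).sum())

# Sanity check: if each channel is gray / sqrt(3), the channel-summed tensor
# equals the grayscale tensor, so the energy vanishes.
rng = np.random.default_rng(0)
gray = rng.random((32, 32))
color = np.stack([gray / np.sqrt(3)] * 3, axis=-1)
e = tensor_energy(color, gray)
```

A gradient-descent conversion method would drive `gray` to reduce this energy directly, rather than first estimating a target gradient field and then integrating it.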


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4084
Author(s):  
Xin-Yu Zhao ◽  
Li-Jing Li ◽  
Lei Cao ◽  
Ming-Jie Sun

Digital cameras obtain color information of the scene using a chromatic filter, usually a Bayer filter, overlaid on a pixelated detector. However, the periodic arrangement of both the filter array and the detector array introduces frequency aliasing in sampling and color misregistration during the demosaicking process, which degrades image quality. Inspired by the biological structure of avian retinas, we developed a chromatic LED array with a multi-hyperuniform geometric arrangement, which exhibits irregularity on small length scales but quasi-uniformity on large scales, to suppress frequency aliasing and color misregistration in full-color image retrieval. Experiments were performed with a single-pixel imaging system using the multi-hyperuniform chromatic LED array to provide structured illumination, and a 208 fps frame rate was achieved at 32 × 32 pixel resolution. By comparing the experimental results with images captured by a conventional digital camera, we demonstrate that the proposed imaging system forms images with fewer chromatic moiré patterns and color misregistration artifacts. The concept verified here could provide insights for the design and manufacture of future bionic imaging sensors.
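The periodic sub-sampling that causes the aliasing discussed above can be made concrete with a toy RGGB Bayer mosaic: each detector pixel records only one of the three color channels on a strictly periodic grid (this sketch illustrates conventional Bayer sampling, not the proposed hyperuniform arrangement).

```python
import numpy as np

def bayer_mosaic(img):
    """Sample an RGB image (H x W x 3) through an RGGB Bayer pattern.
    Each output pixel keeps exactly one color sample; the strict 2x2
    periodicity of the pattern is what introduces frequency aliasing."""
    H, W, _ = img.shape
    mosaic = np.zeros((H, W))
    mosaic[0::2, 0::2] = img[0::2, 0::2, 0]   # R at even rows, even cols
    mosaic[0::2, 1::2] = img[0::2, 1::2, 1]   # G at even rows, odd cols
    mosaic[1::2, 0::2] = img[1::2, 0::2, 1]   # G at odd rows, even cols
    mosaic[1::2, 1::2] = img[1::2, 1::2, 2]   # B at odd rows, odd cols
    return mosaic

# For an achromatic scene (R = G = B) the mosaic reproduces the luminance
# exactly; chromatic content is what gets aliased and misregistered.
g = np.random.default_rng(0).random((8, 8))
mosaic = bayer_mosaic(np.stack([g, g, g], axis=-1))
```

Demosaicking must interpolate the two missing channels at every pixel, which is where the color misregistration artifacts arise.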


2022 ◽  
Vol 23 (1) ◽  
pp. 116-128
Author(s):  
Baydaa Khaleel

Image retrieval is an important system for retrieving similar images by searching and browsing a large database. An image retrieval system can be a reliable tool for optimizing the use of accumulated images, so finding efficient methods to retrieve images is very important. Recent decades have seen increased research interest in the field of image retrieval. To retrieve images, an important set of features is used. In this work, a combination of methods was used to examine all the images and detect images in a database matching a query image. Linear Discriminant Analysis (LDA) was used for feature extraction from the images in the dataset. The images in the database were processed by extracting their important and robust features and storing them in a feature store; likewise, the strong features were extracted for each query image. Similarity was evaluated using metaheuristic algorithms such as Cuckoo Search (CS) and Ant Colony Optimization (ACO), and an artificial neural network, the single-layer Perceptron Neural Network (PNN). Two new methods are also proposed by hybridizing PNN and CS with fuzzy logic, producing the Fuzzy Single-Layer Perceptron Neural Network (FPNN) and Fuzzy Cuckoo Search (FCS), to examine the similarity between the features of query images and those of the database images. The efficiency of the system's methods was evaluated by calculating the precision and recall of the results. The proposed FCS method outperformed the other methods (PNN, ACO, CS, and FPNN) in terms of precision and recall.
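The evaluation metric used above is standard and easy to state precisely: for one query, precision is the fraction of retrieved images that are relevant and recall is the fraction of relevant images that were retrieved. A minimal sketch (the index values are made up for illustration):

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved AND relevant| / |retrieved|;
    recall    = |retrieved AND relevant| / |relevant|."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# 4 images returned for a query, 3 images actually relevant, 2 overlap.
p, r = precision_recall(retrieved=[3, 7, 9, 12], relevant=[3, 9, 21])
```

Averaging these values over a set of queries gives the precision-recall figures by which FCS is compared against PNN, ACO, CS, and FPNN.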


In many image processing applications, a wide range of image enhancement techniques have been proposed. Many of these techniques demand a lot of critical and advanced steps, yet the resulting image perception is not satisfactory. This paper proposes a novel sharpening method with additional steps. In the first step, the color image is transformed into a grayscale image, and edge detection is applied using the Laplacian technique. The edge image is then subtracted from the original image. After performing the enhancement process, the high quality of the resulting image can be confirmed using the Tenengrad criterion. The resulting image shows improved differentiation in certain areas, as well as in dimension and depth. Histogram equalization can also be applied to adjust the image's colors.
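The core of the sharpening step described above can be sketched as follows: convolve the grayscale image with the Laplacian kernel and subtract the result from the original. This is a minimal illustration using the common 4-neighbour Laplacian; the paper's exact kernel and scaling may differ.

```python
import numpy as np

# Common 4-neighbour Laplacian kernel; one of several standard choices.
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def convolve2d(img, kernel):
    """Minimal 'same'-size 2-D convolution with edge padding (the kernel
    here is symmetric, so convolution and correlation coincide)."""
    H, W = img.shape
    k = kernel.shape[0] // 2
    pad = np.pad(img, k, mode="edge")
    out = np.zeros_like(img)
    for dy in range(kernel.shape[0]):
        for dx in range(kernel.shape[1]):
            out += kernel[dy, dx] * pad[dy:dy + H, dx:dx + W]
    return out

def laplacian_sharpen(gray):
    """Sharpen by subtracting the Laplacian edge image from the original,
    as described above; the result is clipped back to [0, 1]."""
    edges = convolve2d(gray, LAPLACIAN)
    return np.clip(gray - edges, 0.0, 1.0)

# A perfectly flat image has no edges, so sharpening leaves it unchanged.
flat = np.full((8, 8), 0.5)
sharp = laplacian_sharpen(flat)
```

Because the 4-neighbour Laplacian responds most strongly at intensity transitions, subtracting it boosts contrast exactly at edges, which is what the Tenengrad (gradient-energy) criterion then measures.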


2008 ◽  
Vol 15 (2) ◽  
pp. 203-218
Author(s):  
Luiz E. S. Oliveira ◽  
Paulo R. Cavalin ◽  
Alceu S. Britto Jr ◽  
Alessandro L. Koerich

This paper addresses the issue of detecting defects in pine wood using features extracted from grayscale images. The feature set proposed here is based on the concept of texture and is computed from co-occurrence matrices. The features provide measures of properties such as smoothness, coarseness, and regularity. Comparative experiments using a color-image-based feature set extracted from percentile histograms are carried out to demonstrate the efficiency of the proposed feature set. Two different learning paradigms, neural networks and support vector machines, and a feature selection algorithm based on multi-objective genetic algorithms were considered in our experiments. The experimental results show that, after feature selection, the grayscale-image-based feature set achieves very competitive performance on the wood defect detection problem relative to the color-image-based features.
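Co-occurrence-based texture features of the kind used above can be sketched compactly: build a normalized gray-level co-occurrence matrix (GLCM) for one pixel offset, then derive scalar texture measures from it. The three measures below (contrast, energy, homogeneity) are standard Haralick-style examples, not necessarily the paper's exact feature list.

```python
import numpy as np

def cooccurrence(gray_levels, offset=(0, 1), n_levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset.
    gray_levels: 2-D int array with values in [0, n_levels)."""
    dy, dx = offset
    H, W = gray_levels.shape
    # Pairs of pixels separated by (dy, dx), cropped so both stay in bounds.
    a = gray_levels[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
    b = gray_levels[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
    M = np.zeros((n_levels, n_levels))
    np.add.at(M, (a.ravel(), b.ravel()), 1)   # count each level pair
    return M / M.sum()

def glcm_features(M):
    """Contrast, energy, and homogeneity: texture measures of the kind that
    quantify smoothness, coarseness, and regularity."""
    i, j = np.indices(M.shape)
    contrast = ((i - j) ** 2 * M).sum()
    energy = (M ** 2).sum()
    homogeneity = (M / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

# A perfectly uniform region: all co-occurrence mass on the diagonal,
# so contrast is 0 and energy/homogeneity are maximal (1).
img = np.full((16, 16), 3, dtype=int)
contrast, energy, homogeneity = glcm_features(cooccurrence(img))
```

In practice such features are computed for several offsets and directions, concatenated per region, and passed to the classifier (here, a neural network or SVM) after feature selection.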


2019 ◽  
Author(s):  
Zied Hosni ◽  
Annalisa Riccardi ◽  
Stephanie Yerdelen ◽  
Alan R. G. Martin ◽  
Deborah Bowering ◽  
...  

Polymorphism is the capacity of a molecule to adopt different conformations or molecular packing arrangements in the solid state. This is a key property to control during pharmaceutical manufacturing because it can impact a range of properties including stability and solubility. In this study, a novel approach based on machine learning classification methods is used to predict the likelihood of an organic compound crystallising in multiple forms. A training dataset of drug-like molecules was curated from the Cambridge Structural Database (CSD) and filtered according to entries in the DrugBank database, and the number of separate forms in the CSD for each molecule was recorded. A metaclassifier was trained on this dataset to predict the expected number of crystalline forms from the compound descriptors, and the approach was then used to estimate the number of crystallographic forms for an external validation dataset. These results suggest this novel methodology can be used to predict the extent of polymorphism of new drugs or not-yet experimentally screened molecules. This promising method complements expensive ab initio methods for crystal structure prediction and, as an integral part of experimental physical form screening, may identify systems with unexplored potential.
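The metaclassifier pattern described above (combining several base learners into one prediction) can be sketched generically. Everything here is illustrative: the two-dimensional "descriptors," the binary label (one known form vs. multiple forms), and the bagged nearest-centroid base models are stand-ins, not the study's actual descriptors or model ensemble.

```python
import numpy as np

class NearestCentroid:
    """Toy base classifier: assign each sample to the nearest class centroid."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.stack([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.centroids[None], axis=2)
        return self.classes[d.argmin(axis=1)]

class BaggedNearestCentroid(NearestCentroid):
    """Same model, fitted on a bootstrap resample for ensemble diversity."""
    def __init__(self, seed):
        self.seed = seed
    def fit(self, X, y):
        idx = np.random.default_rng(self.seed).integers(0, len(X), len(X))
        return super().fit(X[idx], y[idx])

class VotingMetaclassifier:
    """Majority vote over the base classifiers' predictions."""
    def __init__(self, models):
        self.models = models
    def fit(self, X, y):
        for m in self.models:
            m.fit(X, y)
        return self
    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Toy descriptor space: class 0 (single known form) clusters near the origin,
# class 1 (multiple forms) near (5, 5).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
meta = VotingMetaclassifier([BaggedNearestCentroid(s) for s in range(3)]).fit(X, y)
pred = meta.predict(np.array([[0.0, 0.0], [5.0, 5.0]]))
```

In the study itself, the descriptors are molecular descriptors computed from CSD/DrugBank-filtered structures and the target is the recorded number of crystalline forms; the point of the sketch is only the vote-combining structure of a metaclassifier.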

