Optical color image cryptosystem based on interference principle and deep learning

Optik ◽  
2021 ◽  
pp. 168474
Author(s):  
Minxu Jin ◽  
Wenqi Wang ◽  
Xiaogang Wang
Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2258
Author(s):  
Madhab Raj Joshi ◽  
Lewis Nkenyereye ◽  
Gyanendra Prasad Joshi ◽  
S. M. Riazul Islam ◽  
Mohammad Abdullah-Al-Wadud ◽  
...  

Enhancement of cultural heritage such as historical images is crucial to safeguarding the diversity of cultures. Automated colorization of black-and-white images has been the subject of extensive research through computer vision and machine learning techniques. Our research addresses the problem of generating plausible colored photographs from ancient, historical black-and-white images of Nepal using deep learning techniques without direct human intervention. Motivated by the recent success of deep learning techniques in image processing, a feed-forward, deep Convolutional Neural Network (CNN) in combination with Inception-ResNetV2 is trained on sets of sample images using back-propagation to recognize the patterns in RGB and grayscale values. The trained neural network is then used to predict the two chroma channels, a* and b*, given the grayscale L channel of test images. The CNN vividly colorizes images with the help of a fusion layer that accounts for local as well as global features. Two objective metrics, Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), are employed for objective quality assessment between the estimated color image and its ground truth. The model is trained on a dataset we created ourselves, comprising 1.2 K historical images of old and ancient photographs of Nepal, each at 256 × 256 resolution. The loss (MSE), PSNR, and accuracy of the model are found to be 6.08%, 34.65 dB, and 75.23%, respectively. Beyond the training results, the public acceptance, or subjective validation, of the generated images is assessed by means of a user study, in which the colorization results achieve a naturalness score of 41.71%.
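The two reported quality metrics are directly related: PSNR is computed from the MSE between the estimated color image and its ground truth. A minimal numpy sketch, assuming 8-bit images (the function and toy data below are illustrative, not the authors' code):

```python
import numpy as np

def psnr(ground_truth, estimate, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) derived from the Mean Squared
    Error between a ground-truth image and a colorized estimate."""
    mse = np.mean((ground_truth.astype(np.float64)
                   - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 256x256 RGB "ground truth" and a slightly perturbed "estimate".
rng = np.random.default_rng(0)
truth = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
noise = rng.integers(-5, 6, size=truth.shape)
noisy = np.clip(truth.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(round(psnr(truth, noisy), 2))
```

Small perturbations give a high PSNR; as the estimate drifts from the ground truth, MSE grows and PSNR falls.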


Author(s):  
Mohammadreza Hajiarbabi ◽  
Arvin Agah

Human skin detection is an important and challenging problem in computer vision. Skin detection can be used as the first phase of face detection when using color images. Differences in illumination and the range of skin colors make skin detection a challenging task. Gaussian models, rule-based methods, and artificial neural networks have all been used for human skin color detection. Deep learning methods are newer learning techniques that have shown improved classification power compared to neural networks. In this paper, the authors use deep learning methods to enhance the capabilities of skin detection algorithms. Several experiments have been performed using autoencoders and different color spaces. The proposed technique is evaluated against other available methods in this domain on two color image databases. The results show that skin detection utilizing deep learning outperforms other methods such as rule-based methods, the Gaussian model, and feed-forward neural networks.
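For context, the rule-based baseline the abstract compares against typically thresholds the chroma channels of a non-RGB color space. A minimal sketch of such a rule in YCbCr; the threshold values are common literature defaults, not those used in the paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) uint8 RGB image to YCbCr (ITU-R BT.601)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Classic rule-based skin mask: a pixel is 'skin' if its Cb and Cr
    values fall inside fixed ranges, independent of luminance."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Such fixed thresholds illustrate why illumination and skin-tone variation make the problem hard, and why learned models (neural networks, autoencoders) tend to do better.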


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi133-vi134
Author(s):  
Julia Cluceru ◽  
Joanna Phillips ◽  
Annette Molinaro ◽  
Yannet Interian ◽  
Tracy Luks ◽  
...  

Abstract In contrast to the WHO 2016 guidelines that use genetic alterations to further stratify patients within a designated grade, new recommendations suggest that IDH mutation status, followed by 1p19q-codeletion, should be used before grade when differentiating gliomas. Although most gliomas will be resected and their tissue evaluated with genetic profiling, non-invasive characterization of genetic subgroup can benefit patients where surgery is not otherwise advised or a fast turn-around is required for clinical trial eligibility. Prior studies have demonstrated the utility of using anatomical images and deep learning to distinguish either IDH-mutant from IDH-wildtype tumors or 1p19q-codeleted from non-codeleted lesions separately, but not combined or using the most recent recommendations for stratification. The goal of this study was to evaluate the effects of training strategy and incorporation of Apparent Diffusion Coefficient (ADC) maps from diffusion-weighted imaging on predicting new genetic subgroups with deep learning. Using 414 patients with newly-diagnosed glioma (split 285/50/49 training/validation/test) and optimized training hyperparameters, we found that a 3-class approach with T1-post-contrast, T2-FLAIR, and ADC maps as inputs achieved the best performance for molecular subgroup classification, with overall accuracies of 86.0% [CI: 0.839, 1.0], 80.0% [CI: 0.720, 1.0], and 85.7% [CI: 0.771, 1.0] on training, validation, and test sets, respectively, and final test class accuracies of 95.2% (IDH-wildtype), 88.9% (IDH-mutated, 1p19q-intact), and 60% (IDH-mutated, 1p19q-codeleted). Creating an RGB-color image from 3 MRI images and applying transfer learning with a residual network architecture pretrained on ImageNet resulted in an 8% averaged increase in overall accuracy. Although classifying both IDH and 1p19q mutations together was overall advantageous compared with a tiered structure that first classified IDH mutational status, the 2-tiered approach better generalized to an independent multi-site dataset when only anatomical images were used. Including biologically relevant ADC images improved model generalization to our test set regardless of modeling approach, highlighting the utility of incorporating diffusion-weighted imaging in future multi-site analyses of molecular subgroup.
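The "RGB-color image from 3 MRI images" step that enables ImageNet transfer learning can be sketched as follows; the function name and per-modality min-max scaling are illustrative assumptions, not the authors' preprocessing code:

```python
import numpy as np

def modalities_to_rgb(t1_post, t2_flair, adc):
    """Stack three co-registered MRI modalities into the three channels
    of a single RGB-style image, so a residual network pretrained on
    ImageNet can be fine-tuned on it. Each modality is min-max scaled
    independently to [0, 1]."""
    def scale(x):
        x = x.astype(np.float64)
        lo, hi = x.min(), x.max()
        if hi == lo:
            return np.zeros_like(x)  # flat modality: no contrast to keep
        return (x - lo) / (hi - lo)

    return np.stack([scale(t1_post), scale(t2_flair), scale(adc)], axis=-1)
```

The design choice is that each modality occupies one channel, so the pretrained network's learned low-level color/edge filters are reused across the three contrasts.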


Optik ◽  
2022 ◽  
pp. 168548
Author(s):  
Subhrajyoti Deb ◽  
Pratap Kumar Behera

2021 ◽  
Author(s):  
Patrice Carbonneau

Semantic image classification as practised in Earth Observation is poorly suited to mapping fluvial landforms, which are often composed of multiple landcover types such as water, riparian vegetation, and exposed sediment. Deep learning methods developed in the field of computer vision for the purpose of image classification (i.e., the attribution of a single label to an image, such as cat/dog/etc.) are in fact more suited to such landform mapping tasks. Notably, Convolutional Neural Networks (CNN) have excelled at the task of labelling images. However, CNNs are notorious for requiring very large training sets that are laborious and costly to assemble. Similarity learning is a sub-field of deep learning best known for one-shot and few-shot learning methods. These approaches aim to reduce the need for large training sets by using CNN architectures to compare a single, or a few, known examples of an instance to a new image and determining whether the new image is similar to the provided examples. Similarity learning rests on the concept of image embeddings, which are condensed higher-dimension vector representations of an image generated by a CNN. Ideally, if a CNN is suitably trained, image embeddings will form clusters according to image classes, even if some of these classes were never used in the initial CNN training.

In this paper, we use similarity learning for the purpose of fluvial landform mapping from Sentinel-2 imagery. We use the True Color Image product with a spatial resolution of 10 meters and begin by manually extracting tiles of 128x128 pixels for 4 classes: non-river, meandering reaches, anastomosing reaches, and braiding reaches. We use the DenseNet121 CNN topped with a densely connected layer of 8 nodes, which produces embeddings as 8-dimension vectors. We then train this network with only 3 classes (non-river, meandering, and anastomosing) using a categorical cross-entropy loss function.

Our first result is that, when applied to our image tiles, the embeddings produced by the trained CNN deliver 4 clusters. Despite not being used in the network training, the braiding river reach tiles have produced embeddings that form a distinct cluster. We then use this CNN to perform few-shot learning with a Siamese triplet architecture that classifies a new tile based on only 3 examples of each class. Here we find that tiles from the non-river, meandering, and anastomosing classes were classified with F1 scores of 72%, 87%, and 84%, respectively. The braiding river tiles were classified to an F1 score of 80%. Whilst these performances are lower than the 90%+ expected from conventional CNNs, the prediction of a new class of objects (braiding reaches) with only 3 samples to 80% F1 is unprecedented in river remote sensing. We conclude the paper by extending the method to mapping fluvial landforms on entire Sentinel-2 tiles, and we show how advanced cluster analyses of image embeddings can be used to identify landform classes in an image without making a priori decisions on the classes present in the image.
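The few-shot comparison described above can be approximated with a nearest-prototype rule over the 8-dimension embeddings: average the 3 support embeddings per class and assign a query tile to the closest class mean. This is a simplification of the Siamese triplet comparison, with the function name and Euclidean metric as assumptions:

```python
import numpy as np

def few_shot_classify(query_emb, support_embs):
    """Nearest-prototype few-shot classification over CNN embeddings.

    query_emb    : embedding vector of the new tile (e.g. 8-d).
    support_embs : dict mapping class label -> list of a few (e.g. 3)
                   support embeddings for that class.
    Returns the label whose mean support embedding (prototype) lies
    closest to the query in Euclidean distance."""
    prototypes = {label: np.mean(np.asarray(embs, dtype=np.float64), axis=0)
                  for label, embs in support_embs.items()}
    return min(prototypes,
               key=lambda lbl: np.linalg.norm(query_emb - prototypes[lbl]))
```

This rule works precisely because of the clustering result reported above: if a never-trained class like braiding reaches still forms a distinct embedding cluster, a handful of support examples suffices to place its prototype.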


2021 ◽  
Vol 137 ◽  
pp. 106392
Author(s):  
Gang Qu ◽  
Xiangfeng Meng ◽  
Yongkai Yin ◽  
Huazheng Wu ◽  
Xiulun Yang ◽  
...  
