Gray-Scale Image Color Migration Algorithm Based on Dual-Stream Convolutional Neural Network

Author(s):  
Xiwen Zhang ◽  
Jin Duan
Author(s):  
Sumit S. Lad ◽  
Amol C. Adamuthe

Malware is a threat to people in the cyber world: it steals personal information and harms computer systems. Developers and information security specialists around the globe continuously work on strategies for detecting malware, and over the last few years machine learning has been investigated by many researchers for malware classification. Existing solutions require more computing resources and are not efficient for datasets with large numbers of samples; using existing feature extractors to extract features from images consumes additional resources. This paper presents a Convolutional Neural Network (CNN) model with pre-processing and augmentation techniques for the classification of malware gray-scale images. An investigation is conducted on the Malimg dataset, which contains 9339 gray-scale images created from the binaries of malware belonging to 25 different families. Considering the success of deep learning techniques for classification and the rising volume of newly created malware, we propose a CNN and a hybrid CNN+SVM model. The CNN is used as an automatic feature extractor that uses fewer resources and less time than the existing methods. The proposed CNN model achieves 98.03% accuracy, which is better than other existing CNN models, namely VGG16 (96.96%), ResNet50 (97.11%), InceptionV3 (97.22%), and Xception (97.56%). The execution time of the proposed CNN model is also significantly lower than that of the other models. The proposed CNN model is then hybridized with a support vector machine: instead of using Softmax as the activation function, the SVM classifies the malware based on the features extracted by the CNN. The fine-tuned CNN produces a well-selected feature vector of 256 neurons at the FC layer, which is the input to the SVM.
A linear SVC kernel transforms the binary SVM classifier into a multi-class SVM, which classifies the malware samples using the one-against-one method and delivers an accuracy of 99.59%.
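The one-against-one scheme mentioned above trains one binary SVM per pair of classes and lets the pairwise winners vote. A minimal plain-Python sketch of that voting step (the binary classifiers here are hypothetical stand-ins for the trained SVMs operating on the 256-dimensional CNN feature vector):

```python
from itertools import combinations
from collections import Counter

def ovo_predict(x, classes, binary_clfs):
    """One-against-one multi-class prediction.

    binary_clfs maps each unordered class pair (a, b) to a callable
    that returns the winning class label (a or b) for feature vector x.
    The class collecting the most pairwise votes is predicted.
    """
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[binary_clfs[(a, b)](x)] += 1
    return votes.most_common(1)[0][0]
```

With 25 malware families this requires 25·24/2 = 300 binary classifiers, each trained only on samples from its two families.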


Geophysics ◽  
2020 ◽  
pp. 1-61
Author(s):  
Rongang Cui ◽  
Danping Cao ◽  
Qiang Liu ◽  
Zhaolin Zhu ◽  
Yan Jia

Predicting elastic parameters from digital rock images is an interesting application of a convolutional neural network (CNN), which can improve the efficiency of prediction. Predicting elastic parameters with a conventional CNN designed for image classification, such as LeNet or AlexNet, lacks geophysical constraints, and its accuracy is poor when only limited training data are available. A combination of a U-Net and a convolutional neural network (CUCNN) is proposed to predict the elastic parameters from digital rock images with limited training data. In CUCNN, the rock matrix and pore types segmented from the gray-scale images are treated as constraints, which induce the convolutional kernels to extract rock features at both the global and the local scale. The loss function, designed in a composite form to accelerate convergence, combines the segmentation error with the error in the elastic parameters predicted from the gray-scale images. By adding geophysical constraints to the CNN, an implicit mapping from the gray-scale image to the elastic parameters can be learned, which improves the accuracy and efficiency of parameter prediction. The proposed method was tested using training and verification data derived from 1800 2D image slices of Berea sandstone samples, and the results were compared against a conventional CNN model. The Vp and Vs calculated by the finite-element method served as the control for testing the performance of both models. The results show the CUCNN's R2 score is 0.84, an increase of as much as 0.21 over the conventional CNN.
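The composite loss described above can be sketched as a weighted sum of the two error terms; the exact error metrics and weighting used by CUCNN are not given in the abstract, so mean squared error and a single mixing weight `alpha` are assumptions here:

```python
def composite_loss(seg_pred, seg_true, elas_pred, elas_true, alpha=0.5):
    """Composite loss: segmentation error plus elastic-parameter error.

    alpha weights the segmentation term against the elastic term.
    Both terms are computed as mean squared error (an assumption).
    """
    def mse(pred, true):
        return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)
    return alpha * mse(seg_pred, seg_true) + (1 - alpha) * mse(elas_pred, elas_true)
```

Training against both terms at once is what lets the segmentation labels act as a geophysical constraint on the kernels that also predict Vp and Vs.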


Author(s):  
Dr. Abhay E Wagh

Abstract: Nowadays, with rapid advances in digital content identification, automatic classification of images is among the most challenging jobs in the computing field. Automated understanding and analysis of images by a system is difficult compared with human vision. Several studies have tried to overcome the problems of existing classification systems, but their output was limited to low-level image primitives, and those approaches lacked accurate classification of images. This system uses deep learning to achieve the desired results in this area. Our framework presents the Convolutional Neural Network (CNN), a machine learning algorithm used for automatic classification of images. The system uses the MNIST digit dataset as a benchmark for the classification of gray-scale images. The gray-scale images used for training require more computational power to classify. Using the CNN, the result is approximately 98% accuracy. Our model achieves high precision in the grouping of images. Keywords: Convolutional Neural Network (CNN), deep learning, MNIST, Machine Learning.
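The core operation a CNN applies to such gray-scale images is a 2D convolution (implemented, as in most deep-learning libraries, as cross-correlation). A minimal plain-Python sketch for a single channel with 'valid' padding:

```python
def conv2d_valid(image, kernel):
    """Single-channel 2D 'convolution' (cross-correlation, as in CNN
    layers) with 'valid' padding: the kernel is slid over the image
    and the elementwise products are summed at each position."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

A 28x28 MNIST image convolved with a 3x3 kernel this way yields a 26x26 feature map; stacking many learned kernels, nonlinearities, and pooling gives the classifier its roughly 98% accuracy.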


2020 ◽  
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev

2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic problem of image deblurring, which assumes that the PSF is known and spatially uniform. Recently, Convolutional Neural Networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs found in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing in a deblurred image. Our method has three key points. The first is that our network architecture preserves both large and small features in the image. The second is that the training dataset is created to preserve details. The third is that we extend the images to minimize the effects of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.
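The border extension mentioned as the third key point is commonly done by reflecting the image at its edges before deconvolution, so that wrap-around ringing from a large PSF does not leak into the valid region. The abstract does not specify the extension method, so reflection padding is an assumption; a 1D sketch:

```python
def reflect_pad_1d(signal, pad):
    """Extend a 1D signal by mirror reflection at both borders
    (edge sample not repeated, as in numpy.pad mode='reflect').
    Requires pad < len(signal)."""
    left = signal[1:pad + 1][::-1]     # mirror of the samples after the first
    right = signal[-pad - 1:-1][::-1]  # mirror of the samples before the last
    return left + signal + right
```

The same idea applied along both image axes gives the deconvolution enough margin that ringing induced by a large PSF stays in the padded band, which is cropped away afterwards.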

