Automatic colorization of digital halftone images using neural networks

Author(s):  
M. E. Serdyuk ◽  
S. F. Syryk ◽  
O. O. Sokol

The problem of automatic colorization of monochrome images is considered. Colorization methods are used in the film industry to restore the chromaticity of old movies and photographic materials, in computer vision problems, in medical image processing, etc. Modern colorization techniques involve many manual operations, take a lot of time, and are expensive. Many colorization methods require human participation to determine colors correctly, since there is no one-to-one correspondence between grayscale and color. In this paper we discuss a method for fully automatic colorization of monochrome images using a convolutional neural network. This approach reduces the use of manual operations to a minimum. The structure of the colorization network, based on the VGG16 model, is considered in the paper. Types of layers appropriate for solving the colorization problem are determined and analyzed. The proposed structure consists of 13 convolutional layers and three upsampling layers. The subsampling layers are replaced with the necessary zero padding and a 2x2 stride. All layers use 3x3 filters. The activation function of all convolutional layers is ReLU, and that of the last layer is the hyperbolic tangent. The presented model is implemented in a software system for automatic image colorization. The software system includes two parts: the first constructs and trains the neural network; the second uses the trained network to generate colorized images from grayscale inputs. Network training was carried out on the Caltech-256 dataset, which contains 256 categories of objects. After training, the system was tested on a series of grayscale images. Testing showed that the system performs sufficiently plausible colorization of certain types of objects. Acceptable results were obtained in the colorization of images of nature, common animals, and portrait photos. In unsuccessful cases, objects were painted in brown shades. Unsuccessful results were obtained for images that contained only parts of objects, or whose objects were represented in the training sample in too many different colors.
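The activation scheme described above (ReLU in the 13 convolutional layers, hyperbolic tangent in the last layer) can be sketched in numpy; the helper names and the scaling of the tanh output to a chroma range are illustrative assumptions, not the authors' code:

```python
import numpy as np

def relu(x):
    # Hidden-layer activation used in the 13 convolutional layers.
    return np.maximum(0.0, x)

def tanh_to_chroma(x, chroma_range=128.0):
    # Last-layer activation: tanh squashes outputs into (-1, 1); scaling to
    # an assumed chroma range yields bounded predicted color channels.
    return np.tanh(x) * chroma_range

# Toy "feature map": final pre-activation outputs for 2 chroma channels of a
# 4x4 image (a random stand-in for real network outputs).
rng = np.random.default_rng(0)
pre_activation = rng.normal(size=(2, 4, 4))
chroma = tanh_to_chroma(pre_activation)

# Every predicted chroma value is bounded, regardless of input magnitude.
assert np.all(np.abs(chroma) < 128.0)
```

The tanh output layer guarantees bounded color predictions, which is why it replaces ReLU only in the final layer.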

Author(s):  
Anastasya Grecheneva ◽  
Nikolay Dorofeev ◽  
Maxim Goryachev

In this paper, we consider the possibility of distinguishing between people by their gait based on data obtained from the accelerometer of a wearable device. A mobile phone was used as the wearable device. The paper considers the features of recognizing human movements with a wearable device. A recognition algorithm based on a neural network with preliminary data processing and correlation analysis is proposed. The training sample consisted of 32 subjects with various physiological characteristics. The sample size in each subgroup of four people ranged from 2000 to 3000 movements. The main motor patterns for classification were movements performed when walking in a straight line and on stairs, with a load (a bag with a laptop weighing 3.5 kg) and without it. A feed-forward network was chosen as the basic structure for the neural network. The neural network has 260 input neurons, 100 neurons in one hidden layer, and 4 neurons in the output layer. The network was trained with gradient descent via backpropagation. Cross-entropy was used as the optimization criterion. The activation function of the hidden layer was a sigmoid, and that of the output layer was a normalized exponential (softmax) function. The presented algorithm makes it possible to distinguish between subjects performing different movements in more than 90% of cases. The practical application of these results is relevant for automated information systems in the medical, law enforcement, and banking sectors.
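A minimal forward pass for the described 260-100-4 classifier (sigmoid hidden layer, softmax output, cross-entropy criterion) can be sketched as follows; the random weight initialization and feature vector are placeholders, not the trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()

def cross_entropy(probs, target_idx):
    # Optimization criterion used when training the network.
    return -np.log(probs[target_idx])

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(100, 260))  # 260 inputs -> 100 hidden
b1 = np.zeros(100)
W2 = rng.normal(scale=0.1, size=(4, 100))    # 100 hidden -> 4 movement classes
b2 = np.zeros(4)

x = rng.normal(size=260)           # one preprocessed accelerometer feature vector
hidden = sigmoid(W1 @ x + b1)      # sigmoid hidden layer
probs = softmax(W2 @ hidden + b2)  # normalized exponential output layer

assert probs.shape == (4,) and abs(probs.sum() - 1.0) < 1e-9
loss = cross_entropy(probs, target_idx=0)  # loss for an assumed true class 0
```

The softmax output gives a probability over the four movement classes, which is what the cross-entropy criterion requires during training.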


2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii VASYLIEV ◽  

The problem of applying neural networks to calculate the ratings used in banking when deciding whether to grant loans to borrowers is considered. The task is to determine the borrower's rating function on the basis of statistical data on the performance of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter the expression for the rating function. In contrast to this approach, when neural networks are used there is no need to specify a general form for the rating function. Instead, a certain neural network architecture is chosen and its parameters are calculated from the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to calculate a large number of parameters, and the absence of a universal algorithm for determining the optimal neural network architecture. As an example of using neural networks to determine a borrower's rating, a model system is considered in which the borrower's rating is given by a known non-analytical rating function. A neural network with two inner layers, containing three and two neurons respectively and using a sigmoid activation function, is used for modeling. It is shown that the neural network restores the borrower's rating function with quite acceptable accuracy.
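The model network described above (two inner layers of three and two sigmoid neurons) can be sketched as a forward pass; the weights below are random placeholders and the number of borrower attributes is an assumption, since the paper does not specify them:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rating(x, params):
    # Forward pass: borrower features -> 3 sigmoid neurons -> 2 sigmoid
    # neurons -> scalar rating. In practice the params would be fitted to
    # the bank's historical loan-performance data.
    W1, b1, W2, b2, w3, b3 = params
    h1 = sigmoid(W1 @ x + b1)
    h2 = sigmoid(W2 @ h1 + b2)
    return float(sigmoid(w3 @ h2 + b3))

rng = np.random.default_rng(2)
n_features = 5  # assumed number of borrower attributes
params = (rng.normal(size=(3, n_features)), np.zeros(3),
          rng.normal(size=(2, 3)), np.zeros(2),
          rng.normal(size=2), 0.0)

score = rating(rng.normal(size=n_features), params)
assert 0.0 < score < 1.0  # the sigmoid output keeps the rating bounded
```

No general form of the rating function appears anywhere here: only the architecture (layer sizes and activations) is fixed, which is the contrast with regression that the abstract emphasizes.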


2021 ◽  
Author(s):  
Christopher Irrgang ◽  
Jan Saynisch-Wagner ◽  
Robert Dill ◽  
Eva Boergens ◽  
Maik Thomas

<p>Space-borne observations of terrestrial water storage (TWS) are an essential ingredient for understanding the Earth's global water cycle, its susceptibility to climate change, and for risk assessments of ecosystems, agriculture, and water management. However, the complex distribution of water masses in rivers, lakes, or groundwater basins remains elusive in coarse-resolution gravimetry observations. We combine machine learning, numerical modeling, and satellite altimetry to build and train a downscaling neural network that recovers simulated TWS from synthetic space-borne gravity observations. The neural network is designed to adapt and validate its training progress by considering independent satellite altimetry records. We show that the neural network can accurately derive TWS anomalies in 2019 after being trained over the years 2003 to 2018. Specifically for validated regions in the Amazonas, we highlight that the neural network can outperform the numerical hydrology model used in the network training.</p><p>https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2020GL089258</p>


2021 ◽  
Vol 26 (jai2021.26(1)) ◽  
pp. 32-41
Author(s):  
Bodyanskiy Y ◽  
Antonenko T

Modern approaches in deep neural networks have a number of issues related to the learning process and computational costs. This article considers an architecture grounded in an alternative approach to the basic unit of the neural network. This approach optimizes the calculations and gives rise to an alternative way of addressing the vanishing and exploding gradient problems. The focus of the article is the use of a deep stacked neo-fuzzy system, which employs a generalized neo-fuzzy neuron to optimize the learning process. This approach is non-standard from a theoretical point of view, so the paper presents the necessary mathematical calculations and describes all the intricacies of using this architecture from a practical point of view. From a theoretical standpoint, the network learning process is fully described, and all calculations necessary for applying the backpropagation algorithm to network training are derived. A feature of the network is the rapid calculation of the derivative of the neurons' activation functions, achieved through the use of fuzzy membership functions. The paper shows that the derivative of such a function is a constant, which justifies the claim of an increased optimization rate compared with neural networks that use neurons with more common activation functions (ReLU, sigmoid). The paper highlights the main points that can be improved in further theoretical development of this topic; in general, these issues are related to the calculation of the activation function. The proposed methods address these points and allow approximation with the network, and the authors already have theoretical justifications for further improving the speed and approximation properties of the network. Results of comparing the proposed network with standard neural network architectures are shown.
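The claim about fast derivative computation rests on piecewise-linear membership functions, whose derivative is constant on each segment. A hedged numpy illustration (the triangular shape is a common choice in neo-fuzzy systems, assumed here rather than taken from the paper):

```python
import numpy as np

def triangular(x, a, b, c):
    # Triangular fuzzy membership function with support [a, c] and peak at b.
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

# On the rising segment (a, b) the derivative is the constant 1/(b-a),
# so backpropagation needs no transcendental-function evaluation.
a, b, c = 0.0, 1.0, 2.0
xs = np.linspace(0.1, 0.9, 9)
h = 1e-6
numeric_grad = (triangular(xs + h, a, b, c) - triangular(xs, a, b, c)) / h
assert np.allclose(numeric_grad, 1.0 / (b - a), atol=1e-4)
```

Compare this with a sigmoid, whose derivative s(x)(1 - s(x)) must be recomputed at every point; a constant segment-wise derivative is what underlies the claimed speedup.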


2012 ◽  
Vol 605-607 ◽  
pp. 2175-2178
Author(s):  
Xiao Qin Wu

To overcome the disadvantage of neural networks that their structure and parameters are decided stochastically or by experience, an improved BP neural network training algorithm based on a genetic algorithm is proposed. In this paper, a combination of genetic algorithms and simulated annealing is used to optimize the neural network: it scales the fitness function and selects the proper operation according to the expected value in the course of optimization, and the weights and thresholds of the neural network are optimized. This method is applied to a stock prediction system. The experimental results show that the proposed approach has high accuracy, strong stability, and improved confidence.
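The core idea, evolving network weights with a genetic-algorithm loop instead of choosing them by hand, can be sketched on a toy scale; the one-neuron "network", fitness function, and GA settings below are illustrative assumptions, not the paper's hybrid GA/simulated-annealing scheme:

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(w, X, y):
    # Negative MSE of a one-neuron "network" y_hat = tanh(X @ w);
    # higher fitness means a better weight vector.
    return -np.mean((np.tanh(X @ w) - y) ** 2)

X = rng.normal(size=(50, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = np.tanh(X @ true_w)  # synthetic targets from known weights

pop = rng.normal(size=(20, 3))  # initial population of weight vectors
for _ in range(100):
    scores = np.array([fitness(w, X, y) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]  # selection: keep the fittest half
    children = parents + rng.normal(scale=0.1, size=parents.shape)  # mutation
    pop = np.vstack([parents, children])     # elitist replacement

best = pop[np.argmax([fitness(w, X, y) for w in pop])]
assert fitness(best, X, y) > -0.1  # evolved weights fit the data closely
```

A real implementation would add crossover, a simulated-annealing acceptance schedule, and evolve the full BP network's weights and thresholds, but the selection/mutation loop is the same.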


2000 ◽  
Author(s):  
Arturo Pacheco-Vega ◽  
Mihir Sen ◽  
Rodney L. McClain

Abstract In the current study we consider the problem of accuracy in heat rate estimations from artificial neural network models of heat exchangers used for refrigeration applications. The network configuration is of the feedforward type with a sigmoid activation function and a backpropagation algorithm. Limited experimental measurements from a manufacturer are used to show the capability of the neural network technique in modeling the heat transfer in these systems. Results from this exercise show that a well-trained network correlates the data with errors of the same order as the uncertainty of the measurements. It is also shown that the number and distribution of the training data are linked to the performance of the network when estimating the heat rates under different operating conditions, and that networks trained from few tests may give large errors. A methodology based on the cross-validation technique is presented to find regions where not enough data are available to construct a reliable neural network. The results from three tests show that the proposed methodology gives an upper bound of the estimated error in the heat rates.
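The cross-validation idea behind the proposed methodology can be sketched generically; the linear least-squares model below is a stand-in for the heat-rate ANN, and the data are synthetic, so this only illustrates how held-out fold errors flag unreliable regions:

```python
import numpy as np

def kfold_errors(X, y, fit, predict, k=5):
    # Cross-validation: hold out each fold in turn, train on the rest, and
    # record the held-out mean squared error. Unusually large fold errors
    # point to regions where the training data are too sparse to trust.
    idx = np.arange(len(X))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        model = fit(X[train], y[train])
        errs.append(np.mean((predict(model, X[fold]) - y[fold]) ** 2))
    return np.array(errs)

# Toy stand-in for the heat-exchanger model: linear least squares.
rng = np.random.default_rng(4)
X = rng.uniform(size=(40, 2))
y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=40)
fit = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]
predict = lambda m, A: A @ m

errs = kfold_errors(X, y, fit, predict)
assert errs.shape == (5,) and np.all(errs < 0.01)
```

Here all fold errors are small because the synthetic data cover the input space evenly; in the paper's setting, a fold drawn from a sparsely sampled operating region would stand out with a large error, giving the upper bound on the estimated heat-rate error.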


Author(s):  
T.K. Biryukova

Classic neural networks assume the trainable parameters to include just the weights of neurons. This paper proposes parabolic integrodifferential splines (ID-splines), developed by the author, as a new kind of activation function (AF) for neural networks, in which the ID-spline coefficients are also trainable parameters. The parameters of the ID-spline AF, together with the weights of the neurons, vary during training in order to minimize the loss function, thus reducing the training time and increasing the operation speed of the neural network. The newly developed algorithm enables a software implementation of the ID-spline AF as a tool for neural network construction, training, and operation. It is proposed to use the same ID-spline AF for all neurons in a layer, but different AFs for different layers. In this case, the parameters of the ID-spline AF for a particular layer change during training independently of the activation functions (AFs) of the other network layers. To satisfy the continuity condition for the derivative of the parabolic ID-spline on the interval (x_0, x_n), its parameters f_i (i = 0, ..., n) should be calculated from a tridiagonal system of linear algebraic equations. To solve the system, two more equations arising from the boundary conditions of the specific problem are needed; for example, the values of the grid function (if known) at the endpoints may be used: f_0 = f(x_0), f_n = f(x_n). The parameters I_{i,i+1} (i = 0, ..., n-1) are used as trainable parameters of the neural network. The grid boundaries and the spacing of the nodes of the ID-spline AF are best chosen experimentally. An optimal selection of grid nodes improves the quality of the results produced by the neural network. The formula for a parabolic ID-spline is such that the computational complexity does not depend on whether the grid of nodes is uniform or non-uniform.
An experimental comparison was carried out of image classification results on the popular FashionMNIST dataset by convolutional neural networks with ID-spline AFs and with the well-known ReLU AF, defined as ReLU(x) = 0 for x < 0 and ReLU(x) = x for x >= 0. The results reveal that the use of ID-spline AFs provides better neural network accuracy than the ReLU AF. The training time for a network with two convolutional layers and two ID-spline AFs is only about 2 times longer than with two instances of the ReLU AF. Doubling the training time due to the complexity of the ID-spline formula is an acceptable price for significantly better accuracy, while the difference in operation speed between networks with ID-spline and ReLU AFs is negligible. The use of trainable ID-spline AFs makes it possible to simplify the architecture of neural networks without losing their efficiency. Modifying well-known neural networks (ResNet etc.) by replacing traditional AFs with ID-spline AFs is a promising approach to increasing neural network accuracy. In most cases, such a substitution does not require training the network from scratch, because it allows the use of neuron weights pre-trained on large datasets and supplied by standard neural network software libraries, thus substantially shortening training time.
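The idea of an activation function whose node values are trainable parameters can be illustrated with a simpler stand-in: piecewise-linear interpolation over a fixed grid (`np.interp`), initialized to mimic ReLU. This is only a sketch of the mechanism; the author's AF is a parabolic ID-spline, not linear interpolation:

```python
import numpy as np

def spline_act(x, nodes, values):
    # Trainable activation: interpolate the values given at grid nodes.
    # The `values` array plays the role of the trainable spline parameters.
    return np.interp(x, nodes, values)

nodes = np.linspace(-3.0, 3.0, 7)      # grid chosen experimentally
values = np.maximum(0.0, nodes)        # initialized to coincide with ReLU
x = np.array([-2.5, -0.5, 0.5, 2.5])
assert np.allclose(spline_act(x, nodes, values), np.maximum(0.0, x))

# During training, `values` would be updated by backpropagation together
# with the network weights; the derivative with respect to x is constant on
# each interval between nodes, which keeps gradient computation cheap.
```

Initializing the node values from an existing AF is also what makes it plausible to drop such activations into a pre-trained network without retraining from scratch.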


2022 ◽  
pp. 202-226
Author(s):  
Leema N. ◽  
Khanna H. Nehemiah ◽  
Elgin Christo V. R. ◽  
Kannan A.

Artificial neural networks (ANN) are widely used for classification, and the training algorithm commonly used is the backpropagation (BP) algorithm. The major bottleneck faced in the backpropagation neural network training is in fixing the appropriate values for network parameters. The network parameters are initial weights, biases, activation function, number of hidden layers and the number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term for the classification task. The objective of this work is to investigate the performance of 12 different BP algorithms with the impact of variations in network parameter values for the neural network training. The algorithms were evaluated with different training and testing samples taken from the three benchmark clinical datasets, namely, Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) dataset obtained from the University of California Irvine (UCI) machine learning repository.
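The network parameters the study varies (hidden-layer size, learning rate, number of epochs, initial weights) can be made concrete with a bare-bones BP training loop; the AND task, layer sizes, and hyperparameter values below are illustrative assumptions, not the evaluated clinical configurations:

```python
import numpy as np

def train_bp(X, y, hidden=4, lr=0.5, epochs=5000, seed=0):
    # Plain backpropagation for one hidden sigmoid layer. The arguments are
    # exactly the kind of network parameters that must be fixed before
    # training and that the study varies across the 12 BP algorithms.
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias input
    W1 = rng.normal(scale=0.5, size=(hidden, Xb.shape[1]))
    W2 = rng.normal(scale=0.5, size=hidden)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(Xb @ W1.T)                          # forward pass
        out = sig(h @ W2)
        d_out = (out - y) * out * (1 - out)         # output-layer delta
        d_h = np.outer(d_out, W2) * h * (1 - h)     # hidden-layer deltas
        W2 -= lr * h.T @ d_out                      # gradient-descent updates
        W1 -= lr * d_h.T @ Xb
    return np.mean((out - y) ** 2)                  # final training MSE

# Logical AND as a tiny linearly separable classification task:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0.0, 0.0, 0.0, 1.0])
assert train_bp(X, y_and) < 0.05
```

Changing `hidden`, `lr`, or `seed` changes the final error, which is the sensitivity to parameter values that the study quantifies on the PID, Hepatitis, and WBC datasets.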


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Onesimo Meza-Cruz ◽  
Isaac Pilatowsky ◽  
Agustín Pérez-Ramírez ◽  
Carlos Rivera-Blanco ◽  
Youness El Hamzaoui ◽  
...  

The aim of this work is to present a model for heat transfer, desorbed refrigerant, and pressure of an intermittent solar cooling system’s thermochemical reactor based on backpropagation neural networks and mathematical symmetry groups. In order to achieve this, a reactor was designed and built based on the reaction of BaCl2-NH3. Experimental data from this reactor were collected, where barium chloride was used as a solid absorbent and ammonia as a refrigerant. The neural network was trained using the Levenberg–Marquardt algorithm. The correlation coefficient between experimental data and data simulated by the neural network was r = 0.9957. In the neural network’s sensitivity analysis, it was found that the inputs, reactor’s heating temperature and sorption time, influence neural network’s learning by 35% and 20%, respectively. It was also found that, by applying permutations to experimental data and using multibase mathematical symmetry groups, the neural network training algorithm converges faster.
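The Levenberg-Marquardt update used to train the network can be sketched on a toy problem; the single-tanh-neuron model, fixed damping factor, and synthetic data below are illustrative assumptions standing in for the full reactor network:

```python
import numpy as np

def levenberg_marquardt(w, X, y, model, jac, lam=1e-2, steps=50):
    # Levenberg-Marquardt: each step solves (J^T J + lam*I) dw = J^T r,
    # blending Gauss-Newton with gradient descent via the damping lam.
    for _ in range(steps):
        r = model(X, w) - y                  # residuals
        J = jac(X, w)                        # Jacobian of residuals w.r.t. w
        dw = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ r)
        w = w - dw
    return w

# Toy stand-in model: a single tanh neuron instead of the reactor network.
model = lambda X, w: np.tanh(X @ w)
jac = lambda X, w: (1 - np.tanh(X @ w) ** 2)[:, None] * X

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 2))
w_true = np.array([0.8, -0.4])
y = model(X, w_true)                         # noiseless synthetic targets

w_fit = levenberg_marquardt(np.zeros(2), X, y, model, jac)
assert np.allclose(w_fit, w_true, atol=1e-3)
```

A production implementation would adapt `lam` between steps (growing it when a step increases the residual), which is what makes Levenberg-Marquardt robust far from the solution.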

