Prediction of Hydrodynamic Forces and Moments on Submarines Using Neural Networks

Author(s):  
Ibrahim Mohamed ◽  
Mahmoud Haddara ◽  
Christopher D. Williams ◽  
Michael Mackay

This paper describes a parametric identification tool for predicting the hydrodynamic forces acting on a submarine model using its motion history. The tool uses a neural network to identify the hydrodynamic forces and moments; the network was trained with data obtained from multi-degree-of-freedom captive maneuvering tests. The characteristics of the trained network are demonstrated through reconstruction of the force and moment time histories. This technique has the potential to reduce experimental time and cost by enabling a full hydrodynamic model of the vehicle to be obtained from a relatively limited number of test maneuvers.
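The abstract does not publish the network, the maneuvering data, or the force model; as a rough sketch of the idea, the following trains a small feedforward network (plain numpy, with a synthetic quadratic cross-flow force standing in for captive-maneuvering records) to map a motion-state vector to a force value. All coefficients and the choice of inputs (u, v, r) are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for captive-maneuvering data: motion state
# (surge/sway velocities u, v and yaw rate r) -> sway force Y.
# The quadratic cross-flow term mimics typical hydrodynamic load models.
X = rng.uniform(-1.0, 1.0, size=(500, 3))            # columns: u, v, r
y = (-2.0 * X[:, 1] - 0.8 * X[:, 2]
     - 1.5 * X[:, 1] * np.abs(X[:, 1]))[:, None]     # hypothetical Y(u, v, r)

# One-hidden-layer network trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # predicted force
    err = pred - y
    loss = float(np.mean(err ** 2))
    if epoch == 0:
        first_loss = loss
    # Backpropagation of the mean-squared-error loss.
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = X.T @ g_h;   gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"training MSE: {first_loss:.4f} -> {loss:.4f}")
```

Once trained on the recorded maneuvers, such a network can reconstruct force and moment time histories for motions it was not explicitly tested on, which is the cost-saving step the abstract describes.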

2022 ◽  
Vol 14 (4) ◽  
pp. 5-12
Author(s):  
Ol'ga Ermilina ◽  
Elena Aksenova ◽  
Anatoliy Semenov

The paper formalizes and constructs a model of the electrical discharge machining (EDM) process. The process is described with a T-shaped equivalent circuit containing an RLC network, and the transfer function of the proposed equivalent circuit is determined. The problem of neural-network parametric identification of the T-shaped equivalent circuit is then posed, and an algorithm for it is developed: a computational experiment is performed, training samples are formed from its results, and the dynamic and static neural networks used in the identification problem are trained on them. The process was simulated in Simulink (MATLAB). The acceptable agreement between the calculated and experimental data shows that the proposed model of electrical discharge machining reflects the real electromagnetic processes occurring in the interelectrode gap.
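The abstract gives neither the circuit topology details nor the element values. As a hedged sketch only: if the T-section reduces to a series R-L branch driving a shunt C (one common simplification), the input-to-capacitor-voltage transfer function is the standard second-order low-pass H(s) = 1 / (LCs² + RCs + 1). The values below are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical element values (not from the paper).
R, L, C = 10.0, 1e-3, 1e-6   # ohms, henries, farads

def H(s):
    """Transfer function of a series R-L branch into a shunt C."""
    return 1.0 / (L * C * s**2 + R * C * s + 1.0)

w = np.logspace(2, 6, 5)          # angular frequencies, rad/s
gain = np.abs(H(1j * w))

dc_gain = abs(H(0j))
w0 = 1.0 / np.sqrt(L * C)         # undamped natural frequency
print(f"DC gain = {dc_gain:.3f}, natural frequency = {w0:.0f} rad/s")
for wi, g in zip(w, gain):
    print(f"  |H(j*{wi:.0e})| = {g:.4f}")
```

A sampled frequency response like this is exactly the kind of input-output record from which training samples for the identification networks can be formed.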


Author(s):  
Mehmet Ersin Yumer ◽  
Levent Burak Kara

This paper presents a new point set surfacing method that employs neural networks for regression. Our technique takes as input unstructured and possibly noisy point sets representing two-manifolds in R3. To facilitate parametrization, the set is first embedded in R2 using neighborhood preserving locally linear embedding. A neural network is then constructed and trained that learns a mapping between the embedded 2D parametric coordinates and the corresponding 3D space coordinates. The trained network is then used to generate a tessellation that spans the parametric space, thereby producing a surface in the original space. This approach enables the surfacing of noisy and non-uniformly distributed point sets, and can be applied to open or closed surfaces. We show the utility of the proposed method on a number of test models, as well as its application to freeform surface creation in virtual reality environments.


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1929
Author(s):  
Jiacang Ho ◽  
Dae-Ki Kang

Deep neural networks have achieved high performance in image classification, image generation, voice recognition, natural language processing, etc.; however, they still face several open challenges, such as the incremental learning problem, overfitting, hyperparameter optimization, and lack of flexibility and multitasking. In this paper, we focus on the incremental learning problem, which concerns machine learning methodologies that continuously train an existing model with additional knowledge. To the best of our knowledge, the simplest and most direct solution to this challenge is to retrain the entire neural network after adding the new labels to the output layer. Moreover, transfer learning can be applied only if the domain of the new labels is related to the domain of the labels on which the network has already been trained. In this paper, we propose a novel network architecture, namely the Brick Assembly Network (BAN), which allows a new label to be assembled into (or dismantled from) a trained neural network without retraining the entire network. In BAN, we train each label individually with a sub-network (i.e., a simple neural network) and then assemble the converged sub-networks, each trained for a single label, into a full neural network. For each label trained in a sub-network of BAN, we introduce a new loss function that minimizes the loss of the network using data from only one class. Applying one loss function per class label is unique and differs from standard neural network architectures (e.g., AlexNet, ResNet, InceptionV3, etc.), which use the values of a loss function over multiple labels to minimize the error of the network.
The difference between the loss functions of previous approaches and the one we introduce is that we compute the loss from the node values of the penultimate layer (which we call the characteristic layer) rather than the output layer, where the loss is computed between true and predicted labels. From experimental results on several benchmark datasets, we show that BAN has a strong capability of adding (and removing) a new label to (and from) a trained network compared with a standard neural network and other previous work.
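A loose sketch of the assemble/dismantle idea, with heavy simplifications that are mine, not the paper's: each "brick" is a linear map, the characteristic layer is its 4-dimensional output, the one-class loss pulls a single class's samples toward a fixed anchor vector, and the assembled classifier picks the brick whose characteristic output best matches the anchor.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2D data: three well-separated Gaussian classes.
means = {0: (-4, 0), 1: (4, 0), 2: (0, 5)}
data = {c: rng.normal(m, 0.5, size=(100, 2)) for c, m in means.items()}

ANCHOR = np.ones(4)   # fixed target in the "characteristic layer"

def train_subnetwork(X, steps=500, lr=0.05):
    """One sub-network per label: a linear map whose one-class loss
    pulls this class's characteristic outputs toward ANCHOR."""
    W = rng.normal(0, 0.1, size=(2, 4))
    for _ in range(steps):
        out = X @ W                       # characteristic-layer values
        grad = X.T @ (out - ANCHOR) / len(X)
        W -= lr * grad
    return W

# Train each label independently, then "assemble" the sub-networks.
bricks = {c: train_subnetwork(X) for c, X in data.items()}

def classify(x):
    # Pick the brick whose characteristic output best matches its anchor.
    return min(bricks, key=lambda c: np.sum((x @ bricks[c] - ANCHOR) ** 2))

acc = np.mean([classify(x) == c for c, X in data.items() for x in X])
print(f"assembled-network accuracy: {acc:.2f}")

# Dismantling a label is just dropping its brick -- no retraining.
del bricks[2]
```

Note that each brick never sees another class's data during training, so adding a fourth label later would mean training one new brick and adding it to the dictionary, leaving the others untouched.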


Author(s):  
Dmytro Kyrychuk ◽  
Andriy Segin

The paper presents the results of research into the expediency of training a neural network on images of varying clarity and brightness, using unevenly distributed lighting over a working area with statically positioned system elements. The use of transfer learning to improve the object-recognition accuracy of neural networks is justified. The ability of a convolutional neural network to recognize an object scaled relative to the original was investigated. Results are presented on how lighting affects the quality of object recognition by a trained network, and how the choice of background for the working area affects the quality of object-based feature selection. Based on the results obtained, recommendations are provided for preparing custom datasets that improve the quality of training and subsequent object recognition by convolutional neural networks, through the elimination of unnecessary variables in the images.
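The practical upshot of such research is a dataset-preparation checklist. As a small, assumed illustration (not the paper's code), the following screens candidate training images for the variables the abstract recommends eliminating: uneven brightness across the working area and low contrast between object and background.

```python
import numpy as np

def screen_image(gray, brightness_spread=0.25, min_contrast=0.15):
    """Flag a grayscale image (values in [0, 1]) whose lighting or
    background would add unnecessary variables to the training set.
    Thresholds are illustrative assumptions, not from the paper."""
    h, w = gray.shape
    # Uneven lighting: compare mean brightness of left/right halves.
    left, right = gray[:, : w // 2].mean(), gray[:, w // 2 :].mean()
    uneven = abs(left - right) > brightness_spread
    # Weak object/background separation: overall dynamic range.
    low_contrast = (gray.max() - gray.min()) < min_contrast
    return {"uneven_lighting": bool(uneven),
            "low_contrast": bool(low_contrast)}

rng = np.random.default_rng(6)

# Evenly lit image with a bright object on a dark background.
good = np.full((64, 64), 0.2) + rng.normal(0, 0.01, (64, 64))
good[20:40, 20:40] = 0.9

# Image with a strong left-to-right lighting gradient.
bad = np.tile(np.linspace(0.1, 0.9, 64), (64, 1))

print(screen_image(good))
print(screen_image(bad))
```

Filtering the dataset this way before training is one concrete form of the recommendation to remove unnecessary variables from the images.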


1997 ◽  
Vol 9 (1) ◽  
pp. 205-225 ◽  
Author(s):  
Rudy Setiono

An algorithm for extracting rules from a standard three-layer feedforward neural network is proposed. The trained network is first pruned not only to remove redundant connections in the network but, more important, to detect the relevant inputs. The algorithm generates rules from the pruned network by considering only a small number of activation values at the hidden units. If the number of inputs connected to a hidden unit is sufficiently small, then rules that describe how each of its activation values is obtained can be readily generated. Otherwise the hidden unit will be split and treated as output units, with each output unit corresponding to an activation value. A hidden layer is inserted and a new subnetwork is formed, trained, and pruned. This process is repeated until every hidden unit in the network has a relatively small number of input units connected to it. Examples on how the proposed algorithm works are shown using real-world data arising from molecular biology and signal processing. Our results show that for these complex problems, the algorithm can extract reasonably compact rule sets that have high predictive accuracy rates.
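The rule-generation step for a single pruned hidden unit can be sketched directly: when only a few binary inputs survive pruning, enumerate every input combination, discretize the unit's activation into a small set of values, and emit one rule per value. The weights and the two-level discretization below are illustrative assumptions, not taken from the paper.

```python
from itertools import product
import math

# Hypothetical pruned hidden unit: three surviving binary inputs.
weights = [2.5, -1.8, 1.1]   # illustrative pruned-network weights
bias = -0.7

def activation(x):
    net = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-net))      # sigmoid hidden unit

# Enumerate all input combinations and discretize the activation
# into a small set of values (here: simple thresholding at 0.5).
rules = {}
for x in product((0, 1), repeat=3):
    level = int(activation(x) >= 0.5)
    rules.setdefault(level, []).append(x)

for level, combos in sorted(rules.items()):
    clauses = [" AND ".join(f"x{i+1}={v}" for i, v in enumerate(c))
               for c in combos]
    print(f"activation level {level} IF " + " OR ".join(clauses))
```

When a hidden unit has too many surviving inputs for this enumeration to stay compact, the algorithm's recursive step applies: the unit's activation values become outputs of a new subnetwork, which is trained, pruned, and processed the same way.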


Author(s):  
A. Yu. Morozov ◽  
K. K. Abgaryan ◽  
D. L. Reviznikov

Artificial neural networks play an important role in the modern world. Their main fields of application are recognition and processing of images and speech, as well as robotics and unmanned systems. The use of neural networks is associated with high computational costs; in part, it was this fact that held back their progress, and only with the advent of high-performance computing systems did active development of the area begin. Nevertheless, the question of speeding up neural network algorithms remains relevant. One promising direction is the creation of analog implementations of artificial neural networks, since analog calculations are performed orders of magnitude faster than digital ones. The memristor acts as the basic element from which such systems are built. A memristor is a resistive element whose conductance depends on the total charge that has passed through it. Combining memristors into a matrix (crossbar) allows one layer of artificial synapses to be implemented at the hardware level. Traditionally, the STDP method based on Hebb's rule has been used as the analog learning method. In this work, we model a two-layer fully connected network with one layer of synapses. The memristive effect can manifest itself in different substances (mainly in various oxides), so it is important to understand how the characteristics of memristors affect the parameters of the neural network. Two oxides are considered: titanium oxide (TiO2) and hafnium oxide (HfO2). For each oxide, a parametric identification of the corresponding mathematical model is performed to best fit the experimental data. The neural network is tuned according to the oxide used, and the process of training it to recognize five patterns is simulated.
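The abstract does not specify which memristor model is fitted. As a stand-in sketch, the following uses the well-known linear ion-drift model (dx/dt = μ·Ron/D²·i, with memristance Ron·x + Roff·(1−x)), simulates "experimental" current under a sinusoidal drive with a known ion mobility μ, and recovers μ by a brute-force search, a minimal analogue of the paper's parametric identification. All device values are illustrative.

```python
import numpy as np

# Linear ion-drift memristor model (HP-style), a common stand-in for
# TiO2 devices; parameter values here are illustrative, not the paper's.
Ron, Roff, D = 100.0, 16e3, 10e-9      # ohms, ohms, metres
dt, T = 1e-5, 0.02
t = np.arange(0, T, dt)
v = 1.0 * np.sin(2 * np.pi * 50 * t)   # sinusoidal drive voltage

def simulate(mu):
    """Euler integration of dx/dt = mu*Ron/D^2 * i(t), 0 <= x <= 1."""
    x, i_hist = 0.1, []
    for vk in v:
        R = Ron * x + Roff * (1 - x)
        i = vk / R
        i_hist.append(i)
        x = float(np.clip(x + dt * mu * Ron / D**2 * i, 0.0, 1.0))
    return np.asarray(i_hist)

mu_true = 1e-14                         # "experimental" mobility (m^2/sV)
i_exp = simulate(mu_true)

# Parametric identification by brute-force search over the mobility.
candidates = np.logspace(-15, -13, 41)
errors = [np.sum((simulate(m) - i_exp) ** 2) for m in candidates]
mu_hat = candidates[int(np.argmin(errors))]
print(f"identified mobility: {mu_hat:.2e} (true {mu_true:.2e})")
```

In the paper's setting the same fit would be run separately against TiO2 and HfO2 measurements, giving per-oxide model parameters with which the crossbar-based network is then tuned.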


1994 ◽  
Vol 33 (01) ◽  
pp. 157-160 ◽  
Author(s):  
S. Kruse-Andersen ◽  
J. Kolberg ◽  
E. Jakobsen

Abstract: Continuous recording of intraluminal pressures for extended periods of time is currently regarded as a valuable method for detection of esophageal motor abnormalities. A subsequent automatic analysis of the resulting motility data relies on strict mathematical criteria for recognition of pressure events. Due to great variation in events, this method often fails to detect biologically relevant pressure variations. We have tried to develop a new concept for recognition of pressure events based on a neural network. Pressures were recorded for over 23 hours in 29 normal volunteers by means of a portable data recording system. A number of pressure events and non-events were selected from 9 recordings and used for training the network. The performance of the trained network was then verified on recordings from the remaining 20 volunteers. The accuracy and sensitivity of the two systems were comparable. However, the neural network recognized pressure peaks clearly generated by muscular activity that had escaped detection by the conventional program. In conclusion, we believe that neurocomputing has potential advantages for automatic analysis of gastrointestinal motility data.


2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii VASYLIEV

The problem of applying neural networks to calculate the ratings used in banking when deciding whether to grant loans to borrowers is considered. The task is to determine the borrower's rating function based on a set of statistical data on the effectiveness of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that appear in the expression for the rating function. In contrast, when neural networks are used, there is no need to specify the general form of the rating function. Instead, a particular neural network architecture is chosen, and its parameters are calculated on the basis of the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to calculate a large number of parameters; there is also no universal algorithm for determining the optimal neural network architecture. As an example of the use of neural networks to determine a borrower's rating, a model system is considered in which the borrower's rating is given by a known non-analytical rating function. A neural network with two inner layers, containing three and two neurons respectively with sigmoid activation functions, is used for the modeling. It is shown that the neural network allows the borrower's rating function to be restored with quite acceptable accuracy.
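The described architecture (two inner layers of three and two sigmoid neurons) is small enough to sketch directly. The rating function, the two borrower features, and the use of a numerical gradient (cheap at this size) are my assumptions for illustration; only the layer sizes come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical non-analytical rating function of two borrower features
# (e.g. normalized income and debt load); not from the paper.
def true_rating(X):
    return np.clip(0.7 * X[:, 0] - 0.5 * X[:, 1]
                   + 0.3 * np.abs(X[:, 0] - X[:, 1]), 0.0, 1.0)

X = rng.uniform(0, 1, size=(200, 2))
y = true_rating(X)

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(p, X):
    """2-3-2-1 network with sigmoid hidden layers, parameters packed in p."""
    W1 = p[0:6].reshape(2, 3);   b1 = p[6:9]
    W2 = p[9:15].reshape(3, 2);  b2 = p[15:17]
    W3 = p[17:19].reshape(2, 1); b3 = p[19]
    h1 = sig(X @ W1 + b1)
    h2 = sig(h1 @ W2 + b2)
    return (h2 @ W3 + b3).ravel()

loss = lambda p: float(np.mean((forward(p, X) - y) ** 2))

# The network has only 20 parameters, so a central-difference
# numerical gradient is cheap enough for a sketch.
p = rng.normal(0, 0.5, 20)
eps, lr = 1e-5, 0.2
loss0 = loss(p)
for _ in range(1000):
    g = np.array([(loss(p + eps * e) - loss(p - eps * e)) / (2 * eps)
                  for e in np.eye(20)])
    p -= lr * g
final = loss(p)
print(f"MSE: {loss0:.4f} -> {final:.4f}")
```

The point of the exercise matches the abstract: no general form of the rating function is supplied anywhere; the architecture plus the data determine the fit.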


2019 ◽  
Vol 2019 (1) ◽  
pp. 153-158
Author(s):  
Lindsay MacDonald

We investigated how well a multilayer neural network could implement the mapping between two trichromatic color spaces, specifically from camera R,G,B to tristimulus X,Y,Z. For training the network, a set of 800,000 synthetic reflectance spectra was generated. For testing the network, a set of 8,714 real reflectance spectra was collated from instrumental measurements on textiles, paints and natural materials. Various network architectures were tested, with both linear and sigmoidal activations. Results show that over 85% of all test samples had color errors of less than 1.0 ΔE2000 units, much more accurate than could be achieved by regression.
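As a minimal baseline for the linear-activation case the study mentions, a camera-to-XYZ mapping can be fitted as a single 3×3 matrix by least squares. The ground-truth matrix, the noise level, and the synthetic samples below are assumptions for illustration; the paper's networks and measured spectra are far richer.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ground-truth camera-to-XYZ matrix (illustrative only).
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])

# Synthetic training pairs: camera R,G,B -> tristimulus X,Y,Z with
# a little sensor noise standing in for measured reflectance samples.
RGB = rng.uniform(0, 1, size=(1000, 3))
XYZ = RGB @ M_true.T + rng.normal(0, 0.002, size=(1000, 3))

# Least-squares fit of the 3x3 mapping (the linear-activation baseline).
A, *_ = np.linalg.lstsq(RGB, XYZ, rcond=None)   # solves RGB @ A ~ XYZ
M_fit = A.T

resid = XYZ - RGB @ M_fit.T
rmse = float(np.sqrt(np.mean(resid ** 2)))
print(f"matrix fit RMSE: {rmse:.4f}")
```

A real camera's spectral sensitivities are not an exact linear transform of the CIE color-matching functions, which is why the study's multilayer sigmoidal networks outperform regression of this kind.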


2020 ◽  
Vol 64 (3) ◽  
pp. 30502-1-30502-15
Author(s):  
Kensuke Fukumoto ◽  
Norimichi Tsumura ◽  
Roy Berns

Abstract A method is proposed to estimate the concentrations of the pigments mixed in a painting, using an encoder-decoder neural network model. The model is trained to output a value that is the same as its input, and its middle output extracts a certain feature as compressed information about the input. In this instance, the input and output are the spectral data of a painting, and the model is trained with pigment concentration as the middle output. A dataset containing the scattering and absorption coefficients of each of 19 pigments was used; the Kubelka-Munk theory was applied to these coefficients to obtain many patterns of synthetic spectral data, which were used for training. The proposed method was tested using spectral images of 33 paintings, and the results showed that it estimates, with high accuracy, concentrations whose reconstructed spectra closely match those of the target pigments.
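The training-data generation step can be sketched from the standard Kubelka-Munk relations: the mixture's K/S is the concentration-weighted combination of the pigments' absorption (K) and scattering (S) spectra, and the opaque-layer reflectance is R = 1 + K/S − √((K/S)² + 2K/S). The three random coefficient spectra below are placeholders; the paper uses measured coefficients for 19 pigments.

```python
import numpy as np

rng = np.random.default_rng(5)

n_wl, n_pig = 31, 3                       # wavelengths, pigments
# Hypothetical absorption (K) and scattering (S) spectra per pigment;
# the paper uses measured coefficients for 19 pigments.
K = rng.uniform(0.05, 2.0, size=(n_pig, n_wl))
S = rng.uniform(0.5, 2.0, size=(n_pig, n_wl))

def mixture_reflectance(c):
    """Kubelka-Munk: mixture K/S is the concentration-weighted sum of
    pigment K and S; R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    ks = (c @ K) / (c @ S)
    return 1.0 + ks - np.sqrt(ks ** 2 + 2.0 * ks)

# Generate synthetic (concentration, spectrum) training pairs.
conc = rng.dirichlet(np.ones(n_pig), size=500)   # concentrations sum to 1
spectra = np.array([mixture_reflectance(c) for c in conc])
print("spectra shape:", spectra.shape,
      "reflectance range:", spectra.min(), spectra.max())
```

Pairs like (spectrum, concentration) generated this way are what the encoder half of the model learns to invert, with the concentration vector as the constrained middle output.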

