Artificial intelligence in a rugged design based on multi-bit rules

2021
Vol 2094 (3)
pp. 032009
Author(s):  
T A Zolotareva

Abstract In this paper, technologies for training large artificial neural networks are considered: the first technology is based on the use of multilayer "deep" neural networks; the second involves the use of a "wide" single-layer network of neurons giving 256 private binary decisions. A list of attacks aimed at the simplest one-bit neural network decision rule is given (knowledge extraction attacks and software data modification attacks), and their content is considered. All single-bit decision rules are unsafe to use; other decision rules are required. The vulnerability of neural network decision rules to deliberate hacker attacks is significantly reduced if a decision rule with a large number of output bits is used. The most important property of neural network converters is that, when such a network is trained on 20 examples of the "Friend" image, the 256-bit "Friend" output code is correctly reproduced with a confidence level of 0.95. This means that the entropy of the "Friend" output codes is close to zero: a well-trained neural network virtually eliminates the ambiguity of the "Friend" image data. For "Foe" images, on the contrary, their initial natural entropy is amplified by the neural network. The work considered made it possible to create a draft of the second national standard for the automatic training of networks of quadratic neurons with multilevel quantizers.
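
This behavior (a stable "Friend" code, near-random "Foe" codes) can be illustrated with a toy wide single-layer network. The sketch below uses made-up dimensions, a random code assignment, and a simplified margin-based training rule; it illustrates the effect only and is not the GOST R 52633.5 training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_BITS = 64, 256           # feature dimension and code length (assumed sizes)
friend = rng.normal(0, 1, D)  # mean biometric feature vector of "Friend"

# Wide single-layer network: one neuron per output bit. Toy training rule:
# random hyperplanes, biases shifted so each neuron emits its assigned bit
# on the Friend template with a fixed margin (a simplification).
W = rng.normal(0, 1, (N_BITS, D))
target_code = rng.integers(0, 2, N_BITS)       # the 256-bit "Friend" code
margin = 2.0
b = -W @ friend + np.where(target_code == 1, margin, -margin)

def code(x):
    """The 256 private binary decisions of the wide single-layer network."""
    return (W @ x + b > 0).astype(int)

# "Friend" presentations (small noise) reproduce the code almost exactly;
# "Foe" inputs produce ~50% bit errors, i.e., their entropy is amplified.
friend_err = np.mean([code(friend + rng.normal(0, 0.1, D)) != target_code
                      for _ in range(20)])
foe_err = np.mean([code(rng.normal(0, 1, D)) != target_code for _ in range(20)])
print(f"Friend bit-error rate: {friend_err:.3f}")  # close to 0
print(f"Foe bit-error rate:    {foe_err:.3f}")     # close to 0.5
```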

2017
Vol 29 (3)
pp. 861-866
Author(s):  
Nolan Conaway
Kenneth J. Kurtz

Since the work of Minsky and Papert (1969), it has been understood that single-layer neural networks cannot solve nonlinearly separable classifications (e.g., XOR). We describe and test a novel divergent autoassociative architecture capable of solving nonlinearly separable classifications with a single layer of weights. The proposed network consists of class-specific linear autoassociators. The power of the model comes from treating classification problems as within-class feature prediction rather than directly optimizing a discriminant function. We show unprecedented learning capabilities for a simple, single-layer network (i.e., solving XOR) and demonstrate that the famous limitation in acquiring nonlinearly separable problems is not just about the need for a hidden layer; it is about the choice between directly predicting classes or learning to classify indirectly by predicting features.
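
The core idea is compact enough to sketch: one weight matrix per class, fitted by least squares to reconstruct that class's own examples, with a test point assigned to the class whose autoassociator reconstructs it best. The ±1 input coding and the least-squares fit are assumptions of this sketch, not necessarily the authors' training procedure.

```python
import numpy as np

# XOR with +/-1 input coding (an assumption of this sketch).
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([0, 1, 1, 0])  # XOR labels

# One linear autoassociator per class: a single layer of weights W_c
# fitted to reconstruct the class's own examples (no hidden layer).
W = {c: np.linalg.lstsq(X[y == c], X[y == c], rcond=None)[0] for c in (0, 1)}

def classify(x):
    """Assign x to the class whose autoassociator reconstructs it best."""
    errors = {c: np.linalg.norm(x @ W[c] - x) for c in W}
    return min(errors, key=errors.get)

print([classify(x) for x in X])  # [0, 1, 1, 0]: XOR solved in a single layer
```

Because each class's weights only have to predict features within that class, the decision rule that emerges from comparing reconstruction errors can be nonlinear even though every individual map is linear.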


2021
pp. 84-93
Author(s):  
Alexander Ivanov
Alexeiy Sulavko

The aim of the study is to show that a biometrics-to-access-code converter based on large networks of correlation neurons makes it possible to obtain an even longer key at the output while ensuring the protection of biometric data from compromise. The research method is the use of large "wide" neural networks with automatic learning to implement the biometric authentication procedure while protecting biometric personal data from compromise. Results of the study: the first national standard, GOST R 52633.5, for the automatic training of neuron networks was focused only on a physically secure, trusted computing environment. Protecting the parameters of trained neural network biometrics-to-code converters with cryptographic methods led to the need to use short keys and passwords for biometric-cryptographic authentication. It is proposed to build special correlation neurons in a meta-space of Bayes-Minkowski features of higher dimension. An experiment was carried out to verify patterns of keystroke dynamics using a biometrics-to-code converter based on the data set of the AIConstructor project. In the meta-space of features, the probability of a verification error turned out to be lower (EER = 0.0823) than in the original feature space (EER = 0.0864), while in the protected execution mode of the biometrics-to-code converter the key length can be increased by more than 19 times. Experiments have shown that the transition to the meta-space of Bayes-Minkowski features does not lead to the "curse of dimensionality" problem if some of the original features have a noticeable or strong mutual correlation. The problem of ensuring the confidentiality of the parameters of trained neural network containers, from which the biometrics-to-code converter is formed, is relevant not only for biometric authentication tasks. It seems possible to develop a standard for protecting artificial intelligence based on automatically trained networks of Bayes-Minkowski correlation neurons.
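
For reference, EER values like those quoted above are computed from genuine ("Friend") and impostor ("Foe") match scores. The routine below is a generic equal-error-rate computation on synthetic score distributions; it is not the AIConstructor pipeline or the Bayes-Minkowski neuron construction.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the point where the false accept rate equals the false reject rate."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Synthetic match-score distributions standing in for keystroke-dynamics scores.
rng = np.random.default_rng(1)
genuine = rng.normal(1.0, 0.35, 1000)   # "Friend" scores (assumed distribution)
impostor = rng.normal(0.0, 0.35, 1000)  # "Foe" scores (assumed distribution)
print(f"EER = {equal_error_rate(genuine, impostor):.4f}")
```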


2021
Author(s):  
Kathakali Sarkar
Deepro Bonnerjee
Rajkamal Srivastava
Sangram Bagh

Here, we adapted the basic concept of artificial neural networks (ANNs) and experimentally demonstrated a broadly applicable single-layer ANN-type architecture with molecularly engineered bacteria to perform complex irreversible...


2019
Vol 10 (1)
Author(s):  
Shangying Wang
Kai Fan
Nan Luo
Yangxiaolu Cao
Feilun Wu
...  

Abstract For many biological applications, exploration of the massive parametric space of a mechanism-based model can impose a prohibitive computational demand. To overcome this limitation, we present a framework to improve computational efficiency by orders of magnitude. The key concept is to train a neural network using a limited number of simulations generated by a mechanistic model. This number is small enough that the simulations can be completed in a short time frame but large enough to enable reliable training. The trained neural network can then be used to explore a much larger parametric space. We demonstrate this notion by training neural networks to predict pattern formation and stochastic gene expression. We further demonstrate that using an ensemble of neural networks enables the self-contained evaluation of the quality of each prediction. Our work can serve as a platform for fast parametric space screening of biological models with user-defined objectives.
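
A minimal sketch of the two key ingredients, a surrogate trained on a limited simulation budget and an ensemble whose spread grades each prediction, might look as follows. The simulator, sample counts, and network sizes below are placeholders, not the pattern-formation or gene-expression models from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate(theta):
    """Stand-in for an expensive mechanistic simulation (hypothetical)."""
    return np.sin(3 * theta[0]) * np.exp(-theta[1])

rng = np.random.default_rng(0)
thetas = rng.uniform(0, 1, (500, 2))               # limited simulation budget
outputs = np.array([simulate(t) for t in thetas])

# Ensemble of surrogates on bootstrap resamples: their disagreement gives a
# self-contained estimate of each prediction's quality.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(thetas), len(thetas))
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=seed).fit(thetas[idx], outputs[idx])
    ensemble.append(net)

# Screen a much larger parametric space at negligible cost.
grid = rng.uniform(0, 1, (100_000, 2))
preds = np.stack([net.predict(grid) for net in ensemble])
mean, spread = preds.mean(axis=0), preds.std(axis=0)
print("fraction of low-confidence predictions:", (spread > 0.05).mean())
```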


2018
Vol 146 (11)
pp. 3885-3900
Author(s):  
Stephan Rasp
Sebastian Lerch

Abstract Ensemble weather predictions require statistical postprocessing of systematic errors to obtain reliable and accurate probabilistic forecasts. Traditionally, this is accomplished with distributional regression models in which the parameters of a predictive distribution are estimated from a training period. We propose a flexible alternative based on neural networks that can incorporate nonlinear relationships between arbitrary predictor variables and forecast distribution parameters that are automatically learned in a data-driven way rather than requiring prespecified link functions. In a case study of 2-m temperature forecasts at surface stations in Germany, the neural network approach significantly outperforms benchmark postprocessing methods while being computationally more affordable. Key components of this improvement are the use of auxiliary predictor variables and station-specific information with the help of embeddings. Furthermore, the trained neural network can be used to gain insight into the importance of meteorological variables, thereby challenging the notion of neural networks as uninterpretable black boxes. Our approach can easily be extended to other statistical postprocessing and forecasting problems. We anticipate that recent advances in deep learning combined with the ever-increasing amounts of model and observation data will transform the postprocessing of numerical weather forecasts in the coming decade.
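
A sketch of such a postprocessing network, with a station embedding concatenated to the ensemble predictors and the closed-form CRPS of a Gaussian forecast as the loss, could look like the following PyTorch fragment. The sizes and the single hidden layer are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

N_STATIONS, N_FEATURES, EMB = 500, 10, 4   # assumed sizes

class PostprocessingNet(nn.Module):
    """Maps ensemble predictors plus a station embedding to Normal(mu, sigma)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(N_STATIONS, EMB)    # station-specific information
        self.hidden = nn.Sequential(nn.Linear(N_FEATURES + EMB, 64), nn.ReLU())
        self.out = nn.Linear(64, 2)                 # -> (mu, log sigma)

    def forward(self, x, station):
        h = self.hidden(torch.cat([x, self.emb(station)], dim=-1))
        mu, log_sigma = self.out(h).unbind(-1)
        return mu, log_sigma.exp()

def crps_normal(y, mu, sigma):
    """Closed-form CRPS of a Gaussian predictive distribution."""
    z = (y - mu) / sigma
    pdf = torch.exp(-0.5 * z**2) / (2 * torch.pi) ** 0.5
    cdf = 0.5 * (1 + torch.erf(z / 2**0.5))
    return (sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / torch.pi**0.5)).mean()

# One illustrative training step on random stand-in data.
net = PostprocessingNet()
x = torch.randn(32, N_FEATURES)
station = torch.randint(0, N_STATIONS, (32,))
y = torch.randn(32)
loss = crps_normal(y, *net(x, station))
loss.backward()
```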


2018
Vol 7 (11)
pp. 430
Author(s):  
Krzysztof Pokonieczny

The classification of terrain in terms of passability plays a significant role in the process of military terrain assessment. It involves assigning selected terrain to specific classes (GO, SLOW-GO, NO-GO). In this article, the problem of classifying terrain into the respective passability category was solved by applying artificial neural networks (multilayer perceptrons) to generate a continuous Index of Passability (IOP). The neural networks determined this index for primary fields of two sizes (1000 × 1000 m and 100 × 100 m) based on the land cover elements obtained from Vector Smart Map (VMap) Level 2 and the Shuttle Radar Topography Mission (SRTM). The work used a feedforward neural network consisting of three layers. The paper presents a comprehensive analysis of the reliability of the neural network parameters, taking into account the number of neurons, the learning algorithm, activation functions, and input data configuration. The studies and tests carried out have shown that a well-trained neural network can automate the process of terrain classification in terms of passability conditions.
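
A rough sketch of this setup, an MLP regressing the continuous IOP from per-cell land cover features and thresholding it into the three passability classes, is given below. The feature names, network size, and class cut-offs are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical per-cell features derived from VMap Level 2 and SRTM:
# forest fraction, water fraction, built-up fraction, road density, slope.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (5000, 5))
iop = 1 - 0.5*X[:, 0] - 0.8*X[:, 1] - 0.3*X[:, 2] + 0.4*X[:, 3] - 0.6*X[:, 4]

# Three-layer feedforward network (input, one hidden layer, output).
mlp = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   max_iter=2000).fit(X, iop)

def passability_class(features):
    """Threshold the continuous Index of Passability (cut-offs illustrative)."""
    v = mlp.predict([features])[0]
    return "GO" if v > 0.66 else "SLOW-GO" if v > 0.33 else "NO-GO"

print(passability_class(rng.uniform(0, 1, 5)))
```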


2021
Author(s):  
Bhasker Sri Harsha Suri
Manish Srivastava
Kalidas Yeturu

Neural networks suffer from the catastrophic forgetting problem when deployed in a continual learning scenario, where new batches of data arrive over time but follow distributions different from the data previously used to train the network. For assessing the performance of a model in a continual learning scenario, two aspects are important: (i) computing the difference in data distribution between a new and an old batch of data, and (ii) understanding the retention and learning behavior of the deployed neural network. Current techniques indicate the novelty of a new data batch by comparing its statistical properties with those of the old batch in the input space. However, considering the perspective of a deployed neural network's ability to generalize to unseen data samples is still an open area of research. In this work, we report a dataset distance measuring technique that indicates the novelty of a new batch of data while taking the deployed neural network's perspective into account. We propose the construction of perspective histograms, which are vector representations of data batches based on the correctness and confidence of the deployed model's predictions. We have successfully tested this hypothesis empirically on image data from MNIST Digits, MNIST Fashion, and CIFAR10, for its ability to detect data perturbations of type rotation, Gaussian blur, and translation. Given a model, its training data, and a new batch of data, we propose and evaluate four new scoring schemes, the retention score (R), the learning score (L), the O-score, and the SP-score, for studying, respectively, how much the model retains its performance on past data, how much it learns from new data, the combined magnitude of retention and learning, and its stability-plasticity characteristics. The scoring schemes have been evaluated on the MNIST Digits and MNIST Fashion data sets across neural network architectures differing in the number of parameters, activation functions, and learning loss functions, and an instance of a typical analysis report is presented. Machine learning model maintenance is a reality in production systems in industry, and we hope our proposed methodology offers a solution to this pressing need.
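
The flavor of the construction can be sketched as follows: a batch is summarized by where the deployed model's prediction confidence falls, split into correct and incorrect predictions, and the novelty of a new batch is a distance between such histograms. This is a simplified reading of the idea, not the paper's exact definition.

```python
import numpy as np

def perspective_histogram(model_probs, labels, bins=10):
    """Vector view of a data batch as seen by a deployed model: confidence
    histograms of correct and incorrect predictions, concatenated."""
    pred = model_probs.argmax(axis=1)
    conf = model_probs.max(axis=1)
    correct = pred == labels
    h_right, _ = np.histogram(conf[correct], bins=bins, range=(0, 1))
    h_wrong, _ = np.histogram(conf[~correct], bins=bins, range=(0, 1))
    h = np.concatenate([h_right, h_wrong]).astype(float)
    return h / h.sum()

def batch_novelty(h_old, h_new):
    """Distance between perspective histograms of an old and a new batch."""
    return np.abs(h_old - h_new).sum()
```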


2021
Vol 7 (8)
pp. 146
Author(s):  
Joshua Ganter
Simon Löffler
Ron Metzger
Katharina Ußling
Christoph Müller

Collecting real-world data for the training of neural networks is enormously time-consuming and expensive. As such, the concept of virtualizing the domain and creating synthetic data has been analyzed in many instances. This virtualization offers many possibilities for changing the domain and, with that, enables the relatively fast creation of data. It also offers the chance to enhance necessary augmentations with additional semantic information when compared with conventional augmentation methods. This raises the question of whether such semantic changes, which can be seen as augmentations of the virtual domain, contribute to better results for neural networks trained with data augmented this way. In this paper, a virtual dataset is presented, including semantic augmentations and automatically generated annotations, as well as a comparison between semantic and conventional augmentation for image data. It is determined that the results differ only marginally for neural network models trained with the two augmentation approaches.
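
For contrast, a conventional augmentation pipeline of the kind such comparisons typically use might look as follows (an illustrative torchvision example; the paper's exact augmentation set is not reproduced here).

```python
import torchvision.transforms as T

# Conventional augmentations operate on the rendered image after the fact.
conventional = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.3, contrast=0.3),
    T.RandomRotation(degrees=10),
    T.ToTensor(),
])
# Semantic augmentation instead changes the virtual scene itself (weather,
# lighting, object placement) before rendering, so annotations can be
# regenerated automatically rather than transformed along with the pixels.
```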


2020
Author(s):  
Janosch Menke
Oliver Koch

Molecular fingerprints are essential for different cheminformatics approaches such as similarity-based virtual screening. In this work, the concept of neural (network) fingerprints in the context of similarity search is introduced, in which the activation of the last hidden layer of a trained neural network represents the molecular fingerprint. The neural fingerprint performance of five different neural network architectures was analyzed and compared to the well-established Extended Connectivity Fingerprint (ECFP) and an autoencoder-based fingerprint. This is done using a published compound dataset with known bioactivity on 160 different kinase targets. We expect neural networks to combine information about the molecular space of already known bioactive compounds with information on the molecular structure of the query, and by doing so to enrich the fingerprint. The results show that neural fingerprints can indeed greatly improve the performance of similarity searches. Most importantly, the neural fingerprint performs well even for kinase targets that were not included in the training. Surprisingly, while Graph Neural Networks (GNNs) are thought to offer an advantageous alternative, the best performing neural fingerprints were based on traditional fully connected layers using the ECFP4 as input. The best performing kinase-specific neural fingerprint will be provided for public use.
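
Extracting a neural fingerprint then reduces to reading out the last hidden layer of the trained network. The sketch below uses a fully connected network on ECFP4-style bit vectors with random stand-in weights; only the 2048-bit input and the 160 kinase targets mirror the setup described above, and the hidden sizes are assumptions.

```python
import torch
import torch.nn as nn

# Toy bioactivity network: ECFP4 bit vector in, 160 kinase activities out.
net = nn.Sequential(
    nn.Linear(2048, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),   # last hidden layer = neural fingerprint
    nn.Linear(256, 160),
)

def neural_fingerprint(ecfp_bits):
    """Activation of the last hidden layer for a batch of ECFP4 vectors."""
    with torch.no_grad():
        return net[:-1](ecfp_bits)    # drop the output layer

# Similarity search: rank a compound library against a query fingerprint.
library = torch.randint(0, 2, (10_000, 2048)).float()
query = torch.randint(0, 2, (1, 2048)).float()
sims = nn.functional.cosine_similarity(neural_fingerprint(query),
                                       neural_fingerprint(library))
top10 = sims.topk(10).indices
```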


2021
Vol 17 (14)
pp. 135-153
Author(s):  
Haval Tariq Sadeeq
Thamer Hassan Hameed
Abdo Sulaiman Abdi
Ayman Nashwan Abdulfatah

Digital images contain large amounts of data and thus require considerable memory; a compressed image requires less memory space and less transmission time. Image and video coding technology has evolved steadily in recent years. However, given the popularization of image and video acquisition systems, the growth rate of image data far exceeds the growth of compression ratios. In particular, it is generally accepted that further improvement of coding efficiency within the conventional hybrid coding framework is increasingly challenging. A new and promising image compression solution is offered by deep convolutional neural networks (CNNs), which in recent years have revived interest in neural networks and achieved significant success both in artificial intelligence and in signal processing. In this paper, we provide a systematic, detailed, and current analysis of neural-network-based image compression techniques, tracing the evolution and development of these methods. In particular, end-to-end neural network frameworks are reviewed, revealing fascinating explorations of frameworks and standards for next-generation image coding. The most important studies are highlighted, and future trends in neural network image coding are envisaged.
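
As a minimal illustration of the end-to-end approach such surveys cover, the convolutional autoencoder below maps an image to a compact latent representation and back; real learned codecs add quantization and entropy coding of the latent, which this sketch omits.

```python
import torch
import torch.nn as nn

class CompressionAutoencoder(nn.Module):
    """Minimal learned image codec: a convolutional analysis transform down
    to a smaller latent, and a synthesis transform back to pixels."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(            # 3x64x64 -> 32x16x16
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 5, stride=2, padding=2),
        )
        self.decode = nn.Sequential(            # 32x16x16 -> 3x64x64
            nn.ConvTranspose2d(32, 64, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2,
                               output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

model = CompressionAutoencoder()
x = torch.rand(1, 3, 64, 64)                 # stand-in image batch
x_hat = model(x)
rate_proxy = model.encode(x).numel()         # latent size as a bitrate proxy
distortion = nn.functional.mse_loss(x_hat, x)
```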

