Artificial intelligence in a rugged design based on multi-bit rules
Abstract. This paper considers two technologies for training large artificial neural networks: the first is based on multilayer "deep" neural networks; the second uses a "wide" single-layer network of neurons that produces 256 private binary decisions. A list of attacks aimed at the simplest one-bit neural network decision rule is given, comprising knowledge-extraction attacks and software data-modification attacks, and their content is examined. All single-bit decision rules are unsafe to use, so other decision rules are required. The vulnerability of neural network decision rules to deliberate hacker attacks is significantly reduced when a decision rule with a large number of output bits is used. The most important property of such neural network converters is that, after training on 20 examples of the "Friend" image, the 256-bit "Friend" output code is reproduced correctly with a confidence level of 0.95. This means that the entropy of the "Friend" output codes is close to zero: a well-trained neural network virtually eliminates the ambiguity of the "Friend" image data. Conversely, for "Foe" images the initial natural entropy of the data is amplified by the neural network. The work considered here made it possible to create a draft of the second national standard for automatic training of networks of quadratic neurons with multilevel quantizers.
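The "wide" single-layer scheme described above can be sketched in code. The sketch below is a minimal illustration, not the paper's actual training procedure: the feature dimensionality, the linear (rather than quadratic) neurons, the random weights, and the zero threshold are all illustrative assumptions. It shows the key property the abstract relies on: the same "Friend" input deterministically reproduces one 256-bit code, while an unrelated "Foe" input lands elsewhere in Hamming space.

```python
import random

N_BITS = 256      # length of the output code, as in the paper
N_FEATURES = 16   # size of the input feature vector (illustrative assumption)

def make_network(seed=1):
    # One "wide" layer: each of the 256 neurons holds its own weight vector.
    rng = random.Random(seed)
    return [[rng.uniform(-1.0, 1.0) for _ in range(N_FEATURES)]
            for _ in range(N_BITS)]

def code(network, features):
    # Each neuron yields one private binary decision: its weighted sum
    # thresholded to a bit. 256 neurons together give a 256-bit code.
    return [1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0
            for weights in network]

def hamming(a, b):
    # Number of differing bits between two codes.
    return sum(x != y for x, y in zip(a, b))

net = make_network()
rng = random.Random(2)
friend = [rng.uniform(-1.0, 1.0) for _ in range(N_FEATURES)]
foe = [rng.uniform(-1.0, 1.0) for _ in range(N_FEATURES)]

friend_code = code(net, friend)
# The same input always reproduces the same code: the "Friend" code
# carries no ambiguity (entropy close to zero).
assert hamming(friend_code, code(net, friend)) == 0
# An unrelated "Foe" input produces a code some distance away in
# Hamming space; many of its bits disagree with the "Friend" code.
foe_distance = hamming(friend_code, code(net, foe))
```

A real converter of this kind would be trained so that all examples of the enrolled "Friend" image map to one stable code, while the natural variability of "Foe" inputs is spread across the code space; the sketch only fixes the weights at random to keep it self-contained.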