Neural Networks with Disabilities: An Introduction to Complementary Artificial Intelligence

2021 ◽  
pp. 1-36
Author(s):  
Vagan Terziyan ◽  
Olena Kaikova

Abstract: Machine learning is a good tool for simulating human cognitive skills, as it maps perceived information to labels or action choices, aiming at optimal behavior policies for a human or an artificial agent operating in its environment. In autonomous systems, objects and situations are perceived by receptors divided among sensors, and reactions to the input (e.g., actions) are distributed among particular capability providers, or actuators. Cognitive models can be trained as, for example, neural networks. We suggest training such models for cases of potential disabilities, where a disability is the absence of one or more cognitive sensors or actuators at different levels of the cognitive model. We adapt several neural network architectures to simulate various cognitive disabilities. The idea was triggered by the "coolability" (enhanced capability) paradox, according to which a person with some disability can become more efficient in using other capabilities. An autonomous system (human or artificial) pretrained with simulated disabilities should therefore act more efficiently in adversarial conditions. We consider these coolabilities as complementary artificial intelligence and argue for the usefulness of this concept in various applications.
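A minimal sketch of how one such disability — a missing sensor — could be simulated during training, assuming each sensor contributes a contiguous block of input features (the masking scheme below is our illustration, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_sensors(batch, n_sensors, p_disable=0.3):
    """Simulate a sensor-level 'disability' by zeroing whole input channels.

    batch is (n_samples, n_sensors * feats), laid out sensor by sensor.
    Each sample independently loses each sensor with probability p_disable,
    so the model must learn to compensate with the remaining channels.
    """
    n_samples, width = batch.shape
    feats = width // n_sensors
    keep = rng.random((n_samples, n_sensors)) >= p_disable  # True = sensor alive
    mask = np.repeat(keep, feats, axis=1).astype(batch.dtype)
    return batch * mask

x = rng.normal(size=(4, 8))                  # 4 samples, 2 sensors x 4 features
masked = mask_sensors(x, n_sensors=2, p_disable=0.5)
```

Training on such masked batches is analogous to dropout applied at the granularity of whole sensors rather than individual units.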

Background: The problem of searching for subsurface objects is of particular interest for construction, archeology and humanitarian demining. Detecting underground mines with remote sensing devices replaces the traditional procedure of finding explosive objects, as it removes the human from the area of possible damage during a charge explosion. Objectives: The aim of the work is to improve the recognition of three-dimensional objects and to demonstrate the benefits of a more informative data set obtained by a special antenna system with four receiving antennas. In addition, the effectiveness of artificial intelligence must be compared with the cross-correlation method for recognition by subsurface radar, taking into account the additive noise of different levels present in practice. Materials and methods: The electrodynamic problem was solved by the finite-difference time-domain (FDTD) method. An artificial neural network (ANN) is trained on ideal signals to detect the features of the field that will be found in noisy data and to determine the position of the object. Cross-correlation likewise uses an array of ideal signals, which are correlated with noisy real signals. Results: An optimal and effective ANN structure for working with the received signals was created and tested for noise immunity. The recognition problem was also solved by the classical cross-correlation method, and the influence of noise of different levels on its responses was studied. In addition, recognition efficiency was compared using 1 and 4 sensors. Conclusions: For subsurface survey problems, deep neural networks with at least three hidden layers of neurons should be used, owing to the complexity and multidimensionality of the processes taking place in the surveyed space.
It has been shown that both artificial intelligence and cross-correlation perform object recognition well, and it is difficult to single out the better of the two. Both approaches showed good noise immunity. Using the larger data set from four receivers has a positive effect on the recognition results.
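The cross-correlation side of the comparison can be sketched as template matching of a noisy measurement against the array of ideal signals (a simplified illustration; the signals and noise level here are synthetic, not the radar data from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

def classify_by_correlation(signal, templates):
    """Pick the ideal template with the highest normalized cross-correlation.

    templates: (n_classes, n_points) array of noise-free reference signals.
    signal: a noisy measurement of the same length. Normalizing both sides
    makes the score insensitive to amplitude, so additive noise mainly
    lowers the peak rather than shifting the winner.
    """
    s = (signal - signal.mean()) / (signal.std() + 1e-12)
    scores = []
    for t in templates:
        tn = (t - t.mean()) / (t.std() + 1e-12)
        scores.append(float(np.dot(s, tn)) / len(s))
    return int(np.argmax(scores)), scores

# two hypothetical 'ideal' responses for objects at different positions
t_grid = np.linspace(0.0, 1.0, 256)
templates = np.stack([np.sin(2 * np.pi * 5 * t_grid),
                      np.sin(2 * np.pi * 9 * t_grid)])
noisy = templates[1] + 0.5 * rng.normal(size=256)   # additive noise
label, scores = classify_by_correlation(noisy, templates)
```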


2020 ◽  
Author(s):  
Zhe Xu

Despite the fact that artificial intelligence boosted with data-driven methods (e.g., deep neural networks) has surpassed human-level performance in various tasks, its application to autonomous systems still faces fundamental challenges such as lack of interpretability, an intensive need for data and lack of verifiability. In this paper, I overview some attempts to address these fundamental challenges by explaining, guiding and verifying autonomous systems, taking into account the limited availability of simulated and real data, the expressivity of high-level knowledge representations and the uncertainties of the underlying model. Specifically, the paper covers learning high-level knowledge from data for interpretable autonomous systems, guiding autonomous systems with high-level knowledge, and verifying and controlling autonomous systems against high-level specifications.


Author(s):  
Joshua Bensemann ◽  
Qiming Bao ◽  
Gaël Gendron ◽  
Tim Hartill ◽  
Michael Witbrock

Processes occurring in brains, a.k.a. biological neural networks, can be, and have been, modeled within artificial neural network architectures. For this reason, we have conducted a review of research on the phenomenon of blindsight in an attempt to generate ideas for artificial intelligence models. Blindsight can be considered a diminished form of visual experience. If we assume that artificial networks have no form of visual experience, then the deficits caused by blindsight give us insights into the processes occurring within visual experience that we can incorporate into artificial neural networks. This paper is structured in three parts. Section 2 is a review of blindsight research, looking specifically at the errors occurring during this condition compared to normal vision. Section 3 identifies overall patterns from Sec. 2 to generate insights for computational models of vision. Section 4 demonstrates the utility of examining biological research to inform artificial intelligence research by examining computational models of visual attention relevant to one of the insights generated in Sec. 3. The research covered in Sec. 4 shows that incorporating one of our insights into computational vision does benefit those models. Future research will be required to determine whether our other insights are as valuable.


Current theories of artificial intelligence and the mind are dominated by the notion that thinking involves the manipulation of symbols. The symbols are intended to have a specific semantics in the sense that they represent concepts referring to objects in the external world, and they conform to a syntax, being operated on by specific rules. I describe three alternative, non-symbolic approaches, each with a different emphasis but all using the same underlying computational model. This is a network of interacting computing units, a unit representing a nerve cell to a greater or lesser degree of fidelity in the different approaches. Computational neuroscience emphasizes the development and functioning of the nervous system; the approach of neural networks examines new algorithms for specific applications in, for example, pattern recognition and classification; according to the sub-symbolic approach, concepts are built up of entities called sub-symbols, which are the activities of individual processing units in a neural network. A frequently debated question is whether theories formulated at the sub-symbolic level are ‘mere implementations’ of symbolic ones. I describe recent work due to Foster, who proposes that it is valid to view a system at many different levels of description and that, whereas any theory may have many different implementations, in general sub-symbolic theories may not be implementations of symbolic ones.


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3183
Author(s):  
Cheng Li ◽  
Fei Miao ◽  
Gang Gao

Deep Neural Networks (DNNs) are commonly used methods in computational intelligence. Most prevalent DNN-based image classification methods seek to improve performance by designing complicated network architectures with large numbers of model parameters, and these large-scale models are applied to all images uniformly. However, since there are meaningful differences between images, it is difficult to classify all images accurately with a single network architecture. For example, a deeper network suits images that are difficult to distinguish, but may overfit on simple images. Therefore, different models should be used selectively for different images, much like the human cognition mechanism, in which different levels of neurons are activated according to the difficulty of object recognition. To this end, we propose a Hierarchical Convolutional Neural Network (HCNN) for image classification in this paper. HCNNs comprise multiple sub-networks, which can be viewed as different levels of neurons in humans, and these sub-networks classify the images progressively. Specifically, we first initialize the weight of each image and each image category, and these images and initial weights are used for training the first sub-network. Then, according to the predictions of the first sub-network, the weights of misclassified images are increased, while the weights of correctly classified images are decreased. The images with the updated weights are then used for training the next sub-network, and similar operations are performed on all sub-networks. In the test stage, each image passes through the sub-networks in turn. If the prediction confidence in a sub-network is higher than a given threshold, the result is output directly. Otherwise, deeper visual features are learned successively by the subsequent sub-networks until a reliable classification result is obtained or the last sub-network is reached. Experimental results show that HCNNs obtain better results than classical CNNs and existing ensemble-learning models: 2.68% higher accuracy than Residual Network 50 (Resnet50) on the ultrasonic image dataset, 1.19% higher than Resnet50 on the chimpanzee facial image dataset, and 10.86% higher than Adaboost-CNN on the CIFAR-10 dataset. Furthermore, the HCNN is extensible, since the types of sub-networks and their combinations can be dynamically adjusted.
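The two mechanisms described in the abstract — boosting-style reweighting between sub-networks and confidence-thresholded cascade inference — can be sketched as follows (the reweighting factors, threshold, and toy sub-networks are illustrative assumptions, not the paper's exact values):

```python
import numpy as np

def update_weights(weights, correct):
    """Boosting-style reweighting between sub-networks: misclassified images
    gain weight, correctly classified ones lose it, then renormalize so the
    weights again sum to one (the exact factors here are illustrative)."""
    w = np.where(correct, weights * 0.5, weights * 2.0)
    return w / w.sum()

def cascade_predict(x, subnets, threshold=0.9):
    """Pass an image through the sub-networks in turn; stop as soon as one
    is confident enough, otherwise let the last sub-network decide."""
    for net in subnets[:-1]:
        label, conf = net(x)
        if conf >= threshold:
            return label
    return subnets[-1](x)[0]

# hypothetical sub-networks: a fast, unsure one and a deeper, sure one
shallow = lambda x: (0, 0.6)    # confidence below threshold -> defer
deep    = lambda x: (1, 0.95)   # deeper features, confident answer
result = cascade_predict(None, [shallow, deep])   # deep network decides
```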


2020 ◽  
Vol 12 (22) ◽  
pp. 9707
Author(s):  
Sergiu Cosmin Nistor ◽  
Tudor Alexandru Ileni ◽  
Adrian Sergiu Dărăbant

Machine learning is a branch of artificial intelligence that has gained a lot of traction in recent years due to advances in deep neural networks. These algorithms can be used to process large quantities of data, which would be impossible to handle manually. Often, the algorithms and methods needed for solving these tasks are problem dependent. We propose an automatic method for creating new convolutional neural network architectures that are specifically designed to solve a given problem. We describe our method in detail and explain its reduced carbon footprint, computation time and cost compared to a manual approach. Our method uses a rewarding mechanism for creating networks with good performance and so gradually improves its architecture proposals. The application we chose for this paper is segmentation of eyeglasses from images, but our method is applicable, to a greater or lesser extent, to any image processing task. We present and discuss our results, including the architecture that obtained a 0.9683 intersection-over-union (IOU) score on our most complex dataset.
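The intersection-over-union score used to evaluate the segmentation results follows the standard definition, shown here on toy binary masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    inter = np.logical_and(pred, target).sum()
    return inter / union if union else 1.0

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # predicted mask (4 pixels)
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # ground-truth mask (6 pixels)
score = iou(a, b)                                  # 4 overlapping / 6 in union
```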


2018 ◽  
Vol 1 (8) ◽  
pp. 2-5 ◽  
Author(s):  
L. L. Bosova ◽  
N. N. Samylkina

The article describes the work of the Informatics Club within the project "Children's University of MPSU" and considers how advanced topics in informatics can be explored in the Club's work with students at different levels of education.


2019 ◽  
Vol 2019 (1) ◽  
pp. 153-158
Author(s):  
Lindsay MacDonald

We investigated how well a multilayer neural network could implement the mapping between two trichromatic color spaces, specifically from camera R,G,B to tristimulus X,Y,Z. For training the network, a set of 800,000 synthetic reflectance spectra was generated. For testing the network, a set of 8,714 real reflectance spectra was collated from instrumental measurements on textiles, paints and natural materials. Various network architectures were tested, with both linear and sigmoidal activations. Results show that over 85% of all test samples had color errors of less than 1.0 ΔE2000 units, much more accurate than could be achieved by regression.
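The mapping task can be illustrated with a one-hidden-layer sigmoid network trained on synthetic data. This is a toy sketch: the target matrix, data size, network width and training schedule below are invented for illustration (the paper trained on 800,000 synthetic reflectance spectra and compared several architectures):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented linear RGB -> XYZ-like target for demonstration purposes only.
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
X = rng.random((512, 3))                     # synthetic RGB triplets in [0, 1]
Y = X @ M.T                                  # target tristimulus values

H = 16                                       # hidden units
W1 = rng.normal(scale=0.5, size=(3, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 3)); b2 = np.zeros(3)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):                        # plain full-batch gradient descent
    h = sig(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - Y                           # gradient of 0.5 * squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)        # backprop through the sigmoid
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean(err ** 2))               # final training error
```

The paper's ΔE2000 evaluation operates in a perceptual color space; the plain mean-squared error here is only a stand-in for checking that the network fits the mapping.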


Author(s):  
A.B. Movsisyan ◽  
◽  
A.V. Kuroyedov ◽  
G.A. Ostapenko ◽  
S.V. Podvigin ◽  
...  

Relevance: Driven by the rising incidence of glaucoma worldwide, one of the main causes of vision loss, and by late diagnosis in the presence of already pronounced changes in the organ of vision. Purpose: To improve the efficiency of glaucoma diagnostics based on evaluation of the optic disc and peripapillary retina by a neural network and artificial intelligence. Material and methods: Four diagnoses were defined for training the neural network: first, "normal"; second, early-stage glaucoma; third, advanced-stage glaucoma; fourth, far-advanced glaucoma. Classification was based on fundus images of the optic disc and peripapillary retina region. As a result of classification, the input data were divided into two classes, "normal" and "glaucoma". For training and for assessing training quality, the data set was split into two subsets, training and test. The training subset included 8,193 images with glaucomatous optic disc changes and "normal" images (patients without glaucoma). Disease stages were verified according to the current classification of primary open-angle glaucoma by three experts with 5 to 25 years of experience. The test subset included 407 images, of which 199 were "normal" and 208 showed early, advanced and far-advanced stages of glaucoma. To solve the "normal"/"glaucoma" classification task, a neural network architecture consisting of five convolutional layers was chosen. Results: The sensitivity of optic disc testing with the neural network was 0.91, the specificity 0.93. Analysis of the results showed the effectiveness of the developed neural network and its advantage over existing methods of glaucoma diagnostics. Conclusions: The use of neural networks and artificial intelligence is a modern, effective and promising method of glaucoma diagnostics.
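The reported sensitivity and specificity follow the standard confusion-matrix definitions. The counts below are hypothetical, chosen only to be consistent with the reported test split (208 glaucoma, 199 normal) and the reported metrics:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts (not published in the abstract):
tp, fn = 189, 19      # 189 of 208 glaucoma images flagged correctly
tn, fp = 185, 14      # 185 of 199 normal images passed correctly
sens, spec = sensitivity_specificity(tp, fn, tn, fp)
```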

