Artificial Intellect with Artificial Neural Networks

Author(s):  
В.М. Еськов ◽  
М.А. Филатов ◽  
Г.В. Газя ◽  
Н.Ф. Стратан

Currently, there is no single definition of artificial intelligence. A categorization of the tasks to be solved by artificial intelligence systems is therefore needed. The paper proposes such a categorization for artificial neural networks (in terms of obtaining subjectively and objectively new information). The advantages of such neural networks (non-algorithmizable problems) are shown, and a class of systems (third-type biosystems) that in principle cannot be studied by statistical methods (or by science as a whole) is presented. To study such biosystems (with unique samples), it is suggested to use artificial neural networks able to perform system synthesis (the search for order parameters). At present such problems are solved by humans heuristically, a process that cannot be modeled by existing artificial intelligence systems.

Author(s):  
Т. В. Гавриленко ◽  
А. В. Гавриленко

The paper provides an overview of methods and approaches to attacks on artificial intelligence systems built on artificial neural networks. It is shown that since 2015 researchers in various countries have been actively developing such attack methods and approaches, and that the methods and approaches developed so far may have critical consequences for the operation of artificial intelligence systems. We conclude that the methodological and theoretical basis of artificial neural networks must be further developed, since trusted artificial intelligence systems cannot be created within the current paradigm.
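The abstract does not name specific attack methods, but the best-known family from the period it surveys is gradient-based adversarial perturbation (FGSM-style attacks). As an illustrative sketch only, not material from the paper, the following minimal NumPy example shows how a small perturbation of the input in the direction of the loss gradient flips the decision of a toy logistic classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained" logistic classifier: predicts class 1 when w @ x > 0.
w = np.array([2.0, -1.0])

def predict(x):
    return int(sigmoid(w @ x) > 0.5)

# Clean input, correctly classified as class 1.
x = np.array([0.5, 0.5])
y = 1.0

# FGSM-style perturbation: step along the sign of the loss gradient
# with respect to the input. For logistic loss, d(loss)/dx = (p - y) * w.
p = sigmoid(w @ x)
grad = (p - y) * w
eps = 0.3
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # → 1 0  (the perturbation flips the class)
```

The same mechanism scales to deep networks, where the gradient is obtained by backpropagation; the perturbation can be small enough to be imperceptible while still changing the prediction, which is what makes such attacks critical for deployed systems.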


Author(s):  
Adrian Erasmus ◽  
Tyler D. P. Brunet ◽  
Eyal Fisher

Abstract: We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: (1) Are networks explainable, and if so, what does it mean to explain the output of a network? And (2) what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In response to (1), we show how four familiar accounts of explanation apply to neural networks as they would to any scientific phenomenon. We diagnose the confusion about explaining neural networks within the machine learning literature as an equivocation on “explainability,” “understandability” and “interpretability.” To remedy this, we distinguish between these notions, and answer (2) by offering a theory and typology of interpretation in machine learning. Interpretation is something one does to an explanation with the aim of producing another, more understandable, explanation. As with explanation, there are various concepts and methods involved in interpretation: Total or Partial, Global or Local, and Approximative or Isomorphic. Our account of “interpretability” is consistent with uses in the machine learning literature, in keeping with the philosophy of explanation and understanding, and pays special attention to medical artificial intelligence systems.


2010 ◽  
Vol 163-167 ◽  
pp. 1854-1857
Author(s):  
Anuar Kasa ◽  
Zamri Chik ◽  
Taha Mohd Raihan

Prediction of internal stability for segmental retaining walls reinforced with geogrid and backfilled with residual soil was carried out using statistical methods and artificial neural networks (ANN). Prediction was based on data obtained from 234 segmental retaining wall designs using procedures developed by the National Concrete Masonry Association (NCMA). The study showed that predictions made using ANN were generally closer to the target values than those of statistical methods based on linear, pure quadratic, full quadratic, and interaction models.
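The four statistical model families named in the study differ only in their design matrices. As a sketch on synthetic data (the 234 NCMA wall designs are not available here, so the target function below is purely hypothetical), the following fits each feature set by least squares and shows why a model family that includes the right terms fits closer to the target:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
# Hypothetical target containing an interaction term.
y = 1.0 + 2.0 * x1 - x2 + 3.0 * x1 * x2

def fit_mse(X, y):
    """Least-squares fit of X @ coef ~ y; returns mean squared residual."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((X @ coef - y) ** 2)

ones = np.ones_like(x1)
models = {
    "linear":         np.column_stack([ones, x1, x2]),
    "pure quadratic": np.column_stack([ones, x1, x2, x1**2, x2**2]),
    "interactions":   np.column_stack([ones, x1, x2, x1 * x2]),
    "full quadratic": np.column_stack([ones, x1, x2, x1 * x2, x1**2, x2**2]),
}
for name, X in models.items():
    print(f"{name:15s} MSE = {fit_mse(X, y):.4f}")
```

On this data the linear and pure quadratic fits leave a large residual, while the interaction and full quadratic models fit essentially exactly; an ANN, by contrast, does not require the correct terms to be specified in advance, which is one plausible reason for its advantage in the study.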


2015 ◽  
Vol 13 (7) ◽  
pp. 2094-2100 ◽  
Author(s):  
Carlos Alberto de Albuquerque Silva ◽  
Adriao Duarte Doria Neto ◽  
Jose Alberto Nicolau Oliveira ◽  
Jorge Dantas Melo ◽  
David Simonetti Barbalho ◽  
...  

Author(s):  
Martín Montes Rivera ◽  
Alejandro Padilla ◽  
Juana Canul-Reich ◽  
Julio Ponce

Vision is achieved using photoreceptor cells called rods (luminosity) and cones (color). Color perception is required when interacting with educational materials, industrial environments, traffic signals, and more, but colorblind people have difficulty perceiving colors. There are different tests for colorblindness, such as the Ishihara plate test, which shows numbers in colors that are confused under colorblindness. Advances in computer science have produced digital assistants for colorblindness, but there are opportunities to improve them using artificial intelligence, whose techniques have shown strong results in classification tasks. This chapter proposes the use of artificial neural networks, an artificial intelligence technique, to learn the colors that colorblind people cannot distinguish well, using the Ishihara plates as input data and recoloring the image by increasing its brightness. Results were tested with real colorblind people, who successfully passed the Ishihara test.
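The chapter's recoloring step is not specified in detail here. As a hedged sketch of the same flavor, the following uses a simple rule-based stand-in for the learned classifier: it brightens pixels whose red and green channels are close enough to be confusable under red-green colorblindness. The channel threshold and brightness factor are illustrative choices, not values from the chapter:

```python
import numpy as np

def recolor(image, threshold=0.15, factor=1.4):
    """Brighten pixels whose red and green channels are nearly equal,
    i.e. colors likely to be confused under red-green colorblindness.
    image: float array of shape (H, W, 3) with values in [0, 1]."""
    red, green = image[..., 0], image[..., 1]
    confusable = np.abs(red - green) < threshold      # mask of risky pixels
    out = image.copy()
    out[confusable] = np.clip(out[confusable] * factor, 0.0, 1.0)
    return out

# A 1x2 test image: one confusable olive pixel, one clearly red pixel.
img = np.array([[[0.5, 0.45, 0.1],    # red ≈ green -> gets brightened
                 [0.9, 0.2, 0.1]]])   # red >> green -> left unchanged
out = recolor(img)
```

In the chapter's approach, the hand-written `confusable` rule would instead be a neural network trained on the Ishihara plates to decide which colors a given viewer cannot distinguish.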

