Entropy of Artificial Intelligence

Universe ◽  
2022 ◽  
Vol 8 (1) ◽  
pp. 53
Author(s):  
T. S. Biró ◽  
Antal Jakovác

We describe a model of artificial intelligence systems based on the dimension of the probability space of the input set available for recognition. In this scenario, to understand a subset means to be able to decide efficiently whether an object is an element of that subset or not. In the machine learning (ML) process, we define appropriate features, thereby shrinking the bit-length needed to define the classified sets during learning. This can also be phrased in the language of entropy: while natural processes tend to increase disorder, that is, to increase entropy, learning creates order, and we expect it to decrease a properly defined entropy.
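To make the entropy framing concrete, here is a minimal sketch (a hypothetical 8-bit toy task, not the authors' model) showing how a learned feature that captures subset membership drives a suitably defined conditional entropy to zero:

```python
import numpy as np

def shannon_entropy(labels):
    """Shannon entropy H = -sum p log2 p of a discrete label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)

# Toy "recognition" task: 8-bit inputs; the hidden subset is
# "inputs whose two leading bits are both 1".
X = rng.integers(0, 2, size=(1000, 8))
y = (X[:, 0] & X[:, 1]).astype(int)  # subset membership

# Before learning: membership is spread over the full 8-bit space.
print("H(y) =", shannon_entropy(y))  # ~0.81 bits

# A learned feature compresses the 8 defining bits to 1 relevant bit.
feature = X[:, 0] & X[:, 1]
# Conditional entropy H(y | feature) = sum_f p(f) H(y | f):
h_cond = sum(
    (feature == f).mean() * shannon_entropy(y[feature == f])
    for f in np.unique(feature)
)
print("H(y | feature) =", h_cond)  # 0.0: learning removed the disorder
```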

2021 ◽  
Vol 2068 (1) ◽  
pp. 012042
Author(s):  
A Kolesnikov ◽  
P Kikin ◽  
E Panidi

Abstract The field of logistics and transport operates with large amounts of data. Transforming such data arrays into knowledge and processing them with machine learning methods can help find additional reserves for optimizing transport and logistics processes and supply chains. This article analyses the possibilities and prospects for applying machine learning and geospatial knowledge in the field of logistics and transport, using specific examples. The long-term impact of geospatial artificial intelligence systems on processes such as procurement, delivery, inventory management, maintenance, and customer interaction is considered.
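As an illustration of how geospatial knowledge can enter such a pipeline, the following sketch trains a hypothetical delivery-time model on a great-circle distance feature; the data, feature choices, and regressor are all invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km -- a basic geospatial feature."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

rng = np.random.default_rng(1)
n = 500
# Hypothetical shipment records: origin/destination coordinates, payload mass.
lat_a, lon_a = rng.uniform(50, 55, n), rng.uniform(30, 40, n)
lat_b, lon_b = rng.uniform(50, 55, n), rng.uniform(30, 40, n)
mass_kg = rng.uniform(1, 100, n)

dist = haversine_km(lat_a, lon_a, lat_b, lon_b)
X = np.column_stack([dist, mass_kg])
# Synthetic target: travel time grows with distance, plus handling noise.
y = dist / 60.0 + 0.01 * mass_kg + rng.normal(0, 0.1, n)

model = GradientBoostingRegressor().fit(X, y)
print("Predicted hours:", model.predict([[120.0, 25.0]])[0])
```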


2020 ◽  
Author(s):  
Leonardo Guerreiro Azevedo ◽  
Renan Souza ◽  
Raphael Melo Thiago ◽  
Elton Soares ◽  
Marcio Moreno

Machine Learning (ML) is a core concept behind Artificial Intelligence systems, which are driven by data and generate ML models. These models are used for decision making, and it is crucial to trust their outputs by, e.g., understanding the process that derives them. One way to explain the derivation of ML models is to track the whole ML lifecycle, generating its data lineage, which may be accomplished with provenance data management techniques. In this work, we present the use of the ProvLake tool for ML provenance data management in the ML lifecycle of Well Top Picking, an essential process in Oil and Gas exploration. We show how ProvLake supported the validation of the ML models, the understanding of whether they generalize while respecting domain characteristics, and the explanation of their derivation.
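The sketch below illustrates the core idea of provenance capture in an ML lifecycle. It is a generic stand-in, not the actual ProvLake API: each lifecycle step records its inputs, parameters, and outputs, so a model's derivation can later be traced back through the lineage log. All task names and file paths are hypothetical.

```python
import json
import time
import uuid

class ProvenanceStore:
    """Minimal lineage recorder (a generic stand-in, not the ProvLake API):
    each ML lifecycle step logs its inputs, parameters, and outputs."""

    def __init__(self, path="provenance.jsonl"):
        self.path = path

    def record(self, task, inputs, params, outputs):
        entry = {
            "id": str(uuid.uuid4()),
            "task": task,
            "timestamp": time.time(),
            "inputs": inputs,    # e.g. dataset URIs, upstream artifact ids
            "params": params,    # e.g. hyperparameters
            "outputs": outputs,  # e.g. model files, metrics
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["id"]

store = ProvenanceStore()
prep_id = store.record(
    task="curate_well_logs",
    inputs=["raw_logs.csv"], params={"filter": "depth > 0"},
    outputs=["curated_logs.parquet"],
)
store.record(
    task="train_model",
    inputs=["curated_logs.parquet", prep_id],  # links the two steps
    params={"model": "gradient_boosting", "cv_folds": 5},
    outputs=["model.bin"],
)
```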


2019 ◽  
Vol 1 (1) ◽  
pp. 912-920
Author(s):  
Małgorzata Suchacka ◽  
Nicole Horáková

Abstract The main goal of the study is to draw attention to the technologization of the learning process and its social dimensions in the context of artificial intelligence. The reflection mainly covers selected theories of learning and knowledge management in organizations and their broadly understood environment. Considering the sociological dimensions of these phenomena leads to an emphasis on the security of the human-organization-device relationship. Owing to the interdisciplinary nature of the issue, the article includes references to the concepts of artificial intelligence and machine learning. Difficult questions arising around these ideas form the conclusion of the considerations.


2021 ◽  
Vol 8 (2) ◽  
pp. 1-2
Author(s):  
Julkar Nine

Vision-based systems have become an integral part of autonomous driving. The autonomous-driving industry has made large progress in environment perception as a result of improvements to vision-based systems. As the industry moves up the ladder of automation, safety features come more and more into focus. Different safety measures have to be taken into consideration in different driving situations. One of the major concerns at the highest level of autonomy is the ability to understand both internal and external situations. Most research on vision-based systems focuses on image processing and artificial intelligence techniques such as machine learning and deep learning. Because the current generation of technology is the generation of the "Connected World", there is no lack of data any more. With the introduction of the Internet of Things, most of these connected devices can share and transfer data. Vision-based techniques depend heavily on such vision data.
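As a minimal illustration of the image-processing side of such a perception stack, the following sketch (a classical edge-detection-plus-Hough lane finder, assuming OpenCV is available; the frame path is hypothetical) extracts lane-like line segments from a dashcam frame:

```python
import cv2
import numpy as np

def detect_lane_segments(bgr_frame):
    """Toy vision pipeline: edges + Hough transform to find lane-like lines.
    A sketch of classical image processing, not a production perception stack."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower half of the frame, where lanes normally appear.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=60, maxLineGap=20)

# Usage (assumes a dashcam frame on disk; the path is hypothetical):
frame = cv2.imread("dashcam_frame.jpg")
if frame is not None:
    lines = detect_lane_segments(frame)
    print(0 if lines is None else len(lines), "lane-like segments found")
```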


2021 ◽  
Author(s):  
Zachary Arnold ◽  
Helen Toner

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief describes how trends we already see today—both in newly deployed artificial intelligence systems and in older technologies—show how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce these risks.


2021 ◽  
Vol 12 ◽  
Author(s):  
Shayantan Banerjee ◽  
Akram Mohammed ◽  
Hector R. Wong ◽  
Nades Palaniyar ◽  
Rishikesan Kamaleswaran

A complicated clinical course for critically ill patients admitted to the intensive care unit (ICU) usually includes multiorgan dysfunction and subsequent death. Owing to the heterogeneity, complexity, and unpredictability of disease progression, ICU patient care is challenging. Identifying predictors of a complicated course and subsequent mortality at the early stages of the disease, and recognizing the trajectory of the disease from the vast array of longitudinal quantitative clinical data, is difficult. Therefore, we performed a meta-analysis of previously published gene expression datasets to identify novel early biomarkers and to train artificial intelligence systems to recognize disease trajectories and subsequent clinical outcomes. Using the gene expression profile of peripheral blood cells obtained within 24 h of pediatric ICU (PICU) admission, together with extensive clinical data from 228 septic PICU patients, we identified 20 differentially expressed genes predictive of a complicated course and developed a new machine learning model. After 5-fold cross-validation with 10 iterations, the overall mean area under the curve reached 0.82. Using a subset of the same genes, we further achieved overall areas under the curve of 0.72, 0.96, 0.83, and 0.82, respectively, on four independent external validation sets. The model was highly effective in identifying patients' clinical trajectories and mortality. Artificial intelligence systems identified eight of the twenty novel genetic markers (SDC4, CLEC5A, TCN1, MS4A3, HCAR3, OLAH, PLCB1, and NLRP1) that help predict sepsis severity or mortality. While these genes have previously been associated with sepsis mortality, in this work we show that they are also implicated in complicated disease courses, even among survivors. The discovery of eight novel genetic biomarkers related to the overactive innate immune system, including neutrophil function, together with a new predictive machine learning method, provides options to effectively recognize sepsis trajectories, modify real-time treatment options, improve prognosis, and increase patient survival.
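The evaluation protocol described above (5-fold cross-validation repeated 10 times, scored by area under the ROC curve) can be sketched as follows. Random placeholder data stand in for the 228-patient expression matrix and outcomes, so the published AUC of 0.82 will not be reproduced, and the classifier choice here is an assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(42)

# Placeholder stand-ins for the study's data: expression of 20 selected
# genes for 228 patients, and a binary complicated-course outcome.
X = rng.normal(size=(228, 20))
y = rng.integers(0, 2, size=228)

# 5-fold cross-validation with 10 repeats, scored by ROC AUC,
# mirroring the protocol described in the abstract.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         scoring="roc_auc", cv=cv)
print(f"mean AUC over {len(scores)} folds: {scores.mean():.2f}")
```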


Author(s):  
Т. В. Гавриленко ◽  
А. В. Гавриленко

The paper provides an overview of methods of, and approaches to, attacks on artificial intelligence systems built on artificial neural networks. It shows that, starting in 2015, researchers in various countries have been actively developing such attack methods and approaches, and that the developed methods can have critical consequences for the operation of artificial intelligence systems. We conclude that the methodological and theoretical foundations of artificial neural networks need further development, and that trusted artificial intelligence systems cannot be created within the current paradigm.
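A canonical member of the attack family surveyed here is the fast gradient sign method (FGSM), published in 2015. The PyTorch sketch below uses a toy untrained classifier as the victim model; it illustrates the general attack pattern, not any specific method evaluated in the paper:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Fast Gradient Sign Method (Goodfellow et al., 2015): perturb the
    input along the sign of the loss gradient to induce misclassification."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy victim model and input (stand-ins for a deployed classifier).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # an "image" with pixel values in [0, 1]
y = torch.tensor([3])          # its true label

x_adv = fgsm_attack(model, x, y, epsilon=0.1)
print("clean prediction:", model(x).argmax(1).item(),
      "| adversarial prediction:", model(x_adv).argmax(1).item())
```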


Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 82 ◽  
Author(s):  
Alexander N. Gorban ◽  
Valery A. Makarov ◽  
Ivan Y. Tyukin

High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. The well-known "curse of dimensionality" states that many problems become exponentially difficult in high dimensions. Recently, the other side of the coin, the "blessing of dimensionality", has attracted much attention: it turns out that generic high-dimensional datasets exhibit fairly simple geometric properties. Thus, there is a fundamental tradeoff between complexity and simplicity in high-dimensional spaces. Here we present a brief explanatory review of recent ideas, results, and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience.
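A short simulation illustrates two of these simplifying effects; this is a sketch with i.i.d. Gaussian data, one generic setting in which such concentration effects appear. Norms concentrate around sqrt(d), and random points become nearly orthogonal as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

for d in (3, 100, 10_000):
    X = rng.standard_normal((1000, d))

    # Norm concentration: ||x|| / sqrt(d) -> 1 with shrinking spread.
    norms = np.linalg.norm(X, axis=1) / np.sqrt(d)

    # Near-orthogonality: cosine between two random points -> 0.
    u, v = X[:500], X[500:]
    cosines = np.sum(u * v, axis=1) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))

    print(f"d={d:>6}: norm/sqrt(d) = {norms.mean():.3f} +/- {norms.std():.3f},"
          f" mean |cos| = {np.abs(cosines).mean():.3f}")
```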


Author(s):  
A. N. Gorban ◽  
I. Y. Tyukin

Concentration of measure phenomena were discovered as the mathematical background of statistical mechanics at the end of the nineteenth and the beginning of the twentieth century, and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separating points can be selected in the form of the linear Fisher discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separating the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue "Hilbert's sixth problem".
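The separability claim is easy to probe numerically. The sketch below (with points drawn uniformly from the unit ball, one standard setting of these theorems) tests whether a single chosen point is linearly separated from thousands of others by a simple Fisher-type functional l(x) = <x, x0> / <x0, x0>:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2000, 10_000

# i.i.d. points uniformly distributed in the d-dimensional unit ball
# (Gaussian directions with a radius correction r^(1/d)).
g = rng.standard_normal((n, d))
directions = g / np.linalg.norm(g, axis=1, keepdims=True)
radii = rng.random(n) ** (1.0 / d)
X = directions * radii[:, None]

# Pick one "error sample" to correct and test the linear functional
# l(x) = <x, x0> / <x0, x0>, a Fisher-type discriminant for whitened data.
# Stochastic separation predicts l(x0) = 1 while l(x) stays well below 1
# for all other points, with probability close to one in high dimension.
x0 = X[0]
scores = X @ x0 / (x0 @ x0)
print("score of chosen point:", scores[0])          # exactly 1
print("max score of the rest:", scores[1:].max())   # typically < 0.1 here
print("separated:", scores[1:].max() < 0.9)
```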

