An Algorithm for Producing Fuzzy Negations via Conical Sections

Algorithms ◽  
2019 ◽  
Vol 12 (5) ◽  
pp. 89 ◽  
Author(s):  
Souliotis ◽  
Papadopoulos

In this paper we introduce a new class of strong negations generated via conical sections. The paper focuses on the fact that simple mathematical and computational processes generate new strong fuzzy negations through purely geometrical concepts such as the ellipse and the hyperbola. Well-known negations, such as the classical negation and the Sugeno negation, are produced via the suggested conical sections. Strong negations are a structural element in the production of fuzzy implications; thus, we obtain a machine for producing fuzzy implications, which can be useful in many areas, such as artificial intelligence and neural networks. The appeal of strong fuzzy negations lies in the discrepancy between the modest difficulty of the effort and the significance of its results. Innovative results may therefore be derived for use in the literature of this specific field of mathematics. These results are, moreover, generated in an effortless, concise, and self-evident manner.
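As a minimal illustration of the "strong negation" property mentioned in the abstract (not the paper's conical construction itself), the well-known Sugeno negation N(x) = (1 − x)/(1 + λx), λ > −1, is decreasing, maps 0 to 1 and 1 to 0, and is involutive, i.e. N(N(x)) = x:

```python
# Illustrative sketch (standard textbook material, not the paper's method):
# the Sugeno negation is a strong (involutive) fuzzy negation.

def sugeno_negation(x: float, lam: float = 2.0) -> float:
    """Sugeno negation with parameter lam > -1; lam = 0 gives 1 - x."""
    return (1.0 - x) / (1.0 + lam * x)

# Involution check N(N(x)) = x on a grid of membership degrees.
for i in range(11):
    x = i / 10.0
    assert abs(sugeno_negation(sugeno_negation(x)) - x) < 1e-12

print(sugeno_negation(0.0), sugeno_negation(1.0))  # boundary values 1.0 and 0.0
```

The involution holds for any λ > −1, which is what makes the family "strong" and therefore usable as a building block for fuzzy implications.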

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
A. Alexiadis ◽  
M. J. H. Simmons ◽  
K. Stamatopoulos ◽  
H. K. Batchelor ◽  
I. Moulitsas

Abstract The algorithm behind particle methods is extremely versatile and used in a variety of applications that range from molecular dynamics to astrophysics. For continuum mechanics applications, the concept of ‘particle’ can be generalized to include discrete portions of solid and liquid matter. This study shows that it is possible to further extend the concept of ‘particle’ to include artificial neurons used in Artificial Intelligence. This produces a new class of computational methods based on ‘particle-neuron duals’ that combines the ability of computational particles to model physical systems and the ability of artificial neurons to learn from data. The method is validated with a multiphysics model of the intestine that autonomously learns how to coordinate its contractions to propel the luminal content forward (peristalsis). Training is achieved with Deep Reinforcement Learning. The particle-neuron duality has the advantage of extending particle methods to systems where the underlying physics is only partially known, but where observations allow us to empirically describe the missing features in terms of a reward function. During the simulation, the model evolves autonomously, adapting its response to the available observations while remaining consistent with the known physics of the system.
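A toy sketch of the "particle-neuron dual" idea (our own illustration, not the authors' model; the class name, force law, and weight are hypothetical): each particle carries physical state advanced by known physics, plus a tiny neuron whose output supplies an actuation term the physics alone does not determine:

```python
# Hypothetical illustration of a particle that is also a neuron: the
# spring force is the 'known physics'; the tanh neuron is the learnable
# part that would be trained, e.g., by reinforcement learning.
import math

class ParticleNeuronDual:
    """One particle: physical state plus a single-neuron policy."""

    def __init__(self, x: float, v: float, w: float = 0.5):
        self.x, self.v = x, v   # physical state: position and velocity
        self.w = w              # learnable weight (the 'neuron' side)

    def policy(self, obs: float) -> float:
        # tanh neuron: maps an observation to an actuation in (-1, 1)
        return math.tanh(self.w * obs)

    def step(self, dt: float = 0.01, k: float = 1.0) -> None:
        # known physics (spring force) plus the learned actuation term
        force = -k * self.x + self.policy(self.x)
        self.v += force * dt
        self.x += self.v * dt

p = ParticleNeuronDual(x=1.0, v=0.0)
for _ in range(100):
    p.step()
print(round(p.x, 3))  # position drifts toward equilibrium under the net force
```

In the paper's setting the neurons form a deep policy network trained against a reward (forward propulsion of luminal content); here a single fixed weight stands in for the learned part.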


Author(s):  
A.B. Movsisyan ◽  
A.V. Kuroyedov ◽  
G.A. Ostapenko ◽  
S.V. Podvigin ◽  
...  

Background. The relevance of this work is determined by the worldwide increase in the incidence of glaucoma, one of the leading causes of vision loss, and by late diagnosis, made only when pronounced changes in the visual organ are already present. Purpose. To improve the effectiveness of glaucoma diagnosis based on the evaluation of the optic disc and peripapillary retina by a neural network and artificial intelligence. Material and methods. For training the neural network, four diagnoses were defined: first, “normal”; second, early-stage glaucoma; third, advanced-stage glaucoma; fourth, far-advanced glaucoma. Classification was performed on fundus images covering the region of the optic disc and peripapillary retina. As a result of classification, the input data were divided into two classes, “normal” and “glaucoma”. For the purposes of training and evaluating training quality, the data set was split into two subsets: training and test. The training subset included 8193 images with glaucomatous changes of the optic disc and “normal” images (patients without glaucoma). Disease stages were verified according to the current classification of primary open-angle glaucoma by 3 (three) experts with 5 to 25 years of professional experience. The test subset included 407 images, of which 199 were “normal” and 208 showed early, advanced, and far-advanced stages of glaucoma. To solve the “normal”/“glaucoma” classification task, a neural network architecture consisting of five convolutional layers was chosen. Results. The sensitivity of optic disc testing with the neural network was 0.91, and the specificity was 0.93. Analysis of the results demonstrated the effectiveness of the developed neural network and its advantage over existing methods of glaucoma diagnosis. Conclusions. The use of neural networks and artificial intelligence is a modern, effective, and promising method for diagnosing glaucoma.
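A minimal sketch of how the reported sensitivity and specificity are computed for a binary “normal”/“glaucoma” classifier. The confusion-matrix counts below are hypothetical, chosen only so the ratios reproduce the reported 0.91/0.93 on 208 glaucoma and 199 normal test images:

```python
# Illustrative sketch; the individual TP/FN/TN/FP counts are assumptions,
# not figures from the study.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of glaucoma cases correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of normal eyes correctly cleared."""
    return tn / (tn + fp)

tp, fn = 189, 19   # 189 / 208 glaucoma images detected
tn, fp = 185, 14   # 185 / 199 normal images cleared
print(round(sensitivity(tp, fn), 2), round(specificity(tn, fp), 2))
```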


2020 ◽  
Vol 112 (5) ◽  
pp. S50
Author(s):  
Zachary Eller ◽  
Michelle Chen ◽  
Jermaine Heath ◽  
Uzma Hussain ◽  
Thomas Obisean ◽  
...  

2021 ◽  
pp. medethics-2020-106820 ◽  
Author(s):  
Juan Manuel Durán ◽  
Karin Rolanda Jongsma

The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy, and compromised trust arise with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we argue that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By showing that more transparency in algorithms is not always necessary, and that computational processes are in any case methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency yet supports the reliability of algorithms, justifies the belief that the results of medical AI are to be trusted. We also argue that several ethical concerns remain with black box algorithms even when the results are trustworthy. Having justified knowledge from reliable indicators is therefore necessary but not sufficient for normatively justifying physicians to act. This means that deliberation about the results of reliable algorithms is required to determine what a desirable action is. Understood this way, such challenges should not dismiss the use of black box algorithms altogether but should inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informatics and data scientists, black box algorithms can contribute to improving medical care.


Author(s):  
Daniel Auge ◽  
Julian Hille ◽  
Etienne Mueller ◽  
Alois Knoll

Abstract Biologically inspired spiking neural networks are increasingly popular in the field of artificial intelligence due to their ability to solve complex problems while being power efficient. They do so by leveraging the timing of discrete spikes as the main information carrier. However, industrial applications are still lacking, partially because the question of how to encode incoming data into discrete spike events cannot be uniformly answered. In this paper, we summarise the signal encoding schemes presented in the literature and propose a uniform nomenclature to prevent the vague usage of ambiguous definitions. To that end, we survey both the theoretical foundations and the applications of the encoding schemes. This work provides a foundation in spiking signal encoding and gives an overview of different application-oriented implementations which utilise the schemes.
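As a concrete illustration of one of the simplest encoding schemes discussed in such surveys (a standard example, not taken from this paper): in rate coding, a real-valued input sets the firing probability per time step, so the spike count over a window carries the information:

```python
# Illustrative sketch of Bernoulli rate coding; function names are ours.
import random

def rate_encode(value: float, n_steps: int = 1000, seed: int = 0) -> list:
    """Encode value in [0, 1] as a spike train: 1 = spike, 0 = silence."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(n_steps)]

def decode(spikes: list) -> float:
    """Recover the value as the empirical firing rate."""
    return sum(spikes) / len(spikes)

spikes = rate_encode(0.3)
print(decode(spikes))  # close to 0.3 for long enough spike trains
```

The trade-off this makes visible is central to the survey's topic: rate codes are robust but need many time steps, whereas temporal codes (e.g., time-to-first-spike) carry the same value in fewer, precisely timed spikes.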


Author(s):  
Wael H. Awad ◽  
Bruce N. Janson

Three different modeling approaches were applied to explain truck accidents at interchanges in Washington State during a 27-month period. Three models were developed for each ramp type: linear regression, neural networks, and a hybrid system using fuzzy logic and neural networks. The study showed that linear regression was able to predict accident frequencies that fell within one standard deviation of the overall mean of the dependent variable. However, the coefficient of determination was very low in all cases. The two artificial intelligence (AI) approaches showed a high level of performance in identifying different patterns of accidents in the training data and provided a better fit than the regression model. However, the ability of these AI models to predict test data that were not included in the training process was unsatisfactory.
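A minimal sketch of the regression baseline and the coefficient of determination (R²) the study found to be low. The data below are hypothetical, invented only to show the computation:

```python
# Illustrative sketch: ordinary least squares with one predictor, plus R^2.
# The ramp data (truck volume vs. accident count) are made up.

def fit_ols(xs, ys):
    """Return (slope, intercept) of the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical truck volume (thousands)
ys = [2.0, 1.0, 4.0, 3.0, 6.0]   # hypothetical accident counts
slope, intercept = fit_ols(xs, ys)
print(round(r_squared(xs, ys, slope, intercept), 3))
```

A low R², as reported for the regression models, means the fitted line explains little of the variance in accident frequency even when its predictions stay near the mean.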


2021 ◽  
Vol 20 ◽  
pp. 153303382110163
Author(s):  
Danju Huang ◽  
Han Bai ◽  
Li Wang ◽  
Yu Hou ◽  
Lan Li ◽  
...  

With the massive use of computers, the growth and explosion of data has greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNN), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As the development of DL gets closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.
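The convolutional layers mentioned above can be illustrated with the core operation itself (a generic sketch, not any system from the review): a small kernel slides over an image and produces a feature map, and stacks of such layers underpin the segmentation and diagnosis tools discussed:

```python
# Illustrative sketch of 2D convolution (valid mode, stride 1) on nested lists.

def conv2d(image, kernel):
    """Apply kernel to image with no padding; returns the feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge kernel applied to a tiny image with a bright right half:
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1], [-1, 1]]   # responds where intensity jumps left-to-right
print(conv2d(image, kernel))  # peaks at the column where the edge sits
```

In a CNN the kernel weights are not hand-designed like this edge detector but learned from data, which is what lets such networks pick up diagnostically relevant image features.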

