Artificial Intelligence implemented to recognize patterns of sustainable areas by evaluating the database of socioenvironmental safety restrictions

2021 ◽  
Vol 10 (10) ◽  
pp. e212101018841
Author(s):  
Julio Leite Azancort Neto ◽  
Arleson Lui Silva Gonçalves ◽  
Brennus Caio Carvalho da Cruz ◽  
Larissa Luz Gomes ◽  
Denis Carlos Lima Costa

Several papers recently published on sustainable development have considered new methodologies and techniques for identifying the main criteria, in numeric format, that are useful in formulating possible solutions to the solid waste problem. This paper presents the Mathematical and Computational Modeling Process (PM2C), applied to the determination of control variables related to the selection of areas destined for the construction of landfills, in order to benefit from new analyses and values obtained by methods such as the AHP (Analytic Hierarchy Process) and GIS (Geographic Information Systems). The main objective of this paper is the use of Artificial Intelligence (AI), through a Decision Tree strategy, as a method for selecting optimal solutions when choosing the best area for the construction of landfills, with the creation and analysis of new values applied to the scenarios defined in the paper of Andrade and Barbosa (2015). The results, expressed in analytical and graphical forms, show the individual values for each criterion and the new scenarios involved in the phenomena. This paper highlights the importance of incorporating new conditions and criteria to propose a new decision-making rule that simultaneously associates qualitative and quantitative characteristics, related to social and economic effects, applied to the environmental management system. Based on these principles, it was possible to simulate new scenarios that demonstrate, with very high precision, the best values of the criteria useful for decision-making in the selection of the optimal area for the implementation of a landfill.
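The decision-tree idea above can be illustrated with a minimal sketch. The criteria names, thresholds, and candidate areas below are invented for illustration; they are not values from the paper:

```python
# Illustrative only: a hand-coded decision tree ranking candidate landfill
# areas on hypothetical restriction criteria (distance to water bodies,
# terrain slope, distance to urban areas). Thresholds are assumptions.

def classify_area(dist_water_m, slope_pct, dist_urban_m):
    """Return a suitability label for one candidate area."""
    if dist_water_m < 500:          # too close to water bodies
        return "unsuitable"
    if slope_pct > 20:              # terrain too steep for a landfill
        return "unsuitable"
    if dist_urban_m < 1000:         # inside the urban buffer zone
        return "restricted"
    return "suitable"

# Three hypothetical candidate areas: (dist_water_m, slope_pct, dist_urban_m)
areas = {
    "A1": (300, 5, 4000),
    "A2": (800, 25, 3000),
    "A3": (1200, 8, 2500),
}
labels = {name: classify_area(*c) for name, c in areas.items()}
print(labels)  # A3 is the only candidate passing every restriction
```

In a real study the tree would be learned from the GIS/AHP criterion database rather than hand-coded, but the evaluation at prediction time follows the same cascade of threshold tests.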

2019 ◽  
Vol 45 (8) ◽  
pp. 556-558 ◽  
Author(s):  
Ezio Di Nucci

I analyse an argument, recently put forward by Rosalind McDougall in the Journal of Medical Ethics, according to which medical artificial intelligence (AI) represents a threat to patient autonomy. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: (1) it confuses AI with machine learning; (2) it misses machine learning’s potential for personalised medicine through big data; (3) it fails to distinguish between evidence-based advice and decision-making within healthcare. I conclude that how much, and which, tasks we should delegate to machine learning and other technologies within healthcare and beyond is indeed a crucial question of our time, but to answer it we must be careful to analyse and properly distinguish between the different systems and the different delegated tasks.


2013 ◽  
pp. 1103-1111
Author(s):  
R. James ◽  
R. Blair

This chapter considers the neurobiology of aggression: both the neural systems mediating this behavior and how these systems can become perturbed such that the aggression is maladaptive to the individual. A distinction is drawn between planned, goal-directed instrumental aggression and threat- or frustration-based reactive aggression. Instrumental aggression implicates the neural systems involved in instrumental motor behavior generally, as well as the emotional learning and decision-making systems that allow the selection of one action over another. Conditions decreasing the responsiveness of the neural systems enabling good decision making (amygdala, striatum, ventromedial prefrontal cortex) are associated with an increased risk of maladaptive instrumental aggression. Reactive aggression implicates subcortical systems involved in the basic response to threat, as well as cortical systems involved in emotional modulation and the response to norm violations.


2021 ◽  
Vol 2 (1) ◽  
pp. 106-113
Author(s):  
Ádám Auer

Summary (translated from Hungarian). The starting axiom of the study is the safe application of artificial intelligence. One aspect of safe application is legal certainty: a legal environment in which a framework is available for settling the legal questions that arise. The study examines those civil law problems of the artificial intelligence application developed in the Semmelweis University project that may arise in everyday use. The study concludes that the artificial intelligence under examination qualifies as a copyrighted work and that several forms of protection are applicable. The legal regulation needs to be supplemented de lege ferenda because of the continuous change of the copyrighted work. It is necessary to fix a reference point that serves as the starting point for liability. Summary. The starting point of the study is the safe use of artificial intelligence. Legal certainty is one aspect of safe usage: the legal environment in which a framework is available that can be used to resolve legal issues. The paper examines the civil law issues that may arise in the everyday use of the artificial intelligence application developed within the Semmelweis University project. The study will first focus on the legal protection of the Semmelweis AI, including whether this protection is currently international, regional (European Union) or national, and which of these is the optimal choice. The study also reflects on the legislative preparatory work of the European Union in this regard. Our hypothesis is that the majority of civil law areas concerning AI can be regulated within a contractual framework. The AI software developed by the project is a forward-looking medical and practical solution. If we want to use a legal analogy, we can imagine its operation as if we had a solution that could analyse all the national court decisions in each legal field and provide an answer to the legal problem at hand, while simultaneously learning and applying the latest court decisions every day.
For this AI solution, the diagnostic process must be carefully examined in order to identify the legal problems. I believe that the optimal solution is to classify this AI application as ‘software’, because this allows property rights to be acquired in their entirety and opens the door to clarifying individual associated usage and copyright by contract. An important civil law question arises in relation to parallel copyright protection when the individual personal contributions (creative development work) to the software cannot be separated. It is therefore important to record the process and to separate the individual contributions protected by copyright. The role the AI plays in the diagnostic process is also a questionable point. If the software itself cannot make a decision, but only provides a framework and platform, then it will not be entitled to co-ownership of the diagnostic images (just as a camera does not own the rights to the pictures taken with it). However, if the algorithm is part of the decision-making (e.g. selecting negative diagnoses), it would possibly be a co-owner of the right, because it was involved in the development of the classification. All this should be clearly stated in the licence agreement, based on full knowledge of the decision-making process. However, de lege ferenda, the legal regime needs to be supplemented in view of the constant changes of the copyright work and the changing authors. There is a need to establish a specific point in the legislation that serves as a reference point for liability and legal protection. The issues under consideration are matters of legal certainty, since without precise legal protection both the creator of the artificial intelligence and the persons who may be held liable in the event of a malfunctioning of such systems may be uncertain.


2015 ◽  
Vol 773-774 ◽  
pp. 154-157 ◽  
Author(s):  
Muhammad Firdaus Rosli ◽  
Lim Meng Hee ◽  
M. Salman Leong

Machines are the heart of most industries. By ensuring the health of machines, one can increase company revenue and eliminate safety threats related to catastrophic machinery failures. In condition monitoring (CM), the question often arises at decision-making time of whether the machine is still safe to run. The traditional CM approach depends heavily on human interpretation of results, whereby decisions are made solely on the basis of individual experience and knowledge of the machines. The advent of artificial intelligence (AI) and automated decision making in CM provides a more objective and unbiased approach for the CM industry and has become a topic of interest in recent years. This paper reviews the techniques used for automated decision making in CM, with emphasis on Dempster-Shafer (D-S) evidence theory and other basic probability assignment (BPA) techniques such as support vector machines (SVM).
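The core of the D-S approach mentioned above is Dempster's rule of combination, which fuses basic probability assignments from independent sources. A minimal sketch follows; the two-hypothesis frame and the mass values are invented for illustration, not taken from the paper:

```python
# Minimal sketch of Dempster's rule of combination for two basic probability
# assignments (BPAs) over the frame {"healthy", "faulty"}. Each BPA maps a
# frozenset of hypotheses to a mass; masses here are made-up sensor outputs.
from itertools import product

def combine(m1, m2):
    """Combine two BPAs by Dempster's rule, normalising out the conflict."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to the empty set
    k = 1.0 - conflict                   # normalisation factor
    return {h: m / k for h, m in combined.items()}

H, F = frozenset(["healthy"]), frozenset(["faulty"])
m1 = {H: 0.6, F: 0.3, H | F: 0.1}        # evidence from sensor 1
m2 = {H: 0.5, F: 0.4, H | F: 0.1}        # evidence from sensor 2
print(combine(m1, m2))                   # fused belief; "healthy" dominates
```

In a CM pipeline the BPAs would typically come from classifiers such as SVMs, one per sensor or feature set, and the fused masses would drive the run/stop decision.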


Author(s):  
Michael Beil ◽  
Ingo Proft ◽  
Daniel van Heerden ◽  
Sigal Sviri ◽  
Peter Vernon van Heerden

Abstract
Background: Prognosticating the course of diseases to inform decision-making is a key component of intensive care medicine. For several applications in medicine, new methods from the field of artificial intelligence (AI) and machine learning have already outperformed conventional prediction models. Due to their technical characteristics, these methods will present new ethical challenges to the intensivist.
Results: In addition to the standards of data stewardship in medicine, the selection of datasets and algorithms to create AI prognostication models must involve extensive scrutiny to avoid biases and, consequently, injustice against individuals or groups of patients. Assessment of these models for compliance with the ethical principles of beneficence and non-maleficence should also include quantification of predictive uncertainty. Respect for patients’ autonomy during decision-making requires transparency of the data processing by AI models to explain the predictions derived from these models. Moreover, a system of continuous oversight can help to maintain public trust in this technology. Based on these considerations as well as recent guidelines, we propose a pathway to an ethical implementation of AI-based prognostication. It includes a checklist for new AI models that deals with medical and technical topics as well as patient- and system-centered issues.
Conclusion: AI models for prognostication will become valuable tools in intensive care. However, they require technical refinement and a careful implementation according to the standards of medical ethics.


2021 ◽  
Vol 5 (2) ◽  
pp. 739
Author(s):  
Reva Octaviani Siregar ◽  
Deci Irmayani ◽  
Masrizal Masrizal

Health is a state of well-being of body, soul and society that enables everyone to live productively, socially and economically; health is therefore the most important element in human life. Health is closely related to economic conditions: in a good economy, the factors that affect human health, such as healthy food and drink, a healthy environment and healthy living habits, can be fulfilled. Conversely, a bad economy makes it difficult for individuals to meet some of these factors, and if these conditions are ignored, people will find it difficult to improve their health. The problem at the Galang Health Center was the unsatisfactory selection of KIS (Kartu Indonesia Sehat) participants. Selecting KIS participants requires a decision support system (DSS) to make decision making faster and easier. Decision support systems are used to assist decision making in an organization in situations where no one knows exactly how decisions should be made. The Promethee method (Preference Ranking Organization Method for Enrichment Evaluation) is a method used to determine priority (order) in multi-criteria analysis, which makes it a feasible choice for the problem of selecting KIS participants in this study. The stages of the research are data collection, problem analysis, method analysis, and the design and testing of the application to be built. The purpose of this research is to resolve the previously unsatisfactory decisions on KIS recipients.
The results show that of the three KIS participants tested, two deserved KIS based on the assessment of the five criteria used; the calculation with the Promethee method gives a net flow of -1 for the first alternative, 0.2 for the second and 0.8 for the third.
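The net flows reported above come from PROMETHEE's pairwise-preference machinery. The following sketch uses the simplest ("usual") preference function, where an alternative is fully preferred on a criterion as soon as it scores higher; the scores and weights are invented for illustration, not the study's data:

```python
# PROMETHEE-style net flows with the "usual" preference function:
# P_j(a, b) = 1 if a beats b on criterion j, else 0.
# Aggregated preference pi(a, b) is the weight sum over winning criteria;
# net flow = leaving flow minus entering flow.

def net_flows(scores, weights):
    """scores: {alt: [criterion values]}; weights sum to 1; higher is better."""
    alts = list(scores)
    n = len(alts)
    def pi(a, b):                     # aggregated preference of a over b
        return sum(w for w, sa, sb in zip(weights, scores[a], scores[b])
                   if sa > sb)
    flows = {}
    for a in alts:
        plus = sum(pi(a, b) for b in alts if b != a) / (n - 1)   # leaving
        minus = sum(pi(b, a) for b in alts if b != a) / (n - 1)  # entering
        flows[a] = plus - minus       # net flow; highest value ranks first
    return flows

# Three hypothetical participants scored on three criteria
scores = {"P1": [2, 3, 1], "P2": [4, 2, 3], "P3": [5, 4, 2]}
weights = [0.5, 0.3, 0.2]
print(net_flows(scores, weights))
```

A useful sanity check on any PROMETHEE II computation is that the net flows sum to zero over the set of alternatives, which also holds for the study's reported values (-1 + 0.2 + 0.8 = 0).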


Author(s):  
Shuping Xiao ◽  
A. Shanthini ◽  
Deepa Thilak

Recent advancements in Artificial Intelligence techniques, including machine learning models, have led to the expansion of practical prediction simulations for various fields. The quality of teachers’ performance mainly influences the quality of educational services in universities. One of the major challenges for higher education institutions is the growth of data and how to utilize it to enhance the quality of the academic program and of administrative decisions. Hence, in this paper, an Artificial Intelligence assisted Multi-Objective Decision-Making model (AI-MODM) is proposed to predict instructor performance in higher education systems. The proposed AI-assisted prediction model analyzes the numerical values on various elements allocated for a cluster of teachers to produce an overall quality evaluation representing the individual instructor’s performance level. Instead of replacing teachers, these AI technologies would augment and motivate them, reducing the time required for routine tasks and enabling faculty to focus on teaching and analysis. The paper also considers the use of artificial intelligence and associated digital tools for administrative decision-making. The experimental results show that the suggested AI-MODM method improves accuracy (93.4%), instructor performance analysis (96.7%), specificity (92.5%), RMSE (28.1%), and precision (97.9%) compared to other existing methods.


Author(s):  
Meir Russ

The new Post Accelerating Data and Knowledge Online Society, or ‘Padkos’, requires a new model of decision making. This introductory paper proposes a model in which decision making and learning are a single symbiotic process incorporating man and machine, as well as the AADD (ánthrōpos, apparatus, decider, doctrina) diamond model of individual and organizational decision-making and learning processes. Learning is incorporated through a newly proposed quadruple-loop learning model, which allows for controlled changes of identity, the creation and sense-making of new mental models and assumptions, and reflection. The model also incorporates the recently proposed model of quantum decision-making, in which the collapse of the opted past and the anticipated future (explicitly including its time horizon) into the present plays a key role, leveraging decision-making and learning by humans as well as by Artificial Intelligence (AI) and Machine Learning (ML) algorithms. The paper closes with conclusions.


2019 ◽  
Author(s):  
John Lawrence

We devise a method for political and economic decision making that is applicable to the optimal selection of multiple alternatives from a larger set. This method could be used, for example, in the selection of a committee or a parliament. The method combines utilitarian voting with approval voting and sets an optimal threshold above which an individual voter's sincere ratings are turned into approval-style votes. The candidates above threshold are chosen in such a way as to maximize the individual's expected utility for the winning set. We generalize range/approval hybrid voting, which deals with a single-member outcome, to the case of multiple outcomes. The political case generalizes easily to the economic case, in which a commodity bundle is chosen by each individual from an available set that is first chosen from a larger set by amalgamating the individual choosers' inputs. As the available set gets larger, the individual voter or chooser is more likely to gain greater utility or satisfaction, because more of their above-threshold candidates will be included in the winning set.
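The ratings-to-approvals step can be sketched as follows. The paper derives an optimal, utility-maximizing threshold; as a simplifying assumption, the sketch below uses each voter's mean rating instead, and the voters, candidates, and seat count are invented for illustration:

```python
# Sketch of threshold-based range/approval hybrid voting for a multi-winner
# election. Assumption: the threshold is the voter's mean rating (the paper
# instead derives the threshold that maximizes expected utility).

def approvals_from_ratings(ratings):
    """ratings: {candidate: sincere score}; approve those above the mean."""
    threshold = sum(ratings.values()) / len(ratings)
    return {c for c, r in ratings.items() if r > threshold}

def elect_committee(voters, seats):
    """Convert each voter's ratings to approvals, then seat the most approved."""
    tally = {}
    for ratings in voters:
        for c in approvals_from_ratings(ratings):
            tally[c] = tally.get(c, 0) + 1
    # Sort by approval count (descending), breaking ties alphabetically
    return sorted(tally, key=lambda c: (-tally[c], c))[:seats]

voters = [
    {"A": 9, "B": 2, "C": 6, "D": 1},
    {"A": 3, "B": 8, "C": 7, "D": 2},
    {"A": 8, "B": 1, "C": 9, "D": 4},
]
print(elect_committee(voters, 2))  # → ['C', 'A']
```

Note how C wins a seat despite never being any voter's top choice: it clears every voter's threshold, which is the kind of consensus effect threshold-based approval aggregation produces.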


Author(s):  
Sam Hepenstal ◽  
David McNeish

Abstract: In domains which require high-risk and high-consequence decision making, such as defence and security, there is a clear requirement for artificial intelligence (AI) systems to be able to explain their reasoning. In this paper we examine what it means to provide explainable AI. We report on research findings to propose that explanations should be tailored, depending upon the role of the human interacting with the system and the individual system components, to reflect different needs. We demonstrate that a ‘one-size-fits-all’ explanation is insufficient to capture the complexity of needs. Thus, designing explainable AI systems involves careful consideration of context, and within that the nature of both the human and AI components.

