Report on panel discussion on (Re-)Establishing or Increasing Collaborative Links Between Artificial Intelligence and Intelligent Systems

Author(s):  
B. Brent Gordon


Author(s):  
M. G. Koliada ◽  
T. I. Bugayova

The article discusses the history of the problem of using artificial intelligence systems in education and pedagogy. Two directions of its development are shown, "Computational Pedagogy" and "Educational Data Mining", in which poorly studied aspects of the internal mechanisms of the functioning of artificial intelligence systems in this field are revealed. The main task is the problem of interfacing the system kernel with blocks of pedagogical and thematic databases, as well as with blocks of pedagogical diagnostics of the student and the teacher. The role of pedagogical diagnosis as an evident reflection of the complex influence of factors and causes is shown: it provides the intelligent system with operative and reliable information on how various causes intertwine in interaction, which of them are currently dangerous, and where a decline in performance is expected. All components of the teaching and educational system are subject to diagnosis; without it, no pedagogical situation can be managed optimally. The means of obtaining information about students, as well as the "mechanisms" by which intelligent systems work, based on innovative ideas from advanced pedagogical experience in diagnosing a teacher's professionalism, are considered. Ways of realizing the teacher's skill based on ideas developed by American scientists are shown. Among them, the approaches of D. Rajonz and U. Bronfenbrenner, who put the teacher's attitude towards students, their views, and their intellectual and emotional characteristics at the forefront, are highlighted. An assessment of the teacher's work according to N. Flanders's system, the so-called "Interaction Analysis", is also proposed, through a mechanism that records elements such as the teacher's verbal behavior, the events of the lesson, and their sequence. A system for assessing a teacher's professionalism according to B. O. Smith and M. O. Meux is examined through the study of the logic of teaching and the logical operations used in the lesson. Samples of forms of external communication between the intelligent system and the learning environment are given. It is indicated that the productive solutions found can be delivered in a form that is acceptable and comfortable for both students and the teacher, via three approaches. The first holds that artificial intelligence in this area can be represented as a robotized being in human shape; the second, that it is enough to confine oneself to specially organized input-output systems for the targeted transmission of effective methodological recommendations and instructions to both students and teachers; the third, that life will force the development of completely new hybrid forms of interaction between the two sides, in the form of interactive educational environments somewhat resembling the educational spaces of virtual reality.


Author(s):  
Christian List

Abstract
The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities' actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.


Author(s):  
Nidhi Rajesh Mavani ◽  
Jarinah Mohd Ali ◽  
Suhaili Othman ◽  
M. A. Hussain ◽  
Haslaniza Hashim ◽  
...  

Abstract
Artificial intelligence (AI) has become embedded in food-industry technology over the past few decades, driven by rising food demand in line with the growing world population. The capability of such intelligent systems in various tasks such as food quality determination, control tools, food classification, and prediction has intensified their demand in the food industry. Therefore, this paper reviews those diverse applications, comparing their advantages, limitations, and formulations as a guideline for selecting the most appropriate methods for future AI- and food industry-related developments. Furthermore, the integration of such systems with other devices such as the electronic nose, electronic tongue, computer vision systems, and near-infrared (NIR) spectroscopy is also emphasized, all of which will benefit both industry players and consumers.


2021 ◽  
Vol 21 (2) ◽  
pp. 97-117
Author(s):  
Dominique Garingan ◽  
Alison Jane Pickard

Abstract
In response to evolving legal technologies, this article by Dominique Garingan and Alison Jane Pickard explores the concept of algorithmic literacy, a technological literacy which facilitates metacognitive practices surrounding the use of artificially intelligent systems and the principles that shape ethical and responsible user experiences. This article examines the extent to which existing information, digital, and computer literacy frameworks and professional competency standards ground algorithmic literacy. It proceeds to identify various elements of algorithmic literacy within existing literature, provide examples of algorithmic literacy initiatives in academic and non-academic settings, and explore the need for an algorithmic literacy framework to ground algorithmic literacy initiatives within the legal information profession.


2017 ◽  
Vol 26 (3) ◽  
pp. 433-437
Author(s):  
Mark Dougherty

Abstract
Forgetting is an oft-forgotten art. Many artificial intelligence (AI) systems deliver good performance when first implemented; however, as the contextual environment changes, they become out of date and their performance degrades. Learning new knowledge is part of the solution, but forgetting outdated facts and information is a vital part of the process of renewal. However, forgetting proves to be a surprisingly difficult concept to either understand or implement. Much of AI is based on analogies with natural systems, and although all of us have plenty of experiences with having forgotten something, as yet we have only an incomplete picture of how this process occurs in the brain. A recent judgment by the European Court concerns the "right to be forgotten" by web index services such as Google. This has made debate and research into the concept of forgetting very urgent. Given the rapid growth in requests for pages to be forgotten, it is clear that the process will have to be automated and that intelligent systems of forgetting are required in order to meet this challenge.


AI Magazine ◽  
2017 ◽  
Vol 37 (4) ◽  
pp. 83-88
Author(s):  
Christopher Amato ◽  
Ofra Amir ◽  
Joanna Bryson ◽  
Barbara Grosz ◽  
Bipin Indurkhya ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2016 Spring Symposium Series Monday through Wednesday, March 21-23, 2016, at Stanford University. The titles of the seven symposia were (1) AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics; (2) Challenges and Opportunities in Multiagent Learning for the Real World; (3) Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform; (4) Ethical and Moral Considerations in Non-Human Agents; (5) Intelligent Systems for Supporting Distributed Human Teamwork; (6) Observational Studies through Social Media and Other Human-Generated Content; and (7) Well-Being Computing: AI Meets Health and Happiness Science.


Author(s):  
Dmitriy Aleksandrovich Korostelev ◽  
Aleksey Radchenko ◽  
Nikita Silchenko ◽  
...  

The paper describes a solution to the problem of testing the efficiency of new ideas and algorithms for intelligent systems. The main approach proposed is to simulate the interaction of intelligent agents that implement different algorithms in a competitive setting. A specialized software platform supports this simulation. The paper describes the platform developed for running competitions in artificial intelligence and its subsystems: a server, a client, and a visualization subsystem. Operational testing of the developed system is also described; it helps to evaluate the efficiency of various artificial intelligence algorithms in a simulation along the lines of "Naval Battle".
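The competitive-simulation idea can be illustrated with a minimal sketch (a hypothetical harness, not the authors' platform): the "server" hides a target in a naval-battle-style guessing game and tallies how many shots each agent needs, so that competing search algorithms can be compared empirically. All class and function names here are illustrative assumptions.

```python
import random

class RandomAgent:
    """Baseline strategy: guesses cells uniformly at random, possibly repeating."""
    def __init__(self, size, rng):
        self.size, self.rng = size, rng

    def guess(self):
        return self.rng.randrange(self.size)

class SweepAgent:
    """Competing strategy: visits every cell once, in a randomized order."""
    def __init__(self, size, rng):
        self.cells = list(range(size))
        rng.shuffle(self.cells)

    def guess(self):
        return self.cells.pop()

def run_match(agent_cls, size=25, seed=0):
    """Server role: hide one 'ship' cell, count shots until the agent hits it."""
    rng = random.Random(seed)
    target = rng.randrange(size)
    agent = agent_cls(size, rng)
    shots = 0
    while True:
        shots += 1
        if agent.guess() == target:
            return shots

def average_shots(agent_cls, trials=200):
    """Repeat matches with different seeds to estimate an algorithm's efficiency."""
    return sum(run_match(agent_cls, seed=s) for s in range(trials)) / trials
```

Running `average_shots` for both agents shows the non-repeating sweep needing fewer shots on average than blind random fire, which is the kind of head-to-head comparison such a competition platform automates (a real system would add networking between server and clients, plus a visualization subsystem).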


Artificial intelligence (AI) has many kinds of uses in almost every field. AI is frequently applied to control, computer-aided design (CAD) and computer-aided manufacturing (CAM), machine control, computer-integrated manufacturing (CIM), production spot control, factory control, intelligent control, intelligent systems, deep learning, the cloud, knowledge bases, databases, management, production systems, statistics, sales-force support, environmental examination, agriculture, art, living, and daily life. By looking back at the history of AI development, present AI uses are reexamined to ask whether anything behind the current status of AI research directions and their purposes deserves further consideration.


2021 ◽  
Vol 8 ◽  
Author(s):  
Eric Martínez ◽  
Christoph Winter

To what extent, if any, should the law protect sentient artificial intelligence (that is, AI that can feel pleasure or pain)? Here we surveyed United States adults (n = 1,061) on their views regarding granting (1) general legal protection, (2) legal personhood, and (3) standing to bring forth a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future. Roughly one-third of participants endorsed granting personhood and standing to sentient AI (assuming its existence) in at least some cases, the lowest of any group surveyed, and rated the desired level of protection for sentient AI as lower than for all groups other than corporations. We further investigated and observed political differences in responses; liberals were more likely than conservatives to endorse legal protection and personhood for sentient AI. Taken together, these results suggest that laypeople are not by and large in favor of granting legal protection to AI, and that the ordinary conception of legal status, like codified legal doctrine, is not based on a mere capacity to feel pleasure and pain. At the same time, the observed political differences suggest that previous findings on political differences in empathy and moral circle expansion apply to artificially intelligent systems and extend partially, though not entirely, to legal consideration as well.
