Research of Trends in the Development of Artificial Intelligence

Author(s):  
D.S. Sarkisian ◽  
S.R. Saakian

The relevance of the chosen topic stems from the fact that artificial intelligent systems already exist and successfully replace people in many professions, and this trend will only intensify. The article outlines the main areas of work in the field of artificial intelligence.

2020 ◽  
Author(s):  
Tore Pedersen ◽  
Christian Johansen

Artificial Intelligence (AI) receives attention in the media as well as in academe and business. In media coverage and reporting, AI is predominantly described in contrasting terms, either as the ultimate solution to all human problems or as the ultimate threat to all human existence. In academe, the focus of computer scientists is on developing systems that function, whereas philosophy scholars theorize about the implications of this functionality for human life. In the interface between technology and philosophy there is, however, one imperative aspect of AI yet to be articulated: How do intelligent systems make inferences? We use the overarching concept of “Artificial Intelligent Behaviour”, which includes both cognition/processing and judgment/behaviour. We argue that, due to the complexity and opacity of artificial inference, one needs to initiate systematic empirical studies of artificial intelligent behaviour similar to what has previously been done to study human cognition, judgment and decision making. This will provide valid knowledge, beyond what current computer science methods can offer, about the judgments and decisions made by intelligent systems. Moreover, outside academe, in the public as well as the private sector, expertise in epistemology, critical thinking and reasoning is crucial to ensure human oversight of the artificial intelligent judgments and decisions that are made, because only competent human insight into AI-inference processes will ensure accountability. Such insights require systematic studies of AI behaviour founded on the natural sciences and philosophy, as well as the employment of methodologies from the cognitive and behavioural sciences.


2018 ◽  
Vol 9 (4) ◽  
pp. 677-689
Author(s):  
Anne GERDES

Abstract: The article provides an inclusive outlook on artificial intelligence by introducing a three-legged design perspective that includes, but also moves beyond, ethical artificial systems design to stress the role of moral habituation of professionals and the general public. It is held that an inclusive ethical design perspective is essential for a flourishing future with artificial intelligence.


Author(s):  
Laura Pana

We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure and the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents which can/must be treated as: (1) individual entities (with a complex, specialized, autonomous or self-determined, even unpredictable conduct); (2) entities endowed with diverse or even multiple intelligence forms, like moral intelligence; (3) open and even free-conduct-performing systems (with specific, flexible and heuristic mechanisms and procedures of decision); (4) systems which are open to education, not just to instruction; (5) entities with “lifegraphy”, not just “stategraphy”; (6) entities equipped not just with automatisms but with beliefs (cognitive and affective complexes); (7) entities capable even of reflection (“moral life” is a form of spiritual, not just conscious, activity); (8) elements/members of some real (corporeal or virtual) community; (9) cultural beings: free conduct gives cultural value to the action of a “natural” or artificial being. Implementation of such characteristics does not necessarily presuppose efforts to design, construct and educate machines like human beings. The human moral code is irremediably imperfect: it is a morality of preference, of accountability (not of responsibility) and a morality of non-liberty, which cannot be remedied by the invention of ethical systems, by the circulation of ideal values or by ethical (even computing) education.
But such an imperfect morality needs perfect instruments for its implementation: applications of special logic fields; efficient psychological (theoretical and technical) attainments to endow the machine not just with intelligence, but with conscience and even spirit; comprehensive technical means for supplementing the objective decision with a subjective one. Machine ethics can/will be of the highest quality because it will be derived from the sciences, modelled by techniques and accomplished by technologies. If our theoretical hypothesis about a specific moral intelligence, necessary for the implementation of an artificial moral conduct, is correct, then some theoretical and technical issues appear, but the following working hypotheses are possible: structural, functional and behavioural. The future of human and/or artificial morality is to be anticipated.


2021 ◽  
Vol 11 (5) ◽  
pp. 317-324
Author(s):  
Réka Pusztahelyi

This essay deals with certain civil liability implications of artificial intelligent systems in light of the recent steps taken by the European Union. In order to create not only an ethical but also a lawful AI, the EU strives to lay down a framework of future common liability rules for damages and harms caused by any application of AI technology. The Commission’s new Proposal (Artificial Intelligence Act, AIA) reflects an innovative regulatory approach that can address the special features of AI systems, laying down rules according to a risk-management approach and a class-of-application-by-class-of-application approach. In this essay, the focus is on strict liability for high-risk AI systems and the concept of frontend and backend operators.



E-Management ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 20-28
Author(s):  
A. S. Lobacheva ◽  
O. V. Sobol

The article reveals the main ethical problems and contradictions associated with the use of artificial intelligence, and explains the concept of “artificial intelligence”. The authors analyse two areas of ethical problems of artificial intelligence: fundamental ideas about the ethics of artificial intelligent systems, and the creation of ethical norms. The paper investigates the work of world organizations on the development of ethical standards for the use of artificial intelligence: the Institute of Electrical and Electronics Engineers and UNESCO. The study analyses the main difficulties in the implementation of artificial intelligent systems: the attitude of employees to the use of robots in production activities and to the automation of processes that affect their work functions and work organization; ethical issues related to retraining and re-certification of employees in connection with the introduction of new software products and robots; ethical issues in reducing staff as a result of the introduction of artificial intelligence and the automation of production and business processes; ethical problems of the processing of personal data of employees, including assessments of their psychological and physical condition, personal qualities and character traits, values and beliefs by specialized programs based on artificial intelligence, as well as tracking the work of employees; and ethical contradictions when using special devices and tracking technologies in robotic technology and modern software products, which also extend to the employees interacting with them.


Author(s):  
M. G. Koliada ◽  
T. I. Bugayova

The article discusses the history of the development of the problem of using artificial intelligence systems in education and pedagogy. Two directions of its development are shown: “Computational Pedagogy” and “Educational Data Mining”, in which poorly studied aspects of the internal mechanisms of functioning of artificial intelligence systems in this field of activity are revealed. The main task is the problem of interfacing the kernel of the system with blocks of pedagogical and thematic databases, as well as with the blocks of pedagogical diagnostics of a student and a teacher. The role of pedagogical diagnosis as an evident reflection of the complex influence of factors and reasons is shown. It provides the intelligent system with operative and reliable information on how various reasons intertwine in interaction, which of them are dangerous at present, and where a decline in efficiency characteristics is expected. All components of the teaching and educational system are subject to diagnosis; without it, it is impossible to manage any pedagogical situation optimally. The means of obtaining information about students, as well as the “mechanisms” of work of intelligent systems based on innovative ideas of advanced pedagogical experience in diagnosing the professionalism of a teacher, are considered. Ways of realizing the skill of the teacher on the basis of ideas developed by American scientists are shown. Among them, the approaches of researchers D. Rajonz and U. Bronfenbrenner, who put at the forefront the teacher’s attitude towards students, their views, and intellectual and emotional characteristics, are singled out. An assessment of the teacher’s work according to N. Flanders’s system, in the form of the so-called “Interaction Analysis”, through the mechanism of fixing such elements as the verbal behavior of the teacher, events at the lesson and their sequence, is also proposed. A system for assessing the professionalism of a teacher according to B. O. Smith and M. O. Meux is examined through the study of the logic of teaching, using logical operations at the lesson. Samples of forms of external communication of the intelligent system with the learning environment are given. It is indicated that the delivery of the productive solutions found can take the most acceptable and comfortable form both for students and for the teacher in the form of three approaches. The first shows that artificial intelligence in this area can be represented in the form of a robotized being in the shape of a person; the second indicates that it is enough to confine oneself only to specially organized input-output systems for the targeted transmission of effective methodological recommendations and instructions to both students and teachers; the third demonstrates that life will force one to come up with completely new hybrid forms of interaction between both sides in the form of interactive educational environments, to some extent resembling the educational spaces of virtual reality.


Author(s):  
Christian List

Abstract: The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.


Author(s):  
Nidhi Rajesh Mavani ◽  
Jarinah Mohd Ali ◽  
Suhaili Othman ◽  
M. A. Hussain ◽  
Haslaniza Hashim ◽  
...  

Abstract: Artificial intelligence (AI) has become embedded in food-industry technology over the past few decades, driven by rising food demand in line with the growing world population. The capability of such intelligent systems in various tasks, such as food quality determination, control tools, classification of food, and prediction, has intensified their demand in the food industry. Therefore, this paper reviews those diverse applications, comparing their advantages, limitations, and formulations as a guideline for selecting the most appropriate methods for enhancing future AI- and food industry–related developments. Furthermore, the integration of these systems with other devices such as the electronic nose, electronic tongue, computer vision systems, and near-infrared (NIR) spectroscopy is also emphasized, all of which will benefit both industry players and consumers.
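The food-classification task mentioned in this review can be illustrated with a minimal sketch: a nearest-centroid classifier that labels a sample as "fresh" or "spoiled" from sensor readings. The feature channels and training data below are purely illustrative assumptions (hypothetical electronic-nose responses), not values or methods taken from the paper.

```python
# Minimal sketch: nearest-centroid classification of food samples from
# hypothetical electronic-nose readings. Data and labels are illustrative
# assumptions only, not drawn from the review.
import math

def centroid(samples):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def nearest_centroid(train, sample):
    """Return the label whose class centroid is closest (Euclidean distance)."""
    best_label, best_dist = None, math.inf
    for label, vectors in train.items():
        d = math.dist(centroid(vectors), sample)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical gas-sensor responses: (ethanol, ammonia, CO2 channels)
train = {
    "fresh":   [(0.1, 0.2, 0.3), (0.2, 0.1, 0.4)],
    "spoiled": [(0.9, 0.8, 0.7), (0.8, 0.9, 0.6)],
}

print(nearest_centroid(train, (0.15, 0.18, 0.35)))  # closest to the "fresh" centroid
```

Real systems of the kind surveyed would use far richer feature extraction and learned models (e.g. neural networks), but the pipeline shape, sensor features in and a class label out, is the same.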

