Artificial Intelligence on the Move: A Revolutionary Technology

2019 ◽  
Vol 8 (4) ◽  
pp. 12112-12120

People have used technology to improve themselves throughout human history. Since ancient times, human beings have tried to get their work done by slaves or by inanimate machines, and each new technology has been exploited to build intelligent agents. Clockwork, hydraulics, telephone switching systems, holograms, analog computers and digital computers have all been suggested both as mechanisms for intelligent agents and as technological metaphors for intelligence. Artificial Intelligence refers to computer systems that can perform tasks which would otherwise require human intelligence. It is associated with computer systems exhibiting various kinds of intelligence: systems that understand new concepts and tasks, systems that can reason and draw useful conclusions about the world around us, and systems that can learn a natural language and comprehend a visual scene. Artificial Intelligence is intelligence demonstrated by machines: a device that perceives its environment and takes actions that increase its chances of achieving its goal. The research goal of Artificial Intelligence is to create technology that allows computers and machines to perform various tasks in an intelligent manner. Artificial Intelligence analyses the intelligent acts of computational agents. A computational agent is one whose decisions about its actions can be explained in terms of computation: its actions can first be broken down into primitive operations, which can then be implemented in a physical device. Computation takes many forms; in humans it runs on "wetware", in computers on hardware. The greatest advances have occurred in the field of game playing: the supercomputer Deep Blue defeated world chess champion Garry Kasparov in May 1997. This research article explains the history, features and goals of artificial intelligence. It also explains the types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness. The article focuses on applications of artificial intelligence in many fields, such as literacy, finance, heavy industry, hospitals, news, publishing, transportation, telecommunication maintenance, and telephone and online customer service.

Author(s):  
Silviya Serafimova

Abstract Moral implications of the decision-making process based on algorithms require special attention within the field of machine ethics. Specifically, the research focuses on clarifying why, even if one assumes the existence of well-working ethical intelligent agents in epistemic terms, it does not necessarily follow that they meet the requirements of autonomous moral agents, such as human beings. For the purposes of exemplifying some of the difficulties in arguing for implicit and explicit ethical agents in Moor's sense, three first-order normative theories in the field of machine ethics are put to the test: Powers' prospect for a Kantian machine, Anderson and Anderson's reinterpretation of act utilitarianism, and Howard and Muntean's prospect for a moral machine based on a virtue ethical approach. By comparing and contrasting the three first-order normative theories, and by clarifying the gist of the differences between the processes of calculation and moral estimation, the possibility of building what one might call strong "moral" AI scenarios is questioned. The possibility of weak "moral" AI scenarios is likewise discussed critically.


Author(s):  
Laura Pana

We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure and the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents which can/must be treated as: 1) individual entities (with complex, specialized, autonomous or self-determined, even unpredictable conduct); 2) entities endowed with diverse or even multiple forms of intelligence, such as moral intelligence; 3) open and even free-conduct performing systems (with specific, flexible and heuristic mechanisms and procedures of decision); 4) systems which are open to education, not just to instruction; 5) entities with a "lifegraphy", not just a "stategraphy"; 6) entities equipped not just with automatisms but with beliefs (cognitive and affective complexes); 7) entities capable even of reflection ("moral life" is a form of spiritual, not just of conscious, activity); 8) elements/members of some real (corporeal or virtual) community; and 9) cultural beings: free conduct gives cultural value to the action of a "natural" or artificial being. Implementation of such characteristics does not necessarily suppose efforts to design, construct and educate machines like human beings. The human moral code is irremediably imperfect: it is a morality of preference, of accountability (not of responsibility) and a morality of non-liberty, which cannot be remedied by the invention of ethical systems, by the circulation of ideal values or by ethical (even computing) education.
But such an imperfect morality needs perfect instruments for its implementation: applications of special fields of logic; efficient psychological (theoretical and technical) attainments to endow the machine not just with intelligence but with conscience and even spirit; and comprehensive technical means for supplementing the objective decision with a subjective one. Machine ethics can/will be of the highest quality because it will be derived from the sciences, modelled by techniques and accomplished by technologies. If our theoretical hypothesis about a specific moral intelligence, necessary for the implementation of an artificial moral conduct, is correct, then some theoretical and technical issues arise, but the following working hypotheses are possible: structural, functional and behavioural. The future of human and/or artificial morality is to be anticipated.



2019 ◽  
Vol 8 (4) ◽  
pp. 3461-3467

Chatbots, also referred to as virtual assistants, are a basic form of artificial intelligence software that can imitate human conversation. Chatbots are a relatively new technology. The main goal of this survey paper is to provide information about various existing chatbots, the history of their evolution, and the applications of chatbots in different domains. Chatbots are applied in fields such as medicine, e-commerce, business, education, banking, customer service, and entertainment, and they can be analyzed and improved. The main goal of any chatbot is to allow the user to hold a natural conversation with a machine. A conversational system consists of dialogue management, speech recognition, speech synthesis and conversation generation.


2012 ◽  
Vol 446-449 ◽  
pp. 975-978
Author(s):  
Tian Yi Qiu ◽  
Song Fu Liu

Current landscape space design ignores the self-awareness and self-expression of human beings, and constantly leaves them in a dominated position. Combining narrative, space, plot and other theories related to landscape design, this thesis explores design methods that make landscape views more appealing, together with space-creating strategies that take narrative as a spatial clue, approached from the angle of the main creative subject and starting from the aesthetic experience and behavior of human beings. It also reflects a harmonious spatial order between views and human beings.


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Pierre Auloge ◽  
Julien Garnon ◽  
Joey Marie Robinson ◽  
Sarah Dbouk ◽  
Jean Sibilia ◽  
...  

Abstract Objectives To assess awareness and knowledge of Interventional Radiology (IR) in a large population of medical students in 2019. Methods An anonymous survey was distributed electronically to 9546 medical students from first to sixth year at three European medical schools. The survey contained 14 questions, including two general questions on diagnostic radiology (DR) and artificial intelligence (AI), and 11 on IR. Responses were analyzed for all students and compared between the preclinical (PCs) (first to third year) and clinical (Cs) (fourth to sixth year) phases of medical school. Of 9546 students, 1459 (15.3%) answered the survey. Results On the DR questions, 34.8% answered that AI is a threat to radiologists (PCs: 246/725 (33.9%); Cs: 248/734 (36%)) and 91.1% thought that radiology has a future (PCs: 668/725 (92.1%); Cs: 657/734 (89.5%)). On the IR questions, 80.8% (1179/1459) of students had already heard of IR; 75.7% (1104/1459) stated that their knowledge of IR was not as good as that of other specialties, and 80% would like more lectures on IR. Finally, 24.2% (353/1459) indicated an interest in a career in IR, with a majority of women in the preclinical phase, but this trend reverses in the clinical phase. Conclusions Development of new technology supporting advances in artificial intelligence will likely continue to change the landscape of radiology; however, medical students remain confident in the need for specialty-trained human physicians in the future of radiology as a clinical practice. A large majority of medical students would like more information about IR in their medical curriculum; almost a quarter of students would be interested in a career in IR.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jane Scheetz ◽  
Philip Rothschild ◽  
Myra McGuinness ◽  
Xavier Hadoux ◽  
H. Peter Soyer ◽  
...  

Abstract Artificial intelligence technology has advanced rapidly in recent years and has the potential to improve healthcare outcomes. However, technology uptake will be largely driven by clinicians, and there is a paucity of data regarding the attitude that clinicians have to this new technology. In June–August 2019 we conducted an online survey of fellows and trainees of three specialty colleges (ophthalmology, radiology/radiation oncology, dermatology) in Australia and New Zealand on artificial intelligence. There were 632 complete responses (n = 305, 230, and 97, respectively), equating to a response rate of 20.4%, 5.1%, and 13.2% for the above colleges, respectively. The majority (n = 449, 71.0%) believed artificial intelligence would improve their field of medicine, and that medical workforce needs would be impacted by the technology within the next decade (n = 542, 85.8%). Improved disease screening and streamlining of monotonous tasks were identified as key benefits of artificial intelligence. The divestment of healthcare to technology companies and medical liability implications were the greatest concerns. Education was identified as a priority to prepare clinicians for the implementation of artificial intelligence in healthcare. This survey highlights parallels between the perceptions of different clinician groups in Australia and New Zealand about artificial intelligence in medicine. Artificial intelligence was recognized as a valuable technology that will have wide-ranging impacts on healthcare.


2021 ◽  
Vol 13 (3) ◽  
pp. 1130
Author(s):  
Xiaoke Yang ◽  
Yuanhao Huang ◽  
Xiaoying Cai ◽  
Yijing Song ◽  
Hui Jiang ◽  
...  

Upcycled food, a new kind of food, provides an effective solution for reducing food waste at the source while maintaining food security for human beings. However, the commercial success of upcycled food and its contribution to environmental sustainability depend on consumers' purchase intentions. To overcome consumers' unfamiliarity with upcycled food and fear of new technology, and drawing on cue utility theory, we adopted scenario simulation through online questionnaires in three experiments to explore how mental simulation can improve consumers' product evaluation of and purchase intentions for upcycled food. Using ANOVA, t-tests, and bootstrap methods, the results showed that, compared with the control group, product evaluation and purchase intentions for upcycled food in the mental simulation group increased significantly, with consumers' inspiration playing a mediating role. Consumers' future self-continuity moderated the effect of mental simulation on purchase intentions for upcycled food: the higher a consumer's future self-continuity, the stronger the effect of mental simulation. Based on these results, the marketing promotion of upcycled food could use promotional methods such as slogans and posters to stimulate consumers' mental simulation thinking, especially among consumer groups with high future self-continuity, thus improving purchase intentions for upcycled food.


2021 ◽  
pp. 146144482199380
Author(s):  
Donghee Shin

How much does anthropomorphism influence users' perception of whether they are conversing with a human or an algorithm in a chatbot environment? We develop a cognitive model using the constructs of anthropomorphism and explainability to explain user experiences with conversational journalism (CJ) in the context of chatbot news. We examine how users perceive anthropomorphic and explanatory cues, and how these stimuli influence user perception of and attitudes toward CJ. Anthropomorphic explanations of why and how certain items are recommended afford users a sense of humanness, which then affects trust and emotional assurance. Perceived humanness triggers a two-step flow of interaction: it defines the baseline against which users judge the qualities of CJ, and it affords the capacity to interact with chatbots, shaping users' intention to do so. We develop practical implications relevant to chatbots and ascertain the significance of humanness as a social cue in CJ. We offer a theoretical lens through which to characterize humanness as a key mechanism of human–artificial intelligence (AI) interaction, the eventual goal of which is for humans to perceive AI as human beings. Our results help to better understand human–chatbot interaction in CJ by illustrating how humans interact with chatbots and explaining why humans accept CJ.

