THE TRANSFORMATION OF "ARTIFICIAL" SCIENCE INTO ARTIFICIAL INTELLIGENCE: 50 YEARS LATER

2020 ◽  
Vol 19 (3) ◽  
pp. 340-343
Author(s):  
Boris Aberšek

For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil (1998, 2005) predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. While the problems of incorporating AI into manufacturing and service operations, i.e. using smart machines, are comparatively small, since ‘faults’ can be recognized relatively quickly and do not have a drastic effect on society, the incorporation of AI into society, and especially into the educational process, is an extremely risky endeavour that requires thorough consideration. The consequences of mistakes here could be catastrophic and long-term, as the results become visible only after many years.

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yanyan Dong ◽  
Jie Hou ◽  
Ning Zhang ◽  
Maocong Zhang

Artificial intelligence (AI) is essentially the simulation of human intelligence. Today’s AI can only simulate, replace, extend, or expand part of human intelligence. In the future, the research and development of cutting-edge technologies such as the brain-computer interface (BCI), together with the development of the human brain, will eventually usher in a strong-AI era, in which AI can simulate and replace humans’ imagination, emotion, intuition, potential, tacit knowledge, and other kinds of personalized intelligence. Breakthroughs in algorithms, represented by cognitive computing, promote the continuous penetration of AI into fields such as education, commerce, and medical treatment to build up an AI service space. As for the human concern of who controls whom, humankind or intelligent machines, the answer is that AI can only become a service provider for human beings, demonstrating the value rationality of following ethics.


Author(s):  
S. Matthew Liao

This introduction outlines in section I.1 some of the key issues in the study of the ethics of artificial intelligence (AI) and proposes ways to take these discussions further. Section I.2 discusses key concepts in AI, machine learning, and deep learning. Section I.3 considers ethical issues that arise because current machine learning is data hungry; is vulnerable to bad data and bad algorithms; is a black box that has problems with interpretability, explainability, and trust; and lacks a moral sense. Section I.4 discusses ethical issues that arise because current machine learning systems may be working too well and human beings can be vulnerable in the presence of these intelligent systems. Section I.5 examines ethical issues arising out of the long-term impact of superintelligence such as how the values of a superintelligent AI can be aligned with human values. Section I.6 presents an overview of the essays in this volume.


Author(s):  
T.D. Raheni ◽  
P. Thirumoorthi

Artificial intelligence (AI) is an area of computer science concerned with the design of intelligent machines that respond like humans. Such machines can simulate aspects of human intelligence according to the user’s needs: they can solve problems, act like humans, and perceive information. In the current scenario, intelligent techniques minimize human effort, especially in industrial fields, where machines built on these techniques carry out processes across many domains. Artificial intelligence deals with real-time insights, where decisions are made by connecting data to various resources. To solve real-time problems, powerful techniques such as artificial neural networks, fuzzy logic, genetic algorithms, and particle swarm optimization have been used in recent years. This chapter explains artificial neural network-based adaptive linear neuron (ADALINE) networks, back-propagation networks, and radial basis function networks.
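As a sketch of the first of these models, the snippet below implements a minimal ADALINE (adaptive linear neuron) trained with the Widrow–Hoff least-mean-squares rule. This is an illustrative reconstruction under standard textbook assumptions, not code from the chapter; the class and parameter names are ours.

```python
import numpy as np

class Adaline:
    """Minimal ADALINE: one linear unit trained with the
    Widrow-Hoff least-mean-squares (LMS) rule."""

    def __init__(self, n_features, eta=0.1, epochs=500, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=n_features)  # small random weights
        self.b = 0.0
        self.eta = eta          # learning rate
        self.epochs = epochs    # full-batch passes over the data

    def net_input(self, X):
        # Linear activation: no threshold is applied during training.
        return X @ self.w + self.b

    def fit(self, X, y):
        # y in {-1, +1}; weights are updated against the *linear*
        # output, which distinguishes ADALINE from the perceptron.
        for _ in range(self.epochs):
            error = y - self.net_input(X)
            self.w += self.eta * X.T @ error / len(y)
            self.b += self.eta * error.mean()
        return self

    def predict(self, X):
        # Threshold the linear output only at prediction time.
        return np.where(self.net_input(X) >= 0.0, 1, -1)
```

Trained on a linearly separable toy problem such as logical AND with ±1 targets, the thresholded output reproduces the labels; back-propagation and radial basis function networks generalize this single linear unit to multi-layer and kernel-like architectures.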


2020 ◽  
Vol 148 (1) ◽  
pp. 79-85
Author(s):  
Ye Yudan

UN-led peacekeeping operations began in 1948. Since then, peacekeeping operations have gradually entered an information age continuously influenced and defined by computers, the Internet, and related technologies. Whether or not the computer's original purpose was limited to assisting human beings with numerical calculation, its invention will eventually lead to intelligent machines that extend and enhance humanity's ability to transform nature and govern society. Now that artificial intelligence is widely used and has shaped society into a human-computer symbiotic one, peacekeeping operations must take the initiative to face this new environment, different from humanity's past, and work to solve the complex problems they face.


Edupedia ◽  
2018 ◽  
Vol 2 (2) ◽  
pp. 73-83
Author(s):  
Ahmad Dahri

The real purpose of education is humanizing human beings. The most prominent feature of humanity is diversity, plurality, or multiculturality. Indonesia is a country consisting of a plural society, a fact that should be recognized by every individual in this nusantara society. Awareness of multiculturality or pluralism can be cultivated through the educational process. To this end, the educational process must integrate the formation of learners' intellect and morality. The function of integrating moral and intellectual education is to learn about diversity, combine that knowledge and practice with morality, and thereby achieve the purposes of national education. The conclusions of the analysis of Freire's and Ki Hadjar Dewantara's approaches are: there are no differences in the educational portion; there are no social classes limiting education; the educator acts not only as a facilitator but also as an identifier of diversity who is honest about history; and there is a bond of mutual understanding between learners and educators, in which learners receive teaching and educators learn to understand learners. This function is summarized in education for freedom and in the maxim ing ngarsa sung tuladha, ing madya mangun karsa, tut wuri handayani.


This book is the first to examine the history of imaginative thinking about intelligent machines. As real artificial intelligence (AI) begins to touch on all aspects of our lives, this long narrative history shapes how the technology is developed, deployed, and regulated. It is therefore a crucial social and ethical issue. Part I of this book provides a historical overview from ancient Greece to the start of modernity. These chapters explore the revealing prehistory of key concerns of contemporary AI discourse, from the nature of mind and creativity to issues of power and rights, from the tension between fascination and ambivalence to investigations into artificial voices and technophobia. Part II focuses on the twentieth and twenty-first centuries in which a greater density of narratives emerged alongside rapid developments in AI technology. These chapters reveal not only how AI narratives have consistently been entangled with the emergence of real robotics and AI, but also how they offer a rich source of insight into how we might live with these revolutionary machines. Through their close textual engagements, these chapters explore the relationship between imaginative narratives and contemporary debates about AI’s social, ethical, and philosophical consequences, including questions of dehumanization, automation, anthropomorphization, cybernetics, cyberpunk, immortality, slavery, and governance. The contributions, from leading humanities and social science scholars, show that narratives about AI offer a crucial epistemic site for exploring contemporary debates about these powerful new technologies.


Author(s):  
Elana Zeide

This chapter looks at the use of artificial intelligence (AI) in education, which immediately conjures the fantasy of robot teachers, as well as fears that robot teachers will replace their human counterparts. However, AI tools impact much more than instructional choices. Personalized learning systems take on a whole host of other educational roles as well, fundamentally reconfiguring education in the process. They not only perform the functions of robot teachers but also make pedagogical and policy decisions typically left to teachers and policymakers. Their design, affordances, analytical methods, and visualization dashboards construct a technological, computational, and statistical infrastructure that literally codifies what students learn, how they are assessed, and what standards they must meet. However, school procurement and implementation of these systems are rarely part of public discussion. If they are to remain relevant to the educational process itself, as opposed to just its packaging and context, schools and their stakeholders must be more proactive in demanding information from technology providers and setting internal protocols to ensure effective and consistent implementation. Those who choose to outsource instructional functions should do so with sufficient transparency mechanisms in place to ensure professional oversight guided by well-informed debate.


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
J Medina-Inojosa ◽  
A Ladejobi ◽  
Z Attia ◽  
M Shelly-Cohen ◽  
B Gersh ◽  
...  

Abstract Background We have demonstrated that artificial intelligence interpretation of ECGs (AI-ECG) can estimate an individual's physiologic age and that the gap between AI-ECG age and chronologic age (Age-Gap) is associated with increased mortality. We hypothesized that Age-Gap would predict long-term atherosclerotic cardiovascular disease (ASCVD) and would refine the predictive ability of the ACC/AHA Pooled Cohort Equations (PCE). Methods Using the Rochester Epidemiology Project (REP), we evaluated a community-based cohort of consecutive patients seeking primary care between 1998–2000 and followed through March 2016. Inclusion criteria were age 40–79 and complete data to calculate the PCE. We excluded those with known ASCVD, AF, or HF, or an event within 30 days of baseline. A neural network was trained, validated, and tested in an independent cohort of ∼500,000 patients, using 10-second digital samples of raw, 12-lead ECGs. PCE risk was categorized as low (<5%), intermediate (5–9.9%), high (10–19.9%), and very high (≥20%). The primary endpoint was ASCVD, comprising fatal and non-fatal myocardial infarction and ischemic stroke; the secondary (expanded) endpoint also included coronary revascularization [percutaneous coronary intervention (PCI) or coronary artery bypass graft (CABG)], TIA, and cardiovascular mortality. Events were validated in duplicate. Follow-up was truncated at 10 years for the PCE analysis. The association of Age-Gap with ASCVD and expanded ASCVD was assessed with Cox proportional hazards models adjusted for chronologic age, sex, and risk factors. Models were stratified by PCE risk category to evaluate the effect of PCE-predicted risk. Results We included 24,793 patients (54% women, 95% Caucasian) with a mean follow-up of 12.6±5.1 years. 2,366 (9.5%) developed ASCVD events and 3,401 (13.7%) the expanded ASCVD endpoint. Mean chronologic age was 53.6±11.6 years and mean AI-ECG age was 54.5±10.9 years (R2=0.7865, p<0.0001). The mean Age-Gap was 0.87±7.38 years.
After adjusting for age and sex, those considered older by ECG than their chronologic age had a higher risk of ASCVD than those with an Age-Gap below −2 SD (considered younger by ECG) (Figure 1A), with similar results when using the expanded definition of ASCVD (data not shown). Furthermore, Age-Gap enhanced the predictive capability of the PCE both among those with low 10-year predicted risk (<5%): age- and sex-adjusted HR 4.73, 95% CI 1.42–15.74, p=0.01; and among those with high predicted risk (>20%): age- and sex-adjusted HR 6.90, 95% CI 1.98–24.08, p=0.0006, comparing those older versus younger by ECG (Figure 1B). Conclusion The difference between physiologic AI-ECG age and chronologic age is associated with long-term ASCVD and enhances the ability of current risk calculators (PCE) to identify high- and low-risk individuals. This may help identify individuals who should or should not be treated with newer, expensive risk-reducing therapies. Funding Acknowledgement Type of funding source: Foundation. Main funding source(s): Mayo Clinic
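Purely as an illustration of the bookkeeping above, the study's two core definitions (the Age-Gap and the PCE risk buckets) can be written as small helper functions. The function names and the convention of expressing risk as a fraction are our assumptions, not the study's code:

```python
def age_gap(ai_ecg_age: float, chronologic_age: float) -> float:
    """Age-Gap as defined in the abstract: AI-ECG physiologic age
    minus chronologic age. Positive values mean 'older by ECG'."""
    return ai_ecg_age - chronologic_age

def pce_category(risk: float) -> str:
    """Bucket a 10-year PCE risk estimate (given as a fraction) using
    the cut-points stated in the abstract: low <5%,
    intermediate 5-9.9%, high 10-19.9%, very high >=20%."""
    if risk < 0.05:
        return "low"
    if risk < 0.10:
        return "intermediate"
    if risk < 0.20:
        return "high"
    return "very high"
```

In the study, the Cox models were then stratified by these categories, so the hazard ratios quoted above compare ECG-older versus ECG-younger patients within a fixed PCE bucket.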


2021 ◽  
pp. 146144482199380
Author(s):  
Donghee Shin

How much do anthropomorphisms influence users' perception of whether they are conversing with a human or an algorithm in a chatbot environment? We develop a cognitive model using the constructs of anthropomorphism and explainability to explain user experiences with conversational journalism (CJ) in the context of chatbot news. We examine how users perceive anthropomorphic and explanatory cues, and how these stimuli influence user perception of and attitudes toward CJ. Anthropomorphic explanations of why and how certain items are recommended afford users a sense of humanness, which then affects trust and emotional assurance. Perceived humanness triggers a two-step flow of interaction: it defines the baseline for judging the qualities of CJ, and it shapes users' intention to interact with chatbots. We develop practical implications relevant to chatbots and ascertain the significance of humanness as a social cue in CJ. We offer a theoretical lens through which to characterize humanness as a key mechanism of human–artificial intelligence (AI) interaction, whose eventual goal is for humans to perceive AI as human. Our results help to better understand human–chatbot interaction in CJ by illustrating how humans interact with chatbots and explaining why humans accept CJ.

