Role of Artificial Intelligence in Shaping Consumer Demand in E-Commerce

2020 ◽  
Vol 12 (12) ◽  
pp. 226
Author(s):  
Laith T. Khrais

The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalizing and customizing products to meet specific demands. This research finds that, despite the contribution of AI systems to e-commerce, their ethical soundness is a contentious issue, especially regarding the concept of explainability. The study adopted word cloud analysis, Voyant analysis, and concordance analysis to gain a detailed understanding of how the idea of explainability has been used by researchers in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggests that, to deploy XAI systems, ML models should be improved to make them interpretable and comprehensible.
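To make the XAI idea concrete, here is a minimal illustrative sketch (not from the study itself) of one common technique: training an interpretable surrogate model to mimic a black-box classifier, so that the decision points and variables behind its recommendations become readable. The dataset and model choices below are assumptions for illustration only.

```python
# Hypothetical sketch of surrogate-model explainability, a common XAI technique.
# The data and models are illustrative, not taken from the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A "black box": accurate, but its internal decision process is opaque.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# A surrogate: a shallow decision tree trained to imitate the black box's
# predictions, yielding human-readable decision rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate exposes decision points and variables behind the
# black box's recommendations as explicit if/else thresholds.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))
```

The fraction of black-box predictions the surrogate reproduces (its fidelity) indicates how far these extracted rules can be trusted as an explanation.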

Author(s):  
Garret Merriam

Artificial Emotional Intelligence research has focused on emotions in a limited “black box” sense, concerned only with emotions as ‘inputs/outputs’ for the system, disregarding the processes and structures that constitute the emotion itself. We’re teaching machines to act as if they can feel emotions without the capacity to actually feel them. Serious moral and social problems will arise if we stick with the black box approach. As A.I.s become more integrated with our lives, humans will require more than mere emulation of emotion; we’ll need them to have ‘the real thing.’ Moral psychology suggests emotions are necessary for moral reasoning and moral behavior. Socially, the role of ‘affective computing’ foreshadows the intimate ways humans will expect emotional reciprocity from their machines. Three objections are considered and responded to: that giving machines genuine emotions is (1) not possible, (2) not necessary, and (3) too dangerous.


2020 ◽  
Vol 7 (2) ◽  
pp. 205395172093670
Author(s):  
Nicole Dewandre

In The Black Box Society, Frank Pasquale develops a critique of asymmetrical power: corporations’ secrecy is highly valued by legal orders, but persons’ privacy is continually invaded by these corporations. This response proceeds in three stages. I first highlight important contributions of The Black Box Society to our understanding of political and legal relationships between persons and corporations. I then critique a key metaphor in the book (the one-way mirror, Pasquale’s image of asymmetrical surveillance), and the role of transparency and ‘watchdogging’ in its primary policy prescriptions. I then propose ‘relational selfhood’ as an important new way of theorizing interdependence in an era of artificial intelligence and Big Data, and promoting optimal policies in these spheres.


2020 ◽  
Vol 28 ◽  
Author(s):  
Katrina Ingram

Artificial Intelligence (AI) is playing an increasingly prevalent role in our lives. Whether it’s landing a job interview, getting a bank loan, or accessing a government program, organizations are using automated systems informed by AI-enabled technologies in ways that have significant consequences for people. At the same time, there is a lack of transparency around how AI technologies work and whether they are ethical, fair, or accurate. This paper examines a body of literature on the ethical considerations surrounding the use of artificial intelligence and the role of ethical codes. It identifies and explores core issues, including bias, fairness, and transparency, and looks at who is setting the agenda for AI ethics in Canada and globally. Lastly, it offers some suggestions for next steps towards a more inclusive discussion.


2021 ◽  
Author(s):  
Josilene C Santos ◽  
Jeannie Hsiu Ding Wong ◽  
Vinod Pallath ◽  
Kwan Hoong Ng

Abstract Artificial intelligence (AI) is an innovative tool that is revolutionising healthcare and medical physics, possibly impacting clinical practices, research, and the profession. The relevance of AI and its impact on the clinical practice and routine of professionals in medical physics were evaluated by medical physicists and researchers in this field. An online survey questionnaire was designed for distribution to professionals and students in medical physics around the world. In addition to demographic questions, we surveyed opinions on the role of AI in medical physicists’ practice, the possibility of AI threatening/disrupting medical physicists’ practice and careers, the need for medical physicists to acquire knowledge of AI, and the need for teaching AI in postgraduate medical physics programmes. The respondents’ level of knowledge of AI was also assessed. A total of 1019 responders from 94 countries participated. More than 85% of the responders agreed that AI will play an essential role in medical physicists’ practice, that AI should be taught in postgraduate medical physics programmes, and that more applications, such as quality control and treatment planning, will be performed by AI. Half of them thought AI would not threaten/disrupt medical physicists’ practice. AI knowledge was mainly acquired through self-study and work-related activities. Nonetheless, many (40%) admitted that they have no skills in AI. The general perception of medical physicists is that AI is here to stay and will influence our practice. Medical physicists should be prepared with education and training for this new reality.


AI & Society ◽  
2020 ◽  
Vol 35 (4) ◽  
pp. 917-926 ◽  
Author(s):  
Karl de Fine Licht ◽  
Jenny de Fine Licht

Abstract The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively taken decisions to fears of the destruction of mankind. To prevent negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.


2021 ◽  
Vol 9 ◽  
Author(s):  
Eduardo Eiji Maeda ◽  
Päivi Haapasaari ◽  
Inari Helle ◽  
Annukka Lehikoinen ◽  
Alexey Voinov ◽  
...  

Modeling is essential for modern science, and science-based policies are directly affected by the reliability of model outputs. Artificial intelligence has improved the accuracy and capability of model simulations, but often at the expense of a rational understanding of the systems involved. The lack of transparency in black box models, artificial-intelligence-based ones among them, can potentially affect trust in science-driven policy making. Here, we suggest that a broader discussion is needed to address the implications of black box approaches for the reliability of scientific advice used for policy making. We argue that participatory methods can bridge the gap between increasingly complex scientific methods and the people affected by their interpretations.


2020 ◽  
Vol 17 (6) ◽  
pp. 76-91
Author(s):  
E. D. Solozhentsev

The scientific problem of economics “Managing the quality of human life” is formulated on the basis of artificial intelligence, the algebra of logic, and logical-probabilistic calculus. Managing the quality of a person’s life is represented as managing the processes of their treatment, training, and decision making. Events in these processes, and the corresponding logical variables, relate to the behavior of the person, other persons, and infrastructure. The processes shaping the quality of human life are modeled, analyzed, and managed with the participation of the person themselves. Scenarios and structural, logical, and probabilistic models of managing the quality of human life are given, and special software for quality management is described. The relationship between human quality of life and the digital economy is examined. We consider the role of public opinion in management “from the bottom,” based on a synthesis of many studies on the management of the economy and the state; this bottom-up management also serves as feedback to top-level management.
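As a minimal illustrative sketch of the logical-probabilistic idea (not the author’s software), the core step is converting a logical combination of events into a probability. For independent events, a logical OR of failure events y1, …, yn has probability 1 − ∏(1 − pᵢ), and a logical AND has probability ∏pᵢ. The event names and probability values below are hypothetical.

```python
# Illustrative sketch of logical-probabilistic calculus: mapping logical
# AND/OR of independent events to probabilities. Values are hypothetical.
from math import prod

def prob_or(probabilities):
    """Probability that at least one of several independent events occurs."""
    return 1 - prod(1 - p for p in probabilities)

def prob_and(probabilities):
    """Probability that all of several independent events occur together."""
    return prod(probabilities)

# Hypothetical scenario: a quality-of-life process fails if either of two
# independent factors (a person's behavior, the infrastructure) fails.
p_behavior, p_infrastructure = 0.10, 0.05
print(prob_or([p_behavior, p_infrastructure]))   # 1 - 0.90 * 0.95 ≈ 0.145
print(prob_and([p_behavior, p_infrastructure]))  # 0.10 * 0.05 ≈ 0.005
```

Real logical-probabilistic models handle dependent events and arbitrary logical functions via orthogonalization; this sketch shows only the independent-event base case.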


2020 ◽  
Vol 16 (4) ◽  
pp. 600-612
Author(s):  
L.F. Nikulin ◽  
V.V. Velikorossov ◽  
S.A. Filin ◽  
A.B. Lanchakov

Subject. The article discusses how management transforms as artificial intelligence gains importance in governance, production, and social life. Objectives. We identify and substantiate trends in management transformation as artificial intelligence evolves and gains importance in governance, production, and social life. The article also provides our suggestions for the management and training of managers dealing with artificial intelligence. Methods. The study employs methods of logical research, analysis, and synthesis through a systems-based and creative approach, and the methodology of technological waves. Results. We analyzed the current scope of management and found that threats and global challenges escalate with the advent of artificial intelligence. We provide the rationale for recognizing strategic culture as a self-organizing system of business process integration. We suggest and substantiate the concept of soft power with reference to strategic culture, which should be developed, inter alia, through the scientific school of conflict studies. We give our recommendations on how the management and training of managers dealing with artificial intelligence should be improved as it evolves. The novelty hereof is that we trace trends in management transformation as artificial intelligence evolves and grows in importance in governance, production, and social life. Conclusions and Relevance. Generic solutions are not very effective for Russian management practice during the transition to the sixth and seventh waves of innovation. Any programming product represents artificial intelligence, which simulates a personality very well, though it is unable to substitute for a manager in motivating, governing, and interacting with people.

