Will Artificial Intelligence Outperform the Clinical Neurologist in the Near Future? No

Author(s): Christopher G. Goetz
2021 · Vol 14 (8) · pp. 339
Author(s): Tatjana Vasiljeva, Ilmars Kreituss, Ilze Lulle

This paper examines public and business attitudes towards artificial intelligence (AI) and the main factors that influence them. The conceptual model is based on the technology–organization–environment (TOE) framework and was tested through analysis of qualitative and quantitative data. Primary data were collected through a public survey, using a questionnaire developed specifically for the study, and through semi-structured interviews with experts in the AI field and management representatives from various companies. The study evaluates the current attitudes of the public and of employees across industries towards AI and investigates the factors that affect them. It was found that attitude towards AI differs significantly among industries, and that employees at organizations with already implemented AI solutions differ significantly in attitude from employees at organizations with no intention of implementing them in the near future. The three main factors affecting AI adoption in an organization are top management's attitude, competition, and regulation. Having identified the main factors that shape the attitudes of society and companies towards AI, the authors provide recommendations for mitigating various negative factors and develop a proposition outlining the activities needed for successful adoption of innovative technologies.


2021 · Vol 54 (6) · pp. 1-35
Author(s): Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. Recent work in traditional machine learning and deep learning addresses such challenges in different subdomains, and with the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have shown biases in various ways, and we list different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many future directions and solutions remain open for mitigating the problem of bias in AI systems. We hope that this survey motivates researchers to tackle these issues in the near future by building on existing work in their respective fields.
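To make such fairness definitions concrete, the following minimal Python sketch (illustrative only; the function names and toy data are assumptions, not taken from the survey) computes two widely used group-fairness metrics, the demographic-parity difference and the equal-opportunity difference, for binary predictions and a binary protected attribute:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between two groups
    (0 means the demographic-parity definition is satisfied exactly)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between two groups
    (0 means the equal-opportunity definition is satisfied exactly)."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy example: predictions for 8 individuals, 4 per protected group.
y_true = np.array([1, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))          # 0.25
print(equal_opportunity_diff(y_true, y_pred, group))   # ~0.17
```

A value of zero on either metric means the corresponding fairness definition holds exactly; in practice, researchers report how far a deployed model deviates from zero under each definition, since the definitions generally cannot all be satisfied at once.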


2019 · Vol 3 (2) · pp. 34
Author(s): Hiroshi Yamakawa

In a human society with emergent technology, the destructive actions of a few pose a danger to the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies, using the appropriate interventions of an advanced system, may become available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and have an intervention method for maintaining peace among human societies based on a common value (condition 2). As a premise, it is also necessary to have a minimum common value upon which all of human society can agree (condition 3). In this study, an AI system satisfying condition 1 was investigated. The system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on its own local values to adapt quickly to the environment it faces. Conflicts between IAs are therefore inevitable, and this situation sometimes interferes with the achievement of commonly shared goals. Even so, the IAs can maintain peace within their own society if each of the dispersed IAs believes that all the others aim for socially acceptable goals. However, communication-channel problems, comprehension problems, and computational-complexity problems are barriers to realization. These problems can be overcome by introducing an appropriate goal-management system in the case of computer-based IAs. An IA society could then achieve its goals peacefully, efficiently, and consistently; therefore, condition 1 is achievable. Humans, in contrast, are restricted by their biological nature and tend to interact with others similar to themselves, so the eradication of conflict among them is more difficult.
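The goal-management idea can be illustrated with a toy simulation. The following Python sketch (purely illustrative; the agents, the shared budget, and the scaling rule are assumptions, not the paper's design) shows distributed agents proposing actions from their local values while a goal manager reconciles conflicts so a shared goal still holds:

```python
import random

# Shared goal (illustrative): total resource use must stay within a budget.
SHARED_BUDGET = 10

class Agent:
    """A distributed intelligent agent (IA) acting on a local value."""
    def __init__(self, name, local_demand):
        self.name = name
        self.local_demand = local_demand  # local value: how much it wants

    def propose(self):
        # Each autonomous IA proposes an action based on its local value.
        return random.randint(1, self.local_demand)

def goal_manager(proposals, budget):
    """Scale down conflicting proposals so the shared goal is preserved."""
    total = sum(proposals.values())
    if total <= budget:
        return proposals  # no conflict with the shared goal
    scale = budget / total
    return {name: p * scale for name, p in proposals.items()}

agents = [Agent("IA-1", 6), Agent("IA-2", 8), Agent("IA-3", 5)]
proposals = {a.name: a.propose() for a in agents}
allocation = goal_manager(proposals, SHARED_BUDGET)
print(proposals, "->", allocation)
```

The design point the sketch makes is that conflicts between locally motivated agents need not block the shared goal, provided an arbitration mechanism every agent accepts sits between local proposals and global action.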


2021 · Vol 8
Author(s): Eric Martínez, Christoph Winter

To what extent, if any, should the law protect sentient artificial intelligence (that is, AI that can feel pleasure or pain)? Here we surveyed United States adults (n = 1,061) on their views regarding granting (1) general legal protection, (2) legal personhood, and (3) standing to bring forth a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future. Roughly one-third of participants endorsed granting personhood and standing to sentient AI (assuming its existence) in at least some cases, the lowest of any group surveyed, and participants rated the desired level of protection for sentient AI lower than for all groups other than corporations. We also observed political differences in responses: liberals were more likely than conservatives to endorse legal protection and personhood for sentient AI. Taken together, these results suggest that laypeople are not, by and large, in favor of granting legal protection to AI, and that the ordinary conception of legal status, like codified legal doctrine, is not based on a mere capacity to feel pleasure and pain. At the same time, the observed political differences suggest that previous findings regarding political differences in empathy and moral circle expansion apply to artificially intelligent systems and extend partially, though not entirely, to legal consideration as well.


Author(s): Bhanu Chander

Artificial intelligence (AI) refers to machines that can perform tasks a human being can do, often with better results; in other words, AI systems can derive solutions from data on their own. Within the broader field of AI, machine learning (ML) offers a wide variety of algorithms that produce increasingly accurate results. As technology improves, ever larger amounts of data become available. With traditional ML, however, it is very difficult to extract high-level, abstract features from raw data, and it is hard to know which features should be extracted in the first place. This motivates deep learning, whose algorithms are modeled on how the human brain processes data. Deep learning is a particular kind of machine learning that provides great flexibility and power by learning multiple levels of representation through the operation of multiple layers. This chapter gives a brief overview of deep learning, covering its platforms, models, autoencoders, CNNs, RNNs, and applications. Deep learning is likely to see many more successes in the near future because it requires very little engineering by hand.
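As an illustration of learning representations without hand-engineered features, the following minimal autoencoder sketch in PyTorch (illustrative only; the layer sizes, data, and training loop are assumptions, not the chapter's code) compresses a raw 784-dimensional input, such as a flattened 28x28 image, into a 32-dimensional learned representation and reconstructs it:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Stacked layers learn multiple levels of representation."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(),
            nn.Linear(128, 32),                 # learned abstract features
        )
        self.decoder = nn.Sequential(
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, 784), nn.Sigmoid(),  # reconstruct the input
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)          # dummy batch standing in for real data
for _ in range(5):               # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)  # reconstruction loss: no hand-made features
    loss.backward()
    optimizer.step()
print(loss.item())
```

Because the training signal is simply reconstruction of the input, the 32-dimensional code must capture whatever structure in the data matters most, which is exactly the "very little engineering by hand" property the abstract highlights.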


Author(s): Anke Moerland, Conrado Freitas

Artificial intelligence (AI) has an unparalleled potential for facilitating intellectual property (IP) administration processes, in particular in the context of examining trademark applications and assessing prior marks in opposition and infringement proceedings. Several stakeholders have developed AI-based algorithms that are claimed to enhance the productivity of trademark professionals by carrying out, without human input, (parts of) the legal tests required to register a trademark, oppose it, or claim an infringement thereof. The goal of this chapter is to assess the functionality of the AI tools currently in use and to highlight their possible limitations in autonomously carrying out the legal tests enshrined in trademark law. Many of these tests are rather subjective and depend heavily on the facts of the case, such as assessing the distinctive character of a mark, whether the relevant public is likely to be confused, or whether a third party has taken unfair advantage of a mark. The chapter uses doctrinal research methods and data from interviews with fourteen stakeholders in the field. It finds that AI tools are so far unable to reflect the nuances of the subjective legal tests in trademark law and argues that, even in the near future, AI tools are likely to carry out merely parts of those tests, presenting information that a human will still have to assess in light of prior doctrine and the circumstances of the case.


Author(s): Ryosuke Yokoi, Kazuya Nakayachi

Objective: Autonomous cars (ACs) controlled by artificial intelligence are expected to play a significant role in transportation in the near future. This study investigated determinants of trust in ACs. Background: Trust in ACs influences different variables, including the intention to adopt AC technology. Several studies on risk perception have verified that shared values determine trust in risk managers, and previous research has confirmed the effect of value similarity on trust in artificial intelligence. We focused on moral beliefs, specifically utilitarianism (belief in promoting a greater good) and deontology (belief in condemning deliberate harm), and tested the effects of shared moral beliefs on trust in ACs. Method: We conducted three experiments (N = 128, 71, and 196, respectively), adopting a thought experiment similar to the well-known trolley problem. We manipulated shared moral beliefs (shared vs. unshared) and driver (AC vs. human), presenting participants with different moral-dilemma scenarios. Trust in ACs was measured through a questionnaire. Results: Experiment 1 showed that shared utilitarian belief strongly influenced trust in ACs. In Experiments 2 and 3, however, we found no statistical evidence that shared deontological belief had an effect on trust in ACs. Conclusion: The results of the three experiments suggest that the effect of shared moral beliefs on trust varies depending on the values that ACs share with humans. Application: To promote AC implementation, policymakers and developers need to understand which values are shared between ACs and humans in order to enhance trust in ACs.
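For readers unfamiliar with how such a between-condition effect is tested, here is a small Python sketch using simulated data (the sample sizes, means, and rating scale are assumptions, not the study's dataset): it compares questionnaire-based trust ratings between participants who shared the AC's utilitarian belief and those who did not, using an independent-samples t-test.

```python
import numpy as np
from scipy import stats

# Simulated 7-point trust ratings for two between-subjects conditions
# (hypothetical values; not the published data).
rng = np.random.default_rng(0)
trust_shared   = rng.normal(loc=5.2, scale=1.0, size=64)  # shared belief
trust_unshared = rng.normal(loc=4.3, scale=1.0, size=64)  # unshared belief

t, p = stats.ttest_ind(trust_shared, trust_unshared)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p -> evidence of a shared-belief effect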


Diagnostics · 2020 · Vol 10 (4) · pp. 231
Author(s): Adrian P. Brady, Emanuele Neri

Artificial intelligence (AI) is poised to change much about the way we practice radiology in the near future. The power of AI tools has the potential to offer substantial benefit to patients. Conversely, there are dangers inherent in the deployment of AI in radiology, if this is done without regard to possible ethical risks. Some ethical issues are obvious; others are less easily discerned, and less easily avoided. This paper explains some of the ethical difficulties of which we are presently aware, and some of the measures we may take to protect against misuse of AI.


2021 · Vol 2 (1) · pp. 48-69
Author(s): André Lopes

What does it mean to be alive? At what point does artificial intelligence know enough to be alive? Does the Turing test even matter? If we want the best government policy possible, does it matter whether it comes from a computer? In this philosophical short story, Rain is hired to handle cyber-security for presidential candidate Mr. Booker. After a cyber-attack on Booker's computer network, Rain is called to answer for the breach. In the process of digging into the data, Rain finds out that Booker is an actor, what is known in society as a "ghost," and that all of the policy positions and speeches he has been given are written by a sophisticated artificial intelligence using polling and other data. He says, literally, the perfect things at the perfect times, to the perfect audience. While artificial people, such as news reporters, bloggers, actors, and influencers, are slowly becoming standard in this near-future story, the idea of a politician being nothing more than an actor serving as a vessel for AI is unprecedented. Before Rain can decide what to do with her newfound information, she is framed and forced to use all her computer skills just to keep herself out of jail.

