The Last Mile: Where Artificial Intelligence Meets Reality

10.2196/16323 ◽  
2019 ◽  
Vol 21 (11) ◽  
pp. e16323 ◽  
Author(s):  
Enrico Coiera

Although much effort is focused on improving the technical performance of artificial intelligence, there are compelling reasons to focus more on the implementation of this technology class in real-world applications. In this “last mile” of implementation lie many complex challenges that may make technically high-performing systems perform poorly. Instead of viewing artificial intelligence development as a linear process, from algorithm development through to eventual deployment, there are strong reasons to take a more agile approach, iteratively developing and testing artificial intelligence within the context in which it will finally be used.



2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of bias that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
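The group-fairness definitions this survey taxonomizes can be made concrete with a small sketch. As a hedged illustration (not code from the survey, and using made-up data), two common metrics, demographic parity and equal opportunity, can be computed for a binary classifier as follows:

```python
# Illustrative only: two common group-fairness metrics on hypothetical data.

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        members = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups 0 and 1."""
    def tpr(g):
        preds = [p for t, p, grp in zip(y_true, y_pred, group)
                 if grp == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Hypothetical labels and predictions for 8 individuals in two groups.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))         # 2/4 vs 2/4 -> 0.0
print(equal_opportunity_diff(y_true, y_pred, group))  # 1/2 vs 2/2 -> 0.5
```

Note how the same predictions can satisfy one definition (equal positive rates) while violating another (unequal true-positive rates), which is precisely why a taxonomy of definitions is needed.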



2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.



10.2196/17707 ◽  
2020 ◽  
Vol 22 (7) ◽  
pp. e17707
Author(s):  
Hassane Alami ◽  
Pascale Lehoux ◽  
Yannick Auclair ◽  
Michèle de Guise ◽  
Marie-Pierre Gagnon ◽  
...  

Artificial intelligence (AI) is seen as a strategic lever to improve access, quality, and efficiency of care and services and to build learning and value-based health systems. Many studies have examined the technical performance of AI within an experimental context. These studies provide limited insight into the issues raised by its use in a real-world context of care and services. To help decision makers address these issues in a systemic and holistic manner, this viewpoint paper relies on the health technology assessment core model to contrast the expectations of the health sector toward the use of AI with the risks that should be mitigated for its responsible deployment. The analysis adopts the perspective of payers (ie, health system organizations and agencies) because of their central role in regulating, financing, and reimbursing novel technologies. This paper suggests that AI-based systems should be seen as a health system transformation lever, rather than a discrete set of technological devices. Their use could bring significant changes and impacts at several levels: technological, clinical, human and cognitive (patient and clinician), professional and organizational, economic, legal, and ethical. The assessment of AI’s value proposition should thus go beyond technical performance and cost logic by performing a holistic analysis of its value in a real-world context of care and services. To guide AI development, generate knowledge, and draw lessons that can be translated into action, the right political, regulatory, organizational, clinical, and technological conditions for innovation should be created as a first step.



2021 ◽  
Author(s):  
Jesús Giráldez-Cru ◽  
Pedro Almagro-Blanco

The remarkable advances in SAT solving achieved in recent years have allowed this technology to be used in many real-world applications of Artificial Intelligence, such as planning, formal verification, and scheduling, among others. Interestingly, these industrial SAT problems are commonly believed to be easier than classical random SAT formulas, but estimating their actual hardness is still a very challenging question, which in some cases even requires solving them. In this context, realistic pseudo-industrial random SAT generators have emerged with the aim of reproducing the main features shared by the majority of these application problems. The study of these models may help to better understand the success of SAT solving techniques and possibly improve them. In this work, we present a model to estimate the temperature of real-world SAT instances. This temperature represents the degree of distortion of the expected structure of the formula, ranging from highly structured benchmarks (more similar to real-world SAT instances) to the complete absence of structure (observed in the classical random SAT model). Our solution is based on the Popularity-Similarity (PS) random model for SAT, which was recently proposed to reproduce two crucial features of application SAT benchmarks: scale-free and community structure. The PS model is able to control the hardness of the generated formula by introducing some randomization into the expected structure. Our solution is a first step towards a hardness oracle based on the temperature of SAT formulas, which may be able to estimate the cost of solving real-world SAT instances without solving them.
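For contrast, the classical uniform random k-SAT model mentioned above, the fully unstructured end of the temperature scale, can be sketched in a few lines. This is an illustrative baseline only; the PS generator itself, which biases variable selection to produce scale-free and community structure, is not reproduced here:

```python
import random

def uniform_random_ksat(n_vars, n_clauses, k=3, seed=0):
    """Classical uniform random k-SAT: each clause picks k distinct
    variables uniformly at random and negates each with probability 1/2.
    This is the maximally unstructured model; the PS model instead skews
    these choices, with a 'temperature' parameter interpolating between
    the structured regime and this one."""
    rng = random.Random(seed)
    formula = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), k)  # k distinct variables
        clause = [v if rng.random() < 0.5 else -v for v in chosen]
        formula.append(clause)
    return formula

# Clause/variable ratio ~4.25, near the hard 3-SAT phase transition.
f = uniform_random_ksat(n_vars=20, n_clauses=85, k=3)
print(len(f), all(len(c) == 3 for c in f))
```

Instances drawn this way are typically much harder for CDCL solvers at the same size than industrial benchmarks, which is the gap the temperature model aims to quantify.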



1998 ◽  
Vol 13 (2) ◽  
pp. 185-194 ◽  
Author(s):  
PATRICK BRÉZILLON ◽  
MARCOS CAVALCANTI

The first International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT-97) was held in Rio de Janeiro, Brazil, on February 4–6, 1997. This article provides a summary of the presentations and discussions during the three days, with a focus on context in applications. The notion of context is far from being defined, and its interpretation depends on whether one takes a cognitive science or an engineering (system-building) point of view. However, the conference made it possible to identify new trends in the formalization of context at a theoretical level, as well as in the use of context in real-world applications. The results presented at the conference are situated within the body of work on context carried out over the past few years at dedicated workshops and symposia. The diversity of the attendees' backgrounds (artificial intelligence, linguistics, philosophy, psychology, etc.) demonstrates that there are different types of context, not a unique one. For instance, logicians model context at the level of knowledge representation and reasoning mechanisms, while cognitive scientists consider context at the level of the interaction between two agents (i.e., two humans, or a human and a machine). In the latter case, there are now strong arguments that one can speak of context only in reference to its use (e.g., the context of an item or of a problem-solving exercise). Moreover, the different types of context are interdependent. This makes it possible to understand why, despite consensus on some aspects of context, agreement on the notion of context has not yet been achieved.



Author(s):  
Arthur Kordon

The chapter will focus on some practical issues in human and AI interaction based on the experience of applying AI in several large corporations. The following issues will be discussed: weaknesses of human intelligence, weaknesses of AI, benefits of human intelligence from AI, negative effects of AI on human intelligence, resistance of human intelligence toward AI, and how to improve the interaction between human and artificial intelligence. The discussed issues will be illustrated with examples from real-world applications.



Author(s):  
Kwang-Tzu Yang

The use of artificial intelligence methodologies in a variety of real-world applications has been established for some time. However, the application of such methodologies to thermal science and engineering is relatively new, and has received ever-increasing attention in the published literature since the mid-1990s. This attention is due essentially to the special requirements and needs of the field of thermal science and engineering (TSE), given its increasing complexity and the recognition that many critical problems in this field are not feasible to approach through traditional analysis. The purpose of the present brief review is to point out recent advances in the artificial intelligence (AI) field and the successful application of such methodologies to current problems in thermal science and engineering. Some shortfalls and prospects for future applications are also indicated.



2003 ◽  
Vol 18 (2) ◽  
pp. 147-174 ◽  
Author(s):  
P BRÉZILLON

Over the last ten years a community interested in context has emerged. Brézillon (1999) gave a survey of the literature on context in artificial intelligence. There is now a series of conferences on context, a website, and a mailing list. The number of web pages containing the word “context” has increased tenfold in the last five years. Being among the instigators of the use of context in real-world applications, I present in this paper the evolution of my thinking over recent years and the results that have been obtained, including a representation formalism based on contextual graphs and the use of this formalism in a real-world application called SART. I present how procedures, practices, and context are intertwined, as identified in the SART application and in other domains. I root my view of context in the artificial intelligence area and give a general presentation of this view under three aspects – external knowledge, contextual knowledge, and proceduralised context – with the implementation of this view in contextual graphs. I discuss how reasoning is carried out, based on procedures and practices, in the formalism of contextual graphs, and show how the incremental acquisition of practices is integrated into this formalism.
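A minimal sketch may help convey the flavor of the contextual-graph formalism: action nodes are executed in sequence, while contextual nodes branch on the value of a contextual element before the paths recombine. The node layout and example actions below are my own simplified illustration (loosely inspired by SART's incident-management setting), not the actual formalism or implementation described in the paper:

```python
# Hedged illustration: a toy interpreter for a contextual-graph-like
# structure. Nodes are dicts; "action" nodes record an action and move on,
# "context" nodes select the branch matching a contextual element's value.

def run(node, context, trace=None):
    """Walk the graph from `node`, returning the sequence of actions taken
    under the given context (a mapping of contextual elements to values)."""
    trace = trace if trace is not None else []
    while node is not None:
        if node["kind"] == "action":
            trace.append(node["name"])
            node = node.get("next")
        else:  # contextual node: branch on the current context
            node = node["branches"][context[node["element"]]]
    return trace

# A tiny graph: branch on one contextual element, then recombine.
recombine = {"kind": "action", "name": "resume traffic", "next": None}
graph = {
    "kind": "context", "element": "alarm",
    "branches": {
        "real":  {"kind": "action", "name": "stop trains", "next": recombine},
        "false": {"kind": "action", "name": "log incident", "next": recombine},
    },
}

print(run(graph, {"alarm": "real"}))   # ['stop trains', 'resume traffic']
print(run(graph, {"alarm": "false"}))  # ['log incident', 'resume traffic']
```

In this reading, a "practice" corresponds to one context-selected path through the graph, and incremental acquisition of practices corresponds to adding new branches to contextual nodes as new contexts are encountered.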





