Modeling and using context

1998 ◽  
Vol 13 (2) ◽  
pp. 185-194 ◽  
Author(s):  
PATRICK BRÉZILLON ◽  
MARCOS CAVALCANTI

The first International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT-97) was held in Rio de Janeiro, Brazil, on February 4–6, 1997. This article summarizes the presentations and discussions over the three days, with a focus on context in applications. The notion of context is far from settled, and its interpretation depends on whether one takes a cognitive-science or an engineering (system-building) point of view. The conference nevertheless made it possible to identify new trends in the formalization of context at a theoretical level, as well as in the use of context in real-world applications. The results presented at the conference are situated within the body of work on context carried out over the past few years at dedicated workshops and symposia. The diversity of the attendees' backgrounds (artificial intelligence, linguistics, philosophy, psychology, etc.) demonstrates that there are different types of context, not a unique one. For instance, logicians model context at the level of knowledge representation and reasoning mechanisms, while cognitive scientists consider context at the level of the interaction between two agents (i.e. two humans, or a human and a machine). In the latter case, there are now strong arguments that one can speak of context only in reference to its use (e.g. the context of an item or of a problem-solving exercise). Moreover, the different types of context are interdependent. This helps explain why, despite consensus on some aspects of context, agreement on the notion itself has not yet been achieved.

2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.


1998 ◽  
Vol 4 (3) ◽  
pp. 237-257 ◽  
Author(s):  
Moshe Sipper

The study of artificial self-replicating structures or machines has been taking place now for almost half a century. My goal in this article is to present an overview of research carried out in the domain of self-replication over the past 50 years, starting from von Neumann's work in the late 1940s and continuing to the most recent research efforts. I shall concentrate on computational models, that is, ones that have been studied from a computer science point of view, be it theoretical or experimental. The systems are divided into four major classes, according to the model on which they are based: cellular automata, computer programs, strings (or strands), or an altogether different approach. With the advent of new materials, such as synthetic molecules and nanomachines, it is quite possible that we shall see this somewhat theoretical domain of study producing practical, real-world applications.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has been developed to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.


2014 ◽  
pp. 8-20
Author(s):  
Kurosh Madani

In a large number of real-world problems and related applications, the modeling of complex behavior is the central issue. Over the past decades, new approaches based on Artificial Neural Networks (ANN) have been proposed to solve problems related to optimization, modeling, decision making, classification, data mining, and nonlinear function (behavior) approximation. Inspired by biological nervous systems and brain structure, artificial neural networks can be seen as information-processing systems that enable many original techniques covering a wide field of applications. Among their most appealing properties are their learning and generalization capabilities. The main goal of this paper is to present, through some of the main ANN models and the techniques based on them, their applicability to real-world industrial problems. Several examples drawn from industrial and real-world applications are presented and discussed.
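As a minimal illustration of the learning and function-approximation capability mentioned above (not one of the paper's industrial examples; the network size and learning rate are arbitrary choices for the sketch), the following trains a tiny one-hidden-layer network by gradient descent to approximate XOR, a classic nonlinear mapping:

```python
import numpy as np

# Training data: XOR, a nonlinear mapping that a single-layer
# perceptron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
mse = float(np.mean((pred - y) ** 2))
```

The generalization property the abstract highlights comes from the same mechanism: the learned weights interpolate between the training examples rather than memorizing a rule table.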


Behavior analysis is the study of examining a person's conduct in order to build up a profile of that person. It was initially used in psychology, and later adopted in information technology for suggesting and developing application content for users. Tailoring applications to users' personal needs is becoming a new trend with the use of artificial intelligence (AI). Many applications, from anticipating purchase behavior to adjusting a home's thermostat to the occupant's ideal temperature for a specific time of day, use machine learning and artificial intelligence technology. Machine learning is the technique of improving rule proficiency based on past experience. Using statistical theory, it builds a mathematical model, and its real work is to infer from the examples provided. The methodology uses computational techniques to take information directly from data.
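A minimal illustration of a model "derived from the examples provided" rather than hand-coded rules: ordinary least squares fits a line to observed data, and the fitted model then makes predictions for unseen inputs (the data points here are invented for the sketch):

```python
# Example data: inputs and noisy observations of roughly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

# Ordinary least squares for a line y = a*x + b.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    """Infer an output for a new input from the fitted model."""
    return a * x + b
```

The same pattern scales up: richer models (neural networks, decision trees) are likewise fitted to examples by minimizing an error measure, then used to infer outputs for new data.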


2020 ◽  
Vol 68 ◽  
pp. 311-364
Author(s):  
Francesco Trovo ◽  
Stefano Paladino ◽  
Marcello Restelli ◽  
Nicola Gatti

Multi-Armed Bandit (MAB) techniques have been successfully applied to many classes of sequential decision problems in the past decades. However, non-stationary settings -- very common in real-world applications -- have received little attention so far, and theoretical guarantees on the regret are known only for some frequentist algorithms. In this paper, we propose an algorithm, namely Sliding-Window Thompson Sampling (SW-TS), for non-stationary stochastic MAB settings. Our algorithm is based on Thompson Sampling and exploits a sliding-window approach to tackle, in a unified fashion, two different forms of non-stationarity studied separately so far: abrupt change and smooth change. In the former, the reward distributions are constant during sequences of rounds, and their changes may be arbitrary and occur at unknown rounds, while in the latter, the reward distributions evolve smoothly over rounds according to unknown dynamics. Under mild assumptions, we provide upper bounds on the dynamic pseudo-regret of SW-TS for the abruptly changing environment, for the smoothly changing one, and for the setting in which both forms of non-stationarity are present. Furthermore, we empirically show that SW-TS dramatically outperforms state-of-the-art algorithms even when the two forms of non-stationarity are considered separately, as previously studied in the literature.
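The sliding-window idea can be sketched as follows: a Bernoulli-bandit Thompson sampler whose Beta posteriors are computed only from the most recent rounds, so the policy can track reward distributions that change over time. This is a minimal illustration under assumed Bernoulli rewards, not the authors' implementation; the prior, window size, and reward functions are placeholders:

```python
import random

def sw_ts(reward_fns, horizon, window):
    """Sliding-window Thompson Sampling for Bernoulli bandits (sketch).

    reward_fns: one function per arm mapping round t to a 0/1 reward.
    Posteriors use only the last `window` (arm, reward) observations.
    """
    n_arms = len(reward_fns)
    history = []   # (arm, reward) pairs for the most recent rounds
    total = 0.0
    for t in range(horizon):
        # Successes/failures per arm inside the window, Beta(1, 1) prior.
        wins = [1] * n_arms
        losses = [1] * n_arms
        for arm, r in history:
            wins[arm] += r
            losses[arm] += 1 - r
        # Sample an index per arm from its posterior; play the best.
        samples = [random.betavariate(wins[a], losses[a])
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = reward_fns[arm](t)
        total += reward
        history.append((arm, reward))
        if len(history) > window:
            history.pop(0)   # forget observations outside the window
    return total
```

In an abruptly changing environment (e.g. two arms whose success probabilities swap at an unknown round), discarding old observations lets the posterior re-concentrate on the currently best arm, whereas a standard Thompson sampler keeps trusting stale evidence.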


2021 ◽  
pp. 183933492110376
Author(s):  
Patrick van Esch ◽  
J. Stewart Black

Artificial intelligence (AI)-enabled digital marketing is revolutionizing the way organizations create content for campaigns, generate leads, reduce customer acquisition costs, manage customer experiences, market themselves to prospective employees, and convert their reachable consumer base via social media. Real-world examples of organizations using AI in digital marketing abound. For example, Red Balloon and Harley Davidson used AI to automate their digital advertising campaigns. However, we are early in the process of the practical application of AI, both by firms broadly and by their marketing functions in particular. One could argue that we are even earlier in the research process of conceptualizing, theorizing, and researching the use and impact of AI. Importantly, as with most technologies of significant potential, the application of AI in marketing engenders not just practical considerations but ethical questions as well. The ability of AI to automate activities that people previously performed also raises the question of whether marketing professionals will embrace AI as a means to free themselves from mundane tasks for higher-value activities, or view it as a threat to their employment. Given the nascent nature of research on AI at this point, the full capabilities and limitations of AI in marketing are unknown. This special edition takes an important step in illuminating both what we know and what we have yet to research.


2012 ◽  
pp. 1595-1612 ◽  
Author(s):  
Shigeki Sugiyama

Since the idea of “artificial intelligence with knowledge” was introduced, many thoughts, theories, and ideas have been proposed across engineering, science, geology, social studies, economics, and management. These efforts began as an extension of modern engineering control theories and practices. First, expert systems using if-then rules appeared on the production floor in manufacturing, followed by agent-based methods using intelligent software programs for design, planning, scheduling, production, and management in manufacturing. Afterwards, the idea of “knowledge” burst into the artificial intelligence field as a practical aid for accomplishing a given purpose by augmenting key past knowledge for management (control). However, these augmented-knowledge methods have tended to be usable only in narrow areas. In addition, much work must be done before such systems can address a target problem and, worse, large parts of a system must be customized for each new application. This chapter introduces a new direction and method for “knowledge” by inaugurating the brand-new idea of “Dynamics in Knowledge,” which behaves more flexibly and intelligently in real usage.


Author(s):  
Adnan Darwiche ◽  
Knot Pipatsrisawat

Complete SAT algorithms form an important part of the SAT literature. From a theoretical perspective, complete algorithms can be used as tools for studying the complexity of different proof systems. From a practical point of view, these algorithms form the basis for tackling SAT problems arising from real-world applications. The practicality of modern, complete SAT solvers undoubtedly contributes to the growing interest in the class of complete SAT algorithms. We review these algorithms in this chapter, including Davis–Putnam resolution, Stålmarck's algorithm, symbolic SAT solving, the DPLL algorithm, and modern clause-learning SAT solvers. We also discuss the issue of certifying the answers of modern complete SAT solvers.
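As a minimal illustration of one of the reviewed algorithms, the following is a textbook-style DPLL sketch (unit propagation plus branching) over CNF clauses encoded as frozensets of integer literals, where -v negates variable v. It omits the watched-literal and clause-learning machinery of modern solvers:

```python
def dpll(clauses, assignment=None):
    """Return a satisfying set of literals for CNF `clauses`, or None.

    Each clause is a frozenset of non-zero integers; -v is the
    negation of variable v.
    """
    if assignment is None:
        assignment = set()
    clauses = list(clauses)
    # Unit propagation: any single-literal clause forces its literal.
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment = assignment | {unit}
        new = []
        for c in clauses:
            if unit in c:
                continue              # clause satisfied, drop it
            reduced = c - {-unit}     # remove the falsified literal
            if not reduced:
                return None           # empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return assignment             # all clauses satisfied
    # Branch on an arbitrary literal, trying it true, then false.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(clauses + [frozenset([choice])], assignment)
        if result is not None:
            return result
    return None
```

Clause-learning solvers extend exactly this skeleton: on each conflict they derive (by resolution) a new clause that prunes the search, rather than simply backtracking to the other branch.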


2019 ◽  
Vol 29 (11n12) ◽  
pp. 1607-1627
Author(s):  
Raul Ceretta Nunes ◽  
Marcelo Colomé ◽  
Fabio André Barcelos ◽  
Marcelo Garbin ◽  
Gustavo Bathu Paulus ◽  
...  

Intelligent computing techniques are of paramount importance to the treatment of cybersecurity incidents. In this Artificial Intelligence (AI) context, while most of the algorithms explored in the cybersecurity domain aim to present solutions to intrusion detection problems, they seldom address the correction procedures used to resolve cybersecurity incidents that have already taken place. In practice, knowledge about cybersecurity resolution data and procedures is under-used in the development of intelligent cybersecurity systems, and sometimes even lost and not used at all. In this context, this work proposes the Case-based Cybersecurity Incident Resolution System (CCIRS), a system that integrates case-based reasoning (CBR) techniques and the IODEF standard in order to retain concrete problem-solving experiences of cybersecurity incident resolution for reuse in the resolution of new incidents. The experimental results obtained so far with the CCIRS show that information security knowledge can be retained with our approach in a reusable memory, improving the resolution of new cybersecurity problems.
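The retrieve-and-reuse step at the heart of a CBR cycle can be sketched as follows. The incident features, scores, and resolutions below are invented for illustration only; the CCIRS itself structures cases with the IODEF standard rather than plain feature vectors:

```python
from math import sqrt

# Case base: past incidents (feature dict) with the resolution applied.
case_base = [
    ({"port_scan": 1.0, "failed_logins": 0.2, "outbound_traffic": 0.1},
     "block source IP range at perimeter firewall"),
    ({"port_scan": 0.0, "failed_logins": 0.9, "outbound_traffic": 0.1},
     "lock affected accounts and force password reset"),
    ({"port_scan": 0.1, "failed_logins": 0.1, "outbound_traffic": 0.9},
     "isolate host and inspect for data exfiltration"),
]

def distance(a, b):
    """Euclidean distance between two sparse feature dicts."""
    keys = set(a) | set(b)
    return sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))

def retrieve(new_incident):
    """Reuse the resolution of the most similar past case."""
    _, resolution = min(case_base,
                        key=lambda case: distance(case[0], new_incident))
    return resolution
```

A full CBR cycle would follow retrieval with revision (adapting the suggested resolution to the new incident) and retention (storing the solved incident back into the case base), which is how resolution knowledge stops being lost.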

