WHOSE SHAREDPLANS? SCRIPTS, COLLABORATION, AND FEMINIST AI RESEARCH

Author(s):  
Rachel Bergmann

This paper examines a network of women in AI research who together expanded the range of methodologies and disciplines usually included in AI in the 1980s and 1990s. In particular, Barbara Grosz and Candace Sidner's concept of SharedPlans offered a way to model conversational context and collaboration in multi-agent AI environments. Drawing on archival work, interviews, conference proceedings, white papers, and departmental reports, I consider the cultural, institutional, and intellectual forces that shaped this network and their research. Using a technofeminist framework (Wajcman 2004; Haraway 1990) and borrowing from Michelle Murphy's (2012) concept of protocol feminism, this paper examines their "feminist AI protocol." On one hand, I outline an assemblage of techniques, values, methods, and practices that illustrate a protocol rooted in community, interdisciplinarity, and care; these researchers formalized human-computer dialogue as fundamentally collaborative, grounding their approach in the diverse goals and desires of real users. I argue that their philosophy of "language as action" mirrors ideas circulating simultaneously in feminist and critical STS. On the other hand, this network of researchers worked from within a particular set of cultural and epistemological parameters of their computer science departments. The research practices of this network offer an opportunity to consider the limits of any feminist AI protocol that lacks a deeper commitment to feminist epistemologies. There remains an urgent need to reflect on how to build feminist AI technologies that make room for and include many different standpoints.

Author(s):  
Aurora Vizcaino ◽  
Juan Pablo Soto ◽  
Javier Portillo ◽  
Mario Piattini

Efforts to develop knowledge management have increased in recent years. However, many of the systems deployed in companies are still little used by employees, because the knowledge these systems hold is often not valuable or, on other occasions, is useful but employees do not know how to search for what is most suitable. Moreover, employees often receive too many answers when they consult this kind of system and must waste time evaluating all of them in order to find the one most suitable for their needs. On the other hand, many technical aspects should also be considered when developing a multi-agent system, such as which knowledge representation or retrieval technique is going to be used. Finding a balance between both aspects is important if we want to develop a successful system. However, developers often focus on technical aspects, giving less importance to knowledge issues. To avoid this, we have developed a model to help computer science engineers develop these kinds of systems. In our proposal, we first define a knowledge life cycle model that, according to the literature and our experience, considers all the stages that a knowledge management system should support. We then describe the technology (software agents) that we recommend to support the activities of each stage. The chapter explains why we consider software agents suitable for this end and how they can work to reach their goals. Furthermore, a prototype that uses these agents is also described.


Author(s):  
Maria G. Juarez ◽  
Vicente J. Botti ◽  
Adriana S. Giret

Abstract With the rise of Industry 4.0, numerous concepts have emerged; one of the main concepts is the digital twin (DT). DT is widely used nowadays; however, since there are several uses of the term in the existing literature, the understanding of the concept and its functioning can be diffuse. The main goal of this paper is to provide a review of the existing literature to clarify the concept, operation, and main characteristics of DT; to introduce the most current operating, communication, and usage trends related to this technology; and to present the performance of the synergy between DT and multi-agent system (MAS) technologies through a computer science approach.


1971 ◽  
Vol 3 (4) ◽  
pp. 40-45 ◽  
Author(s):  
F. D. Vickers

Author(s):  
Hang Ma ◽  
Glenn Wagner ◽  
Ariel Felner ◽  
Jiaoyang Li ◽  
T. K. Satish Kumar ◽  
...  

We formalize Multi-Agent Path Finding with Deadlines (MAPF-DL). The objective is to maximize the number of agents that can reach their given goal vertices from their given start vertices within the deadline, without colliding with each other. We first show that MAPF-DL is NP-hard to solve optimally. We then present two classes of optimal algorithms: one based on a reduction of MAPF-DL to a flow problem and a subsequent compact integer linear programming formulation of the resulting reduced abstracted multi-commodity flow network, and the other based on novel combinatorial search algorithms. Our empirical results demonstrate that these MAPF-DL solvers scale well and that each dominates the others in different scenarios.
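The MAPF-DL objective described in this abstract can be illustrated with a small verification sketch. This is not code from the paper: the helper name and data layout are hypothetical, and it assumes the standard MAPF conventions that a solution is collision-free (no two agents on one vertex, no two agents swapping along an edge) and that an agent is satisfied if it occupies its goal vertex at the deadline, waiting in place after its path ends.

```python
from typing import List, Tuple

Vertex = Tuple[int, int]
Path = List[Vertex]  # one vertex per time step

def satisfied_agents(paths: List[Path], goals: List[Vertex], deadline: int) -> int:
    """Check that the given paths are collision-free and count how many
    agents occupy their goal vertex at the deadline (sketch only)."""
    horizon = max(len(p) for p in paths)
    # Agents that finish early are assumed to wait at their last vertex.
    padded = [p + [p[-1]] * (horizon - len(p)) for p in paths]

    for t in range(horizon):
        seen = {}
        for i, p in enumerate(padded):
            if p[t] in seen:  # vertex conflict: two agents on one vertex
                raise ValueError(f"vertex conflict at t={t}")
            seen[p[t]] = i
        if t > 0:  # edge conflict: two agents swapping vertices
            for i in range(len(padded)):
                for j in range(i + 1, len(padded)):
                    if (padded[i][t] == padded[j][t - 1]
                            and padded[i][t - 1] == padded[j][t]):
                        raise ValueError(f"edge conflict at t={t}")

    t_end = min(deadline, horizon - 1)
    return sum(1 for p, g in zip(padded, goals) if p[t_end] == g)

# Agent 0 reaches (0, 2) by t=2; agent 1 never reaches (1, 1).
paths = [[(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 0)]]
goals = [(0, 2), (1, 1)]
print(satisfied_agents(paths, goals, deadline=2))  # 1
```

An optimal MAPF-DL solver searches over all collision-free path sets to maximize this count; the sketch only evaluates one candidate solution.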


2012 ◽  
Vol 32 (4) ◽  
pp. 403 ◽  
Author(s):  
Marcin Lewiński

The principle of charity is used in the philosophy of language and argumentation theory as an important principle of interpretation which credits speakers with "the best" plausible interpretation of their discourse. I contend that the argumentation account, while broadly advocated, misses the basic point of a dialectical conception, which approaches argumentation as a discussion between (at least) two parties who disagree over the issue discussed. Paradoxically, therefore, an analyst who is charitable to one discussion party easily becomes uncharitable to the other. To overcome this paradox, I suggest significantly limiting the application of the principle of charity depending on contextual factors.


2019 ◽  
Author(s):  
Philip Freytag

This work undertakes a systematic reconstruction of the debates that took place over the course of several decades, up to the beginning of the 21st century, between Derrida on the one hand and Searle and Habermas on the other. It shows that the linguistic theories and the theories of communicative understanding developed by Searle and Habermas are based on inferences from the contingent individual case to the general. Searle draws ontological conclusions, Habermas anthropo-political ones, both with essentially naturalistic signatures. Derrida, on the other hand, raises epistemological objections and consequently develops a metaphysics of free subjects for whom conversation cannot necessarily be presumed. The explicit dedication to ethics in Derrida's late work is due to his insight that the possibility of language and understanding is due to silence. Derrida's lasting merit lies in enriching the philosophy of language with a secretology. This study has been awarded the Kant Prize of the Institute of Philosophy of the University of Bonn and the "Prix de la République Française", awarded by the French Embassy and the University of Bonn.


1970 ◽  
Vol 9 (1) ◽  
pp. 203-216
Author(s):  
Robert Janusz

The article is about the interaction between philosophy and informatics. The discussion is based on a complex example: a country, which is an evolving domain. In contemporary computer science, very complex systems are modeled. However, it would be impossible to model such systems in every detail, because the model would be as complex as the reality itself. Frequently, complex domains lack an exact description of their behavior: some have an inadequate description, some a contradictory one. To model such complex domains, a computer science specialist acts like a philosopher: making classifications, explanations, etc. On the other hand, there have to be some philosophical presuppositions: a conviction that logical analysis and design will work in the domain being modeled; a postulate is introduced that logos is able to capture the reality. The descriptions are continuously purified of irrational influences.


Author(s):  
B. PanduRanga Narasimharao

Tobias et al. (1995) postulated in their book Rethinking Science as a Career that Master's programs could produce graduates who provide the same level of expertise and leadership as professionals in other fields. They would do so by having the ability to use the products of scholarship in their work and by being familiar with the practical aspects of emerging problem areas. If we consider the natural sciences as consisting of the physical sciences, biological sciences, mathematics, geosciences, and computer science, degrees in computer science and the geosciences have served as credentials for practice, whereas physics, chemistry, and the biological sciences have served as classical graduate education. Robbins-Roth (2006) collected 22 career descriptions for science graduates, ranging from public policy to investment banking and from patent examining to broadcast science journalism. There are several sectors of society in which the principles and knowledge of these science disciplines are used. On the other hand, many graduates in these disciplines are either working in areas completely unrelated to their education and training or are unemployable. The need to prepare science graduates professionally is well recognized (Schuster, 2011; Vanderford, 2010; Narasimharao, Shashidhara Prasad and Nair, 2011; Chuck, 2011).


Author(s):  
Betul C. Czerkawski

It has been more than a decade since Jeannette Wing's (2006) influential article about computational thinking (CT) proposed CT as a "fundamental skill for everyone" (p. 33) that needs to be added to every child's knowledge and skill set, like reading, writing, and arithmetic. Wing suggested that CT is a universal skill, not only for computer scientists. This call resonated with many educators, leading to various initiatives by the International Society for Technology in Education (ISTE) and the Computer Science Teachers Association (CSTA) that provided the groundwork to integrate CT into the K-12 curriculum. While CT is not a new concept and has been taught in computer science departments for decades, Wing's call created a shift towards educational computing and the need to integrate it into the curriculum for all. Since 2006, many scholars have conducted empirical or qualitative research to study the what, how, and why of CT. This chapter reviews the most current literature and identifies general research patterns, themes, and directions for the future. The purpose of the chapter is to emphasize future research needs by looking cumulatively at what has been done to date in computational thinking research. Consequently, the conclusion and discussion section of the paper presents a research agenda for future work.


Author(s):  
Gokce Akcayir ◽  
Zhaorui Chen ◽  
Carrie Demmans Epp ◽  
Velian Pandeliev ◽  
Cosmin Munteanu

In this chapter, two cases that include computer science (CS) instructors' integration of an online discussion platform (Piazza) into their courses were examined. More specifically, the instructors' perspectives and role in these cases were explored to gain insight that might enable further improvements. Employing a mixed methods research design, these cases were investigated with text mining and qualitative data analysis techniques with regard to instructors' integration strategies and students' reactions to them. The results of the study showed that among these cases, one entailed a deep integration (Case 1) and the other a shallow one (Case 2). Instructors' presence and guidance through their posting behaviors had a bigger effect than the nature of the course content. Additionally, TA support in online discussions helped address the limitations of the asynchronous discussion when the TAs had the maturity to only respond to questions for which they were adequately prepared.

