Managing exactness and vagueness in computer science work: Programming and self-repair in meetings

2021 ◽  
pp. 030631272110109
Author(s):  
Ole Pütz

The formulation of computer algorithms requires the elimination of vagueness. This elimination of vagueness requires exactness in programming, and this exactness can be traced to meeting talk, where it intersects with the indexicality of expressions. This article is concerned with sequences in which a team of computer scientists discuss the functionality of prototypes that are already implemented or possibly to be implemented. The analysis focuses on self-repair because this is a practice where participants can be seen to orient to meanings of different expressions as alternatives. By using self-repair, the computer scientists show a concern with exact descriptions when they talk about existing functionality of their prototypes but not when they talk about potential future functionality. Instead, when participants talk about potential future functionality and attend to meanings during self-repair, they use vague expressions to indicate possibilities. Furthermore, when the computer scientists talk to external stakeholders, they indicate through hedges whenever their descriptions approximate already implemented technical functionality but do not describe it exactly. The article considers whether the code of working prototypes can be said to fix meanings of expressions and how we may account for human agency and non-human resistances during development.

Author(s):  
Betul C. Czerkawski

It has been more than a decade since Jeannette Wing's (2006) influential article about computational thinking (CT) proposed that CT is a "fundamental skill for everyone" (p. 33) that needs to be added to every child's knowledge and skill set alongside reading, writing, and arithmetic. Wing suggested that CT is a universal skill, not one only for computer scientists. This call resonated with many educators, leading to initiatives by the International Society for Technology in Education (ISTE) and the Computer Science Teachers Association (CSTA) that provided the groundwork for integrating CT into the K-12 curriculum. While CT is not a new concept and has been taught in computer science departments for decades, Wing's call created a shift towards educational computing and the need to integrate CT into the curriculum for all. Since 2006, many scholars have conducted empirical or qualitative research to study the what, how, and why of CT. This chapter reviews the most current literature and identifies general research patterns, themes, and directions for the future. The purpose of the chapter is to emphasize future research needs by looking cumulatively at what has been done to date in computational thinking research. Accordingly, the conclusion and discussion section of the chapter presents a research agenda for future work.


2015 ◽  
pp. 918-933
Author(s):  
Eric P. Jiang

With the rapid growth of the Internet and telecommunication networks, computer technology has been a driving force in global economic development and in advancing many areas in science, engineering, health care, business, and finance that carry significant impacts on people and society. As a primary source for producing the workforce of software engineers, computer scientists, and information technology specialists, computer science education plays a particularly important role in modern economic growth, and it has been invested in heavily in many countries around the world. This chapter provides a comparative study of undergraduate computer science programs in China and the United States. The study focuses on the current curricula of computer science programs. It is based in part on the author's direct observations from recent visits to several universities in China and on conversations with administrators and faculty of the computer science programs at those universities. It is also based on the author's more than two decades of experience as a computer science educator at several public and private American institutions of higher education. The education systems in China and the United States have different features, and each system has its strengths and weaknesses. This is likely also true for education systems in other countries. It would be an interesting and important task to explore an innovative computer science education program that blends the best features of different systems and helps better prepare graduates for the challenges of working in an increasingly globalized world. We hope the study presented in this chapter provides some useful insights in this direction.


2021 ◽  
pp. 127-132
Author(s):  
Simone Natale

The historical trajectory examined in this book demonstrates that humans’ reactions to machines that are programmed to simulate intelligent behaviors represent a constitutive element of what is commonly called AI. Artificial intelligence technologies are not just designed to interact with human users: they are designed to fit specific characteristics of the ways users perceive and navigate the external world. Communicative AI becomes more effective not only by evolving from a technical standpoint but also by profiting, through the dynamics of banal deception, from the social meanings humans project onto situations and things. In this conclusion, the risks and problems related to AI’s banal deception are explored in relationship with other AI-based technologies such as robotics and social media bots. A call is made for initiating a more serious debate about the role of deception in interface design and computer science. The book concludes with a reflection on the need to develop a critical and skeptical stance in interactions with computing technologies and AI. In order not to be found unprepared for the challenges posed by AI, computer scientists, software developers, designers as well as users have to consider and critically interrogate the potential outcomes of banal deception.


2015 ◽  
Vol 17 (4) ◽  
pp. 20-28
Author(s):  
Flávio Luiz Schiavoni ◽  
Leandro Costalonga

Ubimus is a research field that merges ubiquitous computing (Ubicomp) and music, studying the influence of ubiquitous devices and applications on music. The field has been explored by musicians and social scientists around the world, supported by countless computer scientists. Nevertheless, it is not easy for a novice computer scientist to understand Ubimus concepts, and especially how to take part in this research field. Motivated by this, the authors present in this paper a view of Ubimus that relates it to fields in computer science, along with hardware and software definitions and suggestions to be explored within Ubimus.


2016 ◽  
Vol 71 (9) ◽  
pp. 869-872
Author(s):  
Adrian Sfarti

Abstract. We investigate the reflection of massive particles from moving mirrors. Adopting a formalism based on energy-momentum allows us to derive the most general set of formulas, valid for massive and, in the limit, also for massless particles. We show that the momentum change of the reflecting particle always lies along the normal to the mirror, independent of the mirror speed. The subject is interesting not only to physicists designing concentrators for beams of massive particles and electron microscopes but also to computer scientists working on ray tracing in the photon sector. The paper, far from being only theoretical, has profound and novel practical applications in both engineering design and computer science.
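The stationary-mirror limit of the result stated above can be sketched numerically. This is an illustrative example only, not the paper's relativistic derivation: for a mirror at rest with unit normal n, specular reflection gives p' = p - 2(p·n)n, so the momentum change dp = p' - p is parallel to n.

```python
import numpy as np

def reflect_momentum(p, n):
    """Specular reflection of momentum p off a stationary mirror
    with normal n: p' = p - 2 (p . n) n."""
    n = n / np.linalg.norm(n)  # ensure the normal is a unit vector
    return p - 2.0 * np.dot(p, n) * n

p = np.array([3.0, -4.0, 0.0])   # incoming momentum
n = np.array([0.0, 1.0, 0.0])    # mirror normal
p_out = reflect_momentum(p, n)

# The momentum change lies entirely along the mirror normal:
dp = p_out - p
```

For a moving mirror, the magnitude of dp along the normal changes with mirror speed, but (as the paper shows) its direction remains normal to the mirror.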


2020 ◽  
Author(s):  
Leticia Bode ◽  
Pamela Davis-Kean ◽  
Lisa Singh ◽  
Tanya Berger-Wolf ◽  
Ceren Budak ◽  
...  

Social media provides a rich amount of data on the everyday lives, opinions, thoughts, beliefs, and behaviors of individuals and organizations in near real-time. Leveraging these data effectively and responsibly should therefore improve our ability to understand political, psychological, economic, and sociological behaviors and opinions across time. This article is the first in a series of white papers that will provide a summary of the discussions derived from meetings of social scientists and computer scientists with the goal of creating consensus for how social and computer science could converge to answer important questions about complex human behaviors and dynamics using social media data. We present three basic research designs that are commonly used in social science and are applicable to research using social media data: qualitative observation, experiments, and surveys. We also discuss a fourth design that is primarily informed by computer science, non-designed data, but that can inform social science research. After a brief discussion of the general approach of these designs and their applicability for use with social media data, we discuss the challenges associated with their use with social media data and potential solutions for “convergence” of these methods for future quantitative research in the social sciences.


1990 ◽  
Vol 5 (1) ◽  
pp. 33-36
Author(s):  
Martin Loomes

Anyone who is involved in computer science education will be used to engaging in passionate debates over questions such as 'What programming language should we be teaching?' Moreover, if these debates take place in front of colleagues from other disciplines, for example when joint schemes are being developed, then concern is often expressed about the inability of computer scientists to come to any generally accepted conclusions. In this paper the view is proposed that the key questions of computer science education are really manifestations of a much deeper issue in computing, one which has been alluded to in various publications but never discussed to a generally accepted conclusion by the computer science community at large.


Author(s):  
Bruce A. Maxwell

Computer vision is a broad-based field of computer science that requires students to understand and integrate knowledge from numerous disciplines. Computer science (CS) majors, however, do not necessarily have an interdisciplinary background. In the rush to integrate, we can forget, or fail to plan for, the fact that our students may not possess a broad undergraduate education. To explore the appropriateness of our educational materials, this paper begins with a discussion of what we can expect CS majors to know and how we can use that knowledge to make a computer vision course a more enriching experience. The paper then provides a review of a number of the currently available computer vision textbooks. These texts differ significantly in their coverage, scope, approach, and audience. This comparative review shows that, while there are an increasing number of good textbooks available, there is still a need for new educational materials. In particular, the field would benefit both from an undergraduate computer vision text aimed at computer scientists and from a text with a stronger focus on color computer vision and its applications.


Author(s):  
Subrata Dasgupta

In 1962, Purdue University in West Lafayette, Indiana, in the United States opened a department of computer science with the mandate to offer master's and doctoral degrees in computer science. Two years later, the University of Manchester in England and the University of Toronto in Canada also established departments of computer science. These were the first universities in America, Britain, and Canada, respectively, to recognize a new academic reality formally: that there was a distinct discipline whose domain was the computer and the phenomenon of automatic computation. Thereafter, by the late 1960s, much as universities had sprung up all over Europe during the 12th and 13th centuries after the founding of the University of Bologna (circa 1150) and the University of Paris (circa 1200), independent departments of computer science sprouted across the academic maps of North America, Britain, and Europe. Not all the departments used computer science in their names; some preferred computing, some computing science, some computation. In Europe non-English terms such as informatique and informatik were used. But what was recognized was that the time had come to wean the phenomenon of computing away from mathematics and electrical engineering, the two most common academic "parents" of the field, and also from computer centers, which were in the business of offering computing services to university communities. A scientific identity of its very own was thus established. Practitioners of the field could call themselves computer scientists. This identity was shaped around a paradigm. As we have seen, the epicenter of this paradigm was the concept of the stored-program computer as theorized originally in von Neumann's EDVAC report of 1945 and realized physically in 1949 by the EDSAC and the Manchester Mark I machines (see Chapter 8). We have also seen the directions in which this paradigm radiated out in the next decade.
Most prominent among the refinements were the emergence of the historically and utterly original, Janus-faced, liminal artifacts called computer programs, and the languages—themselves abstract artifacts—invented to describe and communicate programs to both computers and other human beings.
