Studying the Publication Pattern of Canadian Computer Scientists / Étude des pratiques de publication des scientifiques canadiens en informatique

2015 ◽  
Vol 39 (1) ◽  
pp. 60-78
Author(s):  
Li Zhang
1998 ◽  
Vol 10 (1-3) ◽  
pp. 124-131
Author(s):  
Lynette Hunter

This essay focusses on the central intellectual challenges of hypermedia: the extent to which hypermedia is still at an authoritative stage, its rhetoric as yet undeveloped, its forms of publication in their infancy and, above all, its focus mainly on text. The challenge of accepting the implications of textuality seems to lie with the computer scientists. Can they take cognisance of discourse beyond the glass womb?


Author(s):  
Pierre-Loïc Garoche

The verification of control system software is critical to a host of technologies and industries, from aeronautics and medical technology to the cars we drive. The failure of controller software can cost people their lives. This book provides control engineers and computer scientists with an introduction to the formal techniques for analyzing and verifying this important class of software. Too often, control engineers are unaware of the issues surrounding the verification of software, while computer scientists tend to be unfamiliar with the specificities of controller software. The book provides a unified approach that is geared to graduate students in both fields, covering formal verification methods as well as the design and verification of controllers. It presents a wealth of new verification techniques for performing exhaustive analysis of controller software. These include new means to compute nonlinear invariants, the use of convex optimization tools, and methods for dealing with numerical imprecisions such as floating point computations occurring in the analyzed software. As the autonomy of critical systems continues to increase—as evidenced by autonomous cars, drones, satellites, and landers—the numerical functions in these systems are growing ever more advanced. The techniques presented here are essential to support the formal analysis of the controller software being used in these new and emerging technologies.
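To make the flavor of such analyses concrete, here is a minimal sketch (ours, not the book's) of one building block this literature discusses: checking that a candidate quadratic invariant is preserved by a linear closed-loop system. The matrices `A` and `P` below are illustrative assumptions, not examples from the book.

```python
# Minimal sketch: is the ellipsoid {x : x^T P x <= 1} an invariant of
# the closed-loop dynamics x_{k+1} = A x_k?  It is exactly when
# A^T P A - P is negative semidefinite.
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 0.8]])   # assumed (stable) closed-loop dynamics
P = np.array([[1.5, 0.2],
              [0.2, 1.0]])   # candidate quadratic invariant

def preserves_invariant(A, P, tol=1e-9):
    """True if x^T P x <= 1 implies (A x)^T P (A x) <= 1."""
    M = A.T @ P @ A - P              # must be negative semidefinite
    return bool(np.all(np.linalg.eigvalsh(M) <= tol))

print(preserves_invariant(A, P))     # True for these matrices
```

In the techniques the book surveys, such a matrix P is produced by convex optimization (a semidefinite program) rather than guessed, and a full analysis must also bound the floating-point rounding error that this sketch ignores.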


2020 ◽  
Author(s):  
Jess Sullivan ◽  
Michelle Mei ◽  
Andrew Perfors ◽  
Erica H Wojcik ◽  
Michael C. Frank

We introduce a new resource: the SAYCam corpus. Infants aged 6-32 months wore a head-mounted camera for approximately 2 hours per week over the course of approximately two and a half years. The result is a large, naturalistic, longitudinal dataset of infant- and child-perspective videos. Over 200,000 words of naturalistic speech have already been transcribed, and the dataset is searchable using a number of criteria (e.g., age of participant, location, setting, objects present). The resulting dataset will be of broad use to psychologists, linguists, and computer scientists.
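As a purely hypothetical illustration of that kind of metadata search, the sketch below filters a toy table; the column names (`age_months`, `location`, `objects_present`) are invented for the example and are not the corpus's actual schema.

```python
# Hypothetical sketch of querying SAYCam-style clip metadata.
import pandas as pd

metadata = pd.DataFrame([
    {"clip_id": "c001", "age_months": 8,  "location": "kitchen",     "objects_present": ["bottle", "spoon"]},
    {"clip_id": "c002", "age_months": 14, "location": "living room", "objects_present": ["ball", "book"]},
    {"clip_id": "c003", "age_months": 25, "location": "kitchen",     "objects_present": ["cup"]},
])

# Select kitchen clips recorded before 18 months of age.
hits = metadata[(metadata["location"] == "kitchen") & (metadata["age_months"] < 18)]
print(hits["clip_id"].tolist())   # ['c001']
```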


2018 ◽  
Author(s):  
Jordan Carlson ◽  
J. Aaron Hipp ◽  
Jacqueline Kerr ◽  
Todd Horowitz ◽  
David Berrigan

BACKGROUND: Image-based data collection for obesity research is in its infancy.
OBJECTIVE: The present study aimed to document challenges to and benefits from such research by capturing examples of research involving the use of images to assess physical activity- or nutrition-related behaviors and/or environments.
METHODS: Researchers (i.e., key informants) using image capture in their research were identified through the authors' knowledge and networks and through a literature search. Twenty-nine key informants completed a survey, developed specifically for this study, covering the type of research, the source of the images, and the challenges and benefits experienced.
RESULTS: Most respondents used still images in their research; only 26.7% used video. Image sources were categorized as participant generated (N = 13; e.g., participants using smartphones for dietary assessment), researcher generated (N = 10; e.g., wearable cameras with automatic image capture), or curated from third parties (N = 7; e.g., Google Street View). Two major challenges emerged: the need for automated processing of large datasets (58.8%) and participant recruitment/compliance (41.2%). Benefit-related themes included broader perspectives on obesity through increased data coverage (34.6%) and improved accuracy of behavior and environment assessment (34.6%).
CONCLUSIONS: Technological advances will support the increased use of images in the assessment of physical activity, nutrition behaviors, and environments. To advance this area of research, more effective collaborations are needed between health and computer scientists. In particular, the development of automated data extraction methods for diverse aspects of behavior, environment, and food characteristics is needed. Additionally, progress on standards for addressing ethical issues related to image capture for research purposes is critical.


Author(s):  
S. Lakshmivarahan ◽  
Sudarshan K. Dhall

The prefix operation on a set of data is one of the simplest and most useful building blocks in parallel algorithms. This introduction to those aspects of parallel programming and parallel algorithms that relate to the prefix problem emphasizes its use in a broad range of familiar and important problems. The book illustrates how the prefix operation approach to parallel computing leads to fast and efficient solutions to many different kinds of problems. Students, teachers, programmers, and computer scientists will want to read this clear exposition of an important approach.
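As a concrete illustration, here is a minimal sketch (ours, not the book's) of an inclusive prefix sum using the step-doubling scheme common in parallel formulations: at step d, every position i ≥ 2^d adds the value 2^d places to its left, so only O(log n) steps are needed. On a parallel machine each doubling step runs simultaneously across all positions; here the inner comprehension is serial.

```python
# Minimal sketch of the prefix (scan) operation via step doubling.
def inclusive_prefix_sum(xs):
    out = list(xs)
    d = 1
    while d < len(out):
        # Each position adds the partial sum d places to its left.
        out = [out[i] + (out[i - d] if i >= d else 0) for i in range(len(out))]
        d *= 2
    return out

print(inclusive_prefix_sum([3, 1, 4, 1, 5, 9, 2, 6]))
# [3, 4, 8, 9, 14, 23, 25, 31]
```

The same scheme works for any associative operator (max, logical OR, matrix product), which is what makes the prefix operation such a versatile building block.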


2021 ◽  
pp. 030631272110109
Author(s):  
Ole Pütz

The formulation of computer algorithms requires the elimination of vagueness. This elimination of vagueness requires exactness in programming, and this exactness can be traced to meeting talk, where it intersects with the indexicality of expressions. This article is concerned with sequences in which a team of computer scientists discuss the functionality of prototypes that are already implemented or possibly to be implemented. The analysis focuses on self-repair because this is a practice where participants can be seen to orient to meanings of different expressions as alternatives. By using self-repair, the computer scientists show a concern with exact descriptions when they talk about existing functionality of their prototypes but not when they talk about potential future functionality. Instead, when participants talk about potential future functionality and attend to meanings during self-repair, they use vague expressions to indicate possibilities. Furthermore, when the computer scientists talk to external stakeholders, they indicate through hedges whenever their descriptions approximate already implemented technical functionality but do not describe it exactly. The article considers whether the code of working prototypes can be said to fix meanings of expressions and how we may account for human agency and non-human resistances during development.


1998 ◽  
Vol 30 (1) ◽  
pp. 117-120 ◽  
Author(s):  
David G. Kay

Author(s):  
Thomas Taro Lennerfors ◽  
Mikael Laaksoharju ◽  
Matthew Davis ◽  
Peter Birch ◽  
Per Fors

Risks ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 4 ◽  
Author(s):  
Christopher Blier-Wong ◽  
Hélène Cossette ◽  
Luc Lamontagne ◽  
Etienne Marceau

In the past 25 years, computer scientists and statisticians have developed machine learning algorithms capable of modeling highly nonlinear transformations and interactions of input features. While actuaries frequently use generalized linear models (GLMs) in practice, only in the past few years have they begun studying these newer algorithms to tackle insurance-related tasks. In this work, we aim to review the applications of machine learning to the actuarial science field and present the current state of the art in ratemaking and reserving. We first give an overview of neural networks, then briefly outline applications of machine learning algorithms in actuarial science tasks. Finally, we summarize the future trends of machine learning for the insurance industry.
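For readers unfamiliar with the baseline being contrasted, the sketch below fits a Poisson GLM to synthetic claim counts. The data and the single rating factor are invented for illustration, and real ratemaking would use exposure offsets and far richer features; it assumes scikit-learn's `PoissonRegressor` is available.

```python
# Minimal sketch: a Poisson GLM for claim frequency on synthetic data.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
X = rng.uniform(18, 80, size=(500, 1))        # driver age as the sole rating factor
lam = np.exp(0.86 - 0.02 * X[:, 0])           # true frequency decays with age
y = rng.poisson(lam)                          # observed claim counts

glm = PoissonRegressor(alpha=0.0, max_iter=300).fit(X, y)
print(glm.intercept_, glm.coef_)              # recovers roughly (0.86, -0.02)
```

Neural networks replace the fixed log-linear form above with a learned nonlinear transformation of the features, which is the gain (and the interpretability cost) the abstract alludes to.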

