13 Keep Your Cool: Navigating Reader Reports, Contracts, and Other Decision Points

2021 ◽  
pp. 122-134

2002 ◽  
Vol 2 (3) ◽  
pp. 135-141
Author(s):  
Susan L. Hendrix ◽  
Richard Derman ◽  
Richard T. Kloos

2016 ◽  
Vol 26 ◽  
pp. 7-83 ◽  
Author(s):  
Bryan J. Found ◽  
Carolyne Bird

Overview: This document provides a summary of a practical method that can be used to compare handwriting (whether text-based or signatures) in the forensic environment. It is intended to serve as an approach to forensic handwriting examination for practitioners actively involved in casework, or for those interested in investigating general aspects of the practice (for example, researchers, academics and legal professionals). The method proposed does not cover in detail all aspects of the examination of handwriting. It does, however, form the framework of forensic handwriting methodology in the government environment in Australia and New Zealand, as represented by the Document Examination Specialist Advisory Group (DocSAG). It is noted from the outset that handwriting is examined using complex human perceptual and cognitive processes that are difficult to describe accurately and validly in written form since, for the most part, these processes are hidden. What is presented here is the agreed general approach that DocSAG practitioners use in the majority of the comparisons they carry out. The method is based around a flow diagram which structures the comparison process and guides the reader through the significant landmark stages commonly worked through in practical handwriting examinations. Where decision points occur within the flow diagram, a series of modules has been developed which describe the nature of the decision under consideration and address relevant theoretical and practical issues. Each module is, as far as is practical, independent of the other modules in the method. This facilitates changes to the process over time that may result from theoretical, practical or technological advances in the field.
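By way of illustration only, the sketch below shows one way such a modular, flow-diagram-style process could be represented in code, with each decision point as an independent module that can be swapped out without touching the others. The module names, case features and outcome labels are hypothetical and are not DocSAG's actual decision categories.

```python
# Hedged sketch: a flow of independent decision modules, each representing one
# decision point in a comparison process. All names and thresholds are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DecisionModule:
    name: str
    decide: Callable[[dict], str]  # takes case features, returns a decision label

def check_comparability(case: dict) -> str:
    # Hypothetical gate: is there enough specimen material to compare at all?
    return "proceed" if case.get("sufficient_specimen") else "inconclusive"

def compare_features(case: dict) -> str:
    # Hypothetical comparison step based on an illustrative overlap score.
    return "similar" if case.get("feature_overlap", 0.0) > 0.7 else "different"

PIPELINE = [
    DecisionModule("comparability", check_comparability),
    DecisionModule("feature comparison", compare_features),
]

def run(case: dict) -> list[tuple[str, str]]:
    results = []
    for module in PIPELINE:
        outcome = module.decide(case)
        results.append((module.name, outcome))
        if outcome == "inconclusive":
            break  # modules are independent, so the flow can stop early
    return results

print(run({"sufficient_specimen": True, "feature_overlap": 0.85}))
```

Because each module is self-contained, replacing one decision step (for example, after a methodological advance) does not require changes elsewhere in the pipeline, mirroring the independence the abstract describes.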


Author(s):  
Kathryn E. Kanzler ◽  
Donald D. McGeary ◽  
Cindy McGeary ◽  
Abby E. Blankenship ◽  
Stacey Young-McCaughan ◽  
...  

Author(s):  
Haoyang Meng ◽  
Sheng Dong ◽  
Jibiao Zhou ◽  
Shuichao Zhang ◽  
Zhenjiang Li

The green flashing light (FG) and the green countdown (GC) are the two most common signal formats used in the green-to-red transition, giving drivers additional warning before the green phase ends. Because of their role in the stop-pass decision-making process, their proper use is critical to the safety and efficiency of signalized intersections. E-bike riders have gradually become an important group of commuters in China; however, the influence of FG or GC on their behavior is not yet clear and deserves more attention. This study selects two nearly identical intersections and collects high-accuracy trajectory data of e-bike riders to study their decision-making behavior under FG and GC. The riders' behavior is classified into four categories, and their stop-pass decision points are identified from the acceleration trend. Two binary-logit models were built to predict the stop-pass decisions of the different e-bike rider groups, showing that the potential time to the stop-line is the dominant explanatory factor behind the behavioral differences under GC and FG. Furthermore, empirical analysis of the decision points indicates that GC provides an earlier stop-pass decision point and a longer decision-making duration, but it also results in more complex decision making and a greater risk of stop-line crossing than FG.
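As a rough illustration of the kind of binary-logit model described, the sketch below fits a logistic stop-pass model with statsmodels. The predictors (potential time to the stop-line, approach speed, signal type) mirror the abstract, but the column names and the synthetic data are assumptions, not the study's dataset.

```python
# Hedged sketch: a binary-logit stop/pass model. Synthetic, illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "time_to_stopline": rng.uniform(0.5, 8.0, n),  # potential time to the stop-line (s)
    "speed": rng.uniform(3.0, 9.0, n),             # approach speed (m/s)
    "signal_gc": rng.integers(0, 2, n),            # 1 = green countdown, 0 = flashing green
})

# Synthetic outcome: riders far from the stop-line tend to stop (0), close ones pass (1)
logit_true = 2.0 - 0.6 * df["time_to_stopline"] + 0.2 * df["speed"] - 0.3 * df["signal_gc"]
df["pass"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_true))).astype(int)

# Fit the binary-logit model and inspect the estimated coefficients
X = sm.add_constant(df[["time_to_stopline", "speed", "signal_gc"]])
model = sm.Logit(df["pass"], X).fit(disp=False)
print(model.summary())
```

In a fitted model of this form, the sign and magnitude of the time-to-stop-line coefficient would quantify how strongly that variable drives the stop-pass decision, which is the role the abstract attributes to it.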


2021 ◽  
Vol 35 (2) ◽  
pp. 621-659
Author(s):  
Lewis Hammond ◽  
Vaishak Belle

Moral responsibility is a major concern in autonomous systems, with applications ranging from self-driving cars to kidney exchanges. Although there have been recent attempts to formalise responsibility and blame, among similar notions, the problem of learning within these formalisms has been unaddressed. From the viewpoint of such systems, the urgent questions are: (a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data? (b) How can judgements be computed effectively and efficiently, given the split-second decision points faced by some systems? By building on constrained tractable probabilistic learning, we propose and implement a hybrid (between data-driven and rule-based methods) learning framework for inducing models of such scenarios automatically from data and reasoning tractably from them. We report on experiments that compare our system with human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.
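For intuition only, the sketch below computes a simple degree-of-blameworthiness score from outcome probabilities under alternative actions, following the max(0, P(bad | a) - P(bad | a')) form due to Halpern and Kleiman-Weiner. Whether this is precisely the measure used by the authors' framework, and the example probabilities, are assumptions.

```python
# Hedged sketch: blame for an action relative to the best available alternative.
# The probabilities below are purely illustrative, not learned from data.
def blameworthiness(p_bad_given_action: dict[str, float], action: str) -> float:
    """Blame for `action`: how much more likely the bad outcome is under it
    than under the best alternative action (floored at zero)."""
    best_alternative = min(p for a, p in p_bad_given_action.items() if a != action)
    return max(0.0, p_bad_given_action[action] - best_alternative)

# Toy trolley-style scenario: probability of a bad outcome under each action
p_bad = {"do_nothing": 0.9, "pull_lever": 0.2}
print(blameworthiness(p_bad, "do_nothing"))  # 0.7 -> considerable blame
print(blameworthiness(p_bad, "pull_lever"))  # 0.0 -> no blame
```

In a system like the one described, the outcome probabilities would come from a tractable probabilistic model learned from data, so a score of this kind could be evaluated quickly enough for split-second decision points.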


2021 ◽  
Vol 5 (1) ◽  
pp. e100126
Author(s):  
Natasha A Karp ◽  
Derek Fry

Within preclinical research, attention has focused on experimental design and how current practices can lead to poor reproducibility. There are numerous decision points when designing experiments. Ethically, when working with animals, we need to conduct a harm–benefit analysis to ensure the animal use is justified by the scientific gain. Experiments should be robust, should not use more or fewer animals than necessary, and should truly add to the knowledge base of science. Using case studies to explore these decision points, we consider how individual experiments can be designed in several different ways. We use the Experimental Design Assistant (EDA) graphical summary of each experiment to visualise the design differences, and then consider the strengths and weaknesses of each design. Through this format, we explore key and topical experimental design issues such as pseudo-replication, blocking, covariates, sex bias, inference space, the standardisation fallacy and factorial designs. There are numerous articles discussing these critical issues in the literature, but here we bring the topics together and explore them using real-world examples, allowing the implications of the choice of design to be considered. Fundamentally, there is no perfect experiment; choices must be made, and they will have an impact on the conclusions that can be drawn. We need to understand the limitations of an experiment's design and, when we report the experiments, share the caveats that inherently exist.
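To make two of these design concepts concrete, the sketch below analyses a small two-factor (factorial) design with a blocking factor using statsmodels. The factors (treatment, sex), the block variable (cage) and the simulated data are illustrative assumptions, not an EDA output or an experiment from the article.

```python
# Hedged sketch: factorial design (treatment x sex) analysed with cage as a block.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
treatment = np.repeat(["control", "drug"], 24)          # 48 animals, two treatment arms
sex = np.tile(np.repeat(["F", "M"], 12), 2)              # both sexes within each arm
cage = np.tile(np.arange(6), 8)                          # blocking factor: housing cage

# Simulated outcome with treatment and sex effects plus cage-level noise
outcome = (
    10.0
    + 1.5 * (treatment == "drug")
    + 0.8 * (sex == "M")
    + 0.5 * rng.normal(0, 1, 6)[cage]
    + rng.normal(0, 1, 48)
)
df = pd.DataFrame({"treatment": treatment, "sex": sex, "cage": cage, "outcome": outcome})

# Factorial model with an interaction term, fitting cage as a block
model = smf.ols("outcome ~ C(treatment) * C(sex) + C(cage)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Fitting the block explicitly removes cage-to-cage variation from the error term, while the interaction term tests whether the treatment effect differs between sexes; both choices change what the experiment can and cannot conclude, which is exactly the kind of trade-off the case studies examine.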


Author(s):  
Cris M. Sullivan ◽  
Danielle Chiaramonte ◽  
Gabriela López‐Zerón ◽  
Katie Gregory ◽  
Linda Olsen
