Using directional bit sequences to reveal the property-liability underwriting cycle as an algorithmic process

2017 ◽  
Vol 6 (1-2) ◽  
pp. 3-21
Author(s):  
Joseph D. Haley
2022 ◽  
Author(s):  
Paul Bloom ◽  
Laurie Paul

Some decision-making processes are uncomfortable. Many of us do not like to make significant decisions, such as whether to have a child, solely on the basis of social science research. We do not like to choose randomly, even in cases where flipping a coin is plainly the wisest choice. We are often reluctant to defer to another person, even if we believe that the other person is wiser, and we have similar reservations about appealing to powerful algorithms. And while we are comfortable considering and weighing different options, there is something strange about deciding solely by a purely algorithmic process, even one that takes place in our own heads.

What is the source of our discomfort? We do not present a decisive theory here, and indeed the authors have clashing views over some of these issues, but we lay out the arguments for two (mutually consistent) explanations. The first is that such impersonal decision-making processes are felt to be a threat to our autonomy: in all of the examples above, it is not you who is making the decision but someone or something else. This contrasts with personal decision making, where, to put it colloquially, you "own" your decision, though of course you may be informed by social science data, the recommendations of others, and so on. A second possibility is that such impersonal decision-making processes are not seen as authentic, where an authentic decision is one in which you intentionally and knowledgeably choose an option in a way that is "true to yourself." Such decision making can be particularly important in contexts where one is making a life-changing decision of great import, such as the choice to emigrate, start a family, or embark on a major career change.


2017 ◽  
Vol 29 (2) ◽  
pp. 329-345 ◽  
Author(s):  
Aureliano M. Robles-Pérez ◽  
José Carlos Rosales

Let $\mathbb{N}$ be the set of nonnegative integers. A problem about how to transport an organized group of persons profitably leads us to study the set $T$ formed by the integers $n$ such that the system of inequalities with nonnegative integer coefficients $a_{1}x_{1}+\cdots+a_{p}x_{p} < n < b_{1}x_{1}+\cdots+b_{p}x_{p}$ has at least one solution in $\mathbb{N}^{p}$. We will see that $T\cup\{0\}$ is a numerical semigroup. Moreover, we will show that a numerical semigroup $S$ can be obtained in this way if and only if $\{a+b-1,\,a+b+1\}\subseteq S$ for all $a,b\in S\setminus\{0\}$. In addition, we will demonstrate that such numerical semigroups form a Frobenius variety, and we will study this variety. Finally, we present an algorithmic process to compute $T$.
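The set $T$ in the abstract can be enumerated up to a cap by brute force over a bounded range of solution vectors $x$. The following Python sketch is our own illustration, not the paper's algorithm; the function name and the `x_max` search bound are assumptions, and the bound must be chosen large enough for the cap `n_max` (finiteness of the search assumes positive coefficients).

```python
from itertools import product

def elements_of_T(a, b, n_max, x_max=50):
    """Enumerate T up to n_max, where T = {n : a.x < n < b.x for some x in N^p}.
    x_max bounds the brute-force search over x; it is a heuristic default,
    not a value from the paper."""
    p = len(a)
    T = set()
    for x in product(range(x_max + 1), repeat=p):
        lo = sum(ai * xi for ai, xi in zip(a, x))   # a.x, strict lower bound on n
        hi = sum(bi * xi for bi, xi in zip(b, x))   # b.x, strict upper bound on n
        T.update(range(lo + 1, min(hi, n_max + 1)))  # all n with lo < n < hi, n <= n_max
    return sorted(T)
```

For example, with $p=1$, $a=(2)$ and $b=(3)$, the computed set begins $\{5, 7, 8, 9, \dots\}$, and one can check on this sample that $T\cup\{0\}$ is closed under addition, consistent with the paper's claim that it is a numerical semigroup.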


2018 ◽  
pp. 458-472
Author(s):  
Michael Johansson

This article presents and discusses the design thinking, methods, processes, and examples of work that demonstrate how, together with different co-creators, one sets up a work practice using digital 3D objects and images. In different ways and formats, this practice helps us explore how a database, understood as a set of rules, can be used in dialogue with an artistic work practice, and how such a process can be used to create images and animation in a variety of design and art projects. The main example is a project called Conversation China, still in the making, which involves rather complex processes combining several digital and analogue techniques as the basis for creating the images for a 150-piece porcelain dinner set. The author's interest in this work is how the intention of the artist or designer is transferred and later embedded in the procedural or algorithmic process, and how this intent is organized and set up to secure a desired outcome, mixing the possibilities of the digital media object with manual editing and artistic craftsmanship. What this article puts forward is how we designed and set up environments for working with non-linear and procedural media, in their different expressions and forms, using explorable prototypes and design thinking.


2020 ◽  
Vol 26 (2) ◽  
pp. 274-306 ◽  
Author(s):  
Joel Lehman ◽  
Jeff Clune ◽  
Dusan Misevic ◽  
Christoph Adami ◽  
Lee Altenberg ◽  
...  

Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpected adaptations, or engaged in behaviors and produced outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead, they are often treated as mere obstacles to be overcome rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This article is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories.
In doing so, we also present substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.


RMD Open ◽  
2020 ◽  
Vol 6 (3) ◽  
pp. e001297 ◽  
Author(s):  
Alwin Sebastian ◽  
Alessandro Tomelleri ◽  
Abdul Kayani ◽  
Diana Prieto-Pena ◽  
Chavini Ranasinghe ◽  
...  

Objectives: Clinical presentations of giant cell arteritis (GCA) are protean, and it is vital to make a secure diagnosis and exclude mimics in urgent referrals with suspected GCA. The main objective was to develop a joined-up, end-to-end, fast-track confirmatory/exclusionary algorithmic process based on a probability-score triage to drive subsequent investigations with ultrasound (US) and any appropriate additional tests as required.

Methods: The algorithm was initiated by stratifying patients into a low-risk category (LRC), intermediate-risk category (IRC) and high-risk category (HRC). Retrospective data were extracted from case records. The Southend pretest probability score (PTPS) showed an overall median score of 9 and a 75th percentile score of 12. We therefore classified LRC as PTPS <9, IRC as 9–12 and HRC as >12. GCA diagnosis was made by a combination of clinical, US and laboratory findings. The algorithm was assessed in all referrals seen in 2018–2019 to test the diagnostic performance of US overall and in individual categories.

Results: Of 354 referrals, 89 had GCA, with cases categorised as LRC (151), IRC (137) and HRC (66). 250 had US, whereas 104 did not (score <7, and/or high probability of alternative diagnoses). In the HRC, US showed sensitivity 94%, specificity 85%, accuracy 92% and GCA prevalence 80%. In the LRC, US showed sensitivity undefined (0/0), specificity 98%, accuracy 98% and GCA prevalence 0%. In the IRC, US showed sensitivity 100%, specificity 97%, accuracy 98% and GCA prevalence 26%. In the total population, US showed sensitivity 97%, specificity 97% and accuracy 97%. Prevalence of GCA overall was 25%.

Conclusions: The Southend PTPS successfully stratifies fast-track clinic referrals and excludes mimics. The algorithm interprets US in context, clarifies the diagnostic approach and identifies uncertainty, the need for re-evaluation and alternative tests. Test performance of US is significantly enhanced by the PTPS.
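The stratification step of the algorithm reduces to simple cut-offs on the Southend pretest probability score (the score's clinical items are not reproduced here). A minimal sketch of the triage, using the thresholds reported in the abstract; the function name and category labels are our own illustration:

```python
def triage(ptps_score):
    """Stratify a fast-track GCA referral by Southend PTPS.
    Cut-offs as reported: <9 low risk, 9-12 intermediate, >12 high."""
    if ptps_score < 9:
        return "low-risk"
    elif ptps_score <= 12:
        return "intermediate-risk"
    return "high-risk"
```

In the study, this stratification then determined how the ultrasound result was interpreted in context (e.g., US adds little in the low-risk group, where GCA prevalence was 0%).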


1986 ◽  
Vol 53 (1) ◽  
pp. 155 ◽  
Author(s):  
LeRoy F. Simmons ◽  
Mark L. Cross

2014 ◽  
Vol 1 (1) ◽  
Author(s):  
R. Jordan Crouser ◽  
Benjamin Hescott ◽  
Remco Chang

This paper introduces the Human Oracle Model as a method for characterizing and quantifying the use of human processing power as part of an algorithmic process. The utility of this model is demonstrated through a comparative algorithmic analysis of several well-known human computation systems, as well as a preliminary characterization of the space of human computation under this model. Through this research, we hope to gain insight into the challenges unique to human computation and direct the search for efficient human computation algorithms.
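The paper's formal definitions are not reproduced in this abstract, but the core idea, charging an algorithm for each query it makes to a human, can be sketched as follows. The class and function names are our own toy illustration, not the paper's model or API:

```python
class HumanOracle:
    """Stand-in for a human answerer that counts how often it is queried,
    so an algorithm's use of human processing power can be quantified.
    (An illustrative toy, not the paper's formal model.)"""

    def __init__(self, answer_fn):
        self.answer_fn = answer_fn  # simulates the human's answers
        self.queries = 0

    def ask(self, *args):
        self.queries += 1
        return self.answer_fn(*args)


def oracle_sort(items, oracle):
    """Insertion sort in which every comparison is delegated to the oracle;
    oracle.queries afterwards measures the 'human cost' of the run."""
    out = []
    for item in items:
        i = 0
        # ask(prev, item) == True means prev should come before item
        while i < len(out) and oracle.ask(out[i], item):
            i += 1
        out.insert(i, item)
    return out
```

Under this framing, two algorithms that produce the same output can be compared by how many oracle queries they consume, which is the kind of comparative analysis the paper performs on human computation systems.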


2021 ◽  
Vol 332 ◽  
pp. 01022
Author(s):  
Tomasz Kałaczyński ◽  
Valeriy Martynyuk ◽  
Juliy Boiko ◽  
Sergiy Matyukh ◽  
Svitlana Petrashchuk

Knowledge of the technical state and construction of the hydraulic and pneumatic control system of the Multimedia Hybrid Mobile Stage (HMSM) makes it possible to identify hazards and carry out a risk assessment. The aspects of the algorithmic process of diagnosing pneumatic and hydraulic control systems presented in this work form the basis for developing and implementing safe solutions during operation. The article describes a methodology for diagnosing the hydraulic and industrial pneumatic systems of the HMSM machine, detailing the factors that determine the correctness of maintaining functional fitness. The considerations developed illustrate the complexity of designing diagnostic tests for condition assessment.


2015 ◽  
Vol 1 ◽  
pp. e23 ◽  
Author(s):  
Hector Zenil ◽  
Fernando Soler-Toscano ◽  
Jean-Paul Delahaye ◽  
Nicolas Gauvrit

We propose a measure based upon algorithmic probability, a fundamental theoretical concept in algorithmic information theory, that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking, generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules and producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimations of the complexity of the generated patterns. Experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that the results agree with those obtained using lossless compression algorithms where the two methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of elementary cellular automata.
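As a toy illustration of the coding-theorem idea behind this measure (frequently produced patterns are assigned low algorithmic complexity), one can replace the paper's ensembles of n-dimensional Turing machines with the much smaller ensemble of all 256 elementary cellular automaton rules. This Python sketch is our own simplification, not the authors' method; the width and step counts are arbitrary assumptions.

```python
import math
from collections import Counter

def eca_row(rule, width=11, steps=5):
    """Evolve elementary cellular automaton `rule` (Wolfram numbering,
    periodic boundary) from a single centre cell; return the final row."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = [(rule >> ((cells[(i - 1) % width] << 2)
                           | (cells[i] << 1)
                           | cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return "".join(map(str, cells))

# Output frequency over the whole 256-rule ensemble plays the role of
# algorithmic probability; the coding theorem then assigns complexity
# K(s) ~ -log2 D(s), so frequently produced patterns count as simple.
counts = Counter(eca_row(r) for r in range(256))
total = sum(counts.values())
K = {s: -math.log2(c / total) for s, c in counts.items()}
```

Patterns produced by many rules, such as the all-zero row that results whenever a lone cell dies out, receive the lowest complexity estimates, echoing the frequency-simplicity link the abstract describes.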

