Just TYPEical: Visualizing Common Function Type Signatures in R

2020
Author(s):  
Cameron Moy ◽  
Julia Belyakova ◽  
Alexi Turcotte ◽  
Sara Di Bartolomeo ◽  
Cody Dunne

Data-driven approaches to programming language design are uncommon. Despite the availability of large code repositories, distilling semantically rich information from programs remains difficult. Important dimensions, like run-time type data, are inscrutable without the appropriate tools. We contribute a task abstraction and interactive visualization, TYPEical, for programming language designers who are exploring and analyzing type information from execution traces. Our approach aids user understanding of function type signatures across many executions. Insights derived from our visualization are aimed at informing language design decisions, specifically for a new gradual type system being developed for the R programming language. A copy of this paper, along with all the supplemental material, is available at osf.io/mc6zt.
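For intuition, the kind of run-time type data such execution traces contain can be gathered with a small wrapper. The sketch below is illustrative only: trace_types() is a hypothetical helper invented for this example, not TYPEical's actual instrumentation.

```r
# A minimal sketch of recording run-time function type signatures in R.
# trace_types() wraps a function and logs "arg types -> return type"
# for every call; the design is an assumption, not the paper's tooling.

trace_types <- function(f) {
  signatures <- character(0)
  wrapped <- function(...) {
    args <- list(...)
    result <- f(...)
    sig <- paste(
      paste(vapply(args, function(a) class(a)[1], character(1)), collapse = ", "),
      "->", class(result)[1]
    )
    signatures <<- c(signatures, sig)  # append this call's observed signature
    result
  }
  # Expose the distribution of observed signatures across all calls
  attr(wrapped, "signatures") <- function() table(signatures)
  wrapped
}

# Example: one function, several run-time type signatures
add <- trace_types(function(x, y) x + y)
add(1L, 2L)
add(1.5, 2.5)
add(TRUE, 1L)
attr(add, "signatures")()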

1990
Vol 19 (326)
Author(s):  
Ole Lehrmann Madsen ◽  
Boris Magnusson ◽  
Birger Møller-Pedersen

This paper is concerned with the relation between subtyping and subclassing and their influence on programming language design. Traditionally, subclassing as introduced by Simula has also been used to define a hierarchical type system. The type system of a language can be characterized as strong or weak, and the type-checking mechanism as static or dynamic. Parameterized classes in combination with a hierarchical type system are an example of a language construct known to create complicated type-checking situations. In this paper, these situations are analyzed and several different solutions are found. It is argued that an approach combining static and dynamic type checking gives a reasonable balance here as well. It is also concluded that this approach makes it possible to base the type system on the class/subclass mechanism.
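As a rough illustration of the kind of situation the paper analyzes, the S4 sketch below (not taken from the paper; all class and function names are invented) shows a parameterized container whose element class is a run-time value, so the class/subclass relation is verified dynamically on insertion rather than statically.

```r
# Illustrative only: a typed list whose element class is checked at run
# time, mirroring a combination of static and dynamic type checking.

library(methods)

setClass("Animal", slots = c(name = "character"))
setClass("Cat", contains = "Animal")  # subclassing doubles as subtyping

# elemClass is ordinary run-time data, so a static checker cannot rule
# out ill-typed insertions; is() performs the dynamic class/subclass test.
setClass("TypedList", slots = c(elemClass = "character", items = "list"))

insert <- function(tl, x) {
  if (!is(x, tl@elemClass)) {
    stop("type error: expected ", tl@elemClass, ", got ", class(x)[1])
  }
  tl@items <- c(tl@items, x)
  tl
}

cats <- new("TypedList", elemClass = "Cat", items = list())
cats <- insert(cats, new("Cat", name = "Tom"))   # accepted: Cat is a Cat
# insert(cats, new("Animal", name = "Rex"))      # rejected at run time
```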


1994
Vol 24 (1)
pp. 1-25
Author(s):  
Brigham Bell ◽  
Wayne Citrin ◽  
Clayton Lewis ◽  
John Rieman ◽  
Robert Weaver ◽  
...  

Author(s):  
Ramin Nabizadeh ◽  
Mostafa Hadei

Introduction: The wide range of studies on air pollution requires accurate and reliable datasets. However, for many reasons, measured concentrations may be incomplete or biased. An easy-to-use and reproducible exposure assessment method is therefore needed by researchers. In this article, we describe and present a series of codes written in the R programming language for handling, validating, and averaging PM10, PM2.5, and O3 datasets.

Findings: These codes can be used in any air pollution study that needs PM and ozone concentrations representative of real concentrations. We combined criteria from several guidelines proposed by the US EPA and the APHEKOM project to obtain an acceptable methodology. Separate .csv files for PM10, PM2.5, and O3 should be prepared as input. After a file is imported into R, negative and zero concentration values are first removed from the dataset. Then, only monitors with at least 75% of hourly concentrations available are retained. Finally, 24-h averages are calculated for PM, and daily maxima of 8-h moving averages are calculated for ozone. As output, the codes create two sets of data: one contains the hourly concentrations of the pollutant of interest (PM10, PM2.5, or O3) at the valid stations along with their city-level average; the other contains the final city-level 24-h averages for PM10 and PM2.5, or the final city-level daily maximum 8-h averages for O3.

Conclusion: These validated codes use a reliable and valid methodology and eliminate the possibility of incorrect data handling and averaging. The codes are free to use without limitation, requiring only citation of this article.
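For concreteness, a condensed sketch of the described workflow appears below. Since the article's actual code is not reproduced here, the column names (station, date, hour, conc), the file names, and the per-day reading of the 75% completeness criterion are assumptions made for illustration.

```r
# A minimal sketch of the described pipeline, assuming input .csv files
# with columns "station", "date" (YYYY-MM-DD), "hour" (0-23), and "conc".

library(dplyr)
library(zoo)  # rollapply() for 8-h moving averages

clean_pollutant <- function(file) {
  read.csv(file) %>%
    filter(conc > 0) %>%             # drop zero and negative values
    group_by(station, date) %>%
    filter(n() >= 18) %>%            # keep station-days with >= 75% of 24 hourly values
    ungroup()
}

# PM10 / PM2.5: 24-h mean per station, then the city-level average
daily_pm <- function(data) {
  data %>%
    group_by(station, date) %>%
    summarise(daily = mean(conc), .groups = "drop") %>%
    group_by(date) %>%
    summarise(city = mean(daily), .groups = "drop")
}

# O3: daily maximum of the 8-h moving average per station, then the city-level average
daily_o3 <- function(data) {
  data %>%
    arrange(station, date, hour) %>%
    group_by(station) %>%
    mutate(ma8 = zoo::rollapply(conc, 8, mean, align = "right", fill = NA)) %>%
    group_by(station, date) %>%
    summarise(daily_max = max(ma8, na.rm = TRUE), .groups = "drop") %>%
    group_by(date) %>%
    summarise(city = mean(daily_max), .groups = "drop")
}

pm10 <- clean_pollutant("pm10.csv")
o3   <- clean_pollutant("o3.csv")
head(daily_pm(pm10))
head(daily_o3(o3))
```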

