Some ideas on data types in high-level languages

Author(s):  
David Gries ◽  
Narain Gehani
2021 ◽  
Vol 4 ◽  
pp. 78-87
Author(s):  
Yury Yuschenko

In the Address Programming Language (1955), the concept of indirect addressing of higher ranks (pointers) was introduced, which allows arbitrary connection of the computer's RAM cells. This connection is based on standard sequences of cell addresses in RAM and on addressing sequences determined by the programmer through indirect addressing. These two types of sequences let programmers establish arbitrary connections between RAM cells holding arbitrary content: data, addresses, subroutines, program labels, etc. The connections formed between cells can therefore refer to one another. The result of connecting cells with arbitrary content and arbitrary structure is called a tree-shaped format. Tree-shaped formats allow programmers to combine data into complex data structures that resemble abstract data types. For tree-shaped formats, the concept of an "overview scheme" is defined, which is similar to the concept of tree traversal. Programmers can define multiple overview schemes for a single tree-shaped format, and can create tree-shaped formats over already connected cells to define the desired overview schemes for them. This work gives a modern interpretation of the concept of tree-shaped formats in Address Programming. Tree-shaped formats are based on the "stroke-operation" (pointer dereference), which was implemented in hardware in the instruction set of the "Kyiv" computer. The "Kyiv" computer's group address-modification operations accelerate the processing of tree-shaped formats and are organized as cycles, like loops in high-level imperative programming languages. Thanks to its indirect-addressing operations, the instruction set of the "Kyiv" computer is more capable than the first high-level programming language, Plankalkül. Machine instructions of the "Kyiv" computer allow direct access to the i-th element of a "list" by its serial number, in the same way that the i-th element of an array is accessed by its index.
The given examples of singly linked lists show the features of tree-shaped formats and their differences from abstract data types. The article opens a new branch of theoretical research whose purpose is to analyze the expediency of partially including Address Programming in modern programming languages.
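As a rough modern analogy (not the original "Kyiv" instruction set), the indexed access to a linked "list" described above can be sketched in C, with RAM modelled as an array of cells and the stroke-operation as double indexing; all names and the cell layout here are illustrative:

```c
#include <assert.h>

/* A tiny model of RAM: each cell holds either a datum or the
   address (index) of another cell, as in Address Programming. */
enum { RAM_SIZE = 16 };
static int ram[RAM_SIZE];

/* "Stroke-operation": one level of indirect addressing --
   read the cell whose address is stored at `addr`. */
static int deref(int addr) { return ram[ram[addr]]; }

/* Follow the address chain i times, then read the datum cell.
   Layout assumed here: cell `addr` holds the next link,
   cell `addr + 1` holds the datum. This models direct access
   to the i-th "list" element by its serial number. */
static int list_get(int head, int i) {
    int addr = head;
    while (i-- > 0)
        addr = ram[addr];      /* next link via indirect addressing */
    return ram[addr + 1];      /* datum stored beside the link */
}
```

Unlike array indexing, this access still walks the chain cell by cell; the abstract's point is that the "Kyiv" group operations expressed such walks as organized cycles at the machine level.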


The previous chapter gave an overview of big data, including its types, sources, analytic techniques, and applications. This chapter briefly discusses the architecture components that deal with huge volumes of data. The complexity of big data types calls for a logical architecture with layers and high-level components to build a big data solution, including data sources and their relation to atomic patterns. The dimensions of the approach are volume, variety, velocity, veracity, and governance. The layers of the architecture are the big data sources, the data massaging and store layer, the analysis layer, and the consumption layer. Big data sources are the data collected from various origins on which data scientists perform analytics; they can be internal or external. Internal sources comprise transactional data, device sensors, business documents, internal files, etc. External sources include social network profiles, geographical data, data stores, etc. Data massaging is the process of preprocessing extracted data, for example by removing missing values, reducing dimensionality, and removing noise, to attain a useful format for storage. The analysis layer provides insight using the preferred analytics techniques and tools; the analytics methods, the issues to be considered, the requirements, and the tools are covered in detail. The consumption layer delivers the resulting business insight to consumers such as retail marketing, the public sector, financial bodies, and the media. Finally, a case study of architectural drivers is applied to a retail industry application, and its challenges and use cases are discussed.


2020 ◽  
Vol 36 (10) ◽  
pp. 3263-3265 ◽  
Author(s):  
Lucas Czech ◽  
Pierre Barbera ◽  
Alexandros Stamatakis

Abstract
Summary: We present genesis, a library for working with phylogenetic data, and gappa, an accompanying command-line tool for conducting typical analyses on such data. The tools target phylogenetic trees and phylogenetic placements, sequences, taxonomies and other relevant data types, offer high-level simplicity as well as low-level customizability, and are computationally efficient, well-tested and field-proven.
Availability and implementation: Both genesis and gappa are written in modern C++11, and are freely available under GPLv3 at http://github.com/lczech/genesis and http://github.com/lczech/gappa.
Supplementary information: Supplementary data are available at Bioinformatics online.


Author(s):  
Norman Y. Foo ◽  
Roslyn B. Riley

Abstract: The calculus for equational implication languages given by Selman is generalized to handle the logical equivalent of the if…then…else… construct of high-level programming languages. The relevance of these results to current investigations in the algebraic specification of data types is discussed.
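In a standard equational presentation (a sketch, not necessarily the authors' exact axiomatization), the if…then…else… construct can be specified by two equations over the booleans:

```latex
\mathrm{ite}(\mathrm{true},\, x,\, y) = x, \qquad
\mathrm{ite}(\mathrm{false},\, x,\, y) = y
```

Further properties then take the form of equational implications (conditional equations), e.g. $x = y \;\Rightarrow\; \mathrm{ite}(b, x, y) = x$, which holds by case analysis on $b$; a calculus for such implications is what the abstract refers to.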


Author(s):  
Harry H. Cheng ◽  
Xudong Hu ◽  
Bin Lin

Abstract: This paper presents the design and implementation of high-level numerical analysis functions in CH, a superset of the C language developed for the convenience of scientific and engineering computation. In CH, complex numbers are treated as a built-in data type, so that the syntax of complex arithmetic, relational operations, and built-in mathematical functions is the same as for real numbers. A variable number of arguments is used in the built-in mathematical functions to simplify the computation of different branches of multi-valued complex functions. Computational arrays are introduced to handle arrays in numerical computations. Passing variable-length arrays to functions as deferred-shape and assumed-shape arrays is discussed; these methods allow arrays to be passed along with their rank, dimensions, and data types. A list of high-level numerical functions and two examples of scientific and engineering applications are given in the paper.


2020 ◽  
Vol 10 (4) ◽  
pp. 1377 ◽  
Author(s):  
Mattia Previtali ◽  
Raffaella Brumana ◽  
Chiara Stanga ◽  
Fabrizio Banfi

In recent years, many efforts have been invested in cultural heritage digitization: surveying, modelling, diagnostic analysis, and historical data collection. Nowadays, this effort is in many cases directed towards historical building information modelling (HBIM). However, the architecture, engineering, construction and facility management (AEC-FM) domain is very fragmented, and many experts operating with different data types and models are involved in HBIM projects. This prevents effective communication and sharing of results, not only among different professionals but also among different projects. Semantic web tools may contribute significantly to facilitating the sharing, connection, and integration of data produced in different domains and projects. The paper examines this aspect, focusing specifically on managing the information and models acquired for vaulted systems. Information is collected within a semantics-based hub platform to perform cross-correlation; this functionality allows the rich history of construction techniques and skilled workers across Europe to be reconstructed. To this purpose, an ontology-based vaults database has been developed, and an example of its implementation is presented. The database makes use of a set of ontologies to effectively combine data and information from multiple heterogeneous sources; the defined ontologies provide a high-level schema of a data source and a vocabulary for user queries.


1995 ◽  
Vol 5 (1) ◽  
pp. 81-110 ◽  
Author(s):  
Peter Achten ◽  
Rinus Plasmeijer

Abstract: Functional programming languages have banned assignment because of its undesirable properties. The reward of this rigorous decision is that functional programming languages are side-effect free. There is another side to the coin: because assignment plays a crucial role in Input/Output (I/O), functional languages have a hard time dealing with I/O. Functional programming languages have therefore often been stigmatised as inferior to imperative programming languages because they cannot deal with I/O very well. In this paper, we show that I/O can be incorporated into a functional programming language without loss of any of the generally accepted advantages of functional programming languages. This discussion is supported by an extensive account of the I/O system offered by the lazy, purely functional programming language Clean. Two aspects that are paramount in its I/O system make the approach novel with respect to other approaches: the technique of explicit multiple environment passing, and the Event I/O framework for programming Graphical User I/O in a highly structured and high-level way. Clean file I/O is as powerful and flexible as in common imperative languages (one can read, write, and seek directly in a file). Clean Event I/O provides programmers with a high-level framework for specifying complex Graphical User I/O. It has been used to write applications such as a window-based text editor, an object-based drawing program, a relational database, and a spreadsheet program. These graphical interactive programs are completely machine independent, but still obey the look-and-feel of the concrete window environment being used. The specifications are completely functional and make extensive use of uniqueness typing, higher-order functions, and algebraic data types. Efficient implementations exist on the Macintosh, Sun (X Windows under Open Look) and PC (OS/2).
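The idea of explicit environment passing can be imitated even in an imperative language (without the guarantee of Clean's uniqueness typing, which a C compiler cannot enforce) by threading an environment value through every "I/O" function; the World type and write_line below are purely illustrative:

```c
#include <string.h>
#include <assert.h>

/* Explicit environment passing: instead of performing a hidden
   side effect, each "I/O" function consumes an environment value
   and returns the updated one, as Clean does with its unique
   World. The log buffer stands in for the outside world. */
typedef struct {
    char log[128];
    int  len;
} World;

static World write_line(World w, const char *s) {
    int n = (int)strlen(s);
    memcpy(w.log + w.len, s, (size_t)n);
    w.len += n;
    w.log[w.len++] = '\n';
    w.log[w.len] = '\0';
    return w;   /* the caller must continue with the returned world */
}
```

In Clean, the type system rejects any program that uses an old World value twice; in this C sketch, single-threaded use of the environment is only a discipline the programmer must follow by hand.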


2004 ◽  
Vol 14 (4) ◽  
pp. 527-586 ◽  
Author(s):  
PETER SELINGER

We propose the design of a programming language for quantum computing. Quantum algorithms are traditionally expressed at the hardware level, for instance in terms of the quantum circuit model or quantum Turing machines. These approaches do not encourage structured programming or abstractions such as data types. In this paper, we describe the syntax and semantics of a simple quantum programming language with high-level features such as loops, recursive procedures, and structured data types. The language is functional in nature, statically typed, free of run-time errors, and has an interesting denotational semantics in terms of complete partial orders of superoperators.


2010 ◽  
Vol 23 (22) ◽  
pp. 6027-6035 ◽  
Author(s):  
Yi Huang ◽  
Stephen S. Leroy ◽  
James G. Anderson

Abstract The authors investigate whether combining a data type derived from radio occultation (RO) with the infrared spectral data in an optimal detection method improves the quantification of longwave radiative forcing and feedback. Signals derived from a doubled-CO2 experiment in a theoretical study are used. When the uncertainties in both data types are conservatively estimated, jointly detecting the feedbacks of tropospheric temperature and water vapor, stratospheric temperature, and high-level cloud from the two data types should reduce the mean errors by more than 50%. This improvement is achieved because the RO measurement helps disentangle the radiance signals that are ambiguous in the infrared spectrum. The result signifies the complementary information content in infrared spectral and radio occultation data types, which can be effectively combined in optimal detection to accurately quantify the longwave radiative forcing and feedback. The results herein show that the radiative forcing of CO2 and the longwave radiative feedbacks of tropospheric temperature, tropospheric water vapor, and stratospheric temperature can be accurately quantified from the combined data types, with relative errors in their global mean values being less than 4%, 10%, 15%, and 20%, respectively.

