Binding Complex Data Types Part I

Author(s):  
Adam Freeman
2012 ◽  
Author(s):  
Marty Kraimer, ◽  
John Dalesio

2021 ◽  
Vol 4 ◽  
pp. 78-87
Author(s):  
Yury Yuschenko

In the Address Programming Language (1955), the concept of indirect addressing of higher ranks (pointers) was introduced, which allows arbitrary connections between the computer's RAM cells. These connections are based on standard sequences of cell addresses in RAM and on addressing sequences defined by the programmer through indirect addressing. The two types of sequences allow programmers to establish arbitrary connections between RAM cells holding arbitrary content: data, addresses, subroutines, program labels, etc. The connections formed between cells can therefore refer to one another. The result of connecting cells of arbitrary content and structure is called a tree-shaped format. Tree-shaped formats allow programmers to combine data into complex data structures that resemble abstract data types. For tree-shaped formats, the concept of a "review scheme" is defined, which is similar to the concept of tree traversal. Programmers can define several review schemes for one tree-shaped format, and can create tree-shaped formats over already connected cells to obtain the desired review schemes for those cells. The work gives a modern interpretation of the concept of tree-shaped formats in Address Programming. Tree-shaped formats are based on the "stroke operation" (pointer dereference), which was implemented in hardware in the instruction set of the "Kyiv" computer. Group operations for modifying addresses on the "Kyiv" computer accelerate the processing of tree-shaped formats and are organized as loops, like those in high-level imperative programming languages. Thanks to its operations with indirect addressing, the instruction set of the "Kyiv" computer has more capabilities than the first high-level programming language, Plankalkül. Machine instructions of the "Kyiv" computer allow direct access to the i-th element of a "list" by its ordinal number, in the same way that the i-th element of an array is accessed by its index. The given examples of singly linked lists show the features of tree-shaped formats and their differences from abstract data types. The article opens a new branch of theoretical research whose purpose is to analyze the expediency of partially including Address Programming in modern programming languages.
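For readers who think in modern notation, the contrast drawn above between indexed access into an array and repeated dereferencing along a "list" can be sketched as follows. This is a minimal Python illustration, not Address Programming notation or the "Kyiv" instruction set; the Node class and the sample values are assumptions made only for the example.

```python
# A minimal sketch: a singly linked list whose i-th element is reached by
# repeated "dereferencing" (following links), contrasted with an array
# whose i-th element is reached directly by its index.

class Node:
    def __init__(self, value, nxt=None):
        self.value = value   # cell content (data, address, label, ...)
        self.next = nxt      # reference ("indirect address") of the next cell

def nth(head, i):
    """Walk i links from the head -- one dereference per step."""
    node = head
    for _ in range(i):
        node = node.next
    return node.value

# Build the list 10 -> 20 -> 30 and compare with an ordinary array.
head = Node(10, Node(20, Node(30)))
array = [10, 20, 30]

assert nth(head, 2) == 30     # i-th element via repeated dereference
assert array[2] == 30         # i-th element via direct indexing
```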


Circuit World ◽  
2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Hiren K. Mewada ◽  
Jitendra Chaudhari ◽  
Amit V. Patel ◽  
Keyur Mahant ◽  
Alpesh Vala

Purpose: Synthetic aperture radar (SAR) imaging is a computationally intensive algorithm, which makes its implementation challenging for real-time applications. This paper aims to present the chirp-scaling algorithm (CSA) for real-time SAR applications using an advanced field programmable gate array (FPGA) processor.

Design/methodology/approach: A chirp signal is generated and compressed using the range Doppler algorithm in MATLAB for validation. Fast Fourier transform (FFT) and multiplication operations on complex data types are the major units requiring heavy computation. Therefore, hardware acceleration is proposed and implemented on a NEON-FPGA processor using the NE10 and CEPHES libraries.

Findings: A heuristic analysis of the algorithm in terms of timing and resource usage is presented. FFT execution time is reduced by 61%, boosting the performance of the algorithm, and the speed of the multiplication operation is doubled as a result of the optimization.

Originality/value: Little prior work has presented FPGA-based SAR imaging implementations, and there the analysis of windowing techniques was the major interest. This is a unique approach to implementing the SAR CSA using a hybrid hardware-software integration on a Zynq FPGA. The timing analysis indicates that the model is suitable for real-time SAR applications.
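As a rough illustration of the validation step described above (chirp generation followed by FFT-based range compression), the NumPy sketch below shows the kind of computation involved. The pulse parameters, window length, and echo delay are assumed for the example; this is not the authors' MATLAB or NEON/NE10 implementation.

```python
# Hedged sketch: generate a linear-FM chirp and range-compress a delayed echo
# with FFT-based matched filtering (FFT, complex multiply, inverse FFT).
import numpy as np

fs, T, B = 100e6, 10e-6, 30e6            # sample rate, pulse length, bandwidth (assumed)
t = np.arange(0, T, 1 / fs)
k = B / T                                 # chirp rate
chirp = np.exp(1j * np.pi * k * t**2)     # complex linear-FM chirp

# Echo: the same chirp delayed by 200 samples inside a longer window.
echo = np.zeros(4096, dtype=complex)
echo[200:200 + chirp.size] = chirp

# Matched filtering in the frequency domain.
n = echo.size
H = np.conj(np.fft.fft(chirp, n))         # conjugated reference spectrum
compressed = np.fft.ifft(np.fft.fft(echo) * H)

peak = int(np.argmax(np.abs(compressed)))
print("compressed peak at sample", peak)  # expect 200, the assumed delay
```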


Author(s):  
Qiong Chen ◽  
Mengxing Huang

Feature discretization is an important preprocessing technique for massive data in industrial control. It improves the efficiency of edge-cloud computing by transforming continuous features into discrete ones, so as to meet the requirements of high-quality cloud services. Compared with other discretization methods, discretization based on rough sets has achieved good results in many applications because it can make full use of the known knowledge base without any prior information. However, the equivalence class of a rough set is an ordinary set, which makes it difficult to describe the fuzzy components in the data, and its accuracy is low for some complex data types in a big data environment. Therefore, we propose a rough fuzzy model based discretization algorithm (RFMD). First, we use fuzzy c-means clustering to obtain the membership of each sample in each category. Then, we fuzzify the equivalence classes of the rough set with the obtained memberships and establish the fitness function of a genetic algorithm based on the rough fuzzy model to select the optimal discretization breakpoints on the continuous features. Finally, we compare the proposed method with discretization algorithms based on rough sets, on information entropy, and on the chi-square test, using remote sensing datasets. The experimental results verify the effectiveness of our method.
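As a hedged sketch of one ingredient of RFMD, the Python fragment below computes standard fuzzy c-means memberships for a single continuous feature. The feature values, cluster centres, and fuzzifier m are assumptions made for the example; the rough-set fuzzification and genetic-algorithm breakpoint search are not shown.

```python
# Standard FCM membership formula for 1-D samples and fixed cluster centres.
import numpy as np

def fcm_memberships(x, centers, m=2.0, eps=1e-9):
    """Membership of each sample in each cluster centre; rows sum to 1."""
    d = np.abs(x[:, None] - centers[None, :]) + eps          # distances to centres
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

x = np.array([0.1, 0.2, 0.25, 0.8, 0.85, 0.9])               # one continuous feature (toy data)
centers = np.array([0.2, 0.85])                              # two assumed cluster centres
u = fcm_memberships(x, centers)
print(np.round(u, 3))      # each row: membership in cluster 0 and cluster 1
```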


2020 ◽  
Author(s):  
John S. Hughes ◽  
Daniel J. Crichton

The PDS4 Information Model (IM) Version 1.13.0.0 was released for use in December 2019. The ontology-based IM remains true to its foundational principles found in the Open Archival Information System (OAIS) Reference Model (ISO 14721) and the Metadata Registry (MDR) standard (ISO/IEC 11179). The standards generated from the IM have become the de facto data archiving standards for the international planetary science community and have successfully scaled to meet the requirements of the diverse and evolving planetary science disciplines.

A key foundational principle is the use of a multi-level governance scheme that partitions the IM into semi-independent dictionaries. The governance scheme first partitions the IM vertically into three levels: the common, discipline, and project/mission levels. The IM is then partitioned horizontally, across both the discipline and project/mission levels, into individual Local Data Dictionaries (LDDs).

The Common dictionary defines the classes used across the science disciplines, such as product, collection, bundle, data formats, data types, and units of measurement. The dictionary resulted from a large collaborative effort involving domain experts across the community. An ontology modeling tool was used to enforce a modeling discipline, to provide configuration management, to ensure consistency and extensibility, and to enable interoperability. The Common dictionary encompasses the information categories defined in the OAIS RM, specifically data representation, provenance, fixity, identification, reference, and context. Over the last few years, the Common dictionary has remained relatively stable in spite of requirements levied by new missions, instruments, and more complex data types.

Since the release of the Common dictionary, the creation of a significant number of LDDs has proved the effectiveness of multi-level, steward-based governance. This scheme allows the IM to scale to meet the archival and interoperability demands of the evolving disciplines. In fact, an LDD development "cottage industry" has emerged that required improvements to the development processes and configuration management. An LDD development tool now allows dictionary stewards to quickly produce specialized LDDs that are consistent with the Common dictionary.

The PDS4 Information Model is a world-class knowledge base that governs the Planetary Science community's trusted digital repositories. This presentation will provide an overview of the model and additional information about its multi-level governance scheme, including the topics of stewardship, configuration management, processes, and oversight.


2007 ◽  
Vol 25 (18_suppl) ◽  
pp. 6525-6525
Author(s):  
M. E. Kho ◽  
E. M. Lepisto ◽  
J. C. Niland ◽  
A. terVeer ◽  
A. S. LaCasce ◽  
...  

6525 Background: Clinical trials and outcomes studies often rely on non-physicians to abstract complex data from medical records. We assessed the reliability of chart abstraction among personnel groups in a multi-center outcomes study of indolent/aggressive NHL treated in NCCN centers. Methods: We developed 20 standardized charts of patients with newly diagnosed NHL. Raters included 6 Clinical Research Associates (CRAs) from participating sites, 3 project staff who conduct CRA training, and 3 medical oncologists. Raters each received a set of standardized charts, detailed instructions, and training on a sample chart, and abstracted all charts independently. We assessed reliability on 5 variables: MD-reported and rater-determined disease stage; International Prognostic Index (IPI: low, low-intermediate, intermediate-high, high); Charlson comorbidity index score; and presence of any item from the Charlson index. We used intraclass correlation coefficients (ICCs) to calculate reliability. We considered coefficients of 0–0.20 'slight', 0.21–0.40 'fair', 0.41–0.60 'moderate', 0.61–0.80 'substantial', and >0.80 'almost perfect' (1). Results: Overall reliability was "almost perfect/substantial" for MD-reported stage, rater-determined stage, and IPI, but only "moderate" for the 2 Charlson-based comorbidity measures (Table). Reliability varied by rater group; no rater group was consistently more reliable than the others. Conclusions: Trained CRAs abstracted key clinical variables with a very high degree of reliability and performed at a level similar to study trainers and oncologists. Elements of the Charlson index were less reliable than other data types, possibly because of inherent ambiguity in the index itself. Use of trained CRA staff is reasonable for collecting stage and IPI scores in a multi-center outcomes study; however, abstracted Charlson scores should be interpreted with caution. (1) Biometrics. 1977. 33:159–74. No significant financial relationships to disclose. [Table: see text]
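For context, a single-rater, two-way random-effects intraclass correlation (ICC(2,1) in Shrout and Fleiss terms) can be computed as in the Python sketch below. The ratings matrix is invented for illustration; the study's actual data and its specific ICC variant are not reproduced here.

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater.
import numpy as np

def icc_2_1(ratings):
    ratings = np.asarray(ratings, dtype=float)   # subjects x raters
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                      # between-subject mean square
    msc = ss_cols / (k - 1)                      # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))           # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: 5 charts rated by 3 abstractors on a 1-4 stage scale (invented).
ratings = [[1, 1, 1], [2, 2, 3], [3, 3, 3], [4, 4, 4], [2, 2, 2]]
print(round(icc_2_1(ratings), 3))
```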


2000 ◽  
Vol 5 (1) ◽  
pp. 44-54 ◽  
Author(s):  
J. Dongarra ◽  
J. Waśniewski

LAPACK95 is a set of Fortran 95 subroutines that interface Fortran 95 with LAPACK. All LAPACK driver subroutines (including expert drivers) and some LAPACK computational routines have both generic LAPACK95 interfaces and generic LAPACK77 interfaces. The remaining computational routines have only generic LAPACK77 interfaces. In both types of interface, no distinction is made between single and double precision or between real and complex data types.
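LAPACK95 itself is Fortran 95, but the idea of a generic interface that hides the precision and real/complex distinction can be illustrated loosely in Python, where SciPy's LAPACK-backed solver accepts either array type through one call. The matrices below are assumptions made only for the example.

```python
# Loose analogy to generic interfaces: one call handles real and complex systems.
import numpy as np
from scipy.linalg import solve

a_real = np.array([[3.0, 1.0], [1.0, 2.0]])
b_real = np.array([9.0, 8.0])

a_cplx = a_real.astype(complex) + 1j * np.eye(2)
b_cplx = b_real.astype(complex)

print(solve(a_real, b_real))   # real, double precision system
print(solve(a_cplx, b_cplx))   # complex, double precision system, same call
```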


2016 ◽  
Vol 3 (3) ◽  
pp. 96-114 ◽  
Author(s):  
Ani Aghababyan ◽  
Taylor Martin ◽  
Phillip Janisiewicz ◽  
Kevin Close

Learning analytics is an emerging discipline and, as such, it benefits from new tools and methodological approaches. This work reviews and summarizes our workshop on microgenetic data analysis techniques using R, held at the 2nd annual Learning Analytics Summer Institute in Cambridge, Massachusetts, on June 30th, 2014. Specifically, this paper introduces educational researchers to our experience using data analysis techniques in the RStudio development environment to analyze temporal records of 52 elementary students' affective and behavioral responses to a digital learning environment. In RStudio, we used methods such as hierarchical clustering and sequential pattern mining, and we created effective visualizations of our complex data. The scope of the workshop, and of this paper, assumes little prior knowledge of the R programming language, and thus covers everything from data import and cleanup to advanced microgenetic analysis techniques. Additionally, readers are introduced to software setup, R data types, and visualizations. This paper not only adds to the toolbox for learning analytics researchers (particularly when analyzing time-series data) but also shares our experience interpreting a unique and complex dataset.
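The workshop itself used R and RStudio; purely as an illustration of the hierarchical-clustering step for readers without R at hand, the sketch below performs an analogous clustering in Python on synthetic per-student affect profiles. The data, feature layout, and cluster count are invented for the example and do not reflect the workshop's dataset.

```python
# Agglomerative clustering of toy per-student affect profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 52 students x 4 affect-state proportions (synthetic stand-in data).
profiles = rng.dirichlet(alpha=[2, 2, 2, 2], size=52)

tree = linkage(profiles, method="ward")              # hierarchical clustering
labels = fcluster(tree, t=3, criterion="maxclust")   # cut the tree into 3 groups
print(np.bincount(labels)[1:])                       # students per cluster
```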


Author(s):  
Yingxu Wang ◽  
Xinming Tan ◽  
Cyprian F. Ngolah ◽  
Philip Sheu

Type theories are fundamental for underpinning data object modeling and system architectural design in computing and software engineering. Abstract Data Types (ADTs) are a set of highly generic and rigorously modeled data structures in type theory. ADTs also play a key role in Object-Oriented (OO) technologies for software system design and implementation. This paper presents a formal modeling methodology for ADTs using the Real-Time Process Algebra (RTPA), which allows both architectural and behavioral models of ADTs and complex data objects. Formal architectures, static behaviors, and dynamic behaviors of a set of ADTs are comparatively studied. The architectural models of the ADTs are created using RTPA architectural modeling methodologies known as the Unified Data Models (UDMs). The static behaviors of the ADTs are specified and refined by a set of Unified Process Models (UPMs) of RTPA. The dynamic behaviors of the ADTs are modeled by process dispatching technologies of RTPA. This work has been applied in a number of real-time and non-real-time system designs such as a Real-Time Operating System (RTOS+), a Cognitive Learning Engine (CLE), and the automatic code generator based on RTPA.
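As a conventional illustration (not RTPA's UDM/UPM notation) of the separation the paper formalizes, the Python sketch below shows an ADT's architecture (its data layout), its static behaviors (operations), and a simple dynamic dispatch over those operations. The Stack class, its capacity, and the dispatch table are assumptions made for the example.

```python
# Conventional ADT sketch: architecture, static behaviors, dynamic dispatch.
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class Stack:
    """Architecture: the ADT's underlying data object."""
    items: List[Any] = field(default_factory=list)
    capacity: int = 16

    # Static behaviors: operations defined over the architecture.
    def push(self, x: Any) -> None:
        if len(self.items) >= self.capacity:
            raise OverflowError("stack is full")
        self.items.append(x)

    def pop(self) -> Any:
        if not self.items:
            raise IndexError("stack is empty")
        return self.items.pop()

# Dynamic behavior: dispatching requested operations at run time.
def dispatch(stack: Stack, event: str, *args: Any) -> Any:
    return {"push": stack.push, "pop": stack.pop}[event](*args)

s = Stack()
dispatch(s, "push", 42)
print(dispatch(s, "pop"))   # -> 42
```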

