Reflections on the Intermediate Data Structure (IDS)

2021 ◽  
Vol 10 ◽  
pp. 71-75
Author(s):  
George Alter

The Intermediate Data Structure (IDS) encourages sharing historical life course data by storing data in a common format. To encompass the complexity of life histories, IDS relies on data structures that are unfamiliar to most social scientists. This article examines four features of IDS that make it flexible and expandable: the Entity-Attribute-Value model, the relational database model, embedded metadata, and the Chronicle file. I also consider IDS from the perspective of current discussions about sharing data across scientific domains. We can find parallels to IDS in other fields that may lead to future innovations.
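
As a rough sketch of the Entity-Attribute-Value idea that gives IDS its flexibility (illustrative only; the table and column names below are invented and are not the official IDS schema), each observation becomes a row rather than a column, so a new kind of attribute is a data change, not a schema change:

```python
import sqlite3

# A minimal Entity-Attribute-Value sketch in the spirit of IDS.
# Table and column names are illustrative, not the official IDS schema.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE individual_attribute (
        individual_id INTEGER,   -- the entity
        attribute     TEXT,      -- e.g. 'BIRTH_DATE', 'OCCUPATION'
        value         TEXT,      -- stored as text; typed on extraction
        date          TEXT       -- when the attribute was observed
    )
""")
rows = [
    (1, "BIRTH_DATE", "1851-03-02", "1851-03-02"),
    (1, "OCCUPATION", "weaver",     "1874-06-10"),
    (1, "OCCUPATION", "foreman",    "1880-01-15"),
]
con.executemany("INSERT INTO individual_attribute VALUES (?, ?, ?, ?)", rows)

# Adding a new attribute type needs no ALTER TABLE: inserting a row suffices.
for row in con.execute(
    "SELECT * FROM individual_attribute WHERE individual_id = 1 ORDER BY date"
):
    print(row)
```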

2018 ◽  
Vol 5 ◽  
pp. 1-2
Author(s):  
Paul Puschmann ◽  
Luciana Quaranta

Historical Life Course Studies, a journal in population studies, aims to stimulate and facilitate the implementation of IDS (Intermediate Data Structure, a standard data format for large historical databases), and to publish the results from (comparative) research with the help of large historical databases. The journal publishes not only empirical articles, but also descriptions (of the construction) of new and existing large historical databases, as well as articles dealing with database documentation, the transformation of existing databases into the IDS format, the development of algorithms and extraction software and all other issues related to the methodology of large historical databases.


2015 ◽  
Vol 2 ◽  
pp. 37-37
Author(s):  
Koen Matthijs ◽  
Paul Puschmann

Historical Life Course Studies, a journal in population studies, aims to stimulate and facilitate the implementation of IDS (Intermediate Data Structure, a standard data format for large historical databases), and to publish the results from (comparative) research with the help of large historical databases. The journal publishes not only empirical articles, but also descriptions (of the construction) of new and existing large historical databases, as well as articles dealing with database documentation, the transformation of existing databases into the IDS format, the development of algorithms and extraction software and all other issues related to the methodology of large historical databases.


2015 ◽  
Vol 733 ◽  
pp. 867-870
Author(s):  
Zhen Zhong Jin ◽  
Zheng Huang ◽  
Hua Zhang

The suffix tree is a useful data structure for indexing strings. However, on large datasets of discrete contents, most existing algorithms become very inefficient. Discrete datasets need to be indexed in many fields, such as record analysis, data analysis in sensor networks, and association analysis. This paper presents an algorithm, STD (Suffix Tree for Discrete contents), that performs very efficiently on discrete input datasets. It introduces several effective intermediate data structures for discrete strings and also handles the case in which the discrete input strings have similar characteristics. Moreover, STD keeps the advantages of existing implementations designed for successive input strings. Experiments were conducted to evaluate the performance and show that the method works well.
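
For readers unfamiliar with suffix indexing, the following minimal Python sketch shows what a suffix index stores; it is a naive baseline for illustration, not the STD algorithm itself, whose internals the abstract does not specify (practical suffix trees use suffix links and linear-time construction, e.g. Ukkonen's algorithm):

```python
# Naive suffix-trie construction: index every suffix of the text.
def build_suffix_trie(text: str) -> dict:
    text += "$"  # unique terminator so no suffix is a prefix of another
    root: dict = {}
    for i in range(len(text)):
        node = root
        for ch in text[i:]:
            node = node.setdefault(ch, {})
    return root

def contains(trie: dict, pattern: str) -> bool:
    """Check whether `pattern` occurs anywhere in the indexed text."""
    node = trie
    for ch in pattern:
        if ch not in node:
            return False
        node = node[ch]
    return True

trie = build_suffix_trie("banana")
print(contains(trie, "nan"))   # True
print(contains(trie, "nab"))   # False
```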


2012 ◽  
Vol 18 (4) ◽  
pp. 29
Author(s):  
John Field

The nature of transitions across the lifecourse is changing, as are the ways in which these transitions are understood and investigated by social scientists. Much earlier debate on older adults' transitions has tended to be rooted in accounts of relatively fixed social roles and age-based social stages. However, while we can detect some tendencies towards destandardization and restandardization of the lifecourse in later life, we can also see significant continuities in the influences of socio-economic position, gender, and ethnicity, as well as of generational position, that continue to affect people's life chances, as well as the expectations and experiences of transition of older people. The paper examines the interplay of these complex and contradictory structural positions and cultural locations on transitions, and considers the ways in which older people use and understand learning, formally and informally, as a way of exercising agency and recreating meaning. It will draw on recent research into the life histories of adults in Scotland, a relatively small country with a typically European pattern of demographic change. The study was concerned with agency, identity, change and learning across the life course, and this paper will concentrate on the evidence relating to experiences of transition in later life. It will particularly focus on the idea of 'educational generations' as a key concept that helps us understand how adults use and interpret learning in later life.


2014 ◽  
Vol 1 ◽  
pp. 1-26
Author(s):  
George Alter ◽  
Kees Mandemakers

The Intermediate Data Structure (IDS) is a standard data format that has been adopted by several large longitudinal databases on historical populations. Since the publication of the first version in Historical Social Research in 2009, two improved and extended versions have been published in the Collaboratory Historical Life Courses. In this publication we present version 4 which is the latest ‘official’ standard of the IDS. Discussions with users over the last four years resulted in important changes, like the inclusion of a new table defining the hierarchical relationships among ‘contexts’, decision schemes for recording relationships, additional fields in the metadata table, rules for handling stillbirths, a reciprocal model for relationships, guidance for linking IDS data with geospatial information, and the introduction of an extended IDS for computed variables.
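
As a loose illustration of the kind of nesting such a context table can express (the field names and traversal below are invented for illustration and are not the official version 4 definitions):

```python
# A sketch of a table of hierarchical 'context' relationships, in the
# spirit of the new table mentioned above. Illustrative only.
CONTEXT_CONTEXT = [
    # (child_context_id, parent_context_id, relation)
    ("HOUSEHOLD_17", "VILLAGE_3", "LOCATED_IN"),
    ("VILLAGE_3",    "PARISH_1",  "LOCATED_IN"),
]

def ancestors(context_id: str) -> list:
    """Walk up the hierarchy from a context to its enclosing contexts."""
    lookup = {child: parent for child, parent, _ in CONTEXT_CONTEXT}
    chain = []
    while context_id in lookup:
        context_id = lookup[context_id]
        chain.append(context_id)
    return chain

print(ancestors("HOUSEHOLD_17"))  # ['VILLAGE_3', 'PARISH_1']
```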


2017 ◽  
Vol 4 ◽  
pp. 59-96
Author(s):  
Emily Klancher Merchant ◽  
George Alter

The Intermediate Data Structure (IDS) provides a standard format for storing and sharing individual-level longitudinal life-course data (Alter and Mandemakers 2014; Alter, Mandemakers and Gutmann 2009). Once the data are in the IDS format, a standard set of programs can be used to extract data for analysis, facilitating the analysis of data across multiple databases. Currently, life-course databases store information in a variety of formats, and the process of translating data into IDS can be long and tedious. The IDS Transposer is a software tool that automates this process for source data in any format, allowing database administrators to specify how their datasets are to be represented in IDS. This article describes how the IDS Transposer works, first by going through an example step-by-step, and then by discussing each part of the process and potential options and exceptions in detail.
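
The following toy Python sketch conveys the general idea of a user-supplied mapping specification driving a generic wide-to-long transformation; it is not the IDS Transposer's actual configuration format or code:

```python
# A toy mapping specification: the administrator declares how source
# columns map to IDS-style attribute rows, and a generic routine
# applies it. Field and attribute names are invented for illustration.
source_record = {"pid": 42, "born": "1832-11-04", "job": "smith"}

mapping = [
    # (source_field, ids_attribute)
    ("born", "BIRTH_DATE"),
    ("job",  "OCCUPATION"),
]

def transpose(record: dict, spec: list) -> list:
    """Turn one wide source record into long IDS-style attribute rows."""
    return [
        {"individual_id": record["pid"], "attribute": attr, "value": record[field]}
        for field, attr in spec
        if record.get(field) is not None
    ]

for row in transpose(source_record, mapping):
    print(row)
```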


2005 ◽  
Vol 12 (3) ◽  
Author(s):  
Olivier Danvy ◽  
Mayer Goldberg

We present a programming pattern where a recursive function defined over a data structure traverses another data structure at return time. The idea is that the recursive calls get us 'there' by traversing the first data structure and the returns get us 'back again' while traversing the second data structure. We name this programming pattern of traversing a data structure at call time and another data structure at return time "There And Back Again" (TABA).

The TABA pattern directly applies to computing symbolic convolutions and to multiplying polynomials. It also blends well with other programming patterns such as dynamic programming and traversing a list at double speed. We illustrate TABA and dynamic programming with Catalan numbers. We illustrate TABA and traversing a list at double speed with palindromes, and we obtain a novel solution to this traditional exercise. Finally, through a variety of tree traversals, we show how to apply TABA to data structures other than lists.

A TABA-based function written in direct style makes full use of an ALGOL-like control stack and needs no heap allocation. Conversely, in a TABA-based function written in continuation-passing style and recursively defined over a data structure (traversed at call time), the continuation acts as an iterator over a second data structure (traversed at return time). In general, the TABA pattern saves one from accumulating intermediate data structures at call time.
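
A minimal Python transliteration of the palindrome instance of TABA described above (a sketch of the pattern, not the authors' code): the recursive calls advance through the list at double speed to find the middle, and the returns traverse the second half while the unwinding stack frames supply the first half in reverse:

```python
def is_palindrome(xs: list) -> bool:
    def walk(slow: int, fast: int):
        # Call time: `fast` advances two steps per call and `slow` one,
        # so when `fast` runs off the end, `slow` sits at the middle.
        if fast == len(xs):          # even length
            return slow, True
        if fast == len(xs) - 1:      # odd length: skip the middle element
            return slow + 1, True
        i, ok = walk(slow + 1, fast + 2)
        # Return time: this frame's first-half element xs[slow] meets the
        # second-half element xs[i]; pass the next second-half index back.
        return i + 1, ok and xs[slow] == xs[i]

    _, ok = walk(0, 0)
    return ok

print(is_palindrome(list("racecar")))  # True
print(is_palindrome(list("abca")))     # False
```

Note how no reversed copy of the list is ever built: the control stack itself plays the role of the intermediate data structure.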


1994 ◽  
Vol 04 (04) ◽  
pp. 447-453
Author(s):  
L.K. SWIFT ◽  
T. JOHNSON ◽  
P.E. LIVADAS

Quadtrees and octrees are hierarchical data structures for efficiently storing image data. Quadtrees represent two dimensional images, while octrees are a generalization to three dimensions. The linear form of each is an abstraction of the tree structure to reduce storage requirements. We have developed a parallel algorithm to efficiently create a linear octree from quadtree slices of an object without the use of an intermediate data structure. We also propose the d-slice, which is a generalization of an octree, and which efficiently represents non-cubic volumes.
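
A small Python sketch of the 'linear' representation idea (illustrative only; the paper's parallel d-slice construction is more involved): each occupied leaf is stored as a locational Morton code, so the tree structure is implicit in the codes and no pointer-based intermediate structure is needed:

```python
def morton2d(x: int, y: int, bits: int = 8) -> int:
    """Interleave the bits of x and y into one locational code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bit -> even position
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bit -> odd position
    return code

# A sorted list of the Morton codes of occupied cells *is* a linear
# quadtree: siblings share code prefixes, so the hierarchy is implicit.
occupied = sorted(morton2d(x, y) for x, y in [(3, 5), (3, 4), (10, 2)])
print([bin(c) for c in occupied])
```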


1980 ◽  
Vol 3 (3) ◽  
pp. 363-377
Author(s):  
John Grant

In this paper we investigate the inclusion of incomplete information in the relational database model. This is done by allowing nonatomic entries, i.e. sets, as elements in the database. A nonatomic entry is interpreted as a set of possible elements, one of which is the correct one. We deal primarily with numerical entries where an allowed set is an interval, and character string entries. We discuss the various operations of the relational algebra as well as the notion of functional dependency for the database model.
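
A small Python sketch of how interval-valued entries can be queried under the 'set of possible values' interpretation (names invented for illustration; the paper develops the algebra formally): a predicate is definitely satisfied if every possible value satisfies it, and possibly satisfied if at least one does:

```python
from typing import Tuple

Interval = Tuple[float, float]  # (low, high): the true value lies inside

def definitely_greater(entry: Interval, threshold: float) -> bool:
    low, _ = entry
    return low > threshold          # holds for every possible value

def possibly_greater(entry: Interval, threshold: float) -> bool:
    _, high = entry
    return high > threshold         # holds for at least one possible value

ages = {"anna": (30.0, 35.0), "bert": (18.0, 45.0)}
print([n for n, a in ages.items() if definitely_greater(a, 25)])  # ['anna']
print([n for n, a in ages.items() if possibly_greater(a, 25)])    # ['anna', 'bert']
```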


2021 ◽  
Vol 13 (4) ◽  
pp. 559
Author(s):  
Milto Miltiadou ◽  
Neill D. F. Campbell ◽  
Darren Cosker ◽  
Michael G. Grant

In this paper, we investigate the performance of six data structures for managing voxelised full-waveform airborne LiDAR data during 3D polygonal model creation. While full-waveform LiDAR data have been available for over a decade, extraction of peak points remains the most widely used approach to interpreting them. The increased information stored within the waveform data makes interpretation and handling difficult, so it is important to establish which data structures are more appropriate for storing and interpreting the data. The data structures are tested in terms of time efficiency and memory consumption at run-time. They are: (1) 1D-Array, which guarantees coherent memory allocation; (2) Voxel Hashing, which uses a hash table for storing the intensity values; (3) Octree; (4) Integral Volumes, which allows finding the sum of any cuboid area in constant time; (5) Octree Max/Min, an upgraded octree; and (6) Integral Octree, which is proposed here as an attempt to combine the benefits of octrees and Integral Volumes. We show that Integral Volumes is the most time-efficient data structure, but it requires the most memory allocation. Furthermore, 1D-Array and Integral Volumes require the allocation of coherent space in memory including the empty voxels, while Voxel Hashing and the octree-related data structures do not allocate memory for empty voxels and therefore, as shown in the tests conducted, allocate less memory. To sum up, there is a need to investigate how LiDAR data are stored in memory. Each tested data structure has different benefits and downsides; therefore, each application should be examined individually.
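
A short NumPy sketch of the Integral Volumes idea (illustrative, not the authors' implementation): precompute 3D prefix sums once, then obtain the sum of any cuboid region in constant time from eight lookups via inclusion-exclusion:

```python
import numpy as np

def integral_volume(vox: np.ndarray) -> np.ndarray:
    """Zero-padded 3D prefix sums of a voxel intensity grid."""
    s = vox.cumsum(0).cumsum(1).cumsum(2)
    return np.pad(s, ((1, 0), (1, 0), (1, 0)))

def cuboid_sum(iv: np.ndarray, lo, hi) -> float:
    """Sum of vox[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] via 8 lookups."""
    (x0, y0, z0), (x1, y1, z1) = lo, hi
    return (  iv[x1, y1, z1] - iv[x0, y1, z1]
            - iv[x1, y0, z1] - iv[x1, y1, z0]
            + iv[x0, y0, z1] + iv[x0, y1, z0]
            + iv[x1, y0, z0] - iv[x0, y0, z0])

vox = np.random.rand(16, 16, 16)
iv = integral_volume(vox)
assert np.isclose(cuboid_sum(iv, (2, 3, 4), (10, 12, 9)),
                  vox[2:10, 3:12, 4:9].sum())
```

The trade-off the abstract reports is visible here: the prefix-sum table must allocate a value for every voxel, empty or not, which is exactly why the structure is fast but memory-hungry.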

