Objects collection management in multidimensional DBMS data model

Author(s):  
T.-M. Lim
S.-P. Lee
2014
Vol 12 (5)
pp. 383
Author(s):  
Nuala M. Cowan, DSc, MA, BA

Objective: An effective emergency response effort is contingent upon the quality and timeliness of information provided to both the decision-making and coordinating functions, conditions that are difficult to guarantee in the urgent climate of a response effort. The purpose of this paper is to present a validated Humanitarian Data Model (HDM) that can assist in the rapid assessment of disaster needs and subsequent decision making. Substandard, inconsistent information can lead to poorly informed decisions and, subsequently, inappropriate response activities. Here we present a novel, organized, and fluid information management workflow to be applied during the rapid assessment phase of an emergency response. A comprehensive, peer-reviewed geospatial data model not only directs the design of data collection tools but also allows for more systematic data collection and management, leading to improved analysis and response outcomes.

Design: This research involved the development of a comprehensive geospatial data model to guide the collection, management, and analysis of geographically referenced assessment information, for implementation during the rapid response phase of a disaster using a mobile data collection app built around key outcome parameters. A systematic review of literature and best practices was used to identify and prioritize the minimum essential data variables.

Subjects: The data model was critiqued for variable content, structure, and usability by a group of subject matter experts in the fields of humanitarian information management and geographic information systems.

Conclusions: Consensus found that the adoption of a standardized system of data collection, management, and processing, such as the data model presented here, could facilitate the collection and sharing of information between agencies with similar goals and improve the coordination of efforts by unleashing the power of geographic information for humanitarian decision support.
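For illustration only, a minimal sketch of the kind of geo-referenced rapid-assessment record such a data model might standardize; every field name below is a hypothetical example, not a variable defined in the published HDM:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class AssessmentRecord:
    site_id: str              # identifier assigned in the field (hypothetical)
    latitude: float           # WGS84 coordinates captured by the mobile app
    longitude: float
    assessed_at: datetime     # timestamp of the assessment visit
    affected_population: int  # example of a key outcome parameter
    water_access: str         # e.g. "functional", "damaged", "none"
    notes: str = ""           # free-text observations

record = AssessmentRecord(
    site_id="SITE-017",
    latitude=18.5392,
    longitude=-72.3364,
    assessed_at=datetime(2010, 1, 15, 9, 30),
    affected_population=1200,
    water_access="damaged",
)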


Author(s):  
Mikko Heikkinen
Ville-Matti Riihikoski
Anniina Kuusijärvi
Dare Talvitie
Tapani Lahti
...  

Many natural history museums share a common problem: a multitude of legacy collection management systems (CMS) and the difficulty of finding a new system to replace them. Kotka is a CMS created by the Finnish Museum of Natural History (Luomus) to solve this problem. Its development started in late 2011, and the system was put into operational use in 2012. Kotka was first built to replace dozens of in-house systems previously used at Luomus, but eventually grew into a national system that is now used by 10 institutions in Finland. Kotka currently holds c. 1.7 million specimens from zoological, botanical, paleontological, microbial, and botanic garden collections, as well as data from genomic resource collections. Kotka is designed to fit the needs of different types of collections and can be further adapted when new needs arise.

Kotka differs in many ways from traditional CMSs. It applies simple and pragmatic approaches, which has helped it grow into a widely used system despite limited development resources: on average, less than one full-time equivalent (FTE) developer. The aim of Kotka is to improve collection management efficiency by providing practical tools. It emphasizes the quantity of digitized specimens over completeness of the data. It also harmonizes collection management practices by bringing all types of collections under one system.

Kotka stores data mostly in a denormalized free-text format using a triplestore and a simple hierarchical data model (Fig. 1). This allows greater flexibility of use and faster development compared to a normalized relational database. New data fields and structures can easily be added as needs arise. Kotka does some data validation, but quality control is seen as a continuous process and is mostly done after the data has been recorded into the system. The data model is loosely based on the ABCD (Access to Biological Collection Data) standard, but has been adapted to support practical needs.

Kotka is a web application, and data can be entered, edited, searched, and exported through a browser-based user interface. However, most users prefer to enter new data in customizable MS-Excel templates, which support the hierarchical data model, and upload these to Kotka. Batch updates can also be done using Excel. Kotka stores all revisions of the data to avoid any data loss due to technical or human error. Kotka also supports designing and printing specimen labels, annotations by external users, as well as handling accessions, loan transactions, and the Nagoya Protocol. Taxonomy management is done using a separate system provided by the Finnish Biodiversity Information Facility (FinBIF). This decoupling also allows entering specimen data before the taxonomy is updated, which speeds up specimen digitization. Every specimen is given a persistent, unique HTTP-URI identifier (a CETAF stable identifier). Specimen data is accessible through the FinBIF portal at species.fi, and will later be shared with GBIF according to agreements with the data holders.

Kotka is continuously developed and adapted to new requirements in close collaboration with curators and technical collection staff, using agile software development methods. It is available as open source, but is tightly integrated with other FinBIF infrastructure and is currently only offered as an online service (Software as a Service) hosted by FinBIF.
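As a rough illustration of the storage approach described above (a denormalized, hierarchical record flattened into triples keyed by a stable HTTP URI), the following sketch shows one possible flattening; it is an assumption for illustration, not Kotka's code, and the field paths and example URI are hypothetical:

from typing import Any

def flatten(doc: dict, prefix: str = "") -> list[tuple[str, Any]]:
    """Flatten a nested document into (path, value) pairs such as
    'gathering[0].locality' -> 'Helsinki'."""
    pairs = []
    for key, value in doc.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            pairs.extend(flatten(value, path))
        elif isinstance(value, list):
            for i, item in enumerate(value):
                if isinstance(item, dict):
                    pairs.extend(flatten(item, f"{path}[{i}]"))
                else:
                    pairs.append((f"{path}[{i}]", item))
        else:
            pairs.append((path, value))
    return pairs

# A specimen keyed by a persistent HTTP-URI (CETAF-style stable identifier).
specimen_uri = "http://id.example.org/MY.12345"  # example URI, not a real identifier
record = {
    "collection": "Botanical specimens",
    "gathering": [{"locality": "Helsinki", "dateBegin": "2019-06-02"}],
    "identification": [{"taxon": "Betula pendula"}],
}

# Triples: (subject URI, predicate path, object value)
triples = [(specimen_uri, path, value) for path, value in flatten(record)]
for t in triples:
    print(t)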


2008
Author(s):  
Pedro J. M. Passos
Duarte Araujo
Keith Davids
Ana Diniz
Luis Gouveia
...  

2019
Vol 13 (1-2)
pp. 95-115
Author(s):  
Brandon Plewe

Historical place databases can be an invaluable tool for capturing the rich meaning of past places. However, this richness presents obstacles to success: the need to simultaneously represent complex information such as temporal change, uncertainty, relationships, and thorough sourcing has been a daunting obstacle to historical GIS in the past. The Qualified Assertion Model developed in this paper can represent a variety of historical complexities using a single, simple, flexible data model based on a) documenting assertions about the past world rather than claiming to know the exact truth, and b) qualifying the scope, provenance, quality, and syntactics of those assertions. The model was successfully implemented in a production-strength historical gazetteer of religious congregations, demonstrating both its effectiveness and some remaining challenges.
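Reading only the abstract's description, a qualified assertion might be modeled roughly as below; the field names (temporal scope, source, quality score) are one possible interpretation offered for illustration, not the schema published in the paper:

from dataclasses import dataclass
from typing import Optional

@dataclass
class QualifiedAssertion:
    place_id: str              # the historical place the assertion is about
    attribute: str             # e.g. "name", "location", "parent_unit"
    value: str                 # the asserted value, not a claim of exact truth
    valid_from: Optional[str]  # temporal scope; may be approximate or unknown
    valid_to: Optional[str]
    source: str                # provenance: the document making the assertion
    quality: float             # assessed reliability of the assertion, 0..1

# Example: a congregation's name as asserted by a (hypothetical) 1852 directory.
assertion = QualifiedAssertion(
    place_id="congregation/0421",
    attribute="name",
    value="First Parish Meetinghouse",
    valid_from="1852",
    valid_to=None,
    source="City directory, 1852 edition",
    quality=0.7,
)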


MIS Quarterly
2013
Vol 37 (1)
pp. 125-147
Author(s):
Rui Chen
Raj Sharman
H. Raghav Rao
Shambhu J. Upadhyaya
...  
