Kotka - A national multi-purpose collection management system

Author(s):  
Mikko Heikkinen ◽  
Ville-Matti Riihikoski ◽  
Anniina Kuusijärvi ◽  
Dare Talvitie ◽  
Tapani Lahti ◽  
...  

Many natural history museums share a common problem: a multitude of legacy collection management systems (CMS) and the difficulty of finding a new system to replace them. Kotka is a CMS created by the Finnish Museum of Natural History (Luomus) to solve this problem. Development started in late 2011 and the system was taken into operational use in 2012. Kotka was first built to replace dozens of in-house systems previously used at Luomus, but eventually grew into a national system, which is now used by 10 institutions in Finland. Kotka currently holds c. 1.7 million specimens from zoological, botanical, paleontological, microbial and botanic garden collections, as well as data from genomic resource collections. Kotka is designed to fit the needs of different types of collections and can be further adapted when new needs arise. Kotka differs in many ways from traditional CMSs: it applies simple and pragmatic approaches. This has helped it grow into a widely used system despite limited development resources – on average less than one full-time equivalent (FTE) developer. The aim of Kotka is to improve collection management efficiency by providing practical tools. It emphasizes the quantity of digitized specimens over completeness of the data. It also harmonizes collection management practices by bringing all types of collections under one system. Kotka stores data mostly in a denormalized free-text format using a triplestore and a simple hierarchical data model (Fig. 1). This allows greater flexibility of use and faster development compared to a normalized relational database. New data fields and structures can easily be added as needs arise. Kotka does some data validation, but quality control is seen as a continuous process and is mostly done after the data has been recorded into the system. The data model is loosely based on the ABCD (Access to Biological Collection Data) standard, but has been adapted to support practical needs.
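The combination of a triplestore with a simple hierarchical data model can be illustrated with a minimal sketch. This is not Kotka's actual schema – the field names, identifiers and parent-link convention below are illustrative assumptions – but it shows why new fields and child structures can be added without schema migrations:

```python
# A minimal sketch (not Kotka's actual schema) of a triplestore holding
# denormalized, free-text specimen data with a simple parent-child hierarchy.
# Field names and identifiers are illustrative assumptions.

triples = []  # (subject, predicate, value) statements

def add(subject, predicate, value):
    """Record one statement about a document (specimen, sample, ...)."""
    triples.append((subject, predicate, value))

def describe(subject):
    """Collect all statements about one subject into a dict of value lists."""
    doc = {}
    for s, p, v in triples:
        if s == subject:
            doc.setdefault(p, []).append(v)
    return doc

# A specimen document with free-text values...
add("GX.123", "taxon", "Corvus cornix")
add("GX.123", "country", "Finland")
add("GX.123", "collector", "Doe, J.")

# ...and a child document (e.g. a tissue sample) linked via a parent
# predicate, so new structures appear without altering any schema.
add("GX.123#T1", "parent", "GX.123")
add("GX.123#T1", "preparation", "tissue sample")

print(describe("GX.123"))
```

Because every statement is just a triple, adding a new data field for one collection type costs nothing for the others – which is the flexibility the abstract contrasts with a normalized relational database.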
Kotka is a web application and data can be entered, edited, searched and exported through a browser-based user interface. However, most users prefer to enter new data in customizable MS-Excel templates, which support the hierarchical data model, and upload these to Kotka. Batch updates can also be done using Excel. Kotka stores all revisions of the data to avoid any data loss due to technical or human error. Kotka also supports designing and printing specimen labels, annotations by external users, as well as handling accessions, loan transactions, and the Nagoya protocol. Taxonomy management is done using a separate system provided by the Finnish Biodiversity Information Facility (FinBIF). This decoupling also allows entering specimen data before the taxonomy is updated, which speeds up specimen digitization. Every specimen is given a persistent unique HTTP-URI identifier (CETAF stable identifiers). Specimen data is accessible through the FinBIF portal at species.fi, and will later be shared to GBIF according to agreements with data holders. Kotka is continuously developed and adapted to new requirements in close collaboration with curators and technical collection staff, using agile software development methods. It is available as open source, but is tightly integrated with other FinBIF infrastructure, and currently only offered as an online service (Software as a Service) hosted by FinBIF.
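The "store all revisions to avoid data loss" principle amounts to an append-only record store. The sketch below is a hedged illustration of that idea, not Kotka's implementation; the class and method names are assumptions:

```python
# Sketch of append-only revision storage (hedged; not Kotka's actual code):
# edits never overwrite, they append, so any earlier state can be recovered
# after a technical or human error.
import datetime

class RevisionStore:
    def __init__(self):
        self._revisions = {}  # specimen id -> list of (timestamp, document)

    def save(self, spec_id, document):
        """Append a new revision; earlier revisions are kept untouched."""
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self._revisions.setdefault(spec_id, []).append((stamp, dict(document)))

    def latest(self, spec_id):
        """The current state is simply the last revision."""
        return self._revisions[spec_id][-1][1]

    def history(self, spec_id):
        """All revisions, oldest first, for audit or rollback."""
        return [doc for _, doc in self._revisions[spec_id]]

store = RevisionStore()
store.save("GX.123", {"taxon": "Corvus corone cornix"})
store.save("GX.123", {"taxon": "Corvus cornix"})  # a correction; old data kept
```

A batch update from an Excel upload would then be a sequence of `save` calls, each of which leaves a recoverable trail.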

Author(s):  
Mikko Heikkinen ◽  
Ville-Matti Riihikoski ◽  
Anniina Kuusijärvi ◽  
Anne Koivunen ◽  
Kari Lahti ◽  
...  

Kotka is a collection management system (CMS) developed by the Finnish Museum of Natural History (Luomus) and used by all major institutes with natural history collections in Finland. It is one of the primary data sources of the Finnish Biodiversity Information Facility, through which the data managed in Kotka is distributed (species.fi). Kotka is designed to allow flexible development of new tools to support collection management practices. This paper describes some of the tools included in Kotka.

Label designer
Printed labels have traditionally been used to store and display data about specimens. In modern collection informatics their role in connecting specimens to electronic data, and in collection and data curation, is emphasised. Requirements for labels are changing rapidly, and different types of specimens have completely different label designs. To give collection managers an option to design labels that meet their needs, we have created a WYSIWYG (What You See Is What You Get) label designer (Fig. 1). It allows users to pick the desired data fields and arrange them on the label, adjust styles and sizes, and generate HTTP-URI identifiers with QR Codes (Quick Response Codes). The label designer can be used with a modern web browser or as a standalone desktop application, on both Windows and Mac. It is open source and can be integrated with other applications by using it as an Angular module or as a custom element on any site. The label designer serves both scientific natural history collection managers and amateurs managing private collections. At Luomus the tool is used both in the Kotka CMS and in the Notebook system for species occurrence records.

Annotations
As collections are increasingly used and managed digitally, the traditional means of quality control and data improvement are no longer sufficient. We have developed tools which allow, e.g., external researchers and citizen experts to review specimen data online at species.fi, annotate the data by evaluating its quality, propose and/or change species identifications, and include comments. The annotations are delivered to collection managers, who can then use them to update and improve the specimen data at the primary data source.

QR Code reader
Modern specimen labels often include specimen identifiers such as QR Codes or barcodes. To improve collection management efficiency, we have made a QR Code reader application which allows collection managers to read these codes and quickly display specimen data on a handheld device (e.g. a mobile phone) or a desktop computer with a web camera. The application is open source and can be configured to connect to any system utilizing HTTP-URI identifiers.
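The step that makes the QR Code reader configurable for "any system utilizing HTTP-URI identifiers" is the routing of a decoded payload to a data service. The sketch below shows only that lookup step (the optical decoding itself would come from any scanner library); the domain names and the endpoint URL pattern are illustrative assumptions, not the application's actual configuration:

```python
# Hedged sketch of the lookup step behind a QR-code reader for HTTP-URI
# identifiers: validate the decoded payload as an identifier under a known
# domain, then map it to the configured data service. Domains and URL
# patterns here are illustrative assumptions.
from urllib.parse import urlparse

ALLOWED_ID_DOMAINS = {"id.example.org"}  # configurable per institution

def parse_identifier(qr_payload: str) -> str:
    """Accept the payload only if it is an HTTP-URI under a known domain."""
    payload = qr_payload.strip()
    parts = urlparse(payload)
    if parts.scheme not in ("http", "https"):
        raise ValueError("QR payload is not an HTTP-URI")
    if parts.netloc not in ALLOWED_ID_DOMAINS:
        raise ValueError(f"unknown identifier domain: {parts.netloc}")
    return payload

def data_url(identifier: str) -> str:
    """Map a stable identifier to a (hypothetical) specimen-data endpoint."""
    return "https://api.example.org/specimens?uri=" + identifier

uri = parse_identifier("http://id.example.org/GX.123\n")
print(data_url(uri))
```

Keeping the allowed-domain set and endpoint template as configuration is what lets the same reader front different back-end systems.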


Author(s):  
Mikko Heikkinen ◽  
Anniina Kuusijärvi ◽  
Ville-Matti Riihikoski ◽  
Leif Schulman

Many natural history museums share a common problem: a multitude of legacy collection management systems (CMS) and the difficulty of finding a new system to replace them. Kotka is a CMS developed from 2011 onwards at the Finnish Museum of Natural History (Luomus) and the Finnish Biodiversity Information Facility (FinBIF) (Heikkinen et al. 2019, Schulman et al. 2019) to solve this problem. It has grown into a national system used by all natural history museums in Finland, and currently contains over two million specimens from several domains (zoological, botanical, paleontological, microbial, tissue sample and botanic garden collections). Kotka is a web application where data can be entered, edited, searched and exported through a browser-based user interface. It supports designing and printing specimen labels, handling collection metadata and specimen transactions, and helps support Nagoya Protocol compliance. Creating a shared system for multiple institutions and collection types is difficult due to differences in their current processes, data formats, future needs and opinions. The more independent actors are involved, the more complicated the development becomes, so successful development requires trade-offs. Kotka's features and development principles emphasize fast development for a multitude of different purposes. Kotka was developed using agile methods, with a single person (a product owner) making development decisions based on, e.g., strategic objectives, customer value and user feedback. Technical design emphasizes efficient development and usage over completeness and formal structure of the data. It applies simple and pragmatic approaches and improves collection management by providing practical tools for the users. In these respects, Kotka differs in many ways from a traditional CMS. Kotka stores data in a mostly denormalized free-text format and uses a simple hierarchical data model.
This allows greater flexibility and makes it easy to add new data fields and structures based on user feedback. Data harmonization and quality assurance is a continuous process, instead of being done before data enters the system. For example, specimen data with a taxon name can be entered into Kotka before the taxon name has been entered into the accompanying FinBIF taxonomy database. Example: simplified data about two specimens in Kotka which have not yet been fully harmonized:

Specimen 1 – Taxon: Corvus corone cornix; Country: FI; Collector: Doe, John; Coordinates: 668, 338; Coordinate system: Finnish uniform coordinate system
Specimen 2 – Taxon: Corvus cornix; Country: Finland; Collector: Doe, J.; Coordinates: 60.2442, 25.7201; Coordinate system: WGS84

Kotka's data model does not follow standards, but has grown organically to reflect practical needs from the users. This is true particularly of data collected in research projects, which are often unique and complicated (e.g. complex relationships between species), requiring new data fields and/or storing data as free text. The majority of the data can be converted into simplified standard formats (e.g. Darwin Core) for sharing. The main challenge with this has been the vague definitions of many data sharing formats (e.g. Darwin Core, the CETAF Specimen Preview Profile (CETAF 2020)), which allow different interpretations. Kotka trusts its users: it places very few limitations on what users can do, and has very simple user role management. Kotka stores the full history of all data, which allows fixing any possible errors and prevents data loss. Kotka is open source software, but is tightly coupled with the infrastructure of the Finnish Biodiversity Information Facility (FinBIF).
Currently, it is only offered as an online service (Software as a Service) hosted by FinBIF. However, it could be developed into a more modular system that could, for example, utilize multiple different database backends and taxonomy data sources.
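The conversion of internal free-text records into simplified Darwin Core for sharing can be sketched as a field mapping. The internal field names below are assumptions; the Darwin Core terms themselves (scientificName, recordedBy, decimalLatitude, decimalLongitude, geodeticDatum) are standard vocabulary:

```python
# Hedged sketch of exporting an internal, free-text record to simplified
# Darwin Core. Internal field names are illustrative assumptions; the
# Darwin Core term names are the standard ones.

FIELD_MAP = {
    "taxon": "scientificName",
    "country": "country",
    "collector": "recordedBy",
}

def to_darwin_core(record: dict) -> dict:
    """Rename mapped fields; pass coordinates through only when WGS84."""
    dwc = {}
    for field, term in FIELD_MAP.items():
        if field in record:
            dwc[term] = record[field]
    # Only already-harmonized WGS84 coordinates are shared as-is; records in
    # other systems (e.g. the Finnish uniform grid) would need conversion
    # first, reflecting harmonization as a continuous, after-the-fact process.
    if record.get("coordinate_system") == "WGS84":
        lat, lon = record["coordinates"]
        dwc["decimalLatitude"] = lat
        dwc["decimalLongitude"] = lon
        dwc["geodeticDatum"] = "WGS84"
    return dwc

record = {
    "taxon": "Corvus cornix",
    "country": "Finland",
    "collector": "Doe, J.",
    "coordinates": (60.2442, 25.7201),
    "coordinate_system": "WGS84",
}
print(to_darwin_core(record))
```

Records that are not yet harmonized simply export fewer fields, which matches the principle of sharing the majority of the data in simplified form rather than blocking on completeness.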


2019 ◽  
Vol 42 (1) ◽  
pp. 1-16
Author(s):  
Max Caspers ◽  
Luc Willemse ◽  
Eulàlia Gassó Miracle ◽  
Erik J. van Nieukerken

Among both amateurs and professionals studying and collecting insects, Lepidoptera represent one of the most popular groups. It is this popularity, in combination with wings being routinely spread during mounting, which results in Lepidoptera often taking up the largest number of drawers and the most space in entomological collections. As resources grow increasingly scarce in natural history museums, any process that results in more efficient use of resources is a welcome addition to collection management practices. Therefore, we propose an alternative method to process papered Lepidoptera: a workflow to digitize (imaging and data registration) papered specimens and to store them (semi)permanently, still unmounted, in glassine envelopes. The mounting of specimens is limited to those for which it is considered essential. The entire workflow of digitization and repacking can be carried out by non-expert volunteers. By releasing data and images on the internet, taxonomic experts worldwide can assist with identifications. This method was tested on Papilionidae. Results suggest that the workflow and permanent storage in glassine envelopes described here can be applied to most groups of Lepidoptera.


Author(s):  
Falko Glöckler ◽  
James Macklin ◽  
David Shorthouse ◽  
Christian Bölling ◽  
Satpal Bilkhu ◽  
...  

The DINA Consortium (DINA = “DIgital information system for NAtural history data”, https://dina-project.net) is a framework for like-minded practitioners of natural history collections to collaborate on the development of distributed, open source software that empowers and sustains collections management. Target collections include zoology, botany, mycology, geology, paleontology, and living collections. The DINA software will also permit the compilation of biodiversity inventories and will robustly support both observation and molecular data. The DINA Consortium focuses on an open source software philosophy and on community-driven open development. Contributors share their development resources and expertise for the benefit of all participants. The DINA System is explicitly designed as a loosely coupled set of web-enabled modules. At its core, this modular ecosystem includes strict guidelines for the structure of Web application programming interfaces (APIs), which guarantees the interoperability of all components (https://github.com/DINA-Web). Important to the DINA philosophy is that users (e.g., collection managers, curators) be actively engaged in an agile development process. This ensures that the product is pleasing for everyday use, includes efficient yet flexible workflows, and implements best practices in specimen data capture and management. There are three options for developing a DINA module: create a new module compliant with the specifications (Fig. 1), modify an existing code-base to attain compliance (Fig. 2), or wrap a compliant API around existing code that cannot be or may not be modified (e.g., infeasible, dependencies on other systems, closed code) (Fig. 3). 
All three of these scenarios have been applied in the modules recently developed: a module for molecular data (SeqDB), modules for multimedia, documents and agents data, and a service module for printing labels and reports. The SeqDB collection management and molecular tracking system (Bilkhu et al. 2017) has evolved through two of these scenarios. Originally, the required architectural changes were going to be added to the codebase, but after some time the development team recognised that the technical debt inherent in the project was not worth the effort of modification and refactoring. Instead, a new codebase was created, bringing forward the best parts of the system, oriented around the molecular data model for Sanger sequencing and Next Generation Sequencing (NGS) workflows. In the case of the Multimedia and Document Store module and the Agents module, a brand new codebase was established whose technology choices were aligned with the DINA vision. These two modules have been created from fundamental use cases for collection management and digitization workflows and will continue to evolve as more modules come online and broaden their scope. The DINA Labels & Reporting module is a generic service for transforming data into arbitrary printable layouts based on customizable templates. In order to use the module with data managed in the collection management software Specify (http://specifysoftware.org) for printing labels of collection objects, we wrapped the Specify 7 API with a DINA-compliant API layer called the “DINA Specify Broker”. This allows using the easy-to-use, web-based template engine within the DINA Labels & Reports module without changing Specify’s codebase. In our presentation we will explain the DINA development philosophy and outline benefits for different stakeholders who directly or indirectly use collections data and related research data in their daily workflows.
We will also highlight opportunities for joining the DINA Consortium and how to best engage with members of DINA who share their expertise in natural science, biodiversity informatics and geoinformatics.
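The third integration option – wrapping a compliant API around code that cannot be modified, as the DINA Specify Broker does – reduces to a translation layer between response shapes. The sketch below illustrates that pattern only; both the legacy and the "compliant" shapes shown are illustrative assumptions, not the actual DINA or Specify 7 formats:

```python
# Hedged sketch of the "wrap a compliant API around existing code" option:
# a broker translates a legacy system's response into the consortium's
# common response shape so other modules can interoperate. Both shapes are
# illustrative assumptions, not the real DINA or Specify formats.

def legacy_fetch_label_data(object_id: str) -> dict:
    """Stand-in for a call into the unmodifiable legacy system."""
    return {"id": object_id, "full_name": "Corvus cornix", "loc": "Finland"}

def broker_get(object_id: str) -> dict:
    """Expose the legacy record in the common, module-agnostic shape."""
    legacy = legacy_fetch_label_data(object_id)
    return {
        "data": {
            "type": "collection-object",
            "id": legacy["id"],
            "attributes": {
                "scientificName": legacy["full_name"],
                "country": legacy["loc"],
            },
        }
    }

print(broker_get("42"))
```

Because the broker owns only the translation, the legacy codebase stays untouched – the property that made this option attractive for Specify in the abstract above.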


2015 ◽  
Vol 42 (2) ◽  
pp. 245-252 ◽  
Author(s):  
P. G. Moore

The coverage of natural history in British newspapers has evolved from a “Nature notes” format – usually a regular column submitted by a local amateur naturalist – to professional, larger-format, presentations by dedicated environmental correspondents. Not all such environmental correspondents, however, have natural-history expertise or even a scientific background. Yorkshire's Michael Clegg was a man who had a life-long love of nature wedded to a desire to communicate that passion. He moved from a secure position in the museum world (with a journalistic sideline) to become a freelance newspaper journalist and (subsequently) commentator on radio and television dealing with, and campaigning on, environmental issues full-time. As such, he exemplified the transition in how natural history coverage in the media evolved in the final decades of the twentieth century reflecting modern concerns about biodiversity, conservation, pollution and sustainable development.


2021 ◽  
Author(s):  
Subhendu Sengupta ◽  
Vincent Goveas

Abstract This paper is based on the successful implementation of procedural automation of the Ethane (C2) recovery–rejection mode change using Yokogawa's Exapilot software. ADNOC Gas Processing Habshan 5 & Sulphur management approved the implementation based on the similar success of the Sulphur Recovery Unit start-up/shutdown procedural automation and the company's drive for digitalisation. The scope was to develop modules for automating the C2 recovery/rejection change-over procedure in the NGL unit using M/s Yokogawa's Exapilot software. These automated procedures aimed to standardize the mode change-over operations by incorporating the operating know-how and expertise of skilled, experienced operators into the Exapilot system as a set of Standard Operating Procedures (SOPs) that are executed in the right operating sequence for enhanced operating efficiency. Two main procedures and associated modules were designed, engineered and built using Exapilot to enable single-click change-over automation for the NGL units:

Ethane Recovery to Rejection Mode Change
Ethane Rejection to Recovery Mode Change

These were validated with Operations, deployed on the Exapilot server, integrated with the Operator Consoles (HIS) for access, and supplemented with operator training. Besides standardization and reduced change-over time, this improved critical asset integrity and the lifespan of NGL section equipment by enforcing systematic operations. The major takeaways from this project include:

➢ Standardized the mode change-over procedures and minimized human error by digitalizing paper procedures into an electronic workflow. Procedural automation such as Exapilot is a powerful tool for digital transformation of batch/discrete operations like unit/equipment start-up/shutdown or grade/mode change-over.
➢ Reduced the inherent delay of manual change-over, minimizing lost opportunities and operating cost. The tool can also be used as a training tool (in offline mode), which helps operator succession planning and effective knowledge transfer.
➢ Automated critical operations such as temperature/flow ramping, improving equipment integrity and prolonging equipment life. Procedural automation using Exapilot can thus improve operational efficiency, asset integrity and equipment or material lifespan.

This paper presents a success story of procedural automation of batch operations, in continuation of the similar success in SRU start-up and shutdown automation. This tool, along with proper integration with the DCS, has opened the door for automation/digitalization of batch operations within continuous processes, not only at other sites of ADNOC Gas Processing and other ADNOC Group companies but also in other industries, helping companies enhance efficiency and fulfil their digitalization journey. Though Exapilot belongs to M/s Yokogawa, other DCS systems have similar software; for example, Honeywell's EPKS DCS has E-procedure for procedural automation.


2020 ◽  
Vol 18 (1) ◽  
Author(s):  
Jessica Lawler ◽  
Katrina Maclaine ◽  
Alison Leary

Abstract Background This study aims to understand how the implementation of the advanced clinical practice framework in England (2017) was experienced by the workforce, in order to check assumptions for a national workforce modelling project. The advanced clinical practice framework was introduced in England in 2017 by Health Education England to clarify the role of advanced practice in the National Health Service. Methods As part of a large-scale workforce modelling project, a self-completed questionnaire was distributed via the Association of Advanced Practice Educators UK, aimed at those studying to be an Advanced Clinical Practitioner or already practising at this level. Semi-structured phone interviews were carried out with the same group. Questionnaire responses were summarised using descriptive statistics in Excel for categorical responses, and interviews and survey free text were analysed using thematic analysis in NVivo 10. Results The questionnaire received over 500 responses (ten times the number expected) and 15 interviews were carried out. Many respondents considered advanced clinical practice the only viable clinical career progression. Respondents felt that employers were not clear about what practising at this level involved or its future direction. 54% (287) thought that ‘ACP’ was the right job title for them, while 19% (98) of respondents wanted their original registered profession to be included in their title. Balancing advanced clinical practice education with a full-time role was challenging; participants underestimated the workload and employers' expectations during training. There is an apparent dichotomy that has developed from the implementation of the 2017 framework: that of advanced clinical practice as an advanced level of practice within a profession, and that of Advanced Clinical Practitioner as a new generic role in the medical model.
Conclusions Efforts to establish further clarity and structure around advanced clinical practice are needed for both the individuals practising at this level and their employers. A robust evaluation of the introduction of this role should take place.


2020 ◽  
Vol 16 (02) ◽  
pp. 37-40
Author(s):  
Indira A. ◽  
V. Bala Chandra Maree

The modern woman toils hard to prove her worth on two fronts: her household and her place of employment. Taking up a career creates the need for homemakers to fulfil dual roles – homemaking and wage earning. Homemaking itself is a full-time job, on top of which a career demands another eight to ten hours of the homemaker's time daily. Good time management provides the ability to keep a balance in our lives, or to recognize where the imbalance is. For instance, is all our focus on work, rather than on leisure and social activities, good? What about our family and those near and dear to us – are they allowed to play an important role in our lives, or are they constantly brushed to one side? The overall objective of the study is to analyze the socio-economic conditions of married women teachers in higher education in Dindigul and to examine their time management practices and skills. The adjustments made by the respondents to solve their problems mainly include help from family members and friends, postponement of less important activities, and use of leave. The study observed that, for the majority of the respondents, achieving goals related to the use of time is mainly due to proper use of available time and efficiency in completing responsibilities in limited time. This again reflects how the women teachers are successful in meeting their responsibilities.

