Methods to Capture User Information Needs: Design Principles for Open Data Intermediaries and Data Providers

2021 ◽  
Vol 20 (1) ◽  
pp. 37
Author(s):  
Elisabeth Gebka ◽  
Jonathan Crusoe ◽  
Karin Ahlin


2021 ◽  
pp. 89-104
Author(s):  
Dennis Meredith

The first step to a quality website is to plan design, layout, and content with an understanding of audiences’ information needs. User-friendly design principles are also important to a website’s effectiveness. Web writing needs to be more concise than other types of writing, taking into account the reading practices of visitors to websites. Including a broad range of content makes the site a go-to resource and increases visibility. Usability testing once a site is developed also offers important insights that can guide improvements. Keeping a site fresh with new content and marketing the site also enhance its value.


2020 ◽  
Vol 45 (1) ◽  
pp. 19-23
Author(s):  
Emmanuelle Delmas-Glass ◽  
Robert Sanderson

The PHAROS consortium is adopting the Linked Art data model to make its descriptions of photo archive collections available as Linked Open Data, further supporting scholars in their research. Linked Art is both a community and a data model. As an international community, it works together to create a shared data model to describe art. As a data model, it is a profile of the CIDOC Conceptual Reference Model that applies Linked Open Data techniques. The goal of Linked Art is to enable museums and developers to engage in LOD initiatives more easily by providing them with shared data modelling decisions and consistent design principles.
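To make the model concrete, here is a minimal sketch, in Python, of a Linked Art-style JSON-LD record such as a photo archive might publish. The @context URL and class/property names follow the public Linked Art examples; the object URI, label, and vocabulary term are hypothetical values chosen for illustration.

```python
import json

# Minimal sketch of a Linked Art-style JSON-LD record, assembled as a
# plain Python dict. The context URL and class/property names follow
# public Linked Art examples; the described object is hypothetical.
photo_record = {
    "@context": "https://linked.art/ns/v1/linked-art.json",
    "id": "https://example.org/photo-archive/object/1",   # hypothetical URI
    "type": "HumanMadeObject",
    "_label": "Photograph of a fresco (example)",
    "identified_by": [
        {"type": "Name", "content": "Photograph of a fresco (example)"}
    ],
    "classified_as": [
        {
            "id": "http://vocab.getty.edu/aat/300046300",  # AAT term for photographs
            "type": "Type",
            "_label": "photographs",
        }
    ],
}

print(json.dumps(photo_record, indent=2))
```

Shared decisions of exactly this kind (which context to use, which classes and vocabulary terms to apply) are what let independently built museum systems publish mutually intelligible LOD.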


2020 ◽  
Vol 27 (5) ◽  
pp. 690-699 ◽  
Author(s):  
Deborah J Cohen ◽  
Tamar Wyte-Lake ◽  
David A Dorr ◽  
Rachel Gold ◽  
Richard J Holden ◽  
...  

Objectives: To identify the unmet information needs of clinical teams delivering care to patients with complex medical, social, and economic needs; and to propose principles for redesigning electronic health records (EHR) to address these needs.
Materials and Methods: In this observational study, we interviewed and observed care teams in 9 community health centers in Oregon and Washington to understand their use of the EHR when caring for patients with complex medical and socioeconomic needs. Data were analyzed using a comparative approach to identify EHR users’ information needs, which were then used to produce EHR design principles.
Results: Analyses of more than 300 hours of observations and 51 interviews identified 4 major categories of information needs related to: consistency of social determinants of health (SDH) documentation; SDH information prioritization and changes to this prioritization; initiation and follow-up of community resource referrals; and timely communication of SDH information. Within these categories were 10 unmet information needs to be addressed by EHR designers. We propose the following EHR design principles to address these needs: enhance the flexibility of EHR documentation workflows; expand the ability to exchange information within teams and between systems; balance innovation and standardization of health information technology systems; organize and simplify information displays; and prioritize and reduce information.
Conclusion: Developing EHR tools that are simple, accessible, easy to use, and able to be updated by a range of professionals is critical. The identified information needs and design principles should inform developers and implementers working in community health centers and other settings where complex patients receive care.
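As one illustration of the referral-tracking need identified above, the sketch below shows the kind of record a supporting tool might keep for initiating and following up community resource referrals. All field names and status values are assumptions for illustration, not the study's proposed design.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative sketch only: field names and status values are assumptions,
# not the EHR design proposed in the study.
@dataclass
class ResourceReferral:
    patient_id: str
    sdh_domain: str              # e.g. "housing", "food insecurity"
    resource_name: str           # community organization referred to
    initiated_on: date
    initiated_by: str            # any team member, per the flexibility principle
    status: str = "initiated"    # "initiated" | "contacted" | "completed"
    follow_up_due: Optional[date] = None
    notes: list[str] = field(default_factory=list)

    def record_follow_up(self, note: str, new_status: str) -> None:
        """Append a follow-up note and update the status, keeping the
        timeline of contacts visible to the whole care team."""
        self.notes.append(note)
        self.status = new_status
```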


2014 ◽  
Vol 8 (2) ◽  
pp. 185-204 ◽  
Author(s):  
Anneke Zuiderwijk ◽  
Marijn Janssen ◽  
Sunil Choenni ◽  
Ronald Meijer

Purpose – The purpose of this paper is to derive design principles for improving the open data publishing process of public organizations. Although governments create large amounts of data, the publication of open data is often cumbersome and there are no standard procedures and processes for opening data, blocking the easy publication of government data.
Design/methodology/approach – Action design research (ADR) was used to derive design principles. The literature was used as a foundation, and discussion sessions with civil servants were used to evaluate the usefulness of the principles.
Findings – Barriers preventing easy and low-cost publication of open data were identified and connected to design principles, which can be used to guide the design of an open data publishing process. Five new principles are: start thinking about the opening of data at the beginning of the process; develop guidelines, especially about privacy and policy sensitivity of data; provide decision support by integrating insight into the activities of other actors involved in the publishing process; make data publication an integral, well-defined and standardized part of daily procedures and routines; and monitor how the published data are reused.
Research limitations/implications – The principles are derived using ADR in a single case. A next step can be to investigate multiple comparative case studies and detail the principles further. We recommend using these principles to develop a reference architecture.
Practical implications – The design principles can be used by public organizations to improve their open data publishing processes. The design principles are derived from practice and discussed with practitioners. The discussions showed that the principles could improve the publication process.
Social implications – Decreasing the barriers for publishing open government data could result in the publication of more open data. These open data can then be used to stimulate various public values, such as transparency, accountability, innovation, economic growth and informed decision- and policymaking.
Originality/value – Publishing data by public organizations is a complex and ill-understood activity. The lack of suitable business processes and the unclear division of responsibilities block publication of open data. This paper contributes to the literature by presenting design principles which can be used to improve the open data publishing process of public sector organizations.
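A rough sketch of how the principles might surface in practice as routine pre-publication checks (principles 2 and 4) plus a reuse monitor (principle 5). The function names and checked fields are illustrative assumptions, not the paper's reference architecture.

```python
# Illustrative sketch: pre-publication checks reflecting the privacy/policy
# guideline principle and the "standardized part of daily routines" principle.
# Field names are assumptions, not the paper's reference architecture.

def ready_to_publish(dataset: dict) -> list[str]:
    """Return a list of blocking issues; an empty list means the dataset
    can be published as part of the routine process."""
    issues = []
    # Principle 2: guidelines on privacy and policy sensitivity of data.
    if dataset.get("contains_personal_data") and not dataset.get("anonymized"):
        issues.append("personal data present but not anonymized")
    if dataset.get("policy_sensitive") and not dataset.get("sensitivity_review_done"):
        issues.append("policy-sensitivity review missing")
    # Principle 4: publication as a standardized part of daily procedures.
    for required in ("license", "metadata", "contact_point"):
        if not dataset.get(required):
            issues.append(f"missing required field: {required}")
    return issues

# Principle 5: monitor how the published data are reused (stub counter).
reuse_log: dict[str, int] = {}

def record_reuse(dataset_id: str) -> None:
    reuse_log[dataset_id] = reuse_log.get(dataset_id, 0) + 1
```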


2020 ◽  
Author(s):  
Karalyn Rose Ostler ◽  
Bree Norlander ◽  
Nic Weber

This article describes the curation and use of open demographic data to inform public library services, presenting a case study of census data curated for the Seattle Public Library (SPL) system. To understand the information needs of library branches, SPL regional managers were interviewed, use cases were created, and a prototype dashboard tool using open census data was developed to address the needs of two SPL regions. The utility of available open data for meeting the needs of regional managers is reviewed, as well as the potential development of replicable data analysis tools for keeping public libraries aware of shifting neighborhood demographics.
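As a sketch of the kind of open data such a dashboard can draw on, the snippet below queries the public US Census Bureau API for tract-level total population (variable B01003_001E, ACS 5-year) in King County, Washington, which contains Seattle. The choice of year and geography, and any mapping of tracts to SPL regions, are assumptions for illustration.

```python
import requests

# Sketch of a query a branch-level demographics dashboard could build on.
# The Census API endpoint and variable B01003_001E (total population,
# ACS 5-year) are real; year, geography, and tract-to-region mapping are
# illustrative assumptions.
URL = "https://api.census.gov/data/2019/acs/acs5"
params = {
    "get": "NAME,B01003_001E",
    "for": "tract:*",
    "in": "state:53 county:033",   # King County, WA (FIPS 53033)
}

rows = requests.get(URL, params=params, timeout=30).json()
header, data = rows[0], rows[1:]

# Total population across all tracts in the county.
pop_col = header.index("B01003_001E")
total = sum(int(r[pop_col] or 0) for r in data)
print(f"{len(data)} tracts, total population {total:,}")
```

A replicable tool would rerun a query like this on each ACS release and compare tract-level values over time to surface shifting neighborhood demographics.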


2014 ◽  
Vol 08 (04) ◽  
pp. 389-413
Author(s):  
Moritz von Hoffen ◽  
Abdulbaki Uzun

The amount of data within the Linking Open Data (LOD) Cloud is steadily increasing and represents a rich source of information. Since Context-aware Services (CAS) are based on correlating heterogeneous data sources to derive the contextual situation of a target, it makes sense to leverage the enormous amount of data already present in the LOD Cloud to enhance the quality of these services. In this work, the applicability of the LOD Cloud as a context provider for enriching CAS is investigated. For this purpose, a detailed analysis of the discoverability and availability of datasets is performed. Furthermore, to ease the process of finding a dataset that matches the information needs of a CAS developer, techniques for retrieving the contents of LOD datasets are discussed and different approaches for condensing a dataset to its most important concepts are shown. Finally, a Context Data Lookup Service is introduced that enables context data discovery within the LOD Cloud, and its applicability is highlighted with an example.
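One simple way to condense a dataset to its most important concepts is to ask its SPARQL endpoint for the most frequently instantiated classes. The sketch below does this with the SPARQLWrapper library against DBpedia's public endpoint; treating class frequency as a proxy for relevance to a CAS developer's information needs is our own simplification, not the paper's method.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Sketch: summarize a LOD dataset by its most frequently instantiated
# classes, as a rough proxy for its "most important concepts".
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    SELECT ?class (COUNT(?s) AS ?instances)
    WHERE { ?s a ?class }
    GROUP BY ?class
    ORDER BY DESC(?instances)
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["class"]["value"], binding["instances"]["value"])
```

The same pattern extends to availability checking: an endpoint that times out or returns an error on this cheap aggregate query is a poor candidate context provider.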


2018 ◽  
Vol 26 (2) ◽  
pp. 95-105 ◽  
Author(s):  
Jeffery L Belden ◽  
Pete Wegier ◽  
Jennifer Patel ◽  
Andrew Hutson ◽  
Catherine Plaisant ◽  
...  

Objective: Most electronic health records display historical medication information only in a data table or clinician notes. We designed a medication timeline visualization intended to improve ease of use, speed, and accuracy in the ambulatory care of chronic disease.
Materials and Methods: We identified information needs for understanding a patient medication history, then applied human factors and interaction design principles to support that process. After research and analysis of existing medication lists and timelines to guide initial requirements, we hosted design workshops with multidisciplinary stakeholders to expand on our initial concepts. Subsequent core team meetings used an iterative user-centered design approach to refine our prototype. Finally, a small pilot evaluation of the design was conducted with practicing physicians.
Results: We propose an open-source online prototype that incorporates user feedback from the initial design workshops and from a broad multidisciplinary audience. We describe the applicable design principles associated with each of the prototype’s key features. A pilot evaluation of the design showed improved physician performance in 5 common medication-related tasks, compared to tabular presentation of the same information.
Discussion: There is industry interest in developing medication timelines based on the example prototype concepts. An open, standards-based technology platform could enable developers to create a medication timeline deployable across any compatible health IT application.
Conclusion: The design goal was to improve physician understanding of a patient’s complex medication history, using a medication timeline visualization. Such a design could reduce temporal and cognitive load on physicians for improved and safer care.
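To illustrate the basic idea of the visualization, the sketch below draws a minimal medication timeline with matplotlib: one horizontal bar per medication, spanning its start and stop dates. The medications and dates are invented, and the published prototype is considerably richer (doses, gaps, and clinical events).

```python
import matplotlib.pyplot as plt
from datetime import date

# Minimal medication-timeline sketch: one horizontal bar per medication.
# Drugs and dates are invented for illustration only.
meds = [
    ("Lisinopril 10 mg",   date(2016, 1, 15), date(2017, 6, 1)),
    ("Metformin 500 mg",   date(2016, 3, 1),  date(2017, 9, 30)),
    ("Atorvastatin 20 mg", date(2016, 8, 10), date(2017, 9, 30)),
]

fig, ax = plt.subplots(figsize=(8, 2.5))
for row, (name, start, stop) in enumerate(meds):
    # Width is in days, which matches matplotlib's date units.
    ax.barh(row, (stop - start).days, left=start, height=0.5)
ax.set_yticks(range(len(meds)))
ax.set_yticklabels([m[0] for m in meds])
ax.set_xlabel("Date")
ax.set_title("Medication timeline (illustrative data)")
plt.tight_layout()
plt.show()
```

Even this bare version shows why the timeline outperforms a table for temporal questions: overlaps, gaps, and durations are visible at a glance instead of being reconstructed from start/stop columns.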


AI Magazine ◽  
2015 ◽  
Vol 36 (1) ◽  
pp. 55-64 ◽  
Author(s):  
Anna Lisa Gentile ◽  
Ziqi Zhang ◽  
Fabio Ciravegna

Information extraction (IE) is the technique of transforming unstructured textual data into a structured representation that machines can understand. The exponential growth of the Web generates an exceptional quantity of data for which automatic knowledge capture is essential. This work describes the methodology for web-scale information extraction in the LODIE project (linked open data information extraction) and highlights results from the early experiments carried out in the initial phase of the project. LODIE aims to develop information extraction techniques able to scale to the web level and adapt to user information needs. The core idea behind LODIE is the use of linked open data, a very large-scale information resource, as a ground-breaking solution for IE, one which provides invaluable annotated data on a growing number of domains. This article has two objectives: first, to describe the LODIE project as a whole and depict its general challenges and directions; second, to describe some initial steps taken towards the general solution, focusing on a specific IE subtask, wrapper induction.
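A greatly simplified sketch of how linked open data can supervise wrapper induction: labels of known entities act as free annotations, and the induced wrapper is the generalized XPath shared by the page nodes that contain them. The page and seed labels below are invented, and LODIE's actual approach is more robust than this.

```python
import re
from collections import Counter
from lxml import html

# Simplified LOD-supervised wrapper induction. Seed labels stand in for
# entity labels pulled from linked open data (e.g., film titles).
page = html.fromstring("""
<html><body>
  <ul>
    <li class="film">Blade Runner</li>
    <li class="film">Alien</li>
    <li class="film">Prometheus</li>
  </ul>
</body></html>
""")

seed_labels = {"Blade Runner", "Alien"}   # "free" annotations from LOD

# Collect the XPath of every node whose text matches a seed label.
paths = Counter()
for node in page.iter():
    if (node.text or "").strip() in seed_labels:
        paths[page.getroottree().getpath(node)] += 1

# Generalize by dropping positional indices: .../li[2] -> .../li
generalized = Counter(re.sub(r"\[\d+\]", "", p) for p in paths)
wrapper_xpath, _ = generalized.most_common(1)[0]

# The induced wrapper also extracts instances the seeds never mentioned.
print([n.text for n in page.xpath(wrapper_xpath)])
```

Note the payoff in the last line: the wrapper induced from two seed titles extracts all three, including "Prometheus", which was never annotated.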


2013 ◽  
Vol 07 (04) ◽  
pp. 455-477 ◽  
Author(s):  
EDGARD MARX ◽  
TOMMASO SORU ◽  
SAEEDEH SHEKARPOUR ◽  
SÖREN AUER ◽  
AXEL-CYRILLE NGONGA NGOMO ◽  
...  

Over the last few years, a considerable amount of structured data has been published on the Web as Linked Open Data (LOD). Despite recent advances, consuming and using Linked Open Data within an organization is still a substantial challenge. Many LOD datasets are quite large, and despite progress in Resource Description Framework (RDF) data management, loading and querying them within a triple store is extremely time-consuming and resource-demanding. To overcome this consumption obstacle, we propose a process inspired by the classical Extract-Transform-Load (ETL) paradigm. In this article, we focus particularly on the selection and extraction steps of this process. We devise a fragment of the SPARQL Protocol and RDF Query Language (SPARQL), dubbed SliceSPARQL, which enables the selection of well-defined slices of datasets fulfilling typical information needs. SliceSPARQL supports graph patterns for which each connected subgraph pattern involves at most one variable or Internationalized Resource Identifier (IRI) in its join conditions. This restriction guarantees efficient processing of the query against a sequential dataset dump stream. Furthermore, we evaluate our slicing approach using three different optimization strategies. Results show that dataset slices can be generated an order of magnitude faster than with the conventional approach of loading the whole dataset into a triple store.
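The following sketch illustrates the streaming idea behind slicing (it is not SliceSPARQL itself): with a single fixed join condition, a slice can be extracted from an N-Triples dump in two sequential passes, without loading a triple store. The type URI used as the slice criterion is an arbitrary example.

```python
import re

# Streaming illustration of dataset slicing: keep every triple about
# subjects of a given type, in two sequential passes over an N-Triples
# dump. The regex is simplified and may misparse exotic literals.
TRIPLE = re.compile(r"^(\S+)\s+(\S+)\s+(.+?)\s*\.\s*$")
RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
TARGET = "<http://dbpedia.org/ontology/City>"   # example slice criterion

def slice_dump(path: str, out_path: str) -> None:
    # Pass 1: collect subjects matching the single fixed join condition.
    subjects = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = TRIPLE.match(line)
            if m and m.group(2) == RDF_TYPE and m.group(3) == TARGET:
                subjects.add(m.group(1))
    # Pass 2: stream the dump again, emitting every triple about them.
    with open(path, encoding="utf-8") as f, \
         open(out_path, "w", encoding="utf-8") as out:
        for line in f:
            m = TRIPLE.match(line)
            if m and m.group(1) in subjects:
                out.write(line)
```

Because each pass is a plain sequential scan, memory grows only with the set of matching subjects, never with the full dataset, which is the property the restriction on join conditions is designed to preserve.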

