Enabling query-based failure detection for EH&S compliance by a domain-specific workflow meta data model

Author(s):  
Heiko Henning Thimm

Today’s companies are able to automate the enforcement of Environmental, Health and Safety (EH&S) duties through the use of workflow management technology. This approach requires specifying activities that are combined into workflow models for EH&S enforcement duties. In order to meet the given safety regulations, these activities must be completed correctly and within given deadlines; otherwise, activity failures emerge that may lead to breaches of safety regulations. A novel domain-specific workflow meta data model is proposed. The model enables a system to detect and predict activity failures through the use of data about the company, failure statistics, and activity proxies. Since the detection and prediction methods are based on the evaluation of constraints specified on EH&S regulations, a system approach is proposed that builds on the integration of a Workflow Management System (WMS) with an EH&S Compliance Information System. The main principles of the failure detection and prediction are described. For EH&S managers, the system is intended to provide insight into the current failure situation. This can help to prevent and mitigate critical situations such as safety enforcement measures that are behind their deadlines. As a result, a more reliable enforcement of safety regulations can be achieved.
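A minimal sketch of the constraint-evaluation idea described in this abstract, assuming a hypothetical Activity record and a detect_failures query; it is illustrative only and not the authors' meta data model or system.

```python
# Illustrative sketch: each EH&S enforcement activity carries a deadline
# constraint, and a periodic query over the activity records flags failed,
# overdue or at-risk work. Names and thresholds are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Activity:                       # hypothetical activity proxy record
    name: str
    deadline: datetime
    completed_at: Optional[datetime] = None
    failed: bool = False

def detect_failures(activities: List[Activity], now: datetime) -> List[str]:
    """Return findings for breached or at-risk EH&S enforcement activities."""
    findings = []
    for a in activities:
        if a.failed:
            findings.append(f"{a.name}: activity reported as failed")
        elif a.completed_at is None and now > a.deadline:
            findings.append(f"{a.name}: deadline {a.deadline:%Y-%m-%d} missed")
        elif a.completed_at is None and now > a.deadline - timedelta(days=3):
            findings.append(f"{a.name}: at risk, due {a.deadline:%Y-%m-%d}")
    return findings

if __name__ == "__main__":
    now = datetime(2016, 5, 10)
    acts = [Activity("Annual fire-safety inspection", datetime(2016, 5, 1)),
            Activity("Hazardous-waste training", datetime(2016, 5, 12))]
    for finding in detect_failures(acts, now):
        print(finding)
```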


2016 ◽  
pp. 1756-1773
Author(s):  
Grzegorz Spyra ◽  
William J. Buchanan ◽  
Peter Cruickshank ◽  
Elias Ekonomou

This paper proposes a new identity model and its underlying meta-data model. The approach enables secure spanning of identity meta-data across many boundaries such as health-care, financial and educational institutions, including all others that store and process sensitive personal data. It introduces the new concepts of a Compound Personal Record (CPR) and a Compound Identifiable Data (CID) ontology, which aim to move toward an "own your own data" model. The CID model ensures authenticity of identity meta-data; high availability via a unified Cloud-hosted XML data structure; and privacy through encryption, obfuscation and anonymity applied to ontology-based XML distributed content. Additionally, CID is enabled for identity federation via XML ontologies. The paper also suggests that access to sensitive data should be strictly governed through an access control model with granular policy enforcement on the service side. This includes the involvement of the relevant access control model entities, which are enabled to authorize ad-hoc break-glass data access, which should give high accountability for data access attempts.
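The break-glass mechanism mentioned above can be illustrated with a small, purely hypothetical sketch of a server-side policy check with audit logging; the roles, policy rules and audit store below are assumptions, not the CPR/CID specification.

```python
# Illustrative sketch only: a service-side authorization check in the spirit
# of accountable break-glass access. Roles, rules and the audit sink are
# invented for illustration.
from datetime import datetime, timezone

AUDIT_LOG = []          # stand-in for a tamper-evident audit store

def authorize(subject: str, subject_role: str, resource_owner: str,
              break_glass: bool = False, justification: str = "") -> bool:
    """Grant access if normal policy allows it, or record an accountable
    break-glass override invoked by an authorized role."""
    normally_allowed = (subject == resource_owner) or subject_role == "care-team"
    if normally_allowed:
        decision = True
    elif break_glass and subject_role in {"clinician", "emergency-staff"} and justification:
        decision = True      # ad-hoc override, always audited below
    else:
        decision = False
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "subject": subject, "role": subject_role, "owner": resource_owner,
        "break_glass": break_glass, "justification": justification,
        "granted": decision,
    })
    return decision

# Example: emergency staff access a record they do not normally own.
print(authorize("dr-lee", "emergency-staff", "patient-42",
                break_glass=True, justification="unconscious patient, ER"))
```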


2019 ◽  
Vol 9 (3) ◽  
pp. 23-47
Author(s):  
Sumita Gupta ◽  
Neelam Duhan ◽  
Poonam Bansal

With the rapid growth of digital information and user needs, it becomes imperative to quickly retrieve relevant domain- or topic-specific documents for a user query. A focused crawler plays a vital role in digital libraries by crawling the web so that researchers can easily explore domain-specific search result lists and find the desired content for a query. In this article, a focused crawler is proposed for online digital library search engines which considers the meta-data of the query in order to retrieve the corresponding document or other relevant but missing information (e.g. paid publications from ACM, IEEE, etc.) for the user query. Different query strategies are formed using the meta-data and submitted to different search engines with the aim of finding relevant information that would otherwise be missing. The results returned by these search engines are filtered and then used to further crawl the Web.
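As an illustration of the query-strategy step, the following sketch derives several alternative queries from document meta-data and maps them to search-engine URLs; the field names and engines are assumptions, not the crawler described in the article.

```python
# Hedged sketch: derive query strings from a document's meta-data and fan
# them out to general-purpose search engines. Field names and engine URLs
# are illustrative assumptions.
from typing import List
from urllib.parse import quote_plus

def build_query_strategies(meta: dict) -> List[str]:
    """Combine title, author and venue meta-data into alternative queries."""
    title = meta.get("title", "")
    authors = " ".join(meta.get("authors", []))
    venue = meta.get("venue", "")
    strategies = [
        f'"{title}"',              # exact-title query
        f"{title} {authors}",      # title plus authors
        f"{title} {venue} pdf",    # look for an accessible full text
    ]
    return [q.strip() for q in strategies if q.strip()]

def to_search_urls(query: str) -> List[str]:
    q = quote_plus(query)
    return [f"https://www.google.com/search?q={q}",
            f"https://www.bing.com/search?q={q}"]

meta = {"title": "A focused crawler for digital libraries",
        "authors": ["S. Gupta"], "venue": "IJIRR"}
for query in build_query_strategies(meta):
    print(query, "->", to_search_urls(query)[0])
```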


Author(s):  
Arthur Lauriot Dit Prevost ◽  
Marie Trencart ◽  
Vianney Gaillard ◽  
Guillaume Bouzille ◽  
Rémi Besson ◽  
...  

Introduction: Although electronic health records have been facilitating the management of medical information, there is still room for improvement in the daily production of medical reports. Possible areas for improvement are: improving report quality (by increasing exhaustiveness), improving patients’ understanding (by means of a graphical display), saving physicians’ time (by assisting report writing), and improving sharing and storage (by enhancing interoperability). We set up the ICIPEMIR project (Improving the completeness, interoperability and patient explanation of medical imaging reports) as an academic solution to optimize the production of medical imaging reports. Such a project requires two layers: an engineering layer to build the automation process, and a medical layer to determine domain-specific data models for each type of report. We describe here the medical layer of this project.

Methods: We designed a reproducible methodology to identify, for a given medical imaging exam, the mandatory fields and to describe a corresponding simple data model using validated formats. The mandatory fields had to meet legal requirements, domain-specific guidelines, and the results of a bibliographic review of clinical studies. A UML representation, a JSON Schema, and a YAML instance dataset were defined. Based on this data model, a form was created using Goupile, an open-source script-based eCRF editor. In addition, a graphical display was designed and mapped to the data model, as well as a text template to automatically produce a free-text report. Finally, the YAML instance was encoded in a QR code to allow offline paper-based transmission of structured data.

Results: We tested this methodology in a specific domain: computed tomography for urolithiasis. We successfully extracted 73 fields and transformed them into a simple data model, with a mapping to a simple graphical display and a textual report template. The offline QR-code transmission of a 2,615-character YAML file was successful with a simple smartphone QR-code scanner.

Conclusion: Although the automated production of medical reports requires a domain-specific data model and mapping, these can be defined using a reproducible methodology. We hope this proof of concept will lead to a software solution to optimize medical imaging reports, driven by academic research.
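The report-to-QR step of the methodology can be sketched with the PyYAML and qrcode packages; the fields below are invented for illustration and are not the 73-field urolithiasis data model from the paper.

```python
# Minimal sketch of encoding a structured report as a YAML instance and then
# as a QR code for offline, paper-based transmission. Field names are
# hypothetical, not the paper's data model.
import yaml      # pip install pyyaml
import qrcode    # pip install qrcode[pil]

report = {
    "exam": "CT abdomen-pelvis without contrast",   # hypothetical fields
    "indication": "suspected urolithiasis",
    "findings": {"stone_present": True, "largest_stone_mm": 6,
                 "location": "left proximal ureter", "hydronephrosis": "mild"},
}

# Serialize the structured report as a YAML instance ...
payload = yaml.safe_dump(report, sort_keys=False)

# ... and encode it into a QR code image for offline transmission.
img = qrcode.make(payload)
img.save("report_qr.png")
print(f"Encoded {len(payload)} characters into report_qr.png")
```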


Author(s):  
P. K. Parida ◽  
S. Tripathi

Abstract. A large quantity of spatial data is generated using public funds by the Government Departments, organizations and institutions of the State and has great potential utility for good governance. Yet most of this spatial data remains inaccessible to the common citizen, even though much of it is unrestricted and not sensitive in nature, and data generated at different State Government Departments is often not compatible due to a lack of common standards and interoperability. The Government of India framed the “National Data Sharing and Accessibility Policy (NDSAP)”, the National Map Policy (2005) and the “Remote Sensing Data Policy (RSDP, 2001 and 2011)” to spell out sharing principles for information generated using public funds. Taking all of this into consideration, the Government of Odisha has institutionalised the “Odisha Spatial Data Infrastructure (OSDI)”, along the lines of the National Spatial Data Infrastructure (NSDI). The Government of Odisha gazetted the “Odisha Spatial Data Policy (OSDP)” on 22 August 2015, in line with the NDSAP, to institute a policy framework that facilitates sharing of such Government-owned data through OSDI, in open formats, to support sustainable and inclusive governance; effective planning, implementation and monitoring of developmental programmes; managing and mitigating disasters; and scientific research aiding informed decisions, for the public good. The OSDI is already operational and live.

This paper highlights the Data Model, Meta Data Standard and Sharing Policy adopted in OSDI, apart from other institutional and operational issues in the smooth grounding and operationalisation of the OSDI in a State framework.


2020 ◽  
Author(s):  
Hindrik HD Kerstens ◽  
Jayne Y Hehir-Kwa ◽  
Ellen van de Geer ◽  
Chris van Run ◽  
Eugène TP Verwiel ◽  
...  

Abstract

Motivation: The increase in speed, reliability and cost-effectiveness of high-throughput sequencing has led to the widespread clinical application of genome (WGS), exome (WXS) and transcriptome analysis. WXS and RNA sequencing are now being implemented as standard of care for patients and for patients included in clinical studies. To keep track of sample relationships and analyses, a platform is needed that can unify metadata for diverse sequencing strategies with sample metadata whilst supporting automated and reproducible analyses, in essence ensuring that analysis is conducted consistently and that data is Findable, Accessible, Interoperable and Reusable (FAIR).

Results: We present “Trecode”, a framework that records both clinical and research sample (meta) data and manages the computational genome analysis workflows executed in both settings, thereby achieving tight integration between analysis results and sample metadata. With complete, consistent and FAIR (meta) data management in a single platform, stacked bioinformatic analyses are performed automatically and tracked by the database, ensuring data provenance, reproducibility and reusability, which is key in worldwide collaborative translational research.

Availability and implementation: The Trecode data model, codebooks, NGS workflows and client programs are currently being cleared of local compute infrastructure dependencies and will become publicly available in spring. Contact: [email protected]
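A purely illustrative sketch (not the Trecode schema) of the core idea of keeping sample metadata and executed workflow runs in a single record, so that every result can be traced back to its input sample and pipeline version:

```python
# Hypothetical data model for sample/workflow provenance; names and fields
# are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkflowRun:
    workflow: str          # e.g. a somatic-calling pipeline
    version: str           # pipeline version, recorded for reproducibility
    outputs: List[str]

@dataclass
class Sample:
    sample_id: str
    assay: str             # WGS, WXS or RNA-seq
    patient_id: str
    runs: List[WorkflowRun] = field(default_factory=list)

    def provenance(self) -> List[str]:
        """Trace each output back to its sample and pipeline version."""
        return [f"{self.sample_id} -> {r.workflow}@{r.version} -> {o}"
                for r in self.runs for o in r.outputs]

s = Sample("S001", "WXS", "P42")
s.runs.append(WorkflowRun("somatic-calling", "1.3.0", ["S001.vcf.gz"]))
print("\n".join(s.provenance()))
```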

