Procedural Creation of Medical Reports with Hierarchical Information Processing in Radiation Oncology

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Martin Vogel ◽  
Harald Fahrner ◽  
Mark Gainey ◽  
Marianne Schmucker ◽  
Stefan Kirrmann ◽  
...  

Background: For many years, the oncological doctor's letter has been the pivotal means of information transfer to general practitioners, medical specialists and medical consultants. Yet the large number of diagnoses and therapies demands a high level of abstraction, recall and analysis from both creator and recipient. With the commonly used structure of doctor's letters, in which all diagnoses and therapies are listed in sequential order with all diagnoses first, it is by no means trivial to establish the chronological and hierarchical context that matters in the description of oncological cases. Additional aspects of importance are the integration of these letters into existing clinical and departmental information systems (for example via an HL7 interface), various export formats (for example PDF, HTML), fax and encrypted email. Moreover, these letters need a modern layout that, among other things, meets the requirements of corporate design.

Methods: The requirements for a doctor's letter system are manifold and can only be met rudimentarily by a normal word processing system. Because of this deficiency we developed a system that covers all the special features and requirements of clinical use. The system is based on a scalable and extensible client-server architecture. We use the programming languages Harbour, C++, PHP and JavaScript, a Microsoft SQL database for data storage, and the HL7 standard as the interface to other information systems such as the hospital information system (HIS). Export formats are PDF and HTML/XML. Layouts are generated with TeX, LaTeX and MiKTeX.

Results: The aforementioned requirements were met with the doctor's letter and findings system IntDok. The hierarchical presentation of diagnoses, histologies and therapies gives the recipient an immediate outline of the course of the disease. A strict procedure controls the whole process of document compilation and assists the user with well-received tools such as text blocks, import and export functions (PDF and HTML/XML, including barcodes) and the HL7 interface to other information systems. The software also provides sophisticated mail merging, and all content from previous letters can easily be inserted into the current document. A TeX server automatically handles the document layout, including high-quality hyphenation, so that a uniform, polished appearance (corporate design) is guaranteed. The documents are saved in an MS SQL database (almost 230,000 documents since 1991), independent of proprietary formats such as MS Word.

Conclusion: Creation of documents is fast, simple and well structured. Sophisticated tools ensure the optimal use of human resources and time. The system is an important module in our overall digital work environment.
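As a minimal illustration of the hierarchical structure described above (diagnoses with histologies and therapies nested beneath them, rendered for a TeX-based layout step), the following Python sketch uses assumed class and field names; IntDok itself is built on Harbour, C++, PHP and JavaScript, so this mirrors only the idea, not its code:

```python
# Illustrative sketch only: a minimal model of the hierarchical case
# structure (diagnosis -> histology -> therapies) rendered to a LaTeX
# fragment for a TeX layout server. Names are assumptions, not IntDok's API.
from dataclasses import dataclass, field


@dataclass
class Therapy:
    date: str
    description: str


@dataclass
class Diagnosis:
    date: str
    name: str
    histology: str = ""
    therapies: list[Therapy] = field(default_factory=list)


def to_latex(diagnoses: list[Diagnosis]) -> str:
    """Render diagnoses with their therapies nested beneath them,
    preserving the chronological and hierarchical context."""
    lines = [r"\begin{description}"]
    for d in sorted(diagnoses, key=lambda dg: dg.date):
        lines.append(rf"\item[{d.date}] {d.name} ({d.histology})")
        if d.therapies:
            lines.append(r"\begin{itemize}")
            for t in sorted(d.therapies, key=lambda th: th.date):
                lines.append(rf"\item {t.date}: {t.description}")
            lines.append(r"\end{itemize}")
    lines.append(r"\end{description}")
    return "\n".join(lines)
```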

2014 ◽  
Vol 599-601 ◽  
pp. 1407-1410
Author(s):  
Xu Liang ◽  
Ke Ming Wang ◽  
Gui Yu Xin

Compared with other high-level programming languages, C Sharp (C#) is more efficient for software development, while the MATLAB language provides a series of powerful numerical-calculation functions that ease the implementation of the algorithms widely applied in blind source separation (BSS). Combining the advantages of the two languages, this paper presents a mixed-programming implementation and the development of a simplified blind signal processing system. Application results show that the system developed through mixed programming is successful.
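The paper pairs C# with MATLAB; as a hedged illustration of the same mixed-programming pattern, the sketch below drives MATLAB from Python via the MATLAB Engine API instead. It assumes a MATLAB installation with the engine package installed and a BSS routine such as fastica available on the MATLAB path:

```python
# Analogous sketch in Python rather than C#: a host language driving
# MATLAB's numerical routines. 'fastica' is assumed to be a BSS routine
# on the MATLAB path; it is not bundled with MATLAB itself.
import matlab.engine
import numpy as np

eng = matlab.engine.start_matlab()

# Two synthetic source signals, linearly mixed.
t = np.linspace(0, 1, 1000)
sources = np.vstack([np.sin(2 * np.pi * 5 * t),
                     np.sign(np.sin(2 * np.pi * 3 * t))])
mixed = np.array([[0.6, 0.4], [0.3, 0.7]]) @ sources

# Hand the mixtures to MATLAB for separation and pull the result back.
separated = eng.fastica(matlab.double(mixed.tolist()), nargout=1)
eng.quit()
```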


Author(s):  
Denys Chernyshev ◽  
Svitlana Tsiutsiura ◽  
Tamara Lyashchenko ◽  
Yuliia Luzina ◽  
Vitalii Borodynia

This article highlights the importance of information and the need to ensure its proper storage and use, and in particular the importance of studying how data stores function in one of the most common design models, the Data Flow Diagram (DFD), and in data storage or data warehousing (DS or DW) as a whole. The abstract devices of this model are studied: the types and features of the data stores used in information systems (relational, multidimensional and hybrid repositories), from the point of view of the models used to present data; the appearance of stores and the rules for constructing them; the letter identifiers "D", "C", "M" and "T" by which the type of a store is determined; the features of the numerical part of the identifiers for decompositions of first- and second-tier processes; and the mechanisms that support data retention for intermediate processing in information systems. The transition of properties and characteristics from the physical to the logical representation, and the rationalization of data warehouses through the features of the logical model, are also considered. In the course of the work, the peculiarities of DFD construction and the reflection of interrelationships across all component diagrams were examined, as defined in the general rules valid for both. The issues highlighted concern the diagram elements that can be freely used within the borders of Ukraine, as well as the element content of the different diagram types. The functionality of domestic DFD tools allows a sufficient, albeit limited, set of elements for construction; depending on actual capabilities, the diagrams of Ukrainian producers are somewhat simplified and imperfect by the standards of modern technology, but these factors do not diminish the importance and necessity of adequate data protection at the highest level. This work therefore highlights the most informative aspects of using each type of store and the demand for data stores in the field, together with the possible advantages and disadvantages of physical data storage and the features of virtual operation.
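A minimal sketch of the identifier convention the article describes, assuming the letter gives the store type and the depth of the numerical part mirrors the decomposition tier; the exact semantics of each letter follow the article, not this sketch:

```python
# Minimal sketch, assuming the conventions outlined above: a data store
# is labelled by one of the letters "D", "C", "M" or "T" (its type) plus
# a numerical part whose depth mirrors the process decomposition tier
# (e.g. "D3" at the first tier, "D3.1" at the second).
import re

_ID_PATTERN = re.compile(r"^([DCMT])(\d+(?:\.\d+)?)$")


def parse_store_id(identifier: str) -> tuple[str, str, int]:
    """Split a data-store identifier into (type letter, number, tier)."""
    match = _ID_PATTERN.match(identifier)
    if not match:
        raise ValueError(f"not a valid data-store identifier: {identifier!r}")
    letter, number = match.groups()
    tier = number.count(".") + 1          # "3" -> tier 1, "3.1" -> tier 2
    return letter, number, tier


print(parse_store_id("D3"))    # ('D', '3', 1)
print(parse_store_id("M2.4"))  # ('M', '2.4', 2)
```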


2019 ◽  
Vol 9 (1) ◽  
pp. 1-8
Author(s):  
Marliana Budhiningtias Winanti ◽  
Meylan Lesnusa

The development of globalization has a significant impact on every layer of society, above all through the many technologies now needed in every field of work, not least in areas such as health. The Public Lung Health Center (BKPM) of Maluku province is the central laboratory for health inspection services, and every day many people come from different places to have their condition examined and obtain the required health results. The performance of these health services still does not properly meet most people's expectations, however, because patient data are recorded manually: the long-standing manual storage system makes searching for patient data time-consuming and is considered ineffective, and processing examination data is likewise considered slow because of how it is done. An information system was therefore created to assist the agency in addressing these problems and to relieve some of the existing difficulties. The data processing system is designed so that patient data input, data storage and related processes are computerized. The work uses a structured method and develops a desktop-based information system for processing health examination data with the Prototype method, using system development tools such as flowmaps, context diagrams, DFDs and database design tools. Keywords: Information Systems, Health Services, Data Processing.
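As a hedged illustration of the shift from manual records to computerized storage and search, the sketch below uses an assumed table layout with SQLite as a stand-in store; it is not BKPM's actual schema:

```python
# Illustrative sketch only: a computerized patient store in which records
# can be searched instantly instead of by hand. Table and field names are
# assumptions, not BKPM's schema.
import sqlite3

conn = sqlite3.connect("bkpm_patients.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS patients (
           id INTEGER PRIMARY KEY,
           name TEXT NOT NULL,
           origin TEXT,
           examined_on TEXT,
           result TEXT
       )"""
)
conn.execute(
    "INSERT INTO patients (name, origin, examined_on, result) VALUES (?, ?, ?, ?)",
    ("A. Example", "Ambon", "2019-01-15", "normal"),
)
conn.commit()

# What used to be a slow manual file search becomes a single query.
for row in conn.execute("SELECT * FROM patients WHERE name LIKE ?", ("A.%",)):
    print(row)
conn.close()
```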


SinkrOn ◽  
2019 ◽  
Vol 3 (2) ◽  
pp. 180
Author(s):  
Suparni Suparni ◽  
Lilyani Asri Utami ◽  
Elsa Dwi Selviana

PT Pratama Mega Konstruksindo is a company engaged in property, especially housing. The property sector is among the fields that demand technological progress: its current rapid development presses property service companies to meet the needs of the wider community, particularly in work related to housing sales. In managing its data, the company still uses a manual system, from recording through calculation, so its performance has not been effective. Down payments, cash payments and consumer data are all recorded in Ms Excel. This can lead to errors in recording transactions, data whose confidentiality is not guaranteed, ineffective work because inputting data and preparing sales reports take extra time, and even loss of data. PT Pratama Mega Konstruksindo therefore requires a system that can solve these problems. The data processing system is designed as a web application, using the PHP programming language and MySQL as the database for data storage. With this website, sales data can be processed more effectively and efficiently, reports can be printed in real time, and data security can be maintained.
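A minimal sketch of the recording-and-reporting flow described above, with illustrative names; the actual system is a PHP/MySQL web application, so this mirrors only the logic:

```python
# Minimal sketch, with assumed record fields: down payments and cash
# payments per consumer, plus the on-demand report the Excel-based
# process could not produce without manual recalculation.
from dataclasses import dataclass
from datetime import date


@dataclass
class Payment:
    consumer: str
    unit: str
    kind: str      # "down_payment" or "cash"
    amount: int
    paid_on: date


def sales_report(payments: list[Payment]) -> dict[str, int]:
    """Total received per consumer -- printable in real time."""
    totals: dict[str, int] = {}
    for p in payments:
        totals[p.consumer] = totals.get(p.consumer, 0) + p.amount
    return totals


ledger = [
    Payment("Budi", "A-01", "down_payment", 50_000_000, date(2019, 3, 1)),
    Payment("Budi", "A-01", "cash", 150_000_000, date(2019, 4, 1)),
]
print(sales_report(ledger))  # {'Budi': 200000000}
```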


2013 ◽  
Vol 61 (3) ◽  
pp. 569-579 ◽  
Author(s):  
A. Poniszewska-Marańda

Nowadays, the growth and complexity of the functionality of current information systems, especially dynamic, distributed and heterogeneous ones, make their design and creation a difficult task, and at the same time one that is strategic for businesses. A very important stage of data protection in an information system is the creation of a high-level model, independent of the software, that satisfies the needs of system protection and security. The process of role engineering, i.e. identifying roles and setting them up in an organization, is a complex task. The paper presents the modeling and design stages of the role engineering process in the context of security schema development for information systems, in particular dynamic, distributed information systems, based on the role concept and the usage concept. Such a schema is created first of all during the design phase of a system. Two actors, the application developer and the security administrator, should cooperate in this creation process to determine the minimal set of user roles in agreement with the security constraints that guarantee the global security coherence of the system.
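As a generic illustration of checking a role assignment against a security constraint (here a static separation-of-duty pair), the sketch below uses invented role names and is not the paper's extended role-and-usage model:

```python
# Generic sketch of role assignment under one security constraint
# (static separation of duty); role names and permissions are invented.

# Roles and the permissions they bundle (illustrative).
ROLES = {
    "accountant": {"post_invoice", "edit_ledger"},
    "auditor": {"read_ledger", "sign_report"},
}

# Pairs of roles no single user may hold together.
SEPARATION_OF_DUTY = {frozenset({"accountant", "auditor"})}


def assign_role(user_roles: set[str], new_role: str) -> set[str]:
    """Add a role only if the result respects every constraint,
    keeping the global security schema coherent."""
    if new_role not in ROLES:
        raise KeyError(new_role)
    candidate = user_roles | {new_role}
    for pair in SEPARATION_OF_DUTY:
        if pair <= candidate:
            raise ValueError(f"separation-of-duty violation: {sorted(pair)}")
    return candidate


roles = assign_role(set(), "accountant")
# assign_role(roles, "auditor") would raise: the two roles conflict.
```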


2021 ◽  
Vol 11 (12) ◽  
pp. 5523
Author(s):  
Qian Ye ◽  
Minyan Lu

The main purpose of our provenance research for DSP (distributed stream processing) systems is to analyze abnormal results. Provenance for these systems is nontrivial because of the ephemerality of stream data and the instant data-processing mode of modern DSP systems. Challenges include, but are not limited to, an optimized solution that avoids excessive runtime overhead, reduces provenance-related data storage, and provides provenance in an easy-to-use fashion. Without any prior knowledge about which kinds of data may finally lead to the abnormal result, we have to track all transformations in detail, which potentially imposes a heavy burden on the system. This paper proposes s2p (Stream Process Provenance), which consists mainly of online provenance and offline provenance, to provide fine- and coarse-grained provenance at different levels of precision. We base our design of s2p on the fact that, for a mature online DSP system, abnormal results are rare, and results that require a detailed analysis are even rarer. We also consider state transition in our provenance explanation. We implement s2p on Apache Flink as s2p-flink and conduct three experiments to evaluate its scalability, efficiency, and overhead in terms of end-to-end cost, throughput, and space overhead. Our evaluation shows that s2p-flink incurs a 13% to 32% cost overhead, an 11% to 24% decline in throughput, and little additional space cost in the online provenance phase. Experiments also demonstrate that s2p-flink scales well. A case study is presented to demonstrate the feasibility of the whole s2p solution.
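As a hedged illustration of fine-grained provenance capture, the sketch below wraps a per-record operator so that each output remembers the ids of the inputs it was derived from; s2p-flink instruments Apache Flink operators, so this mirrors the concept only:

```python
# Conceptual sketch only: lift a stream transformation into one that
# records lineage, so an abnormal output can be traced back to inputs.
import itertools

_ids = itertools.count()


def tag(value):
    """Attach a fresh id to a raw input record."""
    return {"id": next(_ids), "value": value, "from": []}


def with_provenance(op):
    """Wrap a per-record transformation so each output carries the
    ids of the records it was derived from."""
    def wrapped(record):
        return {"id": next(_ids), "value": op(record["value"]),
                "from": [record["id"]]}
    return wrapped


double = with_provenance(lambda x: 2 * x)
out = double(tag(21))
print(out)  # {'id': 1, 'value': 42, 'from': [0]}
```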


2021 ◽  
Vol 43 (1) ◽  
pp. 1-46
Author(s):  
David Sanan ◽  
Yongwang Zhao ◽  
Shang-Wei Lin ◽  
Liu Yang

To make the verification of large and complex concurrent systems feasible and scalable, it is necessary to use compositional techniques even at the highest abstraction layers. At the lowest software abstraction layers, such as the implementation or the machine code, the high level of detail makes the direct verification of properties very difficult and expensive, so techniques that simplify verification on these layers are essential. One technique to tackle this challenge is top-down verification, where, by means of simulation, properties verified on the top layers (representing abstract specifications of a system) are propagated down to the lowest layers (an implementation of the top layers). Needless to say, simulation of concurrent systems implies a greater level of complexity, and compositional techniques for checking simulation between layers are likewise desirable when seeking both feasibility and scalability of refinement verification. In this article, we present CSim², a compositional rely-guarantee-based framework for the top-down verification of complex concurrent systems in the Isabelle/HOL theorem prover. CSim² uses CSimpl, a language with a high degree of expressiveness designed for the specification of concurrent programs. Thanks to this expressiveness, CSimpl can model many of the features found in real-world programming languages, such as exceptions, assertions, and procedures. CSim² provides a framework for the verification of rely-guarantee properties to reason compositionally on CSimpl specifications. Focusing on top-down verification, CSim² provides a simulation-based framework for the preservation of CSimpl rely-guarantee properties from specifications to implementations: by using the simulation framework, properties proven on the top layers (abstract specifications) are compositionally propagated down to the lowest layers (source or machine code) in each concurrent component of the system. Finally, we show the usability of CSim² with a case study over two CSimpl specifications of an ARINC-653 communication service, in which we prove a complex property on a specification and use CSim² to preserve the property on lower abstraction layers.
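For orientation, the judgment form at the heart of rely-guarantee reasoning, in textbook (Jones-style) notation; CSim²'s concrete Isabelle/HOL syntax may differ:

```latex
% Textbook rely-guarantee specification (not necessarily CSim^2's syntax):
% a component P satisfies
\[
  \vdash P \;\mathsf{sat}\; [\mathit{pre},\ \mathit{rely},\ \mathit{guar},\ \mathit{post}]
\]
% iff, from any initial state in pre and with every environment step
% inside rely, every step of P lies inside guar and any final state
% satisfies post. Compositionality lets such judgments for parallel
% components be combined when each component's guarantee implies the
% other components' relies.
```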


2010 ◽  
Vol 19 (01) ◽  
pp. 65-99 ◽  
Author(s):  
MARC POULY

Computing inference from a given knowledgebase is one of the key competences of computer science. Numerous formalisms and specialized inference routines have therefore been introduced and implemented for this task; typical examples are Bayesian networks, constraint systems and different kinds of logic. It is known today that these formalisms can be unified under a common algebraic roof called a valuation algebra. Based on this system, generic inference algorithms for the processing of arbitrary valuation algebras can be defined. Researchers benefit from this high level of abstraction to address open problems independently of the underlying formalism. It is therefore all the more astonishing that this theory has not found its way into concrete software projects: all modern programming languages provide generic sorting procedures, for example, but generic inference algorithms are still mythical creatures. NENOK breaks new ground and offers an extensive library of generic inference tools based on the valuation algebra framework. All methods are implemented as distributed algorithms that process local and remote knowledgebases in a transparent manner. Besides its main purpose as a software library, NENOK also provides a sophisticated graphical user interface for inspecting the inference process and the graphical structures involved. This can be used for educational purposes but also as a fast prototyping architecture for inference formalisms.
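As an illustration of why a common algebraic interface enables generic inference, the sketch below captures the two valuation-algebra operations, combination and projection; method names are assumptions, and NENOK's actual (Java) API differs:

```python
# Sketch of the algebraic interface that makes generic inference possible:
# any formalism providing labelled valuations with combination and
# projection can be plugged into the same algorithm.
from abc import ABC, abstractmethod
from functools import reduce


class Valuation(ABC):
    @abstractmethod
    def domain(self) -> frozenset:
        """The set of variables this valuation speaks about."""

    @abstractmethod
    def combine(self, other: "Valuation") -> "Valuation":
        """The combination operator (e.g. pointwise product)."""

    @abstractmethod
    def project(self, variables: frozenset) -> "Valuation":
        """Marginalize down to a subset of the domain."""


def naive_inference(knowledgebase: list[Valuation], query: frozenset) -> Valuation:
    """Combine everything, then project to the query -- the semantics
    that fusion/junction-tree algorithms compute more cleverly."""
    joint = reduce(lambda a, b: a.combine(b), knowledgebase)
    return joint.project(query)
```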

