AMAP - A General Optical Mark Reader Form Evaluation Program

1973
Vol 12 (04)
pp. 211-222
Author(s):  
P. R. Pocklington

Due to the increasing use of Optical Mark Reader forms for recording data, it is desirable to have a system to process such data from various aspects. This paper describes the requirements of a system that performs such evaluation, along with the statements that enable the user to formulate conditions both for the plausibility of the marking of such forms and for the necessary evaluation. Also touched upon are the use of such statements for general data evaluation and the link between such a data acquisition program and a general patient data base. The interpretation modules for AMAP meet the criteria of a problem-oriented compiler, and the system makes use of the ASSEMBLER and ASSEMBLER MACRO languages. It has been implemented on the IBM 360/67 computer at the Medical School Hannover. AMAP is a subsystem of the project »DIES« (Data Interpretation and Evaluation System), the aim of which is the production of program modules for the interpretation and evaluation of freely defined structured data.
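The abstract does not show AMAP's statement syntax, but the kind of plausibility condition it describes can be illustrated with a minimal Python sketch (field names and rules are hypothetical, not taken from AMAP):

```python
# Hypothetical sketch of OMR plausibility checking; AMAP's actual
# statement language is not given in the abstract.
from typing import Dict, List

def check_plausibility(marks: Dict[str, List[int]]) -> List[str]:
    """Return human-readable violations for one scanned form.

    `marks` maps a field name to the list of box indices marked in it.
    """
    errors = []
    # Single-choice fields must carry exactly one mark.
    for field in ("sex", "blood_group"):
        if len(marks.get(field, [])) != 1:
            errors.append(f"{field}: expected exactly one mark")
    # Cross-field condition: a 'pregnant' mark is implausible for males.
    if marks.get("sex") == [0] and marks.get("pregnant"):
        errors.append("pregnant marked although sex = male")
    return errors

# Example: a form with two marks in 'sex' and no blood group marked.
print(check_plausibility({"sex": [0, 1], "pregnant": [1]}))
```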

1976
Vol 15 (02)
pp. 69-74
Author(s):  
M. Goldberg ◽  
B. Doyon

This paper describes a general data base management package devoted to medical applications. SARI is a user-oriented system, able to accommodate applications that differ greatly in their nature, structure, size, operating procedures and general objectives, without any specific programming. It can be used in conversational mode by users with no previous knowledge of computers, such as physicians or medical clerks. As medical data are often personal data, the privacy problem is emphasized and a satisfactory solution implemented in SARI. The basic principles of the data base and program organization are described; specific efforts have been made to increase compactness and to make maintenance easy. Several medical applications are now operational with SARI. The next steps will mainly consist of implementing highly sophisticated functions.


Author(s):  
Marcel von Lucadou ◽  
Thomas Ganslandt ◽  
Hans-Ulrich Prokosch ◽  
Dennis Toddenroth

Abstract

Background: The secondary use of electronic health records (EHRs) promises to facilitate medical research. We reviewed general data requirements in observational studies and analyzed the feasibility of conducting observational studies with structured EHR data, in particular diagnosis and procedure codes.

Methods: After reviewing published observational studies from the University Hospital of Erlangen for general data requirements, we identified three different study populations for the feasibility analysis, with eligibility criteria drawn from three exemplary observational studies. For each study population, we evaluated the availability of relevant patient characteristics in our EHR, including outcome and exposure variables. To assess data quality, we computed distributions of relevant patient characteristics from the available structured EHR data and compared them to those of the original studies. We implemented computed phenotypes for patient characteristics where necessary. In random samples, we evaluated how well structured patient characteristics agreed with a gold standard from manually interpreted free texts. We categorized our findings using the four data quality dimensions "completeness", "correctness", "currency" and "granularity".

Results: Reviewing general data requirements, we found that some investigators supplement routine data with questionnaires, interviews and follow-up examinations. We included 847 subjects in the feasibility analysis (Study 1: n = 411, Study 2: n = 423, Study 3: n = 13). All eligibility criteria from two studies were available in structured data, while one study required computed phenotypes in its eligibility criteria. In one study, we found that all necessary patient characteristics were documented at least once in either structured or unstructured data. In another study, all exposure and outcome variables were available in structured data, while in the third, unstructured data had to be consulted. Comparing the distributions of patient characteristics computed from structured data with those of the original studies yielded similar distributions as well as indications of underreporting. We observed violations in all four data quality dimensions.

Conclusions: While we found relevant patient characteristics available in structured EHR data, data quality problems mean that whether diagnosis and procedure codes are sufficient to underpin an observational study remains a case-by-case decision. Free-text data or subsequently collected supplementary study data may be important for completing a comprehensive patient history.
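The study's computed phenotypes are not spelled out in the abstract; purely as an illustration of the general idea, here is a minimal Python sketch that derives a binary patient characteristic from structured ICD-10 diagnosis codes (the record layout is a hypothetical assumption, though the E10-E14 block does cover diabetes mellitus in ICD-10):

```python
# Minimal sketch of a "computed phenotype": deriving a binary patient
# characteristic from structured diagnosis codes. The record layout is
# illustrative, not the study's own.
from typing import Iterable

DIABETES_PREFIXES = ("E10", "E11", "E12", "E13", "E14")  # ICD-10 diabetes block

def has_diabetes(diagnosis_codes: Iterable[str]) -> bool:
    """True if any coded diagnosis falls under the diabetes code block."""
    return any(code.startswith(DIABETES_PREFIXES) for code in diagnosis_codes)

patients = {
    "p001": ["I10", "E11.9"],   # hypertension + type 2 diabetes
    "p002": ["J45.0"],          # asthma only
}
cohort = [pid for pid, codes in patients.items() if has_diabetes(codes)]
print(cohort)  # ['p001']
```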


2019
Vol 25 (06)
pp. 677-692
Author(s):  
Ralph Grishman

Abstract

Information extraction is the process of converting unstructured text into a structured data base containing selected information from the text. It is an essential step in making the information content of the text usable for further processing. In this paper, we describe how information extraction has changed over the past 25 years, moving from hand-coded rules to neural networks, with a few stops on the way. We connect these changes to research advances in NLP and to the evaluations organized by the US Government.


1984
Vol 28
pp. 241-247
Author(s):  
D. S. Dunn ◽  
T. F. Marinis

We have automated a Seemann-Bohlin Guinier X-ray diffractometer by interfacing it to a minimally configured PDP 11/23 computer. The programs that run on the microcomputer to control the operation of the diffractometer are stored on a mainframe host running the UNIX operating system. A software interface allows a particular data acquisition program to be downloaded from the UNIX host and executed on the satellite processor. This same interface allows the collected data to be periodically off-loaded to the host for processing and storage.
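The abstract does not describe the transfer protocol itself; purely to illustrate the periodic off-load pattern, here is a minimal host-side sketch in Python using pyserial (port name, block size, polling interval, and framing are all assumptions, not the paper's 1984 implementation):

```python
# Illustrative host-side off-load loop; the original interface ran on a
# UNIX mainframe and a PDP 11/23 and is not detailed in the abstract.
import serial  # pyserial; assumed serial link between host and satellite
import time

PORT, BAUD, BLOCK = "/dev/ttyS0", 9600, 512  # hypothetical parameters

def offload(archive_path: str, poll_seconds: float = 60.0) -> None:
    """Periodically pull collected data blocks and append them to a file."""
    with serial.Serial(PORT, BAUD, timeout=5) as link, \
         open(archive_path, "ab") as archive:
        while True:
            link.write(b"SEND\n")          # ask the satellite for new data
            block = link.read(BLOCK)       # returns fewer bytes on timeout
            if block:
                archive.write(block)
                archive.flush()
            time.sleep(poll_seconds)

# offload("diffraction_data.bin")  # runs until interrupted
```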


1978
Vol 32 (4)
pp. 443-452
Author(s):  
J.M. Zarzycki

There is an increasing demand for terrain information in digital form in addition to graphical format. Under such circumstances, the establishment of a national digital topographic data base can be envisaged and implemented to serve the need for terrain data in digital and/or graphical format. This paper discusses a digital interactive photogrammetric data acquisition system, interactive cartographic editing and automated drafting. The concepts of a digital topographic data bank and data base are discussed, as well as the requirements for selective retrieval and a data base query language capability.


2013
Vol 482
pp. 386-389
Author(s):  
Peng Qin ◽  
Hao Lu ◽  
Zhi Ye Jiang ◽  
Jin Liang Bai ◽  
Lu Gao ◽  
...  

To sample a wideband IF signal that produces large amounts of data, a high-speed data acquisition design is presented. The paper focuses on the circuit design, issues that need attention, and a strategy for decelerating the high-speed sampled signal. Sampling at a 2.4 GHz rate, together with data reception and demultiplexing, is accomplished with an ADC083000 converter and a Field-Programmable Gate Array (FPGA). Finally, a sampling result from the converter is captured with the ChipScope software. The result verifies that the ADC083000 performs well, with an ENOB of more than 6.5 bits and good phase coherence. The design has been used in engineering practice and has performed well.
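The "deceleration" step is implemented inside the FPGA; the following Python sketch only models the underlying idea of trading sample rate for bus width by round-robin demultiplexing (the lane count is illustrative, not the paper's parameter):

```python
# Model of rate "deceleration" by demultiplexing: one fast sample stream
# becomes N parallel lanes, each running at 1/N of the input rate.
# N = 8 is an assumption for illustration.
from typing import List, Sequence

def demux(samples: Sequence[int], lanes: int = 8) -> List[List[int]]:
    """Round-robin distribute samples onto `lanes` slower channels."""
    return [list(samples[i::lanes]) for i in range(lanes)]

stream = list(range(16))          # stand-in for 8-bit ADC output words
for i, lane in enumerate(demux(stream)):
    print(f"lane {i}: {lane}")    # each lane carries every 8th sample
```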

