Functional goals and problems in large-scale patient record management and automated screening

Author(s):  
Morris F. Collen ◽  
Lou S. Davis ◽  
Edmund E. Van Brunt ◽  
Joseph F. Terdiman
Eye ◽  
2020 ◽  
Author(s):  
Anna M. Horwood ◽  
Helen J. Griffiths ◽  
Jill Carlton ◽  
Paolo Mazzone ◽  
...  

Abstract Background: Amblyopia screening can target reduced visual acuity (VA), its refractive risk factors, or both. VA testing is imprecise under 4 years of age, so automated risk-factor photoscreening appears an attractive option. This review considers photoscreening used in community services, focusing on costs, cost-effectiveness and scope of use, compared with EUSCREEN project Country Reports describing how photo- and automated screening is used internationally. Methods: A systematic narrative review was carried out of all English language photoscreening literature to September 10th 2018, using publicly available search terms. Where costs were considered, a CASP economic evaluation checklist was used to assess data quality. Results: Of 370 abstracts reviewed, 55 reported large-scale community photoscreening projects. Five addressed cost-effectiveness specifically, without original data. Photoscreening was a stand-alone, single, test event in 71% of projects. In contrast, 25 of 45 EUSCREEN Country Reports showed that if adopted, photoscreening often supplements other tests in established programmes and is rarely used as a stand-alone test. Reported costs varied widely and evidence of cost-effectiveness was sparse in the literature, or in international practice. Only eight (13%) papers compared the diagnostic accuracy or cost-effectiveness of photoscreening and VA testing, and when they did, cost-effectiveness of photoscreening compared unfavourably. Discussion: Evidence that photoscreening reduces amblyopia or strabismus prevalence or improves overall outcomes is weak, as is evidence of cost-effectiveness, compared to later VA screening. Currently, the most cost-effective option seems to be a later, expert VA screening with the opportunity for a re-test before referral.


Author(s):  
Omar Alamir ◽  
Ramakrishnan Raman ◽  
Amal Faisal Alhashimi ◽  
Fatima Ali Almoaber ◽  
Amna Humaid Alremeithi

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rashi Gupta ◽  
Mona N. Shah ◽  
Satya N. Mandal

Purpose: The purpose of this paper is to establish the importance of land records for urban development. The study focuses on how land records were traditionally managed and on the important parameters currently shaping land record management systems in India. Design/methodology/approach: The framework adopted for the study was as follows: 1) literature study: to examine historical issues, land record systems across the globe, and various government reforms; 2) present system of land management: to study the administrative, legal and economic issues, problems and potential in the present system; 3) technology interventions: to study how technology can help make the system more robust and trustworthy; 4) conclusion: to assess how the recommended technological measures would work and how to implement them in the system. Several pilot interviews with various stakeholders were carried out to understand how the present system of land record management works in India and to establish the important parameters. Findings: The study brings out striking facts about inefficiencies that have persisted in the system for centuries and are still being carried forward. Reforms by the authorities have not been able to solve the issues or reduce the number of litigations, because digitisation was only a step that replicated the incorrect record entries in digitised format. Thus, a paradigm shift in technology is required to bring about considerable change in the present management system. Originality/value: Studies regarding land records have been carried out in several countries, but they are piecemeal, and little literature is available on how land records affect large-scale urban development projects. This study is an attempt to examine the impact of land records on urban development and to bring transparency back into the system, reducing the number of litigations over the most important ingredient of the built environment: land.


2004 ◽  
Vol 43 (03) ◽  
pp. 256-267 ◽  
Author(s):  
A. Häber ◽  
B. Brigl ◽  
A. Winter ◽  
T. Wendt

Summary Objectives: We introduce the 3LGM2 tool, a tool for modeling information systems, and describe the process of modeling parts of the hospital information system of the Leipzig University Hospital (UKL). We modeled the sub-information systems of five patient record archiving sections to support the creation of a proposal for governmental financial support for a new document management and archiving system. We explain the steps of identifying the model elements and their relations, as well as the analysis capabilities of the 3LGM2 tool for answering questions about the information system. Methods: The 3LGM2 tool was developed on the basis of the meta model 3LGM2, which is described in detail in [1]. 3LGM2 defines an ontological basis, divided into three layers and their relationships. Beyond the capabilities of usual meta-CASE tools, the 3LGM2 tool meets certain requirements of information management in hospitals. The model described in this article was created on the basis of on-site surveys in five archiving sections of the UKL. Results: A prototype of the 3LGM2 tool is available and is currently being tested in some projects at the UKL and partner institutions. The model presented in this article is a structured documentation of the current state of patient record archiving at the UKL. The analysis capabilities of the 3LGM2 tool help to use the model and to answer questions about the information system. Conclusions: The 3LGM2 tool can be used to model and analyze information systems. The presentation capabilities and the reliability of the prototype have to be improved. The initial modeling effort of an institution is only worthwhile if the model is maintained regularly and reused in other projects. Reference catalogues and reference models are needed to decrease this effort and to support the creation of comparable models.
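The three-layer idea behind 3LGM2 (a domain layer of enterprise functions, a logical tool layer of application components, and a physical tool layer of data-processing systems, connected by inter-layer relations) can be illustrated with a minimal sketch. All names and the schema here are illustrative assumptions, not the 3LGM2 tool's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    functions: set[str] = field(default_factory=set)    # domain layer
    components: set[str] = field(default_factory=set)   # logical tool layer
    systems: set[str] = field(default_factory=set)      # physical tool layer
    # inter-layer relations
    supports: dict[str, set[str]] = field(default_factory=dict)  # component -> functions
    runs_on: dict[str, set[str]] = field(default_factory=dict)   # component -> systems

    def systems_for_function(self, fn: str) -> set[str]:
        """One analysis question a layered model can answer:
        which physical systems ultimately underpin an enterprise function?"""
        return {s
                for c, fns in self.supports.items() if fn in fns
                for s in self.runs_on.get(c, set())}

m = Model()
m.functions.add("patient record archiving")
m.components.add("document management system")
m.systems.add("archive server")
m.supports["document management system"] = {"patient record archiving"}
m.runs_on["document management system"] = {"archive server"}
print(m.systems_for_function("patient record archiving"))  # {'archive server'}
```

Keeping the relations explicit is what makes such a model queryable; the analysis traverses from the domain layer down through the logical layer to the physical one.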


Author(s):  
Cory R. Schaffhausen ◽  
Timothy M. Kowalewski

Open innovation efforts often attract large quantities of submitted content, yet the need to process such quantities effectively impedes the widespread use of open innovation in practice. This article presents an exploration of needs-based open innovation using state-of-the-art natural language processing (NLP) algorithms to address existing limitations in exploiting large amounts of incoming data. Semantic Textual Similarity (STS) algorithms were specifically developed to compare sentence-length text passages and were used to rate the semantic similarity of pairs of text sentences submitted by users of a custom open innovation platform. A total of 341 unique users submitted 1,735 textual problem statements or unmet needs relating to multiple topics: cooking, cleaning, and travel. Equivalence scores generated by a consensus of ten human evaluators for a subset of the needs provided a benchmark for similarity comparison. The semantic analysis allowed for rapid (one day per topic), automated screening of redundancy to facilitate identification of quality submissions. In addition, a series of permutation analyses provided critical crowd characteristics for the rates of redundant entries as crowd size increases. The results identify the top modern STS algorithms for needfinding. These predicted similarity with Pearson correlations of up to .85 when trained using need-based training data and up to .83 when trained using generalized data. Rates of duplication varied with crowd size and may be approximately linear or appear asymptotic depending on the degree of similarity used as a cutoff. Semantic algorithm performance has improved rapidlyly in recent years. Potential applications to screen duplicates, and also to screen highly unique sentences for rapid exploration of a space, are discussed.
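The redundancy-screening step can be sketched with a deliberately simple baseline: score every pair of submitted need statements and flag pairs above a similarity cutoff as likely duplicates. This sketch uses plain bag-of-words cosine similarity as a stand-in; it is not one of the STS algorithms evaluated in the article, and the cutoff of 0.7 is an arbitrary assumption:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words count vectors of two sentences."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def screen_duplicates(needs: list[str], cutoff: float = 0.7) -> list[tuple[int, int, float]]:
    """Flag index pairs of need statements whose similarity meets the cutoff."""
    flagged = []
    for i in range(len(needs)):
        for j in range(i + 1, len(needs)):
            score = cosine_similarity(needs[i], needs[j])
            if score >= cutoff:
                flagged.append((i, j, score))
    return flagged

needs = [
    "I need a faster way to chop vegetables",
    "I need a quicker way to chop vegetables",
    "Packing a suitcase efficiently is hard",
]
print(screen_duplicates(needs))  # [(0, 1, 0.875)]
```

A trained STS model would replace `cosine_similarity` with a learned sentence-pair scorer, which is what lets the approach catch paraphrases that share few surface tokens.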


2009 ◽  
Vol 62-64 ◽  
pp. 67-74
Author(s):  
M.O. Eyinagho ◽  
E. Solomon ◽  
T. Adeyemi ◽  
D. Ebhohimen ◽  
D. Adeniyi ◽  
...  

In this paper, a prototype of an electronic patient record management system using smart cards is described. An application was developed in Visual Basic, a database was built in Microsoft Access, and the Visual Basic application was interfaced to the database. An interface module was also developed that allows a person with no programming knowledge to easily store the required information on a smart card. The application was then interfaced to a smart card reader. With this system, relevant patient information, including but not limited to allergies, blood group, and past operations, can be retrieved from the smart card.
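Because smart cards offer only a few kilobytes of storage, such a system typically serialises patient fields into a compact fixed-width record before writing it to the card. The paper's prototype is in Visual Basic; the following Python sketch only illustrates the general idea, and the field names and byte widths are invented for the example:

```python
import struct

# Hypothetical fixed-width layout for a card with limited memory:
# 16-byte patient ID, 3-byte blood group, 64-byte allergies, 128-byte history.
RECORD = struct.Struct("16s3s64s128s")  # 211 bytes total

def _fit(text: str, width: int) -> bytes:
    """Encode a field, truncating to width and padding with NUL bytes."""
    return text.encode("ascii")[:width].ljust(width, b"\x00")

def pack_record(patient_id: str, blood_group: str,
                allergies: str, history: str) -> bytes:
    """Serialise patient fields into the fixed-size card record."""
    return RECORD.pack(_fit(patient_id, 16), _fit(blood_group, 3),
                       _fit(allergies, 64), _fit(history, 128))

def unpack_record(raw: bytes) -> dict:
    """Decode a card record back into named fields, stripping padding."""
    pid, bg, al, hist = (f.rstrip(b"\x00").decode("ascii")
                         for f in RECORD.unpack(raw))
    return {"patient_id": pid, "blood_group": bg,
            "allergies": al, "history": hist}
```

A fixed layout like this keeps reads and writes simple on the card side: the reader always transfers exactly `RECORD.size` bytes, and no parsing logic is needed on the card itself.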

