Human factors problem analysis of a voice-recognition computer-based medical record

Author(s):  
J.W. Gosbee ◽  
M. Clay
1994 ◽  
Vol 28 (1) ◽  
pp. 99-104 ◽  
Author(s):  
Dale B. Christensen ◽  
Barbara Williams ◽  
Harold I. Goldberg ◽  
Diane P. Martin ◽  
Ruth Engelberg ◽  
...  

OBJECTIVE: To determine the completeness of prescription records, and the extent to which they agreed with medical record drug entries for antihypertensive medications. SETTING: Three clinics affiliated with two staff model health maintenance organizations (HMOs). PARTICIPANTS: Randomly selected HMO enrollees (n=982) with diagnosed hypertension. METHODS: Computer-based prescription records for antihypertensive medications were reviewed at each location using an algorithm to convert the directions-for-use codes into an amount to be consumed per day (prescribed daily dosage). The medical record was analyzed similarly for the presence of drug notations and directions for use. RESULTS: There was a high level of agreement between the medical record and prescription file with respect to identifying the prescribed drug by name. Between 5 and 14 percent of medical record drug entries did not have corresponding prescription records, probably reflecting patient decisions not to have prescriptions filled at HMO-affiliated pharmacies or at all. Further, 5–8 percent of dispensed prescription records did not have corresponding medical record drug entry notations, probably reflecting incomplete recording of drug information on the medical record. Agreement between medical record and prescription record dosage entries ranged from 68 to 70 percent across two sites. Approximately 14 percent of drug records at one location and 21 percent of records at the other had nonmatching dosage information, probably reflecting dosage changes noted on the medical record but not reflected on pharmacy records. CONCLUSIONS: In the sites studied, dispensed prescription records reasonably reflect chart drug entries for drug name, but not necessarily dosage.
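The directions-for-use conversion described in the methods can be sketched as follows. This is a hypothetical illustration only: the abstract does not give the HMOs' actual sig-code tables or algorithm, so the code values and mappings below are invented for demonstration.

```python
# Hypothetical sketch of converting a directions-for-use (sig) code into a
# prescribed daily dosage. The frequency codes and their values are assumed,
# not taken from the study.

# Frequency codes -> administrations per day (assumed standard Latin sig codes)
FREQUENCY_PER_DAY = {
    "QD": 1,   # once daily
    "BID": 2,  # twice daily
    "TID": 3,  # three times daily
    "QID": 4,  # four times daily
}

def prescribed_daily_dosage(strength_mg: float, units_per_dose: float, freq_code: str) -> float:
    """Amount to be consumed per day, in mg."""
    return strength_mg * units_per_dose * FREQUENCY_PER_DAY[freq_code]

# e.g. 25 mg tablets, 1 tablet twice daily -> 50 mg/day
print(prescribed_daily_dosage(25, 1, "BID"))
```

Comparing a value derived this way against the dosage noted in the chart is what yields the dosage agreement percentages reported in the results.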


2003 ◽  
Vol 127 (6) ◽  
pp. 721-725
Author(s):  
Maamoun M. Al-Aynati ◽  
Katherine A. Chorneyko

Abstract Context.—Software that can convert spoken words into written text has been available since the early 1980s. Early continuous speech systems were developed in 1994, with the latest commercially available editions having a claimed accuracy of up to 98% of speech recognition at natural speech rates. Objectives.—To evaluate the efficacy of one commercially available voice-recognition software system with pathology vocabulary in generating pathology reports and to compare this with human transcription. To draw cost analysis conclusions regarding human versus computer-based transcription. Design.—Two hundred six routine pathology reports from the surgical pathology material handled at St Joseph's Healthcare, Hamilton, Ontario, were generated simultaneously using computer-based transcription and human transcription. The following hardware and software were used: a desktop 450-MHz Intel Pentium III processor with 192 MB of RAM, a speech-quality sound card (Sound Blaster), noise-canceling headset microphone, and IBM ViaVoice Pro version 8 with pathology vocabulary support (Voice Automated, Huntington Beach, Calif). The cost of the hardware and software used was approximately Can $2250. Results.—A total of 23 458 words were transcribed using both methods, with a mean of 114 words per report. The mean accuracy rate was 93.6% (range, 87.4%–96%) using the computer software, compared with a mean accuracy of 99.6% (range, 99.4%–99.8%) for human transcription (P < .001). Time needed to edit documents by the primary evaluator (M.A.) using the computer was on average twice that needed for editing the documents produced by human transcriptionists (range, 1.4–3.5 times). The extra time needed to edit documents was 67 minutes per week (13 minutes per day). Conclusions.—Computer-based continuous speech-recognition systems can be successfully used in pathology practice, even during the handling of gross pathology specimens. 
The relatively low accuracy rate of this voice-recognition software with resultant increased editing burden on pathologists may not encourage its application on a wide scale in pathology departments with sufficient human transcription services, despite significant potential financial savings. However, computer-based transcription represents an attractive and relatively inexpensive alternative to human transcription in departments where there is a shortage of transcription services, and will no doubt become more commonly used in pathology departments in the future.
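The accuracy figures above follow the usual word-level definition: the percentage of words rendered correctly out of all words transcribed. A minimal sketch of that calculation (my own illustration, not the authors' scoring code):

```python
# Word-level transcription accuracy as a percentage of correctly
# transcribed words. Illustrative only; the study's exact error-counting
# procedure is not given in the abstract.

def word_accuracy(total_words: int, erroneous_words: int) -> float:
    """Percentage of words transcribed correctly."""
    return 100.0 * (total_words - erroneous_words) / total_words

# For a report of the mean length (114 words), the reported mean software
# accuracy of 93.6% corresponds to roughly 7 word errors per report:
print(round(word_accuracy(114, 7), 1))
```

By the same measure, the human transcriptionists' 99.6% mean accuracy corresponds to well under one error per average-length report, which is the gap driving the doubled editing time.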


Author(s):  
Jessica M. Ray ◽  
John S. Barnett

As training researchers and developers, we strive to understand and produce effective and efficient training. Research suggests that the most effective form of instruction is individualized human tutoring. Yet this is rarely the most efficient form of instruction, in either money or instructor time. Technological advances and a vision of effective, yet more efficient, computer-based tutors have led to the development of sophisticated new training technologies such as Intelligent Tutoring Systems (ITSs). These systems have yet to reach their full forecast potential. In this paper we theorize that the issues key to the successful advancement of ITSs are human factors issues. Chief among these is determining how technology mediation impacts not only cognition, but also other key learning factors such as affect, emotion, motivation, and trust.


Author(s):  
H.J. Tange ◽  
V.A.B. Dreessen ◽  
H.H.L.M. Donkers ◽  
A. Hasman

1983 ◽  
Vol 27 (6) ◽  
pp. 455-458
Author(s):  
Richard Halstead-Nussloch ◽  
Mark C. Detweiler ◽  
M. Peter Jurkat ◽  
Elissa L.A. Hamilton ◽  
Leon S. Gold

The undergraduate human factors course was improved at the Stevens Institute of Technology. The objectives of the course improvement were twofold: 1) to increase the quality of the course, and 2) to increase enrollment. Computer-based modules were developed and implemented to achieve these objectives. Three primary findings emerged from their use. First, students finished the course with a firm grounding in the empirical and experimental methods of human factors. Second, students generated more design solution alternatives by using the modules. Third, course enrollment increased by seventy-five percent.


1997 ◽  
Vol 73 (4) ◽  
pp. 459-477 ◽  
Author(s):  
Douglas G. Pitt ◽  
Robert G. Wagner ◽  
Ronald J. Hall ◽  
Douglas J. King ◽  
Donald G. Leckie ◽  
...  

Forest managers require accurate and timely data that describe vegetation conditions on cutover areas to assess vegetation development and prescribe actions necessary to achieve forest regeneration objectives. Needs for such data are increasing with current emphasis on ecosystem management, escalating silvicultural treatment costs, evolving computer-based decision support tools, and demands for greater accountability. Deficiencies associated with field survey methods of data acquisition (e.g. high costs, subjectivity, and low spatial and temporal coverage) frequently limit decision-making effectiveness. The potential for remotely sensed data to supplement field-collected forest vegetation management data was evaluated in a problem analysis consisting of a comprehensive literature review and consultation with remote sensing and vegetation management experts at a national workshop. Among currently available sensors, aerial photographs appear to offer the most suitable combination of characteristics, including high spatial resolution, stereo coverage, a range of image scales, a variety of film, lens, and camera options, capability for geometric correction, versatility, and moderate cost. A flexible strategy that employs a sequence of 1:10,000-, 1:5,000-, and 1:500-scale aerial photographs is proposed to: 1) accurately map cutover areas, 2) facilitate location-specific prescriptions for silvicultural treatments, sampling, buffer zones, wildlife areas, etc., and 3) monitor and document conditions and activities at specific points during the regeneration period. Surveys that require very detailed information on smaller plants (<0.5-m tall) and/or individual or rare plant species are not likely to be supported by current remote sensing technologies. 
Recommended areas for research include: 1) digital frame cameras, or other cost-effective digital imagers, as replacements for conventional cameras, 2) computer-based classification and interpretation algorithms for digital image data, 3) relationships between image measures and physical measures, such as leaf-area index and biomass, 4) imaging standards, 5) airborne video, laser altimeters, and radar as complementary sensors, and 6) remote sensing applications in partial cutting systems. Key words: forest vegetation management, regeneration, remote sensing, aerial photography


1994 ◽  
Vol 33 (05) ◽  
pp. 464-472 ◽  
Author(s):  
N. Hardiker ◽  
J. Kirby ◽  
R. Tallis ◽  
M. Gonsalkarale ◽  
H. A. Heathfield

Abstract: The PEN & PAD Medical Record model describes a framework for an information model, designed to meet the requirements of an electronic medical record. This model has been successfully tested in a computer-based record system for General Practitioners as part of the PEN & PAD (GP) Project. Experiences of using the model for developing computer-based nursing records are reported. Results show that there are some problems with directly applying the model to the nursing domain. Whilst the main purpose of the nursing record is to document and communicate a patient’s care, it has several other, possibly incompatible, roles. Furthermore, the structure and content of the information contained within the nursing record is heavily influenced by the need for the nursing profession to visibly demonstrate the philosophical frameworks underlying their work. By providing new insights into the professional background of nursing records, this work has highlighted the need for nurses to clarify and make explicit their uses of information, and also provided them with some tools to assist in this task.


1988 ◽  
Vol 32 (18) ◽  
pp. 1237-1240
Author(s):  
J. Peter Kincaid ◽  
Richard Braby ◽  
John E. Mears ◽  
A.J.G. Babu

This paper describes current developments in automating the processes to author technical information (TI) and deliver it using microcomputers. It describes desirable characteristics that support the presentation of TI for technicians of varying skill levels. Addressed are human factors issues relating to information access, user acceptance, and display formats. Programming is being done in Smalltalk/V, an object-oriented language, on a Zenith 248 computer, which is compatible with the IBM PC/AT. The project emphasizes low-cost authoring and delivery of information that traditionally has been contained in paper technical manuals. Our intent is to support the Department of Defense initiative to shift from paper to paperless technical manuals.


Author(s):  
David R Desaulniers ◽  
Stephen Fleger

Since 1980 the Institute of Electrical and Electronics Engineers (IEEE) has supported development of human factors (HF) standards. Within IEEE, Subcommittee 5 (SC5) of the Nuclear Power Engineering Committee develops and maintains HF standards applicable to nuclear facilities. These standards are structured in a hierarchical fashion. The top-level standard (IEEE Std. 1023) defines the HF tasks required to support the integration of human performance into the design process. Five lower tier documents (IEEE Std. 845, 1082, 1289, 1786 and 1707) expand upon the upper tier standard. Presently, two new HF standards projects are underway: one to provide HF guidance for the validation of the system interface design and integrated systems operation, and another to provide guidance for designing and developing computer-based displays for monitoring and control of nuclear facilities. SC5 is also involved in outreach activities, including sponsorship of a series of conferences on human factors and nuclear power plants.

