Computerizing Large Systems: Lessons from an Application

1988 ◽  
Vol 13 (3) ◽  
pp. 29-38
Author(s):  
Abhimanyu Singh

Large-volume data systems, such as the tabulation of university examination results, often conceal innumerable exceptions and complexities accumulated over the years. Handled manually, these exceptions are absorbed with flexibility and ease alongside the large volume of routine work, but they can wreck poorly conceived computer applications. Abhimanyu Singh narrates one such application: the computerization of examination-result tabulation at the University of Rajasthan in 1982. Instead of cutting the delays in the announcement of results, computerization added to the delays and compounded them with many errors. He analyses that experience, highlights the remedial steps taken from 1983 onwards that made a success of the application, and contrasts the approaches taken towards computerization in 1982 and from 1983 to 1985 to arrive at useful lessons for computerizing large data systems.

2021 ◽  
Vol 28 (1) ◽  
pp. e100307
Author(s):  
Janice Miller ◽  
Frances Gunn ◽  
Malcolm G Dunlop ◽  
Farhat VN Din ◽  
Yasuko Maeda

Objectives: A customised data management system was required for a rapidly implemented COVID-19-adapted colorectal cancer pathway in order to mitigate the risks of delayed and missed diagnoses during the pandemic. We assessed its performance and robustness.

Methods: A system was developed using Microsoft Excel (2007) to retain the intuitiveness of direct data entry in spreadsheets. Visual Basic for Applications (VBA) was used to construct a user-friendly interface that enhanced the efficiency of data entry and segregated the data for operational tasks.

Results: Large-scale data segregation was possible using VBA macros. Data validation and conditional formatting minimised data-entry errors. Computation with the COUNT function enabled live data monitoring.

Conclusion: It is possible to rapidly implement a makeshift database system with clinicians' regular input. Large-volume data management using a spreadsheet system is possible with appropriate data definitions and VBA-programmed data segregation. The described concept is applicable to the construction of any data management system requiring speed and flexibility in a resource-limited situation.
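To illustrate the pattern the abstract describes, here is a minimal Python/pandas sketch of programmatic validation and segregation, analogous to what the authors implemented with Excel data validation and VBA macros. The column names and validation rules are assumptions for illustration, not details taken from the paper.

```python
# A minimal sketch of the validate-then-segregate pattern the paper
# implements in Excel/VBA. Column names ("priority", "clinic_date")
# and the allowed values are hypothetical, not from the paper.
import pandas as pd

VALID_PRIORITIES = {"urgent", "soon", "routine"}  # assumed categories

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Mimic Excel data validation: flag rows with out-of-range values."""
    df = df.copy()
    df["invalid_priority"] = ~df["priority"].isin(VALID_PRIORITIES)
    df["missing_date"] = pd.to_datetime(df["clinic_date"], errors="coerce").isna()
    return df

def segregate(df: pd.DataFrame) -> dict:
    """Mimic the VBA macros that split the master list for operational tasks."""
    return {priority: group for priority, group in df.groupby("priority")}

def live_counts(df: pd.DataFrame) -> pd.Series:
    """Rough analogue of COUNT-based live monitoring."""
    return df["priority"].value_counts()

if __name__ == "__main__":
    master = pd.DataFrame({
        "patient_id": [101, 102, 103],
        "priority": ["urgent", "routine", "unknown"],
        "clinic_date": ["2020-04-01", "2020-04-03", None],
    })
    checked = validate(master)
    print(checked[["patient_id", "invalid_priority", "missing_date"]])
    print(live_counts(master))
```

The point of the design, in either tool, is that a single master table remains the source of truth while task-specific views and counts are regenerated by code rather than maintained by hand.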


2004 ◽  
Vol 128 (1) ◽  
pp. 71-83 ◽  
Author(s):  
James H. Harrison

Abstract

Context.—Effective pathology practice increasingly requires familiarity with concepts in medical informatics that may cover a broad range of topics, for example, traditional clinical information systems, desktop and Internet computer applications, and effective protocols for computer security. To address this need, the University of Pittsburgh (Pittsburgh, Pa) includes a full-time, 3-week rotation in pathology informatics as a required component of pathology residency training.

Objective.—To teach pathology residents general informatics concepts important in pathology practice.

Design.—We assess the efficacy of the rotation in communicating these concepts using a short-answer examination administered at the end of the rotation. Because the increasing use of computers and the Internet in education and general communications prior to residency training has the potential to communicate key concepts that might not need additional coverage in the rotation, we have also evaluated incoming residents' informatics knowledge using a similar pretest.

Data Sources.—This article lists 128 questions that cover a range of topics in pathology informatics at a level appropriate for residency training. These questions were used for pretests and posttests in the pathology informatics rotation in the Pathology Residency Program at the University of Pittsburgh for the years 2000 through 2002. With slight modification, the questions are organized here into 15 topic categories within pathology informatics. The answers provided are brief and are meant to orient the reader to the question and suggest the level of detail appropriate in an answer from a pathology resident.

Results.—A previously published evaluation of the test results revealed that pretest scores did not increase during the 3-year evaluation period, and self-assessed computer skill level correlated with pretest scores, but all pretest scores were low. Posttest scores increased substantially, and posttest scores did not correlate with the self-assessed computer skill level recorded at pretest time.

Conclusions.—Even residents who rated themselves high in computer skills lacked many concepts important in pathology informatics, and posttest scores showed that residents with both high and low self-assessed skill levels learned pathology informatics concepts effectively.


2020 ◽  
Vol 5 (2) ◽  
pp. 13-32
Author(s):  
Hye-Kyung Yang ◽  
Hwan-Seung Yong

Abstract

Purpose: We propose InParTen2, a multi-aspect parallel factor analysis three-dimensional tensor decomposition algorithm based on the Apache Spark framework. The proposed method reduces re-decomposition cost and can handle large tensors.

Design/methodology/approach: Considering that tensor addition increases the size of a given tensor along all axes, the proposed method decomposes incoming tensors using existing decomposition results without generating sub-tensors. Additionally, InParTen2 avoids the calculation of Khatri–Rao products and minimises shuffling by using the Apache Spark platform.

Findings: The performance of InParTen2 was evaluated by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods on various datasets. The results confirm that InParTen2 can process large tensors and reduce the re-calculation cost of tensor decomposition. Consequently, the proposed method is faster than existing tensor decomposition algorithms and can significantly reduce re-decomposition cost.

Research limitations: There are several Hadoop-based distributed tensor decomposition algorithms as well as MATLAB-based decomposition methods. However, the former require longer iteration times, so their execution time cannot be fairly compared with that of Spark-based algorithms, whereas the latter run on a single machine, limiting their ability to handle large data.

Practical implications: The proposed algorithm reduces re-decomposition cost when tensors are added to a given tensor by decomposing them based on existing decomposition results, without re-decomposing the entire tensor.

Originality/value: The proposed method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.
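For orientation, here is a minimal single-machine NumPy sketch of the alternating-least-squares (ALS) algorithm for CANDECOMP/PARAFAC decomposition that InParTen2 builds on. It materialises the Khatri–Rao product explicitly, which is precisely the cost the paper's Spark implementation is designed to avoid, and it omits the paper's incremental update; it is a baseline sketch, not the authors' algorithm.

```python
# Baseline CP-ALS for a 3-way tensor, with an explicit Khatri-Rao product.
# Distributed methods such as InParTen2 avoid materialising this product;
# this sketch shows the computation they optimise away.
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: (I*J) x R from (I x R) and (J x R)."""
    I, R = U.shape
    J, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def cp_als(X, rank, n_iter=50, seed=0):
    """Rank-R CP decomposition of a dense (I, J, K) tensor via ALS."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        # Each update is an MTTKRP followed by a small R x R pseudo-inverse.
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (20, 25, 30))
    X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)   # exact rank-3 tensor
    A, B, C = cp_als(X, rank=3, n_iter=100)
    Xhat = np.einsum("ir,jr,kr->ijk", A, B, C)
    print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```

Each mode update is a matricised-tensor-times-Khatri–Rao-product (MTTKRP); distributing that product while minimising shuffling is where Spark-based methods such as InParTen2 concentrate their effort.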


Author(s):  
Erlu Wang ◽  
Priyan Malarvizhi Kumar ◽  
R. Dinesh Jackson samuel

Achieving high-order features in graph-based dependency parsing without increasing decoding difficulty is a hard problem. To address it, this article offers a Semantic Graphical Dependence Parsing Model (SGDPM) that combines a language-dependency model with beam search to represent high-order features for computer applications. The first step is to scan a large amount of unannotated data with a baseline parser; the resulting auto-parsed data are used to build a Language-Dependence Model (LDM). The LDM supplies a set of new features during beam-search decoding: the LDM features are incorporated into the parsing model and are also used in parsing models for bilingual text. The main benefit of this approach is that rich high-order features can be defined over a large scope, and a large additional raw corpus can be exploited, without increasing the difficulty of decoding. SGDPM was evaluated on both mono-parsing and bi-parsing tasks, with experiments on English and Chinese data for the mono-parsing task. Experimental results show that the model achieves the best reported accuracy on the Chinese data and accuracy comparable to the best known systems on the English data. Furthermore, lab-scale experiments on Chinese/English bilingual data in the bitext-parsing task outperform the best reported existing solutions.
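Since the abstract's key mechanism is folding externally mined LDM features into beam-search decoding, here is a generic Python sketch of that mechanism. It illustrates the decoding pattern only, not the SGDPM parser itself: the hypothesis representation, feature functions and weight are placeholder assumptions.

```python
# Generic beam-search decoding with an extra feature score folded in.
# score_base stands in for the baseline parser's features; score_ldm stands
# in for features mined from auto-parsed data. Both are placeholders.
import heapq

def beam_search(tokens, expand, score_base, score_ldm, beam_width=8, ldm_weight=0.5):
    """Return the best-scoring complete hypothesis.

    expand(hyp, token) -> iterable of extended partial hypotheses
    """
    beam = [()]  # start from the empty hypothesis
    for token in tokens:
        candidates = []
        for hyp in beam:
            for new_hyp in expand(hyp, token):
                total = score_base(new_hyp) + ldm_weight * score_ldm(new_hyp)
                candidates.append((total, new_hyp))
        # keep only the top-k hypotheses; per-step cost stays bounded
        beam = [h for _, h in heapq.nlargest(beam_width, candidates, key=lambda c: c[0])]
    return beam[0]

if __name__ == "__main__":
    # Toy demo: a "parse" is a tuple of (dependent, head) arcs; each new
    # token may attach to any earlier position or the root (0).
    def expand(hyp, tok):
        idx = len(hyp) + 1
        return [hyp + ((idx, head),) for head in range(idx)]

    def score_base(hyp):
        return -sum(abs(d - h) for d, h in hyp)  # prefer short arcs

    def score_ldm(hyp):
        return 0.0  # stand-in for an LDM feature score

    print(beam_search(["We", "parse", "text"], expand, score_base, score_ldm))
```

Because only beam_width partial hypotheses survive each step, adding extra feature scores such as the LDM features changes which hypotheses the beam keeps without changing the decoding cost, which is the property the abstract emphasises.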


Author(s):  
Hugh Clout

Terry Coppock FBA was a pioneer in three areas of scholarship – agricultural geography, land-use management and computer applications – whose academic career was spent at University College London and the University of Edinburgh, where he was the first holder of the Ogilvie Chair in Geography. He received the Victoria Medal from the Royal Geographical Society and was elected Fellow of the British Academy in 1976. Coppock, who was Secretary and then Chair of the Commission on World Food Problems and Agricultural Productivity of the International Geographical Union, also served as Secretary-Treasurer of the Carnegie Trust for the Universities of Scotland. Obituary by Hugh Clout FBA.


2020 ◽  
Vol 26 (10) ◽  
pp. 3008-3021 ◽  
Author(s):  
Jonathan Sarton ◽  
Nicolas Courilleau ◽  
Yannick Remion ◽  
Laurent Lucas

2019 ◽  
Vol 21 (2) ◽  
pp. 180-199 ◽  
Author(s):  
Shivam Dolhey

Purpose: The purpose of this study is to provide a bibliometric analysis of the research on entrepreneurial intentions. A total of 1,393 papers published from 2000 to 2018 are analysed. The study identifies the significant journals in this area, the years with the most publications, the most cited papers, the most important authors, and the most prolific countries and institutions. Co-authorship, inter-country co-authorship and keyword co-occurrence network maps are then provided.

Design/methodology/approach: The Scopus database was used to retrieve and analyse the data on the papers included in this study. The VOSviewer software was then used to create the co-authorship network map, the inter-country co-authorship network map and the keyword co-occurrence network maps.

Findings: The results indicate that the largest number of papers was published in 2017, the most significant journal is the International Journal of Entrepreneurship and Small Business, and the most cited paper concerns competing models of entrepreneurial intentions. Furthermore, the most prominent author is Francisco Linan, and the most prolific country and institution are the USA and the University of Seville (Spain), respectively.

Originality/value: This study contributes to the existing literature on entrepreneurial intentions by providing a more comprehensive and reliable picture of the area using bibliometric techniques. The results can help guide authors interested in conducting future research on this topic.
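As a rough illustration of the keyword co-occurrence step behind such VOSviewer maps, here is a minimal Python sketch that counts keyword pairs from a bibliographic export. The file name, column name and semicolon delimiter are assumptions about a Scopus-style CSV, not details given in the paper.

```python
# Count keyword co-occurrences from a Scopus-style CSV export.
# "scopus_export.csv" and the "Author Keywords" column are assumptions.
import csv
from collections import Counter
from itertools import combinations

def keyword_cooccurrences(path, column="Author Keywords"):
    pairs = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # normalise and deduplicate the keywords of one paper
            kws = sorted({k.strip().lower()
                          for k in (row.get(column) or "").split(";")
                          if k.strip()})
            pairs.update(combinations(kws, 2))  # every pair co-occurring in a paper
    return pairs

if __name__ == "__main__":
    cooc = keyword_cooccurrences("scopus_export.csv")
    for (a, b), n in cooc.most_common(10):
        print(f"{a} -- {b}: {n}")
```

These pair counts are the edge weights of a keyword co-occurrence network of the kind VOSviewer lays out and clusters.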

