Changing Healthcare Institutions with Large Information Technology Projects

Author(s):  
Matthew W. Guah

This article reviews the development of institutional theory in direct relation to historical changes within the UK’s National Health Service (NHS), with an eye to contributing to the theoretical specification of healthcare information processes. This is done partly by extending certain paradigms (see Meyer & Rowan, 1991; Powell & DiMaggio, 1991; Tolbert & Zucker, 1994) through a proposed model of the causes and consequences of variations in levels of institutionalisation in the healthcare industry. It reports findings from a 5-year study of the NHS implementation of the largest civil IS programme worldwide, at an estimated cost of $10 billion over a 10-year period. The theoretical basis for analysis is developed using concepts drawn from neo-institutionalism, realisation of business value, and organisational logic, as well as mixed empirical results about the lack of IT investment value in the NHS. The findings suggest that large-scale IT change imposed upon a highly institutionalised healthcare industry is fraught with difficulty, mainly because culturally embedded norms, values, and behavioural patterns serve to impede centrally imposed initiatives to automate clinical working practices. The article concludes with a discussion of the nature of evaluation procedures in relation to the process of institutionalising IS in healthcare.


1976 ◽  
Vol 7 (4) ◽  
pp. 236-241 ◽  
Author(s):  
Marisue Pickering ◽  
William R. Dopheide

This report describes an effort to begin identifying children with speech and language problems in rural areas using existing school personnel. A two-day competency-based workshop for training aides to conduct large-scale screening of speech and language problems in elementary-school-age children is described. Training strategies, implementation, and evaluation procedures are discussed.


Author(s):  
Honghai LI ◽  
Jun CAI

The transformation of China's design innovation industry has highlighted the importance of design research. The design research process in practice can be regarded as a process of knowledge production, and the design 3.0 mode, based on knowledge production Mode 2, has emerged in the Chinese design innovation industry. Building on this, this paper establishes a map with two dimensions describing how knowledge integration occurs in practice-based design research: the transfer of design knowledge and the contextual transformation of design knowledge. We use this map to analyse design research cases. Through the analysis, we define four typical practice-based design research models from the viewpoint of knowledge integration. The method and the proposed model can provide a theoretical basis and a path for better managing design research projects.


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems, using only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing a contributor's expected quality. The method is based on comparing the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method, with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of a pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of contributors in a community tagging system (provided that contributor behavior fits the proposed model). Practical relevance: The results can be used in the development of systems based on the coordinated efforts of a community (primarily, community tagging systems).
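
The eigenvector step described above can be sketched in a few lines of Python. This is a minimal illustration, assuming a simple Jaccard-overlap agreement score as a stand-in for the paper's domination characteristic (which the abstract does not fully specify); the tag data below is hypothetical:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def expected_quality(tags):
    """Estimate contributor quality as the leading eigenvector of a
    pairwise agreement matrix, found by power iteration.

    tags: list of dicts; tags[i][img] is the set of tags contributor i
    assigned to image img. Mean Jaccard overlap on shared images is an
    assumed stand-in for the paper's domination characteristic.
    """
    n = len(tags)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            shared = tags[i].keys() & tags[j].keys()
            if shared:
                d[i, j] = np.mean([jaccard(tags[i][k], tags[j][k]) for k in shared])
    # Power iteration: for a non-negative matrix this converges to a
    # non-negative leading eigenvector (Perron-Frobenius).
    q = np.ones(n) / n
    for _ in range(200):
        q = d @ q
        q /= q.sum()
    return q

tags = [
    {"img1": {"cat", "pet"}, "img2": {"dog"}},          # agrees with peer
    {"img1": {"cat", "pet"}, "img2": {"dog", "park"}},  # agrees with peer
    {"img1": {"car"}, "img2": {"tree"}},                # off-topic tagger
]
q = expected_quality(tags)
print(q)  # the third contributor scores lowest
```

With no ground truth available, agreement with other contributors is the only signal, which is exactly why the method compares tag sets pairwise.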


2020 ◽  
Author(s):  
Pranav C

The word blockchain elicits thoughts of cryptocurrency much of the time, which does a disservice to this disruptive new technology. Admittedly, Bitcoin, launched in 2009, was the first large-scale implementation of blockchain technology, and Bitcoin’s success has triggered the establishment of nearly 1,000 new cryptocurrencies. This in turn led to the misconception that the only application of blockchain technology is the creation of cryptocurrencies. However, blockchain technology is capable of much more than cryptocurrency creation and may support such things as transactions that require personal identification, peer review, elections and other types of democratic decision-making, and audit trails. Blockchain already has real-world implementations beyond cryptocurrencies, and these solutions deliver powerful benefits to healthcare organizations, bankers, retailers, and consumers, among others. One of the areas where blockchain technology can be used effectively is the healthcare industry. Proper application of this technology in healthcare will not only save billions of dollars but will also contribute to the growth of research. This review paper briefly defines blockchain and discusses in detail its applications in various areas, particularly the healthcare industry.
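
The tamper-evidence property that makes blockchain attractive for audit trails and health records can be shown with a minimal hash chain. This is a sketch of the general technique only, not any production healthcare system; the record contents are invented:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})
    return chain

def verify(chain):
    """A chain is valid iff every block stores the hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, {"record": "patient A: lab result"})
add_block(chain, {"record": "patient A: prescription"})
print(verify(chain))  # True
chain[0]["data"]["record"] = "tampered"
print(verify(chain))  # False: the stored hash no longer matches
```

Because each block commits to its predecessor's hash, silently editing any earlier record invalidates every block after it, which is the property audit trails rely on.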


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful to the healthcare and biomedical sectors for predicting disease. For trivial symptoms, the difficulty is in meeting doctors at any time in the hospital. Thus, big data provides essential data regarding diseases on the basis of a patient’s symptoms. For several medical organizations, disease prediction is important for making the best feasible healthcare decisions. Conversely, the conventional medical care model offers structured input, which requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improvised deep learning concept. Here, different datasets pertaining to diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson’s disease, and Alzheimer’s disease are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring each attribute’s range to a common level. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to magnify large-scale deviations. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are subjected to hybrid deep learning algorithms, namely a Deep Belief Network (DBN) and a Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Further, comparative evaluation of the proposed prediction against existing models certifies its effectiveness through various performance measures.
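
The normalization and weighting phases can be sketched as follows. The weights here are fixed by assumption purely for illustration, whereas the paper optimizes them with the hybrid JA-MVO algorithm; the sample attribute values are invented:

```python
import numpy as np

def min_max_normalize(x):
    """Scale each attribute (column) to [0, 1] so that all
    attributes share a common range before weighting."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard constant columns
    return (x - lo) / span

def weighted_features(x_norm, w):
    """Multiply each normalized attribute by a per-attribute weight.
    In the paper these weights are tuned by JA-MVO; here they are
    fixed as an assumption."""
    return x_norm * w

# Hypothetical patient attributes: 3 patients x 3 measurements
x = np.array([[120.0, 80.0, 1.2],
              [140.0, 95.0, 2.8],
              [110.0, 70.0, 0.9]])
w = np.array([0.5, 1.5, 1.0])   # assumed weights, not optimized
f = weighted_features(min_max_normalize(x), w)
print(f.shape)  # (3, 3)
```

After this step the features would be fed to the DBN/RNN predictors; the sketch covers only phases (a) and (b) of the pipeline.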


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1670
Author(s):  
Waheeb Abu-Ulbeh ◽  
Maryam Altalhi ◽  
Laith Abualigah ◽  
Abdulwahab Ali Almazroi ◽  
Putra Sumari ◽  
...  

Cyberstalking is a growing anti-social problem that is being transformed on a large scale and in various forms. Cyberstalking detection has become increasingly popular in recent years and has been investigated technically by many researchers. However, cyberstalking victimization, an essential part of cyberstalking, has empirically received less attention from the research community. This paper attempts to address this gap and develops a model to understand and estimate the prevalence of cyberstalking victimization. The model is built on routine activities and lifestyle exposure theories and includes eight hypotheses. The data were collected from 757 respondents in Jordanian universities. The paper utilizes a quantitative approach and uses structural equation modeling for data analysis. The results revealed a modest prevalence range that depends on the type of cyberstalking. The results also indicated that proximity to motivated offenders, suitable targets, and digital guardians significantly influence cyberstalking victimization. The outcome of moderation hypothesis testing demonstrated that age and residence have a significant effect on cyberstalking victimization. The proposed model is an essential element for assessing cyberstalking victimization among societies and provides a valuable understanding of its prevalence. This can assist researchers and practitioners in future research on cyberstalking victimization.
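
Moderation testing of the kind described can be illustrated with a plain least-squares regression containing an interaction term. This is a simplified stand-in for the paper's structural equation model, run on synthetic data with invented variable names:

```python
import numpy as np

def moderation_test(x, m, y):
    """Fit y = b0 + b1*x + b2*m + b3*(x*m).
    A sizeable b3 indicates that m (e.g., age or residence)
    moderates the effect of x (e.g., exposure to offenders) on y
    (victimization). A least-squares sketch of the idea, not the
    paper's full structural equation model."""
    design = np.column_stack([np.ones_like(x), x, m, x * m])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=500)                          # exposure (synthetic)
m = rng.integers(0, 2, size=500).astype(float)    # moderator: residence (0/1)
y = 0.4 * x + 0.8 * x * m + rng.normal(scale=0.1, size=500)
beta = moderation_test(x, m, y)
print(beta.round(2))  # interaction coefficient recovered near 0.8
```

In a full SEM the latent constructs would be measured by multiple indicators, but the interaction-term logic for a moderation hypothesis is the same.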


Author(s):  
Junshu Wang ◽  
Guoming Zhang ◽  
Wei Wang ◽  
Ka Zhang ◽  
Yehua Sheng

Abstract With the rapid development of hospital informatization and Internet medical services in recent years, most hospitals have launched online appointment registration systems to remove patient queues and improve the efficiency of medical services. However, most patients lack professional medical knowledge and have no idea how to choose a department when registering. To guide patients in seeking medical care and registering effectively, we propose CIDRS, an intelligent self-diagnosis and department recommendation framework based on Chinese medical Bidirectional Encoder Representations from Transformers (BERT) in the cloud computing environment. We also established a Chinese BERT model (CHMBERT) trained on a large-scale Chinese medical text corpus. This model was used to optimize the self-diagnosis and department recommendation tasks. To overcome the limited computing power of terminals, we deployed the proposed framework in a cloud computing environment based on container and micro-service technologies. Real-world medical datasets from hospitals were used in the experiments, and the results showed that the proposed model was superior to traditional deep learning models and other pre-trained language models in terms of performance.
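
The department recommendation step can be sketched with a toy keyword scorer. The paper instead fine-tunes the CHMBERT language model to score departments from free-text complaints, so the keyword lists and department names below are purely illustrative assumptions:

```python
# Assumed toy keyword lists; the paper's CIDRS framework uses a
# fine-tuned Chinese medical BERT (CHMBERT) instead of keywords.
DEPARTMENT_KEYWORDS = {
    "cardiology": {"chest", "palpitations", "pressure"},
    "dermatology": {"rash", "itch", "skin"},
    "neurology": {"headache", "dizziness", "numbness"},
}

def recommend_department(symptom_text):
    """Rank departments by keyword overlap with the complaint.
    A crude stand-in for the BERT classifier's softmax over
    department labels."""
    words = set(symptom_text.lower().split())
    scores = {d: len(words & kw) for d, kw in DEPARTMENT_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(recommend_department("sudden chest pressure and palpitations"))
# → cardiology
```

A learned classifier replaces the hand-written keyword sets with representations trained on a medical corpus, which is what lets the real system generalize beyond exact word matches.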


2010 ◽  
Vol 23 (12) ◽  
pp. 3157-3180 ◽  
Author(s):  
N. Eckert ◽  
H. Baya ◽  
M. Deschatres

Abstract Snow avalanches are natural hazards strongly controlled by the mountain winter climate, but their recent response to climate change has thus far been poorly documented. In this paper, hierarchical modeling is used to obtain robust indexes of the annual fluctuations of runout altitudes. The proposed model includes a possible level shift, and distinguishes common large-scale signals in both mean- and high-magnitude events from the interannual variability. Application to the data available in France over the last 61 winters shows that the mean runout altitude is not different now than it was 60 yr ago, but that snow avalanches have been retreating since 1977. This trend is of particular note for high-magnitude events, which have seen their probability rates halved, a crucial result in terms of hazard assessment. Avalanche control measures, observation errors, and model limitations are insufficient explanations for these trends. On the other hand, strong similarities in the pattern of behavior of the proposed runout indexes and several climate datasets are shown, as well as a consistent evolution of the preferred flow regime. The proposed runout indexes may therefore be usable as indicators of climate change at high altitudes.
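
The level shift that the hierarchical model allows for can be illustrated with a crude single-changepoint search on synthetic data. This is far simpler than the paper's hierarchical Bayesian treatment, and the series below is invented, but it shows the idea of locating a shift in mean runout level:

```python
import numpy as np

def best_level_shift(series):
    """Find the single changepoint that best splits the series into
    two constant levels (least total squared error) — a crude
    stand-in for the paper's hierarchical level-shift model."""
    n = len(series)
    best_k, best_sse = 1, np.inf
    for k in range(1, n):
        left, right = series[:k], series[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Synthetic runout index: stable level, then a retreat after year 30
rng = np.random.default_rng(1)
runout = np.concatenate([rng.normal(0.0, 0.3, 30), rng.normal(1.0, 0.3, 31)])
k = best_level_shift(runout)
print(k)  # near 30
```

The hierarchical model additionally separates common large-scale signals from interannual variability and treats mean and high-magnitude events jointly, which this point estimate cannot do.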

