The Analysis of Big Data Architecture for Healthcare Service: Case Study of a Public Hospital

Author(s):  
Wasinee Noonpakdee

Author(s):  
Michael Goul ◽  
T. S. Raghu ◽  
Ziru Li

As procurement organizations increasingly move from a cost-and-efficiency emphasis to a profit-and-growth emphasis, flexible data architecture will become an integral part of a procurement analytics strategy. It is therefore imperative for procurement leaders to understand and address digitization trends in supply chains and to develop robust data architecture and analytics strategies for the future. This chapter examines how companies can organize their procurement data architectures in the big data space to mitigate current limitations and to lay foundations for the discovery of new insights. It sets out to define the levels of maturity in procurement organizations as they pertain to the capture, curation, exploitation, and management of procurement data. The chapter then develops a framework for articulating the value proposition of moving between maturity levels and examines what the future entails for companies with mature data architectures. In addition to surveying the practitioner and academic research literature on procurement data analytics, the chapter presents detailed, structured interviews with over fifteen procurement experts from companies around the globe. The chapter identifies several strategies that have helped procurement organizations design strategic roadmaps for the development of robust data architectures, and it further identifies four archetypal procurement data architecture contexts. In addition, the chapter details an exemplary high-level mature data architecture for each archetype and examines the critical assumptions underlying each one. Data architectures built for the future need a design approach that supports both descriptive and real-time, prescriptive analytics.
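As an illustration only (not taken from the chapter), the closing claim can be read as a requirement that the same procurement data serve two paths: a batch, descriptive view and a real-time, prescriptive check. The sketch below assumes hypothetical names (PurchaseOrder, a spend cap of 50,000) purely to make the dual-path idea concrete.

```python
# Minimal sketch of a dual-path procurement analytics interface.
# All names and thresholds are illustrative assumptions, not the chapter's design.
from dataclasses import dataclass
from statistics import mean
from typing import Iterable, List


@dataclass
class PurchaseOrder:
    supplier: str
    amount: float


def descriptive_summary(orders: Iterable[PurchaseOrder]) -> dict:
    """Batch, descriptive view: order count and average spend over recorded history."""
    amounts = [o.amount for o in orders]
    return {"orders": len(amounts), "avg_spend": mean(amounts) if amounts else 0.0}


def prescriptive_check(order: PurchaseOrder, spend_cap: float = 50_000.0) -> str:
    """Real-time, prescriptive view: recommend an action for an incoming order."""
    return "escalate for review" if order.amount > spend_cap else "auto-approve"


if __name__ == "__main__":
    history: List[PurchaseOrder] = [
        PurchaseOrder("Acme", 12_000.0),
        PurchaseOrder("Globex", 64_500.0),
    ]
    print(descriptive_summary(history))
    print(prescriptive_check(PurchaseOrder("Initech", 75_000.0)))
```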


2021 ◽  
Author(s):  
Andrew Sudmant ◽  
Vincent Viguié ◽  
Quentin Lepetit ◽  
Lucy Oates ◽  
Abhijit Datey ◽  
...  

2020 ◽  
Vol 9 (5) ◽  
pp. 311 ◽  
Author(s):  
Sujit Bebortta ◽  
Saneev Kumar Das ◽  
Meenakshi Kandpal ◽  
Rabindra Kumar Barik ◽  
Harishchandra Dubey

Several real-world applications involve the aggregation of physical features corresponding to different geographic and topographic phenomena. This information plays a crucial role in analyzing and predicting several events. The application areas, which often require real-time analysis, include traffic flow, forest cover, disease monitoring and so on. However, most existing systems exhibit limitations at various levels of processing and implementation, most commonly a lack of reliability, poor scalability and excessive computational costs. In this paper, we address several well-known scalable serverless frameworks, i.e., Amazon Web Services (AWS) Lambda, Google Cloud Functions and Microsoft Azure Functions, for the management of geospatial big data. We discuss some of the existing approaches that are popularly used in analyzing geospatial big data and indicate their limitations. We report the applicability of our proposed framework in the context of a Cloud Geographic Information System (GIS) platform, and give an account of some state-of-the-art technologies and tools relevant to our problem domain. We also evaluate the performance of the proposed framework in terms of reliability, scalability, speed and security. Furthermore, we present map overlay analysis, point-cluster analysis, the generated heatmap and clustering analysis, together with the relevant statistical plots. We consider two application case studies. The first uses the Mineral Resources Data System (MRDS) dataset, which records the worldwide density of mineral resources on a country-wise basis. The second uses the Fairfax Forecast Households dataset, which provides parcel-level household predictions for 30 consecutive years. The proposed model integrates a serverless framework to reduce timing constraints and improves the performance of geospatial data processing for high-dimensional hyperspectral data.
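To make the serverless, point-cluster/heatmap idea concrete, here is a minimal sketch, not the authors' implementation: an AWS Lambda-style Python handler that bins (lat, lon) points into grid cells and returns per-cell counts. The event shape, grid resolution and function names are assumptions introduced for illustration.

```python
# Illustrative sketch only: serverless-style grid binning of geospatial points.
# The payload format and cell size are hypothetical, not from the paper.
import json
from collections import Counter

CELL_SIZE_DEG = 0.5  # assumed grid resolution in degrees


def bin_points(points, cell_size=CELL_SIZE_DEG):
    """Aggregate (lat, lon) points into grid cells and count occurrences per cell."""
    counts = Counter()
    for lat, lon in points:
        cell = (round(lat / cell_size) * cell_size,
                round(lon / cell_size) * cell_size)
        counts[cell] += 1
    return counts


def lambda_handler(event, context):
    """AWS Lambda-style entry point; expects {'points': [[lat, lon], ...]}."""
    points = event.get("points", [])
    counts = bin_points([tuple(p) for p in points])
    # Serialise tuple keys as strings so the response body is JSON-safe.
    body = {f"{lat},{lon}": n for (lat, lon), n in counts.items()}
    return {"statusCode": 200, "body": json.dumps(body)}


if __name__ == "__main__":
    demo = {"points": [[38.85, -77.30], [38.86, -77.31], [10.0, 20.0]]}
    print(lambda_handler(demo, None))
```

The same handler shape could be deployed, with provider-specific packaging, to Google Cloud Functions or Azure Functions, which is one reason function-as-a-service platforms are attractive for bursty geospatial aggregation workloads.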


2021 ◽  
Vol 139 (1) ◽  
pp. 94-127
Author(s):  
Mark Faulkner

Abstract This paper demonstrates the potential of new methodologies for using existing corpora of medieval English to better contextualise linguistic variants, a major task of philology and a key underpinning of our ability to answer major literary-historical questions, such as when, where and to what purpose medieval texts and manuscripts were produced. The primary focus of the article is the assistance these methods can offer in dating the composition of texts, which it illustrates with a case study of the “Old” English Life of St Neot, uniquely preserved in the mid-twelfth-century South-Eastern homiliary, London, British Library, Cotton Vespasian D.xiv, fols. 4–169. While the Life has recently been dated to around 1100, examining its orthography, lexis, syntax and style alongside those of all other English-language texts surviving from before 1150, using new techniques for searching the Dictionary of Old English Corpus, suggests it is very unlikely to be this late. The article closes with some reflections on what book-historical research should prioritise as it further evolves into the digital age.
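As a purely hypothetical sketch of the kind of corpus comparison such dating work relies on (not the article's actual technique or data), one might tabulate the relative frequency of competing spellings across a small set of dated texts. The mini-corpus, the variant pair and all values below are invented for illustration.

```python
# Hypothetical sketch: relative frequency of two spelling variants across
# a tiny, invented corpus of dated text samples. Not the article's method or data.
import re
from typing import Dict, Tuple

# Invented mini-corpus: text id -> (approximate date, text sample).
CORPUS: Dict[str, Tuple[int, str]] = {
    "textA": (1000, "se cyning and se biscop ..."),
    "textB": (1140, "te king and te biscop ..."),
}


def variant_ratio(text: str, early: str, late: str) -> float:
    """Share of the later spelling among all tokens matching either variant."""
    tokens = re.findall(r"[a-zæðþ]+", text.lower())
    e = tokens.count(early)
    l = tokens.count(late)
    return l / (e + l) if (e + l) else 0.0


if __name__ == "__main__":
    for tid, (date, sample) in CORPUS.items():
        print(tid, date, round(variant_ratio(sample, early="se", late="te"), 2))
```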

