Standardizing Physiologic Assessment Data to Enable Big Data Analytics

2016 ◽  
Vol 39 (1) ◽  
pp. 63-77 ◽  
Author(s):  
Susan A. Matney ◽  
Theresa (Tess) Settergren ◽  
Jane M. Carrington ◽  
Rachel L. Richesson ◽  
Amy Sheide ◽  
...  

Disparate data must be represented in a common format to enable comparison across multiple institutions and facilitate Big Data science. Nursing assessments represent a rich source of information. However, a lack of agreement regarding essential concepts and standardized terminology currently prevents their use for Big Data science. The purpose of this study was to align a minimum set of physiological nursing assessment data elements with national standardized coding systems. Six institutions shared their 100 most common electronic health record nursing assessment data elements. From these, a set of distinct elements was mapped to the nationally recognized Logical Observation Identifiers Names and Codes (LOINC®) and Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT®) standards. We identified 137 observation names (55% new to LOINC) and 348 observation values (20% new to SNOMED CT) organized into 16 panels (72% new to LOINC). This reference set can support the exchange of nursing information, facilitate multi-site research, and provide a framework for nursing data analysis.
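To make the shape of such a reference set concrete, here is a minimal Python sketch of mapping local EHR assessment element names and values to standardized observation codes. The element names, code values, and panel labels are invented placeholders, not actual LOINC or SNOMED CT identifiers.

```python
# Minimal sketch of a local-term -> standard-code reference set.
# All codes below are placeholders, NOT real LOINC or SNOMED CT identifiers.

OBSERVATION_NAMES = {
    "heart_rate": {"loinc": "XXXX-1", "panel": "Vital signs panel"},
    "resp_rate":  {"loinc": "XXXX-2", "panel": "Vital signs panel"},
    "skin_color": {"loinc": "XXXX-3", "panel": "Skin assessment panel"},
}

OBSERVATION_VALUES = {
    "skin_color": {"pink": "SCT-0001", "pale": "SCT-0002"},  # placeholder answer codes
}

def map_observation(local_name, local_value=None):
    """Translate a local EHR assessment element (and optional value) into standard codes."""
    entry = OBSERVATION_NAMES.get(local_name)
    if entry is None:
        return {"status": "unmapped", "local_name": local_name}
    mapped = {"status": "mapped", "loinc": entry["loinc"], "panel": entry["panel"]}
    if local_value is not None:
        mapped["snomed_ct"] = OBSERVATION_VALUES.get(local_name, {}).get(local_value, "unmapped")
    return mapped

print(map_observation("skin_color", "pale"))
# {'status': 'mapped', 'loinc': 'XXXX-3', 'panel': 'Skin assessment panel', 'snomed_ct': 'SCT-0002'}
```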

Author(s):  
Aakriti Shukla ◽  
Dr Damodar Prasad Tiwari ◽  

Dimension reduction, or feature selection, is widely regarded as the backbone of big data applications because of the performance gains it brings. In recent years, many scholars have turned their attention to data science and analysis for real-time applications built on big data integration. Manual interaction with big data is prohibitively slow, so feature selection must be made elastic and scalable to handle heavy workloads in a distributed system. This study presents a survey of alternative optimization techniques for feature selection, together with an analytical comparison of their limitations. It contributes to the development of a method for improving the efficiency of feature selection on large, complex data sets.
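As a point of reference for what feature selection looks like in practice (not one of the surveyed optimization techniques), the sketch below uses scikit-learn's SelectKBest to keep the ten features most informative about the target; the synthetic dataset stands in for a high-dimensional big data sample.

```python
# Minimal feature-selection sketch with scikit-learn (illustrative only).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic "wide" dataset standing in for a high-dimensional big data sample.
X, y = make_classification(n_samples=1000, n_features=200, n_informative=10, random_state=0)

# Keep the 10 features with the highest mutual information with the target.
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print(X.shape, "->", X_reduced.shape)                      # (1000, 200) -> (1000, 10)
print("selected feature indices:", selector.get_support(indices=True))
```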


2021 ◽  
pp. 034-041
Author(s):  
A.Y. Gladun ◽  
K.A. Khala ◽  

With the growing complexity of cybersecurity threats, it is becoming clear that one of the most important resources for combating cyberattacks is the processing of large amounts of data in the cyber environment. Processing such volumes of data and making decisions requires automating the tasks of searching, selecting, and interpreting Big Data to solve operational information security problems. Big Data analytics, complemented by semantic technology, can improve cybersecurity and makes it possible to process and interpret large amounts of information in the cyber environment. Semantic modeling methods are needed in Big Data analytics to select and combine heterogeneous Big Data sources and to recognize patterns of network attacks and other cyber threats, which must happen quickly so that countermeasures can be implemented. The authors therefore propose pre-processing Big Data metadata at the semantic level. As an analysis tool, they propose building a thesaurus of the problem based on the domain ontology, which should provide a terminological basis for integrating ontologies of different levels. The thesaurus is to be built from open information resources, dictionaries, and encyclopedias. Developing an ontology hierarchy formalizes the relationships between data elements that machine learning and artificial intelligence algorithms will later use to adapt to changes in the environment, which in turn will increase the efficiency of Big Data analytics for the cybersecurity domain.
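One way to make the thesaurus idea tangible is to encode domain terms as SKOS concepts with broader/narrower relations, which can later be aligned with ontologies at other levels. The sketch below uses the rdflib library; the namespace and concept names are illustrative placeholders, not the authors' actual thesaurus.

```python
# Sketch: a tiny cybersecurity thesaurus expressed as SKOS concepts with rdflib.
# The namespace, concepts, and relations are illustrative placeholders.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

SEC = Namespace("http://example.org/cybersec#")
g = Graph()
g.bind("skos", SKOS)
g.bind("sec", SEC)

# Broader/narrower terms form the terminological basis for integrating ontologies.
g.add((SEC.CyberThreat, RDF.type, SKOS.Concept))
g.add((SEC.CyberThreat, SKOS.prefLabel, Literal("cyber threat", lang="en")))

g.add((SEC.NetworkAttack, RDF.type, SKOS.Concept))
g.add((SEC.NetworkAttack, SKOS.prefLabel, Literal("network attack", lang="en")))
g.add((SEC.NetworkAttack, SKOS.broader, SEC.CyberThreat))

g.add((SEC.Phishing, RDF.type, SKOS.Concept))
g.add((SEC.Phishing, SKOS.prefLabel, Literal("phishing", lang="en")))
g.add((SEC.Phishing, SKOS.broader, SEC.NetworkAttack))

print(g.serialize(format="turtle"))
```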


Web Services ◽  
2019 ◽  
pp. 1301-1329
Author(s):  
Suren Behari ◽  
Aileen Cater-Steel ◽  
Jeffrey Soar

The chapter discusses how financial services organizations can take advantage of Big Data analysis for disruptive innovation, through examination of a case study in the financial services industry. Popular tools for Big Data analysis are discussed, the challenges of big data are explored, and ways to meet those challenges are outlined. The work of Hayes-Roth on Valued Information at the Right Time (VIRT) and its application to the case study is examined. Boyd's Observe, Orient, Decide, and Act (OODA) model is explained in relation to disruptive innovation in financial services. Future trends in big data analysis in the financial services domain are explored.
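To illustrate how the OODA cycle might be operationalized over a stream of financial events, here is a toy Python sketch; the stage functions, event fields, and escalation rule are hypothetical and only meant to show the Observe, Orient, Decide, Act flow, with the orient step filtering for high-value, timely information in the spirit of VIRT.

```python
# Toy OODA-loop sketch over a stream of hypothetical financial events.
def observe(event):
    """Collect a raw signal (e.g., a transaction or market tick)."""
    return event

def orient(observation, context):
    """Put the observation in context; keep only high-value, timely information (VIRT spirit)."""
    observation["relevant"] = observation["amount"] > context["alert_threshold"]
    return observation

def decide(oriented):
    return "escalate" if oriented["relevant"] else "ignore"

def act(decision, oriented):
    if decision == "escalate":
        print(f"Escalating event {oriented['id']} (amount={oriented['amount']})")

def ooda_loop(events, context):
    for event in events:
        oriented = orient(observe(event), context)
        act(decide(oriented), oriented)

ooda_loop(
    [{"id": 1, "amount": 120.0}, {"id": 2, "amount": 98000.0}],
    context={"alert_threshold": 10000.0},
)
```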


Web Services ◽  
2019 ◽  
pp. 1262-1281
Author(s):  
Chitresh Verma ◽  
Rajiv Pandey

Big Data analytics is a major branch of data science in which huge amounts of raw data are processed to produce insight for relevant business processes. Integrating big data and its analytics with Service-Oriented Architecture (SOA) is the need of the hour; such integration renders business processes reusable and scalable. This chapter explains the concepts of Big Data and Big Data analytics at the implementation level. It further describes Hadoop and its related technologies, one of the popular frameworks for Big Data analytics, and envisages integrating them with SOA through relevant case studies. Two case studies covering different scenarios connect real-world implementation with theory and enable a better understanding of industrial-level processes and practices.
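As a small, concrete slice of what a Hadoop analytics step can look like, below is a hypothetical Hadoop Streaming mapper and reducer in Python that count orders per category; the input layout (CSV lines of order_id,category,amount) is an assumption, and the aggregated result could then be exposed to other business processes through a SOA service layer.

```python
# mapper.py -- Hadoop Streaming mapper (illustrative; assumes CSV lines "order_id,category,amount").
import sys

for line in sys.stdin:
    fields = line.strip().split(",")
    if len(fields) == 3:
        _, category, _ = fields
        print(f"{category}\t1")
```

Hadoop Streaming sorts the mapper output by key before it reaches the reducer, which the reducer below relies on:

```python
# reducer.py -- Hadoop Streaming reducer: sums the per-category counts emitted by the mapper.
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, value = line.strip().split("\t", 1)
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{count}")
        current_key, count = key, 0
    count += int(value)

if current_key is not None:
    print(f"{current_key}\t{count}")
```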


Author(s):  
Sheik Abdullah A. ◽  
Selvakumar S. ◽  
Parkavi R. ◽  
Suganya R. ◽  
Abirami A. M.

The importance of big data analytics has made the process of solving various real-world problems simpler. The big data and data science toolbox provides a realm of data preparation, data analysis, implementation processes, and solutions. Connecting to any data source and preparing data for analysis have been made simple by the wide range of tools available in data analytics packages. Some of the analytical tools include R, Python, rapid analytics, and Weka. Patterns and the granularity of the observed data can be uncovered through visualizations and data observations. This chapter provides insight into the types of analytics from a big data perspective, with an emphasis on their applicability to healthcare data. The processing paradigms and techniques involved can also be clearly observed through the chapter contents.
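As a toy illustration of the kind of descriptive analytics the chapter has in mind for healthcare data, the snippet below uses pandas to summarize patterns at the diagnosis-group level; the records and column names are invented for the example.

```python
# Descriptive analytics sketch on a small, invented healthcare dataset.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "age":        [34, 61, 47, 70, 55, 29],
    "diagnosis":  ["diabetes", "hypertension", "diabetes", "hypertension", "diabetes", "asthma"],
    "readmitted": [False, True, False, True, True, False],
})

# Pattern/granularity view: patient count, mean age, and readmission rate per diagnosis group.
summary = records.groupby("diagnosis").agg(
    patients=("patient_id", "count"),
    mean_age=("age", "mean"),
    readmission_rate=("readmitted", "mean"),
)
print(summary)
```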


Author(s):  
Nenad Stefanovic

The current approach to supply chain intelligence faces some fundamental challenges when confronted with the scale and characteristics of big data. In this chapter, applications, challenges, and new trends in supply chain big data analytics are discussed, and background research on big data initiatives related to supply chain management is provided. A methodology and a unified model for supply chain big data analytics, which cover the whole business intelligence (data science) lifecycle, are described. They enable the creation of next-generation cloud-based big data systems that can create strategic value and improve the performance of supply chains. Finally, an example of a supply chain big data solution illustrates the applicability and effectiveness of the model.
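For a minimal, hypothetical flavor of the kind of supply chain analytics such a model targets, the pandas snippet below computes order fill rate per supplier, a common supply chain KPI; the data and the choice of KPI are illustrative assumptions, not part of the chapter's solution.

```python
# Sketch: a simple supply chain KPI (order fill rate) computed with pandas.
# Data, column names, and the KPI choice are illustrative assumptions.
import pandas as pd

orders = pd.DataFrame({
    "order_id":    [101, 102, 103, 104],
    "supplier":    ["A", "A", "B", "B"],
    "qty_ordered": [100, 250, 80, 500],
    "qty_shipped": [100, 200, 80, 450],
})

# Fill rate = shipped quantity / ordered quantity, averaged per supplier.
orders["fill_rate"] = orders["qty_shipped"] / orders["qty_ordered"]
by_supplier = orders.groupby("supplier")["fill_rate"].mean()
print(by_supplier)
```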


Author(s):  
Zhaohao Sun ◽  
Andrew Stranieri

Intelligent analytics is an emerging paradigm in the age of big data, analytics, and artificial intelligence (AI). This chapter explores the nature of intelligent analytics. More specifically, it identifies the foundations, cores, and applications of intelligent big data analytics based on an investigation of state-of-the-art scholarly publications and a market analysis of advanced analytics. It then presents a workflow-based approach to big data analytics and the technological foundations of intelligent big data analytics by examining intelligent big data analytics as an integration of AI and big data analytics. The chapter also presents a novel approach for extending intelligent big data analytics to intelligent analytics. The proposed approach might facilitate research and development in intelligent analytics, big data analytics, business analytics, business intelligence, AI, and data science.
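As a loose illustration of a workflow-based view of analytics (not the authors' model), the toy sketch below chains an ingestion stage, a descriptive analytics stage, and a simple rule-based "intelligent" stage into one pipeline; every stage, value, and threshold here is an assumption made for the example.

```python
# Toy workflow sketch: analytics as a sequence of stages (all stages are illustrative).
def ingest(_):
    return [12.0, 15.5, 9.8, 80.2]                                       # stand-in for data collection

def analyze(values):
    return {"mean": sum(values) / len(values), "max": max(values)}       # descriptive analytics

def infer(stats):
    return "investigate" if stats["max"] > 2 * stats["mean"] else "ok"   # simple "intelligent" rule

workflow = [ingest, analyze, infer]

result = None
for stage in workflow:
    result = stage(result)
print(result)  # "investigate": 80.2 exceeds twice the mean (29.375) in this sample
```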

