Big data in systemic sclerosis: Great potential for the future

2020 ◽  
Vol 5 (3) ◽  
pp. 172-177
Author(s):  
Mislav Radic ◽  
Tracy M Frech

Since it was first used in 1997, the term “big data” has been popularized; however, the concept of big data is relatively new to medicine. Big data refers to methods and techniques for systematically retrieving, collecting, managing, and analyzing very large and complex sets of structured and unstructured data that cannot be adequately processed with traditional data-processing methods. Integrating big data is of particular importance in rare diseases with low prevalence and incidence, such as systemic sclerosis. We conducted a literature review of the use of big data in systemic sclerosis. The volume of data on systemic sclerosis has grown steadily in recent years; however, big data methods have not been widely used. This inexhaustible source of data must be used more fully to unleash its potential.

2020 ◽  
pp. 239-254
Author(s):  
David W. Dorsey

With the rise of the internet and the related explosion in the amount of data that are available, the field of data science has expanded rapidly, and analytic techniques designed for use in “big data” contexts have become popular. These include techniques for analyzing both structured and unstructured data. This chapter explores the application of these techniques to the development and evaluation of career pathways. For example, data scientists can analyze online job listings and resumes to examine changes in skill requirements and careers over time and to examine job progressions across an enormous number of people. Similarly, analysts can evaluate whether information on career pathways accurately captures realistic job progressions. Within organizations, the increasing amount of data makes it possible to pinpoint the specific skills, behaviors, and attributes that maximize performance in specific roles. The chapter concludes with ideas for the future application of big data to career pathways.


2018 ◽  
Vol 7 (3.6) ◽  
pp. 55
Author(s):  
Neha Narayan Kulkarni ◽  
Shital Kumar A. Jain ◽  

Because technologies are growing rapidly, they have become both a source and a sink for data. Data are generated in large volumes, in both structured and unstructured forms, giving rise to "Big Data," which requires large amounts of storage. There are two possible solutions: increase local storage or use cloud storage. The cloud makes data available to the user anytime, anywhere, on any device, and allows users to store their data virtually without a large investment. However, keeping data in the cloud raises concerns about data security and recovery, since an untrusted or unauthorized user may remotely modify, delete, or replace the data. Therefore, different models have been proposed for data integrity checking and proof of retrievability. This paper presents a literature review of various techniques for data integrity, data recovery, and proof of retrievability.
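The core integrity-checking idea the surveyed models build on can be sketched with a simple keyed hash: the data owner keeps a secret key and a digest locally, uploads the data, and later verifies any retrieved copy against the digest. This is only a minimal illustration, assuming hypothetical helper names (`make_tag`, `verify`); real proof-of-retrievability schemes are considerably more elaborate (challenge-response protocols, homomorphic tags).

```python
import hashlib
import os
import secrets

def make_tag(data: bytes, key: bytes) -> str:
    """Client-side: compute a keyed SHA-256 digest before uploading."""
    return hashlib.sha256(key + data).hexdigest()

def verify(data: bytes, key: bytes, tag: str) -> bool:
    """Re-compute the digest over the retrieved copy and compare."""
    return secrets.compare_digest(make_tag(data, key), tag)

key = os.urandom(16)              # secret kept only by the data owner
original = b"example record"
tag = make_tag(original, key)     # tag stays local; data goes to the cloud

assert verify(original, key, tag)                 # untampered copy passes
assert not verify(b"tampered record", key, tag)   # modified copy fails
```

Because the key never leaves the owner, a cloud-side attacker who modifies, deletes, or replaces the data cannot forge a matching tag.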


2020 ◽  
Vol 12 (2) ◽  
pp. 634 ◽  
Author(s):  
Diana Martinez-Mosquera ◽  
Rosa Navarrete ◽  
Sergio Lujan-Mora

The work presented in this paper is motivated by the acknowledgement that a complete and up-to-date systematic literature review (SLR) consolidating the research efforts on Big Data modeling and management is missing. This study answers three research questions. The first is how the number of published papers about Big Data modeling and management has evolved over time. The second is whether the research focuses on semi-structured and/or unstructured data and which techniques are applied. Finally, the third determines what trends and gaps exist with respect to three key concepts: the data source, the modeling, and the database. As a result, 36 studies, collected from the most important scientific digital libraries and covering the period between 2010 and 2019, were deemed relevant. Moreover, we present a complete bibliometric analysis to provide detailed information about the authors and the publication data in a single document. This SLR reveals very interesting facts. For instance, Entity Relationship and document-oriented are the most researched models at the conceptual and logical abstraction levels, respectively, and MongoDB is the most frequent implementation at the physical level. Furthermore, 2.78% of the studies have proposed approaches oriented to hybrid databases with a real case for structured, semi-structured, and unstructured data.
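To make the Entity Relationship versus document-oriented contrast concrete: a document-oriented (MongoDB-style) record keeps structured fields and semi-structured, schema-flexible content together in one self-describing document, rather than normalizing them across relational tables. A minimal Python/JSON sketch (the field names and values are invented for illustration):

```python
import json

# One self-describing document: fixed fields (title, year) alongside
# nested, variable sub-documents (authors) and optional arrays (keywords).
article = {
    "title": "An example study on Big Data modeling",
    "year": 2019,
    "authors": [
        {"name": "A. Author", "affiliation": "Example University"},
        {"name": "B. Author"},          # sub-documents need not share fields
    ],
    "keywords": ["big data", "NoSQL"],  # optional, schema-flexible
}

doc = json.dumps(article)               # serialized as it would be stored
assert json.loads(doc)["year"] == 2019
assert len(json.loads(doc)["authors"]) == 2
```

An ER design would instead split this into `article`, `author`, and `article_author` tables with a fixed schema; the document model trades that rigidity for flexibility with semi-structured data.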


2012 ◽  
Vol 16 (3) ◽  
Author(s):  
Laurie P Dringus

This essay presents a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues of learning analytics, approached through the lens of questioning the current status of applying learning analytics to online courses. The goal of the discussion is twofold: (1) to inform online learning practitioners (e.g., instructors and administrators) of the potential of learning analytics in online courses and (2) to broaden discussion in the research community about the advancement of learning analytics in online learning. In recognizing the full potential of formalizing big data in online courses, the community must also address this issue in the context of the potentially "harmful" application of learning analytics.


MedienJournal ◽  
2017 ◽  
Vol 38 (4) ◽  
pp. 50-61 ◽  
Author(s):  
Jan Jagodzinski

This paper first briefly maps out the shift from disciplinary to control societies (what I call designer capitalism; the idea of control comes from Gilles Deleuze) in relation to surveillance and the mediation of life through screen cultures. The paper then shifts to issues of digitalization in relation to big data, which risk continuing to close off life as zoë, that is, life that is creative rather than captured via attention technologies through marketing techniques and surveillance. The last part of the paper then develops the way artists are able to resist the big data archive by turning the data in on itself, offering viewers and participants a glimpse of the current state of manipulating desire and maintaining copyright in order to keep the future closed rather than potentially open.


2019 ◽  
Author(s):  
Meghana Bastwadkar ◽  
Carolyn McGregor ◽  
S Balaji

BACKGROUND This paper presents a systematic literature review of existing remote health monitoring systems, with special reference to the neonatal intensive care unit (NICU). Articles on NICU clinical decision support systems (CDSSs) that used cloud computing and big data analytics were surveyed. OBJECTIVE The aim of this study is to review technologies used to provide NICU CDSSs. The literature review highlights the gaps within frameworks providing the HAaaS paradigm for big data analytics. METHODS Literature searches were performed in Google Scholar, the IEEE Digital Library, JMIR Medical Informatics, JMIR Human Factors, and JMIR mHealth; only English-language articles published in or after 2015 were included. The overall search strategy was to retrieve articles containing terms related to “health analytics” and “as a service” or “internet of things”/“IoT” and “neonatal intensive care unit”/“NICU”. Titles and abstracts were reviewed to assess relevance. RESULTS In total, 17 full papers met all criteria and were selected for full review. Results showed that in most cases bedside medical devices such as pulse oximeters were used as the sensor device. Results revealed great diversity in the data acquisition techniques used; however, in most cases the same physiological data (heart rate, respiratory rate, blood pressure, blood oxygen saturation) were acquired. In most cases, data analytics involved data mining classification techniques and fuzzy-logic NICU decision support systems (DSSs), whereas big data analytics involving Artemis cloud data analysis used the CRISP-TDM and STDM temporal data mining techniques to support clinical research studies. In most scenarios, both real-time and retrospective analytics were performed. Most of the research has been performed within small and medium-sized urban hospitals, so there is wide scope for research within rural and remote hospitals with NICU setups.
Results have shown that creating an HAaaS approach in which data acquisition and data analytics are not tightly coupled remains an open research area. The reviewed articles describe architectures and base technologies for neonatal health monitoring with an IoT approach. CONCLUSIONS The current work supports implementation of the expanded Artemis cloud as a commercial offering to healthcare facilities in Canada and worldwide to provide cloud computing services to critical care. However, no work to date has addressed low-resource settings within healthcare facilities in India, which leaves scope for research. All the big data analytics frameworks reviewed in this study tightly couple components within the framework, so there is a need for a framework with functional decoupling of components.


Author(s):  
Michael Goul ◽  
T. S. Raghu ◽  
Ziru Li

As procurement organizations increasingly move from a cost-and-efficiency emphasis to a profit-and-growth emphasis, flexible data architecture will become an integral part of a procurement analytics strategy. It is therefore imperative for procurement leaders to understand and address digitization trends in supply chains and to develop strategies to create robust data architecture and analytics strategies for the future. This chapter assesses and examines the ways companies can organize their procurement data architectures in the big data space to mitigate current limitations and to lay foundations for the discovery of new insights. It sets out to understand and define the levels of maturity in procurement organizations as they pertain to the capture, curation, exploitation, and management of procurement data. The chapter then develops a framework for articulating the value proposition of moving between maturity levels and examines what the future entails for companies with mature data architectures. In addition to surveying the practitioner and academic research literature on procurement data analytics, the chapter presents detailed and structured interviews with over fifteen procurement experts from companies around the globe. The chapter finds several important and useful strategies that have helped procurement organizations design strategic roadmaps for the development of robust data architectures. It then further identifies four archetype procurement area data architecture contexts. In addition, this chapter details exemplary high-level mature data architecture for each archetype and examines the critical assumptions underlying each one. Data architectures built for the future need a design approach that supports both descriptive and real-time, prescriptive analytics.


2021 ◽  
pp. 2516600X2110059
Author(s):  
Som Sekhar Bhattacharyya ◽  
Rajesh Chandwani

The COVID-19 pandemic highlighted the necessity of good-quality and adequate healthcare infrastructure. Healthcare facilities for COVID-19 were provided through improvisation and supplementary lateral infrastructure from other sectors. However, the main point of contemplation going forward is how to develop healthcare facilities quickly. The subject domain of ‘industrial engineering’ (IE) and its associated perspectives can provide key insights in this regard. The authors undertook a conceptual literature review and provided theoretical argumentation toward this end. The findings provide insights regarding the application of industrial engineering concepts to healthcare facilities and services.

