Big Data Analytics & Artificial Intelligence in Healthcare

Author(s):
Pranjal Kumar
Siddhartha Chauhan

Abstract
Big data analysis and artificial intelligence have recently received significant attention for creating new opportunities to aggregate and collect large-scale data in the health sector. Today our genomes and microbiomes can be sequenced, and all information exchanged between physicians and patients in electronic health records (EHRs) can, at least in theory, be collected and traced. Social media and mobile devices provide many health-related data regarding activity, diet, social contacts, and so on. However, it is increasingly difficult to use this information to answer health questions, in particular because the data come from various domains, live in different infrastructures, and are of highly variable quality. The massive collection and aggregation of personal data raise a number of ethical, policy, methodological, and technological challenges. It should also be acknowledged that large-scale clinical evidence confirming the promise of big data and artificial intelligence (AI) in health care is still lacking. This paper explores the complexities of big data and artificial intelligence in healthcare, as well as their benefits and prospects.

2021
Author(s):
R. Salter
Quyen Dong
Cody Coleman
Maria Seale
Alicia Ruvinsky
...

The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers, highlighting that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore depends on a small group of individuals to efficiently execute a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. Researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the created Data Lake Ecosystem Workflow by focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.


Author(s):
Manjunath Thimmasandra Narayanapppa
T. P. Puneeth Kumar
Ravindra S. Hegadi

Recent technological advancements have led to the generation of huge volumes of data from distinct domains (scientific sensors, health care, user-generated content, financial companies, internet and supply-chain systems) over the past decade. The term big data was coined to capture the meaning of this emerging trend. In addition to its huge volume, big data also exhibits several unique characteristics compared with traditional data. For instance, big data is generally unstructured and requires more real-time analysis. This development calls for new system platforms for data acquisition, storage, and transmission, and for large-scale data processing mechanisms. In recent years the analytics industry's interest has expanded towards big data analytics to uncover the potential concealed in big data, such as hidden patterns or unknown correlations. The main goal of this chapter is to explore the importance of machine learning algorithms and the computational environment, including hardware and software, required to perform analytics on big data.


Neurology
2021
DOI: 10.1212/WNL.0000000000012884
Author(s):
Hugo Vrenken
Mark Jenkinson
Dzung Pham
Charles R.G. Guttmann
Deborah Pareto
...

Multiple sclerosis (MS) patients have heterogeneous clinical presentations, symptoms and progression over time, making MS difficult to assess and comprehend in vivo. The combination of large-scale data-sharing and artificial intelligence creates new opportunities for monitoring and understanding MS using magnetic resonance imaging (MRI).

First, development of validated MS-specific image analysis methods can be boosted by verified reference, test and benchmark imaging data. Using detailed expert annotations, artificial intelligence algorithms can be trained on such MS-specific data. Second, understanding of disease processes could be greatly advanced through shared data from large MS cohorts with clinical, demographic and treatment information. Artificial intelligence techniques could detect relevant patterns in such data that are imperceptible to a human observer. This applies from image analysis (lesions, atrophy or functional network changes) to large multi-domain datasets (imaging, cognition, clinical disability, genetics, etc.).

After reviewing data-sharing and artificial intelligence, this paper highlights three areas that offer strong opportunities for making advances in the next few years: crowdsourcing, personal data protection, and organized analysis challenges. Difficulties, as well as specific recommendations to overcome them, are discussed in order to best leverage data sharing and artificial intelligence to improve image analysis, imaging, and the understanding of MS.


Author(s):
Anitha S. Pillai
Bindu Menon

Advancement in technology has paved the way for the growth of big data. We are able to exploit these data to a great extent, as the costs of collecting, storing, and analyzing a large volume of data have plummeted considerably. There is an exponential increase in the amount of health-related data being generated by smart devices, and proper mining of these data for knowledge discovery and therapeutic product development is essential. The expanding field of big data analytics is playing a vital role in healthcare practice and research. A large number of people are affected by Alzheimer's disease (AD), and as a result, caring for these individuals becomes very challenging for family members. The objective of this chapter is to highlight how deep learning can be used for the early diagnosis of AD and to present the outcomes of research studies by both neurologists and computer scientists. The chapter gives an introduction to big data, deep learning, AD, biomarkers, and brain images, and concludes by suggesting blood biomarkers as an ideal solution for early detection of AD.


2017
Vol 37 (1)
pp. 56-74
Author(s):
Thomas Kude
Hartmut Hoehle
Tracy Ann Sykes

Purpose
Big Data Analytics provides a multitude of opportunities for organizations to improve service operations, but it also increases the threat of external parties gaining unauthorized access to sensitive customer data. With data breaches now a common occurrence, it is increasingly plain that while modern organizations need measures to prevent breaches, they must also put processes in place to deal with a breach once it occurs. Prior research on information technology security and service failures suggests that customer compensation can potentially restore customer sentiment after such data breaches. The paper aims to discuss these issues.

Design/methodology/approach
In this study, the authors draw on the literature on personality traits and social influence to better understand the antecedents of perceived compensation and the effectiveness of compensation strategies. The authors studied the propositions using data collected in the context of Target's large-scale data breach that occurred in December 2013 and affected the personal data of more than 70 million customers. In total, the authors collected data from 212 breached customers.

Findings
The results show that customers' personality traits and their social environment significantly influence their perceptions of compensation. The authors also found that perceived compensation positively influences service recovery and customer experience.

Originality/value
The results add to the emerging literature on Big Data Analytics and will help organizations manage compensation strategies in large-scale data breaches more effectively.


2019
Vol 6 (1)
pp. 205395171983011
Author(s):
Alessandro Blasimme
Effy Vayena
Ine Van Hoyweghen

In this paper, we discuss how access to health-related data by private insurers, besides affecting the interests of prospective policy-holders, can also influence their propensity to make personal data available for research purposes. We take the case of national precision medicine initiatives as an illustrative example of this possible tendency. Precision medicine pools together unprecedented amounts of genetic as well as phenotypic data. The possibility that private insurers could claim access to such rapidly accumulating biomedical big data, or to health-related information derived from it, would discourage people from enrolling in precision medicine studies. Should that be the case, the economic value of personal data for the insurance industry would end up affecting the public value of data as a scientific resource. In what follows, we articulate three principles – trustworthiness, openness and evidence – to address this problem and tame its potentially harmful effects on the development of precision medicine and, more generally, on the advancement of medical science.


2020
Vol 9 (4)
pp. 1107
Author(s):
Charat Thongprayoon
Wisit Kaewput
Karthik Kovvuru
Panupong Hansrivijit
Swetha R. Kanduri
...

Kidney diseases are among the major health burdens experienced all over the world, linked to high economic burden and high mortality and morbidity rates. The great importance of collecting a large quantity of health-related data among human cohorts, what scholars refer to as "big data", has increasingly been recognized, with the establishment of large cohorts and the usage of electronic health records (EHRs) in nephrology and transplantation. These data are valuable and can potentially be utilized by researchers to advance knowledge in the field. Furthermore, progress in big data is stimulating the flourishing of artificial intelligence (AI), which is an excellent tool for handling, and subsequently processing, a great amount of data, and may be applied to uncover more information on the effectiveness of medicine in kidney-related complications for the purpose of more precise phenotype and outcome prediction. In this article, we discuss the advances and challenges in big data and the use of EHRs and AI, with great emphasis on their usage in nephrology and transplantation.


Author(s):
Sadaf Afrashteh
Ida Someh
Michael Davern

Big data analytics uses algorithms for decision-making and targeting of customers. These algorithms process large-scale data sets and create efficiencies in the decision-making process for organizations but are often incomprehensible to customers and inherently opaque in nature. Recent European Union regulations require that organizations communicate meaningful information to customers on the use of algorithms and the reasons behind decisions made about them. In this paper, we explore the use of explanations in big data analytics services. We rely on discourse ethics to argue that explanations can facilitate a balanced communication between organizations and customers, leading to transparency and trust for customers as well as customer engagement and reduced reputation risks for organizations. We conclude the paper by proposing future empirical research directions.


2021
pp. 1-7
Author(s):
Emmanuel Jesse Amadosi

With rapid development in technology, the built environment industry's capacity to generate large-scale data is not in doubt. This trend of data upsurge, labelled "Big Data", is currently being used to seek intelligent solutions in many industries, including construction. As a result, the appeal to embrace Big Data Analytics has gained wide advocacy globally. However, the general knowledge of Nigeria's built environment professionals on Big Data Analytics is still limited, and this gap continues to account for the slow pace of adoption of digital technologies like Big Data Analytics and the value it projects. This study set out to assess the level of awareness and knowledge of professionals within the Nigerian built environment, with a view to promoting the adoption of Big Data Analytics for improved productivity. To achieve this aim, a structured questionnaire survey was carried out among a total of 283 professionals drawn from 9 disciplines within the built environment in the Federal Capital Territory, Abuja. The findings revealed that: a) a low knowledge level of Big Data exists among professionals; b) knowledge among professionals and the level of Big Data Analytics application are strongly related; c) professionals are interested in knowing more about the Big Data concept and how Big Data Analytics can be leveraged. The study therefore recommends an urgent paradigm shift towards digitisation to fully embrace and adopt Big Data Analytics, and enjoins stakeholders to promote collaborative schemes between practice-based professionals and academia in seeking intelligent and smart solutions to construction-related problems.


2022
pp. 979-992
Author(s):
Pavani Konagala

A large volume of data is stored electronically, and it is very difficult to measure the total volume of that data. These data come from various sources: a stock exchange may generate terabytes of data every day, Facebook may require about one petabyte of storage, and internet archives may store up to two petabytes of data. It is therefore very difficult to manage such data using relational database management systems, and with massive data, reading from and writing to the drive takes more time. The storage and analysis of this massive data has thus become a big problem. Big data provides the solution to these problems by specifying methods to store and analyze large data sets. This chapter presents a brief study of big data techniques for analyzing these types of data. It includes a wide study of Hadoop characteristics, Hadoop architecture, the advantages of big data, and the big data ecosystem. Further, this chapter includes a comprehensive study of Apache Hive for analyzing health-related data and deaths data from the U.S. government.
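The Hadoop architecture that the chapter surveys is built around the MapReduce processing model, which can be illustrated with a minimal single-machine sketch in pure Python. The records and field names below are hypothetical stand-ins for the kind of deaths data the chapter analyzes with Hive, not data from the chapter itself:

```python
from collections import defaultdict

# Hypothetical records standing in for large-scale health data;
# in a real Hadoop cluster these would be split across many worker nodes.
records = [
    {"state": "TX", "deaths": 120},
    {"state": "CA", "deaths": 200},
    {"state": "TX", "deaths": 80},
]

# Map phase: each record is turned into a (key, value) pair.
mapped = [(r["state"], r["deaths"]) for r in records]

# Shuffle phase: values are grouped by key across all mappers.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: each group is aggregated independently (and so in parallel).
totals = {key: sum(values) for key, values in groups.items()}

print(totals)  # {'TX': 200, 'CA': 200}
```

A Hive query such as `SELECT state, SUM(deaths) FROM deaths GROUP BY state` is compiled into essentially this map-shuffle-reduce pattern, executed across the cluster instead of in one process.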

