A Comprehensive Survey of AI, Blockchain Technology and Big Data Applications in Medical Field and Global Health

Author(s):  
Inderpreet Kaur ◽  
Yogesh Kumar ◽  
Amanpreet Kaur Sandhu


Author(s):  
Ibrahim Haleem Khan ◽  
Mohd Javaid

Digital imaging and medical reporting have acquired an essential role in healthcare, but the main challenge is storing the high volume of patient data. Although newer technologies have already been introduced in the medical sciences to reduce record sizes, Big Data offers advancements by storing large amounts of data to improve the efficiency and quality of patient treatment and care. It provides intelligent automation capabilities that reduce errors compared with manual input. A large number of research papers on big data in the medical field are studied and analyzed for their impacts, benefits, and applications. Big data has great potential to support the digitalization of all medical and clinical records and to preserve the entire medical history of an individual or a group. This paper discusses big data usage across various industries and sectors. Finally, 12 significant applications of big data in the medical field are identified and studied with a brief description. This technology can be gainfully used to extract useful information from the available data by analyzing and managing it through a combination of hardware and software. With technological advancement, big data provides health-related information for millions of patients, covering lab test reports, clinical narratives, demographics, prescriptions, medical diagnoses, and related documentation. Thus, Big Data is essential for developing better and more efficient analysis and storage in healthcare services. The demand for big data applications is increasing due to their capability of handling and analyzing massive data. Not only in the future but even now, Big Data is proving itself a cornerstone for storing, developing, analyzing, and providing overall health information to physicians.


2021 ◽  
Vol 5 (3) ◽  
pp. 41
Author(s):  
Supriya M. ◽  
Vijay Kumar Chattu

Artificial intelligence (AI) programs are applied to methods such as diagnostic procedures, treatment protocol development, patient monitoring, drug development, personalized medicine in healthcare, and outbreak predictions in global health, as in the case of the current COVID-19 pandemic. Machine learning (ML) is a field of AI that allows computers to learn and improve without being explicitly programmed. ML algorithms can also analyze large amounts of data, known as big data, through electronic health records for disease prevention and diagnosis. Wearable medical devices are used to continuously monitor an individual’s health status and store the data in the cloud. In the context of newly published studies, the potential benefits of sophisticated data analytics and machine learning are discussed in this review. We conducted a literature search in the popular databases and search engines Web of Science, Scopus, MEDLINE/PubMed and Google Scholar. This paper describes the concepts underlying ML, big data, and blockchain technology and their importance in medicine, healthcare, public health surveillance, and case estimation in the COVID-19 pandemic and other epidemics. The review also discusses the possible consequences of, and difficulties for, medical practitioners and health technologists in designing futuristic models to improve the quality and well-being of human lives.


Author(s):  
Sobin C. C. ◽  
Vaskar Raychoudhury ◽  
Snehanshu Saha

The amount of data generated by online social networks such as Facebook, Twitter, etc., has recently experienced enormous growth. Extracting useful information, such as community structure, from such large networks is very important in many applications. A community is a collection of nodes with dense internal connections and sparse external connections. Community detection algorithms aim to group nodes into different communities by extracting similarities and social relations between nodes. Although many community detection algorithms exist in the literature, they are not scalable enough to handle the large volumes of data generated by many of today's big data applications. Therefore, researchers are focusing on developing parallel community detection algorithms, which can handle networks consisting of millions of edges and vertices. In this article, we present a comprehensive survey of parallel community detection algorithms, which, to our knowledge, is the first survey in this domain, although multiple surveys of sequential community detection algorithms exist in the literature.
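To make the "group nodes into communities" goal concrete, here is a minimal, illustrative sketch (not taken from the surveyed algorithms) of sequential label propagation, one of the simplest community detection heuristics: each node repeatedly adopts the most frequent label among its neighbors until labels stabilize. The graph and tie-breaking rule are assumptions for the example.

```python
from collections import Counter
import random

def label_propagation(adj, seed=0, max_iter=100):
    """Detect communities by repeatedly assigning each node the most
    frequent label among its neighbors (ties broken by smallest label)."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}        # start: every node in its own community
    nodes = list(adj)
    for _ in range(max_iter):
        changed = False
        rng.shuffle(nodes)              # visit nodes in random order
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            new = min(l for l, c in counts.items() if c == best)
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:                 # converged: no label moved this pass
            break
    communities = {}
    for v, l in labels.items():
        communities.setdefault(l, set()).add(v)
    return list(communities.values())

# Toy graph: two triangles joined by a single bridge edge (2-3).
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
print(label_propagation(adj))
```

Parallel variants of this idea partition the vertex set across workers and update labels concurrently, which is precisely where the scalability questions the survey addresses arise.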


Author(s):  
Jonatan Enes ◽  
Guillaume Fieni ◽  
Roberto R. Exposito ◽  
Romain Rouvoy ◽  
Juan Tourino

Proceedings ◽  
2021 ◽  
Vol 74 (1) ◽  
pp. 24
Author(s):  
Eduard Alexandru Stoica ◽  
Daria Maria Sitea

Nowadays, society is being profoundly reshaped by technology, speed and productivity. While individuals are not yet prepared for holographic connection with banks or financial institutions, other innovative technologies have been adopted. Lately, a new, personalized digital world has emerged and started to govern almost all daily activities, built on five key technological foundations: machine-to-machine communication (M2M), the Internet of Things (IoT), big data, machine learning and artificial intelligence (AI). Competitive innovations are now on the market, helping to connect investors and borrowers, notably crowdfunding and peer-to-peer lending. Blockchain technology is now enjoying great popularity. Thus, a great part of the focus of this research paper is on Elrond. The outcomes highlight the relevance of technology in digital finance.


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Mahdi Torabzadehkashi ◽  
Siavash Rezaei ◽  
Ali HeydariGorji ◽  
Hosein Bobarshad ◽  
Vladimir Alves ◽  
...  

In the era of big data applications, the demand for more sophisticated data centers and high-performance data processing mechanisms is increasing drastically. Data are originally stored in storage systems. To process data, application servers need to fetch them from storage devices, which imposes the cost of moving data through the system. This cost is directly related to the distance between the processing engines and the data. This is the key motivation for the emergence of distributed processing platforms such as Hadoop, which move processing closer to the data. Computational storage devices (CSDs) push the “move process to data” paradigm to its ultimate boundaries by deploying embedded processing engines inside storage devices to process data. In this paper, we introduce Catalina, an efficient and flexible computational storage platform that provides a seamless environment to process data in-place. Catalina is the first CSD equipped with a dedicated application processor running a full-fledged operating system that provides filesystem-level data access for applications. Thus, a vast spectrum of applications can be ported to run on Catalina CSDs. Due to these unique features, to the best of our knowledge, Catalina CSD is the only in-storage processing platform that can be seamlessly deployed in clusters to run distributed applications such as Hadoop MapReduce and HPC applications in-place without any modifications to the underlying distributed processing framework. As a proof of concept, we build a fully functional Catalina prototype and a CSD-equipped platform using 16 Catalina CSDs to run Intel HiBench Hadoop and HPC benchmarks to investigate the benefits of deploying Catalina CSDs in distributed processing environments. The experimental results show up to a 2.2× improvement in performance and a 4.3× reduction in energy consumption for running Hadoop MapReduce benchmarks.
Additionally, thanks to the Neon SIMD engines, the performance and energy efficiency of DFT algorithms are improved up to 5.4× and 8.9×, respectively.


2020 ◽  
Vol 2020 ◽  
pp. 1-29 ◽  
Author(s):  
Xingxing Xiong ◽  
Shubo Liu ◽  
Dan Li ◽  
Zhaohui Cai ◽  
Xiaoguang Niu

With the advent of the era of big data, privacy issues have become a major public concern. Local differential privacy (LDP) is a state-of-the-art privacy preservation technique that allows big data analysis (e.g., statistical estimation, statistical learning, and data mining) to be performed while guaranteeing each individual participant’s privacy. In this paper, we present a comprehensive survey of LDP. We first give an overview of the fundamental concepts of LDP and its frameworks. We then introduce the mainstream privatization mechanisms and methods in detail from the perspective of frequency oracles and give insights into recent studies on private basic statistical estimation (e.g., frequency estimation and mean estimation) and complex statistical estimation (e.g., multivariate distribution estimation and private estimation over complex data) under LDP. Furthermore, we present the current state of research on LDP, including private statistical learning/inference, private statistical data analysis, privacy amplification techniques for LDP, and application fields of LDP. Finally, we identify future research directions and open challenges for LDP. This survey can serve as a good reference source for research on LDP when dealing with the various privacy-related scenarios encountered in practice.
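As a concrete illustration of the frequency-oracle perspective mentioned above, the following is a minimal sketch (not code from the survey) of k-ary randomized response, a classic LDP mechanism for frequency estimation: each user reports their true value with probability p = e^ε / (e^ε + k − 1) and a uniformly random other value otherwise, and the aggregator debiases the observed counts. The domain, ε, and data below are assumptions for the example.

```python
import math
import random

def grr_perturb(value, domain, eps, rng):
    """k-ary randomized response: keep the true value with probability p,
    otherwise report one of the other k-1 values uniformly at random."""
    k = len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, eps):
    """Unbiased frequency estimates from perturbed reports:
    f_hat(v) = (c_v / n - q) / (p - q), with q = 1 / (e^eps + k - 1)."""
    k, n = len(domain), len(reports)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = 1.0 / (math.exp(eps) + k - 1)
    estimates = {}
    for v in domain:
        c = sum(1 for r in reports if r == v)
        estimates[v] = (c / n - q) / (p - q)
    return estimates

# True distribution: 60% "A", 30% "B", 10% "C"; privacy budget eps = 2.
rng = random.Random(42)
domain = ["A", "B", "C"]
truth = ["A"] * 6000 + ["B"] * 3000 + ["C"] * 1000
reports = [grr_perturb(v, domain, 2.0, rng) for v in truth]
print(grr_estimate(reports, domain, 2.0))
```

The aggregator never sees any raw value, only the perturbed reports, yet the debiased estimates converge to the true frequencies as the number of participants grows; this is the basic trade-off between ε and estimation error that the mechanisms surveyed here refine.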

