Maintenance of Medical Records Database and Simulation of Gene Therapy using Big Data Concepts

Author(s):  
Saagar S. ◽  
Naveen Nanda M. ◽  
Nivedita Suresh Kumar Nair ◽  
Sruthi Kannan
2021 ◽  
Vol 69 (12) ◽  
pp. 3618
Author(s):  
Umesh Chandra Behera ◽  
Brooke Salzman ◽  
Anthony Vipin Das ◽  
Gumpili Sai Prashanthi ◽  
Parth Lalakia ◽  
...  

2021 ◽  
Author(s):  
Fabian Kovacs ◽  
Max Thonagel ◽  
Marion Ludwig ◽  
Alexander Albrecht ◽  
Manuel Hegner ◽  
...  

BACKGROUND: Big data in healthcare must be exploited to achieve a substantial increase in efficiency and competitiveness. The analysis of patient-related data in particular holds huge potential to improve decision-making processes. However, most analytical approaches used today are highly time- and resource-consuming. OBJECTIVE: The presented software solution, Conquery, is an open-source tool that provides advanced yet intuitive data analysis without the need for specialized statistical training. Conquery aims to simplify big data analysis for novice database users in the medical sector. METHODS: Conquery is a document-oriented, distributed time-series database and analysis platform. Its main application is the analysis of per-person medical records by non-technical medical professionals. Complex analyses are built in the Conquery frontend by dragging tree nodes into the query editor. Queries are evaluated in a column-oriented fashion by a bespoke distributed query engine for medical records. We present a custom compression scheme that uses online-calculated as well as precomputed metadata and data statistics to achieve low response times. RESULTS: Conquery allows easy navigation through the hierarchy and enables complex study cohort construction while reducing the demand on time and resources. The UI of Conquery and a query output are exemplified by the construction of a relevant clinical cohort. CONCLUSIONS: Conquery is an efficient and intuitive open-source software for performant and secure data analysis, and it aims to support decision-making processes in the healthcare sector.
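
The cohort-building workflow described above can be illustrated with a short, hypothetical sketch of column-oriented filtering of per-person records into a study cohort. It is a minimal illustration only and does not reflect Conquery's actual query engine or API; the ColumnStore and build_cohort names are assumptions made for this example.

```python
# Minimal sketch of column-oriented cohort selection over per-person records.
# Illustrative only; not Conquery's internal data model or API.
from dataclasses import dataclass
from datetime import date

@dataclass
class ColumnStore:
    """One table of medical records stored column by column."""
    person_id: list[int]
    icd_code: list[str]
    visit_date: list[date]

def build_cohort(store: ColumnStore, code_prefix: str, since: date) -> set[int]:
    """Select persons with at least one matching diagnosis on or after a cutoff date.

    Scanning whole columns instead of row objects keeps the hot loop simple and
    cache-friendly, which is the main benefit of a column-oriented engine.
    """
    cohort = set()
    for pid, code, when in zip(store.person_id, store.icd_code, store.visit_date):
        if code.startswith(code_prefix) and when >= since:
            cohort.add(pid)
    return cohort

store = ColumnStore(
    person_id=[1, 1, 2, 3],
    icd_code=["E11.9", "I10", "E11.5", "J45"],
    visit_date=[date(2020, 3, 1), date(2020, 5, 2), date(2021, 1, 10), date(2019, 7, 4)],
)
print(build_cohort(store, "E11", date(2020, 1, 1)))  # {1, 2}
```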


2022 ◽  
pp. 431-454
Author(s):  
Pinar Kirci

The term big data is used to describe huge datasets. The "4 V" characteristics of such datasets are volume, variety, velocity, and value, and they apply to many areas, especially medical images, electronic medical records (EMR), and biometric data. Storing, analyzing, and visualizing such datasets are challenging processes; recent improvements in communication and transmission technologies provide efficient solutions. Big data solutions should be multithreaded, and data access approaches should be tailored to large amounts of semi-structured and unstructured data. To cope with these difficulties, software programming frameworks are used together with a distributed file system (DFS), whose storage units are larger than the disk blocks of an ordinary operating system, so that computing tasks can be multithreaded. The huge datasets involved in healthcare data storage and analysis need new solutions, because old-fashioned, traditional analytic tools are no longer adequate.
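
The block-wise, multithreaded processing pattern mentioned above can be sketched in a few lines. The snippet below is a toy illustration under stated assumptions: it splits semi-structured JSON records into fixed-size blocks and processes them with a thread pool; a real deployment would read the blocks from a distributed file system such as HDFS rather than from an in-memory list, and the field names and threshold are invented for the example.

```python
# Toy sketch: block-wise, multithreaded processing of semi-structured records.
import json
from concurrent.futures import ThreadPoolExecutor

records = [
    '{"patient": "A", "glucose": 110}',
    '{"patient": "B", "glucose": 145}',
    '{"patient": "C"}',                      # semi-structured: a field may be missing
    '{"patient": "D", "glucose": 160}',
]

def process_block(block: list[str]) -> int:
    """Count records in one block whose glucose value exceeds 140."""
    count = 0
    for line in block:
        value = json.loads(line).get("glucose")
        if value is not None and value > 140:
            count += 1
    return count

# Split the data into fixed-size blocks, as a DFS would, and process them in parallel.
blocks = [records[i:i + 2] for i in range(0, len(records), 2)]
with ThreadPoolExecutor() as pool:
    print(sum(pool.map(process_block, blocks)))  # 2
```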


2018 ◽  
Vol 27 (01) ◽  
pp. 234-236 ◽  
Author(s):  
Kwok-Chan Lun

Summary: Health informatics has benefited from the development of Info-Communications Technology (ICT) over the last fifty years. Advances in ICT in healthcare have now started to spur advances in data technology, as hospital information systems, electronic health and medical records, mobile devices, social media, and the Internet of Things (IoT) are making a substantial impact on the generation of data. It is timely for healthcare institutions to recognize data as a corporate asset and to promote a data-driven culture within the institution. It is both strategic and timely for IMIA, as an international organization in health informatics, to take the lead in promoting a data-driven culture in healthcare organizations. This can be achieved by expanding the terms of reference of its existing Working Group on Data Mining and Big Data Analysis to include (1) data analytics with special reference to healthcare, (2) big data tools and solutions, (3) bridging information technology and data technology, and (4) data quality issues and challenges.


2020 ◽  
Vol 53 (7-8) ◽  
pp. 1286-1299
Author(s):  
Yu Cao ◽  
Yi Sun ◽  
Jiangsong Min

With the development of big data and medical information control systems, sharing electronic medical records across organizations for better medical treatment and advancement has attracted much attention from both academia and industry. However, the sources of big data, personal privacy concerns, inherent trust issues across organizations, and complicated regulation hinder progress in healthcare intelligence. Blockchain, as a novel technique, has been widely used to resolve the privacy and security issues in the electronic medical records sharing process. In this paper, we propose a hybrid blockchain-based electronic medical records sharing scheme that addresses the privacy and trust issues across medical information control systems, rendering the sharing process secure, effective, relatively transparent, immutable, traceable, and auditable. Because of these confidentiality concerns, we use different sharing methods for different parts of the medical big data: privacy-sensitive parts are shared on the consortium blockchain, while non-sensitive parts are shared on the public blockchain. In this way, authorized medical information control systems within the consortium can access the data for precise medical diagnosis, and institutions such as universities and research institutes can access the non-sensitive parts of the medical big data for scientific research on symptoms that advances medical technology. A working prototype is implemented to demonstrate how the hybrid blockchain facilitates pharmaceutical operations in a healthcare information control ecosystem. The blockchain benchmark tool Hyperledger Caliper is used to evaluate the throughput and average latency of the scheme, which proves to be practicable.
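
The split between the consortium and public ledgers can be sketched schematically. The snippet below is a minimal, assumption-laden illustration of the data-partitioning idea only: the field lists, the hash commitment, and the in-memory "ledgers" are invented for the example and do not reproduce the authors' prototype or any Hyperledger API.

```python
# Sketch of the hybrid-sharing idea: sensitive fields stay on a permissioned
# consortium ledger; de-identified fields plus a hash commitment go public.
# Field names and ledger objects are illustrative assumptions.
import hashlib
import json

SENSITIVE_FIELDS = {"name", "national_id", "diagnosis"}

def split_record(record: dict) -> tuple[dict, dict]:
    sensitive = {k: v for k, v in record.items() if k in SENSITIVE_FIELDS}
    public = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    # The public part carries a commitment to the sensitive part, so its
    # integrity can be audited without revealing the protected data.
    digest = hashlib.sha256(json.dumps(sensitive, sort_keys=True).encode()).hexdigest()
    public["sensitive_hash"] = digest
    return sensitive, public

consortium_ledger, public_ledger = [], []
record = {"name": "Jane Doe", "national_id": "X123", "diagnosis": "E11.9",
          "age_band": "40-49", "symptom_codes": ["R35", "R63.4"]}
sensitive_part, public_part = split_record(record)
consortium_ledger.append(sensitive_part)   # readable only by authorized institutions
public_ledger.append(public_part)          # open to research institutions
print(public_part)
```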


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Emilie Baro ◽  
Samuel Degoul ◽  
Régis Beuscart ◽  
Emmanuel Chazard

Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of the PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by the authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with log(n × p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without necessarily being big, for example, in the secondary use of Electronic Medical Records (EMR) data.
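
The volume criterion can be checked with a short worked example. The sketch below assumes the threshold uses a base-10 logarithm, which the abstract does not state explicitly, and the example figures for n and p are invented.

```python
# Worked check of the proposed volume criterion: a dataset counts as "big"
# when log10(n * p) >= 7, where n is the number of statistical individuals
# and p the number of variables (base-10 logarithm assumed here).
import math

def is_big_data(n: int, p: int) -> bool:
    return math.log10(n * p) >= 7

print(is_big_data(100_000, 50))     # log10(5e6)  ~ 6.70 -> False
print(is_big_data(2_000_000, 30))   # log10(6e7)  ~ 7.78 -> True
```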

