Future of Medical Research in Rare Diseases and Cancers: Shift from Pharma to Biotech and the Golden Age of Medical Advancement

2017 ◽  
Vol 6 (2) ◽  
pp. 12
Author(s):  
Abhith Pallegar

The objective of this paper is to elucidate how interconnected biological systems can be better mapped and understood using the rapidly growing area of Big Data. We can harness network efficiencies by analyzing diverse medical data, and we probe how to effectively lower the economic cost of finding cures for rare diseases. Most rare diseases are due to genetic abnormalities, and many forms of cancer develop due to genetic mutations. Finding cures for rare diseases requires us to understand the biology and biological processes of the human body. In this paper, we explore what the historical shift of focus from pharmacology to biotechnology means for accelerating biomedical solutions. With biotechnology playing a leading role in the field of medical research, we explore how network efficiencies can be harnessed by strengthening the existing knowledge base. Studying rare or orphan diseases provides rich observable statistical data that can be leveraged for finding solutions. Network effects can be extracted by working with diverse data sets, enabling us to generate the highest quality medical knowledge with the fewest resources. This paper examines gene manipulation technologies such as Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) that can prevent diseases of genetic origin. We further explore the role of the emerging field of Big Data in analyzing large quantities of medical data with the rapid growth of computing power, and some of the network efficiencies gained from this endeavor.

2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
I Mircheva ◽  
M Mirchev

Abstract Background Ownership of patient information in the context of Big Data is a relatively new problem, apparently not yet fully understood, and there are not enough publications on the subject. Since the topic is interdisciplinary, incorporating legal, ethical, medical, and information and communication technology aspects, a more sophisticated analysis of the issue is needed. Aim To determine how the medical academic community perceives the issue of ownership of patient information in the context of Big Data. Methods A literature search for full-text publications indexed in PubMed, Springer, ScienceDirect and Scopus identified only 27 appropriate articles authored by academicians and corresponding to three focus areas: problem (ownership); area (healthcare); context (Big Data). Three major aspects were studied: the scientific area of the publications, the aspects of ownership discussed, and academicians' perception of ownership in the context of Big Data. Results The publications span the period 2014 to 2019; 37% were published in health and medical informatics journals, 30% in medicine and public health, and 19% in law and ethics; 78% were authored by American and British academicians and are highly cited. The majority (63%) are in the area of scientific research: clinical studies, access to and use of patient data for medical research, secondary use of medical data, and ethical challenges of Big Data in healthcare. The majority (70%) of the publications discuss ownership in its ethical and legal aspects, and 67% see ownership as a challenge, mostly to medical research, access control, ethics, politics and business. Conclusions Ownership of medical data is seen first and foremost as a challenge. Addressing it requires the combined efforts of politicians, lawyers, ethicists, computer and medical professionals, and academicians sharing their efforts, experiences and suggestions. This issue is nevertheless neglected in the scientific literature; publishing on it may foster open debate and adequate policy solutions. Key messages Ownership of patient information in the context of Big Data is a problem that should not be marginalized but needs a comprehensive attitude, consideration and combined efforts from all stakeholders. Overcoming the challenge of ownership may help improve healthcare services, medical and public health research, and the health of the population as a whole.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Johann Eder ◽  
Vladimir A. Shekhovtsov

Purpose Medical research requires biological material and data collected through biobanks in reliable processes with quality assurance. Medical studies based on data of unknown or questionable quality are useless or even dangerous, as evidenced by recent examples of withdrawn studies. Medical data sets consist of highly sensitive personal data, which must be protected carefully and is available for research only after the approval of ethics committees. The purpose of this research is to propose an architecture that supports researchers in efficiently and effectively identifying relevant collections of material and data with documented quality for their research projects while observing strict privacy rules. Design/methodology/approach Following a design science approach, this paper develops a conceptual model for capturing and relating metadata of medical data in biobanks to support medical research. Findings This study describes the landscape of biobanks as federated medical data lakes, such as the collections of samples and their annotations in the European federation of biobanks (Biobanking and Biomolecular Resources Research Infrastructure – European Research Infrastructure Consortium, BBMRI-ERIC), and develops a conceptual model capturing schema information with quality annotations. This paper discusses the quality dimensions of data sets for medical research in depth and proposes representations of both the metadata and the data quality documentation, with the aim of supporting researchers in effectively and efficiently identifying suitable data sets for medical studies. Originality/value This novel conceptual model for metadata for medical data lakes has a unique focus on the high privacy requirements of the data sets contained in medical data lakes and also stands out in its detailed representation of the data quality and metadata quality of medical data sets.
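To make the idea concrete, the following is a minimal, hypothetical sketch of what metadata with quality annotations for a federated collection could look like; all class and field names here are illustrative assumptions, not the paper's actual model or the BBMRI-ERIC schema.

```python
from dataclasses import dataclass, field

# Hypothetical rendering of the core idea: a collection's metadata carries
# both schema information and quality annotations, so researchers can filter
# candidate data sets without ever touching the sensitive data itself.
@dataclass
class QualityAnnotation:
    dimension: str   # e.g. "completeness", "accuracy", "timeliness"
    score: float     # normalized to [0, 1]
    method: str      # how the score was assessed

@dataclass
class CollectionMetadata:
    collection_id: str
    attributes: dict                      # attribute name -> data type
    quality: list = field(default_factory=list)

    def meets(self, dimension: str, threshold: float) -> bool:
        """True if this collection's annotation for `dimension` reaches `threshold`."""
        return any(q.dimension == dimension and q.score >= threshold
                   for q in self.quality)

# A researcher could then query a federation of collections by quality:
collections = [
    CollectionMetadata("bb-001", {"age": "int", "diagnosis": "ICD-10"},
                       [QualityAnnotation("completeness", 0.95, "record audit")]),
    CollectionMetadata("bb-002", {"age": "int"},
                       [QualityAnnotation("completeness", 0.60, "self-reported")]),
]
suitable = [c.collection_id for c in collections if c.meets("completeness", 0.9)]
print(suitable)  # ['bb-001']
```

Only metadata crosses the federation boundary in this sketch, which is one way the privacy constraint described above could be respected during data-set discovery.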


Author(s):  
Richard C. Berry ◽  
Lucy Johnston

This chapter explores opportunities and challenges that are presented to doctoral candidates (and indeed all researchers) through access to big data. The authors consider what big data is and what it is not, and how working with big data differs from traditional research design and analysis. They provide examples of the opportunities that big data offers in terms of the combination of diverse data sets, sources, and types, and how it can provide new perspectives on inter-disciplinary challenges. They also highlight some of the challenges for the use of big data, both for the individual researcher and for institutions. The authors advocate for the need to embrace these challenges, but without forgoing data integrity and the expert use and interpretation of data.


2014 ◽  
Author(s):  
Pankaj K. Agarwal ◽  
Thomas Moelhave
Keyword(s):  
Big Data ◽  

2020 ◽  
Vol 13 (4) ◽  
pp. 790-797
Author(s):  
Gurjit Singh Bhathal ◽  
Amardeep Singh Dhiman

Background: In the current internet scenario, large amounts of data are generated and processed. The Hadoop framework is widely used to store and process big data in a highly distributed manner, yet it is argued that Hadoop is not mature enough to deal with current cyberattacks on the data. Objective: The main objective of the proposed work is to provide a complete security approach comprising authorisation and authentication for users and Hadoop cluster nodes, and to secure the data at rest as well as in transit. Methods: The proposed algorithm uses the Kerberos network authentication protocol for authorisation and authentication and to validate users and cluster nodes. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is used for data at rest and data in transit: users encrypt files under their own set of attributes and store them on the Hadoop Distributed File System, and only intended users with matching attributes can decrypt them. Results: The proposed algorithm was implemented with data sets of different sizes, processed with and without encryption. The results show little difference in processing time: performance was affected in the range of 0.8% to 3.1%, which also includes the impact of other factors such as system configuration, the number of parallel jobs running, and the virtual environment. Conclusion: The solutions available for handling the big data security problems faced in the Hadoop framework are inefficient or incomplete. A complete security framework is proposed for the Hadoop environment, and the solution is experimentally shown to have little effect on system performance for data sets of different sizes.
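The access-control logic at the heart of CP-ABE can be illustrated without the cryptography. Real CP-ABE relies on pairing-based schemes (implemented, for example, in libraries such as charm-crypto); the sketch below models only the policy check that decides whether a user's attribute set satisfies the policy a file was encrypted under, and is not the paper's algorithm.

```python
# Conceptual sketch of CP-ABE's access structure (no actual encryption):
# a policy is either a required attribute (a string) or a tuple
# ("AND", child, ...) / ("OR", child, ...) over sub-policies.
def satisfies(policy, attributes):
    """Evaluate a simple AND/OR policy tree against a set of user attributes."""
    if isinstance(policy, str):             # leaf: a single required attribute
        return policy in attributes
    op, *children = policy                  # internal node: ("AND", ...) or ("OR", ...)
    results = [satisfies(c, attributes) for c in children]
    return all(results) if op == "AND" else any(results)

# A file encrypted under: researcher AND (cardiology OR oncology)
policy = ("AND", "researcher", ("OR", "cardiology", "oncology"))

print(satisfies(policy, {"researcher", "cardiology"}))  # True
print(satisfies(policy, {"researcher", "radiology"}))   # False
```

In the scheme the paper uses, this check is enforced cryptographically: a user's secret key embeds their attributes, and decryption simply fails when the policy is not satisfied, rather than being gated by a runtime check like the one above.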


2003 ◽  
Vol 42 (02) ◽  
pp. 185-189 ◽  
Author(s):  
R. Haux ◽  
C. Kulikowski ◽  
A. Bohne ◽  
R. Brandner ◽  
B. Brigl ◽  
...  

Summary Objectives: The Yearbook of Medical Informatics is published annually by the International Medical Informatics Association (IMIA) and contains a selection of excellent papers on medical informatics research that have been recently published (www.yearbook.uni-hd.de). The 2003 Yearbook of Medical Informatics took as its theme the role of medical informatics in the quality of health care. In this paper, we discuss challenges for health care and the lessons learned from editing the IMIA Yearbook 2003. Results and Conclusions: Modern information processing methodology and information and communication technology have strongly influenced our societies and health care. As a consequence, medical informatics as a discipline has taken a leading role in the further development of health care. This involves developing information systems that enhance opportunities for global access to health services and medical knowledge. Informatics methodology and technology will facilitate high-quality care in aging societies, reduce the likelihood of health care errors, and enable the dissemination of the latest medical and health information on the web to consumers and health care providers alike. The selected papers of the IMIA Yearbook 2003 present clear examples and future challenges, and they highlight how various sub-disciplines of medical informatics can contribute to this goal.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Steven A. Hicks ◽  
Jonas L. Isaksen ◽  
Vajira Thambawita ◽  
Jonas Ghouse ◽  
Gustav Ahlberg ◽  
...  

Abstract Deep learning-based tools may annotate and interpret medical data more quickly, consistently, and accurately than medical doctors. However, as medical doctors are ultimately responsible for clinical decision-making, any deep learning-based prediction should be accompanied by an explanation that a human can understand. We present an approach called electrocardiogram gradient class activation map (ECGradCAM), which is used to generate attention maps and explain the reasoning behind deep learning-based decision-making in ECG analysis. Attention maps may be used in the clinic to aid diagnosis, discover new medical knowledge, and identify novel features and characteristics of medical tests. In this paper, we showcase how ECGradCAM attention maps can unmask how a novel deep learning model measures both amplitudes and intervals in 12-lead electrocardiograms, and we show an example of how attention maps may be used to develop novel ECG features.
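The underlying mechanism is the standard Grad-CAM computation, which ECGradCAM adapts to one-dimensional ECG signals. Below is a minimal NumPy sketch of that computation with synthetic activations and gradients; the shapes and values are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Grad-CAM for a 1-D signal: each feature-map channel gets a weight equal to
# the mean gradient of the class score with respect to that channel; the
# attention map is the ReLU of the weighted sum of channels, normalized.
def grad_cam_1d(activations, gradients):
    """activations, gradients: arrays of shape (channels, time)."""
    weights = gradients.mean(axis=1)                                  # one weight per channel
    cam = np.maximum((weights[:, None] * activations).sum(axis=0), 0.0)  # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                                         # scale to [0, 1]
    return cam

rng = np.random.default_rng(0)
acts = rng.random((8, 100))    # stand-in feature maps from a conv layer
grads = rng.random((8, 100))   # stand-in gradients of the class score
attention = grad_cam_1d(acts, grads)
print(attention.shape)  # (100,)
```

The resulting vector has one attention value per time step, which is what allows the map to be overlaid on the ECG trace to show which samples drove a given amplitude or interval measurement.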


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Hossein Ahmadvand ◽  
Fouzhan Foroutan ◽  
Mahmood Fathy

Abstract Data variety is one of the most important features of Big Data. Data variety results from aggregating data from multiple sources and from the uneven distribution of data. This feature of Big Data causes high variation in the consumption of processing resources such as CPU, an issue that has been overlooked in previous works. To overcome this problem, in the present work we use Dynamic Voltage and Frequency Scaling (DVFS) to reduce the energy consumption of computation, considering two types of deadlines as our constraint. Before applying the DVFS technique to compute nodes, we estimate the processing time and the frequency needed to meet the deadline. In the evaluation phase, we used a set of data sets and applications. The experimental results show that our proposed approach surpasses the other scenarios in processing real data sets: based on the experimental results in this paper, DV-DVFS can achieve up to a 15% improvement in energy consumption.
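The core deadline-driven frequency selection can be sketched simply: estimate the work a job needs, then pick the lowest available frequency that still meets the deadline, since running slower generally costs less energy. This is a minimal illustration of the idea, not the paper's DV-DVFS algorithm.

```python
# Pick the lowest CPU frequency that meets a job's deadline.
def pick_frequency(estimated_cycles, deadline_s, available_freqs_hz):
    """Return the lowest feasible frequency in Hz, or None if no setting meets the deadline."""
    required = estimated_cycles / deadline_s          # minimum frequency needed
    feasible = [f for f in available_freqs_hz if f >= required]
    return min(feasible) if feasible else None        # lowest feasible = least energy

freqs = [1.2e9, 1.8e9, 2.4e9, 3.0e9]                  # hypothetical DVFS states

# Job needs 3e9 cycles within 2 s -> at least 1.5 GHz -> choose 1.8 GHz.
print(pick_frequency(3.0e9, 2.0, freqs))   # 1800000000.0
# Job needs 9e9 cycles within 2 s -> 4.5 GHz required -> infeasible.
print(pick_frequency(9.0e9, 2.0, freqs))   # None
```

The estimation step the abstract describes (predicting processing time before scaling) is what supplies `estimated_cycles` in practice; mispredicting it risks either a missed deadline or wasted energy headroom.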


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1573
Author(s):  
Loris Nanni ◽  
Giovanni Minchio ◽  
Sheryl Brahnam ◽  
Gianluca Maguolo ◽  
Alessandra Lumini

Traditionally, classifiers are trained to predict patterns within a feature space. The image classification system presented here trains classifiers to predict patterns within a vector space by combining the dissimilarity spaces generated by a large set of Siamese Neural Networks (SNNs). A set of centroids from the patterns in the training data sets is calculated with supervised k-means clustering, and the centroids are used to generate the dissimilarity space via the Siamese networks. The vector space descriptors are extracted by projecting patterns onto the similarity spaces, and SVMs classify an image by its dissimilarity vector. The versatility of the proposed approach in image classification is demonstrated by evaluating the system on different types of images across two domains: two medical data sets and two animal audio data sets with vocalizations represented as images (spectrograms). Results show that the proposed system's performance is competitive with the best-performing methods in the literature, obtaining state-of-the-art performance on one of the medical data sets, and does so without ad hoc optimization of the clustering methods on the tested data sets.
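The dissimilarity-space construction can be illustrated in a few lines. In the paper the dissimilarity is learned by Siamese networks; in this sketch plain Euclidean distance stands in for the learned measure, so the example shows only the re-description step, not the paper's method.

```python
import numpy as np

# Re-describe a pattern as its vector of dissimilarities to a set of
# centroids; a downstream SVM would then classify these vectors instead
# of the original features.
def dissimilarity_descriptor(pattern, centroids):
    """Map a feature vector to its distances to each centroid."""
    return np.linalg.norm(centroids - pattern, axis=1)

centroids = np.array([[0.0, 0.0],     # e.g. obtained per class
                      [3.0, 4.0]])    # via supervised k-means
x = np.array([0.0, 0.0])

print(dissimilarity_descriptor(x, centroids))  # [0. 5.]
```

Every pattern, whatever its original dimensionality, is mapped to a vector whose length equals the number of centroids, which is what makes descriptors from different dissimilarity spaces easy to combine before the SVM stage.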


2021 ◽  
Vol 11 (5) ◽  
pp. 2340
Author(s):  
Sanjay Mathrani ◽  
Xusheng Lai

Web data have grown exponentially to reach zettabyte scales. Mountains of data come from several online applications, such as e-commerce, social media, web and sensor-based devices, business web sites, and other information types posted by users. Big data analytics (BDA) can help to derive new insights from this huge and fast-growing data source. The core advantage of BDA technology is its ability to mine these data and provide information on underlying trends. BDA, however, faces innate difficulty in optimizing the process and capabilities that require the merging of diverse data assets to generate viable information. This paper explores the BDA process and capabilities in leveraging data via three case studies of prime users of BDA tools. Findings emphasize four key components of the BDA process framework: system coordination, data sourcing, big data application service, and end users. Further building blocks, namely data security, privacy, and management, represent services providing functionality to the four components of the BDA process across information and technology value chains.

