ADataViewer: Exploring Semantically Harmonized Alzheimer's Disease Cohort Datasets

Author(s):  
Yasamin Salimi ◽  
Daniel Domingo-Fernandez ◽  
Carlos Bobis-Alvarez ◽  
Martin Hofmann-Apitius ◽  
Colin Birkenbihl ◽  
...  

INTRODUCTION: Currently, AD cohort datasets are difficult to find, lack across-cohort interoperability, and the content of the shared datasets often only becomes clear to third-party researchers once data access has been granted. METHODS: We accessed and systematically investigated the content of 20 major AD cohort datasets at the data level. A medical professional and a data specialist manually curated and semantically harmonized the acquired datasets. We developed a platform that facilitates data exploration. RESULTS: We present ADataViewer, an interactive platform that facilitates the exploration of 20 cohort datasets with respect to longitudinal follow-up, demographics, ethnoracial diversity, measured modalities, and statistical properties of individual variables. Additionally, we publish a variable mapping catalog harmonizing 1,196 variables across the 20 cohorts. The platform is available at https://adata.scai.fraunhofer.de/. DISCUSSION: ADataViewer supports robust data-driven research by transparently displaying cohort dataset content and suggesting datasets suited for discovery and validation studies based on selected variables of interest.
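A variable mapping catalog of the kind described can be pictured as a lookup that translates cohort-specific variable names into one harmonized vocabulary. A minimal sketch, with entirely hypothetical cohort and variable names (not taken from the actual catalog):

```python
# Sketch of a cross-cohort variable mapping catalog.
# All cohort and variable names below are hypothetical illustrations.
MAPPING_CATALOG = {
    "mmse_total": {           # harmonized variable name
        "COHORT_A": "MMSCORE",
        "COHORT_B": "mmse",
    },
    "age_at_baseline": {
        "COHORT_A": "AGE",
        "COHORT_B": "age_bl",
    },
}

def harmonize_record(cohort: str, record: dict) -> dict:
    """Rename a cohort's raw columns to their harmonized names."""
    out = {}
    for harmonized, per_cohort in MAPPING_CATALOG.items():
        raw_name = per_cohort.get(cohort)
        if raw_name is not None and raw_name in record:
            out[harmonized] = record[raw_name]
    return out

print(harmonize_record("COHORT_A", {"MMSCORE": 28, "AGE": 71}))
# {'mmse_total': 28, 'age_at_baseline': 71}
```

Such a catalog is what makes across-cohort discovery/validation studies possible: the same analysis script can run unchanged over records from any harmonized cohort.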

Cancers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2632
Author(s):  
Aparajita Budithi ◽  
Sumeyye Su ◽  
Arkadz Kirshtein ◽  
Leili Shahriyari

Many colon cancer patients show resistance to their treatments. It is therefore important to consider the unique characteristics of each tumor when seeking the best treatment options for each patient. In this study, we develop a data-driven mathematical model of the interaction between the tumor microenvironment and FOLFIRI drug agents in colon cancer. Patients are divided into five distinct clusters based on their estimated immune cell fractions obtained from their primary tumors' gene expression data. We then analyze the effects of the drugs on cancer cells and immune cells in each group, and we observe different responses to the FOLFIRI drugs between patients in different immune groups. For instance, patients in cluster 3, with the highest T-reg/T-helper ratio, respond better to the FOLFIRI treatment, while patients in cluster 2, with the lowest T-reg/T-helper ratio, resist the treatment. Moreover, we use ROC curves to validate the model against the tumor status of the patients at follow-up, and the model predicts well for the earlier follow-up days.
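The ROC-based validation step relies on a standard identity: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative one. A minimal stand-alone sketch with made-up scores (not the study's data):

```python
def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation; ties count as 0.5."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted progression scores vs. observed tumor status
# at follow-up (1 = progressed, 0 = stable).
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.7, 0.3, 0.4, 0.8, 0.2]
print(roc_auc(labels, scores))  # 1.0 — perfect separation in this toy case
```

An AUC near 1 at a given follow-up day means the model's predicted outcomes rank patients almost exactly as their observed status does; values near 0.5 indicate no discriminative power.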


2019 ◽  
Vol 2 (2) ◽  
pp. 47-53
Author(s):  
Magalie Geneviève ◽  
Stanislas Bataille ◽  
Julie Beaume ◽  
Aldjia Hocine ◽  
Louis De Laforcade ◽  
...  

Home dialysis, which includes peritoneal dialysis and home hemodialysis, offers substantial benefits to patients suffering from chronic kidney disease, particularly in terms of comfort, quality of life, and autonomy. However, its use is marginal in France, with an inhomogeneous distribution across geographical regions. We conducted a French national survey of nephrologists to assess the barriers to the development of home dialysis. After analyzing the responses of the 230 participating nephrologists, the main obstacles to the development of the two techniques were identified and ranked by reporting rate. The major obstacles emerging from the survey are: the lack of information among the general public, a lack of recognition of nurses specializing in these techniques, the limited number of structures that practice home dialysis, and difficulties in informing patients about dialysis techniques. The specific difficulties reported for peritoneal dialysis are: difficulties in managing follow-up care and rehabilitation, the fear of insufficient purification, and difficulties related to the dialysis catheter. Concerning home hemodialysis, the barriers concern fear of self-puncture and the need for a third party. This study helps identify nephrologists' views of the major obstacles to the development of home dialysis and suggests avenues for its promotion, in terms of training, institutional recognition, and the necessary regulatory evolution.


2019 ◽  
Vol 17 (2) ◽  
pp. 138-152
Author(s):  
I. S. Postanogov ◽  
I. A. Turova

In this paper we discuss how to support the process of creating tools that transform natural language (NL) queries into SPARQL queries (hereinafter referred to as transformation tools). In the introduction, we describe the relevance of the task of understanding natural language queries in information systems, as well as the advantages of using ontologies as a means of representing knowledge for solving this problem. This ontology-based data access approach can also be used in systems that provide a natural language interface to databases. Based on an analysis of the problems related to integrating and testing existing transformation tools, as well as the need to support the creation and testing of one's own transformation modules, we propose the concept of a software platform that simplifies these tasks. The platform architecture satisfies the requirements for ease of connecting third-party transformation tools, reusing individual modules, and integrating the resulting transformation tools into other systems, including testing systems. The building blocks of the created transformation systems are individual transformation modules packaged in Docker containers. Programmatic access to each module is provided via gRPC. Modules loaded into the platform can be assembled into a transformation pipeline automatically, or manually using the built-in third-party SciVi data flow diagram editor. Compatibility of individual modules is checked by automatic analysis of their application programming interfaces. According to the specified data flow, the resulting pipeline is combined into a single multi-container application that can be integrated into other systems and tested on extensible test suites. The expected and actual results of the query transformation can be viewed graphically in the previously developed visualization tool.
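The automatic compatibility check between modules can be pictured as matching each module's declared output type against the next module's input type. A minimal sketch under stated assumptions (module names and message types are hypothetical; the real platform inspects the gRPC interfaces of Docker-packaged modules rather than plain strings):

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    input_type: str   # e.g. the message type a module's gRPC service consumes
    output_type: str  # ... and the one it produces

def validate_pipeline(modules: list) -> bool:
    """True if each module's output type matches the next module's input type."""
    return all(a.output_type == b.input_type
               for a, b in zip(modules, modules[1:]))

# Hypothetical NL-to-SPARQL transformation pipeline.
pipeline = [
    Module("tokenizer", "NLQuery", "TokenList"),
    Module("entity_linker", "TokenList", "LinkedEntities"),
    Module("sparql_builder", "LinkedEntities", "SPARQLQuery"),
]
print(validate_pipeline(pipeline))  # True
```

A check of this shape is what lets the platform assemble pipelines automatically: any sequence of registered modules whose interface types chain end-to-end from `NLQuery` to `SPARQLQuery` is a candidate transformation tool.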


Author(s):  
Poovizhi. M ◽  
Raja. G

Using cloud storage, users can remotely store their data and enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes data integrity protection in cloud computing a formidable task, especially for users with constrained computing resources. From the users' perspective, including both individuals and IT systems, storing data remotely in the cloud in a flexible, on-demand manner brings appealing benefits: relief from the burden of storage management, universal data access independent of geographical location, and avoidance of capital expenditure on hardware, software, and personnel maintenance. To securely introduce an effective sanitizer and third-party auditor (TPA), two fundamental requirements have to be met: 1) the TPA should be able to audit the cloud data storage efficiently without demanding a local copy of the data, and should introduce no additional online burden to the cloud user; 2) the third-party auditing process should introduce no new vulnerabilities to user data privacy. In this project, we utilize and uniquely combine public auditing protocols with a double encryption approach to achieve a privacy-preserving public cloud data auditing system that supports integrity checking without any leakage of data. To support efficient handling of multiple auditing tasks, we further explore online signature techniques to extend our main result to a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. We implement a double encryption algorithm to encrypt the data twice before storing it on the cloud server for Electronic Health Record applications.
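The first TPA requirement (auditing without a local copy) can be illustrated by a simplified challenge-response check: the user pre-computes keyed digests of data blocks and hands them to the auditor, who later challenges the server on a random block and verifies the answer against the stored digest. This is only a didactic sketch, not the paper's protocol; practical public-auditing schemes use homomorphic authenticators so the server never has to reveal the raw block to the TPA:

```python
import hashlib
import hmac
import secrets

def tag_blocks(key: bytes, blocks: list) -> list:
    """User side: one keyed digest per (index, block), handed to the TPA."""
    return [hmac.new(key, b"%d:%s" % (i, blk), hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def audit(key: bytes, tags: list, server_blocks: list) -> bool:
    """TPA side: challenge one random block index, verify its digest."""
    i = secrets.randbelow(len(tags))
    expected = hmac.new(key, b"%d:%s" % (i, server_blocks[i]),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tags[i])

key = secrets.token_bytes(32)
blocks = [b"ehr-record-1", b"ehr-record-2", b"ehr-record-3"]
tags = tag_blocks(key, blocks)
print(audit(key, tags, blocks))  # True: storage intact for the sampled block
```

Binding the block index into the digest prevents the server from answering a challenge for block *i* with a copy of some other intact block; random sampling keeps the per-audit cost constant regardless of data size.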


Neurology ◽  
2021 ◽  
pp. 10.1212/WNL.0000000000012600
Author(s):  
Emily C. Edmonds ◽  
Denis S. Smirnov ◽  
Kelsey R. Thomas ◽  
Lisa V. Graves ◽  
Katherine J. Bangen ◽  
...  

Objective: Given prior work demonstrating that mild cognitive impairment (MCI) can be empirically differentiated into meaningful cognitive subtypes, we applied actuarial methods to comprehensive neuropsychological data from the University of California San Diego (UCSD) Alzheimer’s Disease Research Center (ADRC) in order to identify cognitive subgroups within nondemented ADRC participants, and to examine cognitive, biomarker, and neuropathological trajectories. Methods: Cluster analysis was performed on baseline neuropsychological data (n=738; mean age=71.8). Survival analysis examined progression to dementia (mean follow-up=5.9 years). CSF AD biomarker status and neuropathological findings at follow-up were examined in a subset with available data. Results: Five clusters were identified: “optimal” cognitively normal (CN; n=130) with above-average cognition, “typical” CN (n=204) with average cognition, non-amnestic MCI (naMCI; n=104), amnestic MCI (aMCI; n=216), and mixed MCI (mMCI; n=84). Progression to dementia differed across MCI subtypes (mMCI>aMCI>naMCI), with the mMCI group demonstrating the highest rate of CSF biomarker positivity and AD pathology at autopsy. Actuarial methods classified 29.5% more of the sample with MCI and outperformed consensus diagnoses in capturing those who had abnormal biomarkers, progressed to dementia, or had AD pathology at autopsy. Conclusions: We identified subtypes of MCI and CN with differing cognitive profiles, clinical outcomes, CSF AD biomarkers, and neuropathological findings over more than 10 years of follow-up. Results demonstrate that actuarial methods produce reliable cognitive phenotypes, with data from a subset suggesting unique biological and neuropathological signatures. Findings indicate that data-driven algorithms enhance diagnostic sensitivity relative to consensus diagnosis for identifying older adults at risk for cognitive decline.
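The general shape of an actuarial (rule-based) neuropsychological classification can be sketched as follows. The cutoffs and rules below are illustrative, modeled on common actuarial criteria in this literature (e.g. impairment when at least two measures within a cognitive domain fall more than 1 SD below normative means); they are an assumption, not necessarily the exact rules or subtype definitions used in this study:

```python
def actuarial_classification(domain_z: dict, cutoff: float = -1.0) -> str:
    """Classify one participant from per-domain lists of z-scores.

    A domain counts as impaired when >= 2 of its measures fall below
    `cutoff` (z-scores relative to demographically adjusted norms).
    """
    impaired = {domain for domain, zs in domain_z.items()
                if sum(z < cutoff for z in zs) >= 2}
    if not impaired:
        return "cognitively normal"
    if impaired == {"memory"}:
        return "amnestic MCI"
    if "memory" in impaired:
        return "mixed MCI"
    return "non-amnestic MCI"

# Hypothetical participant: two memory measures > 1 SD below norms,
# other domains within normal limits.
participant = {
    "memory":    [-1.4, -1.2, -0.3],
    "language":  [-0.2,  0.1, -0.5],
    "executive": [-0.8, -1.1, -0.4],
}
print(actuarial_classification(participant))  # amnestic MCI
```

Because every threshold is explicit, rules of this kind are reproducible across raters and datasets, which is the property the abstract contrasts with consensus diagnosis.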


2021 ◽  
Vol 429 ◽  
pp. 118662
Author(s):  
Samantha Mombelli ◽  
Caterina Leitner ◽  
Marco Sforza ◽  
Andrea Galbiati ◽  
Giada D'Este ◽  
...  

Author(s):  
S. Bella ◽  
F. Murgia

In this chapter the main aspects of telemonitoring in the field of chronic respiratory diseases are described and discussed. The authors describe the various challenges they faced, in the order in which they encountered them: first, the problem of the effectiveness of the method; then, the problems related to its economic viability; and finally, the problems related to the operating method. The authors conclude that remote monitoring is a promising method in terms of effectiveness of follow-up, but must be performed under well-controlled conditions. Further validation studies are still required to improve its effectiveness and to mitigate the new issues that arise.


Web Services ◽  
2019 ◽  
pp. 882-903
Author(s):  
Izabella V. Lokshina ◽  
Barbara J. Durkin ◽  
Cees J.M. Lanting

The Internet of Things (IoT) provides the tools for the development of a major, global data-driven ecosystem. When accessible to people and businesses, this information can make every area of life, including business, more data-driven. In this ecosystem, with its emphasis on Big Data, there has been a focus on building business models for the provision of services, the so-called Internet of Services (IoS). These models assume the existence and development of the necessary IoT measurement and control instruments, communications infrastructure, and easy access to the data collected and information generated by any party. Different business models may support opportunities that generate revenue and value for various types of customers. This paper contributes to the literature by considering business models and opportunities for third-party data analysis services and discusses access to information generated by third parties in relation to Big Data techniques and potential business opportunities.


2020 ◽  
Vol 16 (1) ◽  
pp. 116-141
Author(s):  
Bertin Martens ◽  
Frank Mueller-Langer

Abstract Before the arrival of digital car data, car manufacturers had already partly foreclosed the maintenance market through franchising contracts with a network of exclusive official dealers. EU regulation endorsed this foreclosure but mandated access to maintenance data for independent service providers to keep competition in these markets. The arrival of digital car data upsets this balance because manufacturers can collect real-time maintenance data on their servers and send messages to drivers. These can be used to price discriminate and increase the market share of official dealers. There are at least four alternative technical gateways that could give independent service providers similar data access options. However, they suffer in various degrees from data portability issues, switching costs and weak network effects, and insufficient economies of scale and scope in data analytics. Multisided third-party consumer media platforms appear to be better placed to overcome these economic hurdles, provided that an operational real-time data portability regime could be established.

