Real World Data
Recently Published Documents

2021 ◽  
Zi-Yang Peng ◽  
Chun-Ting Yang ◽  
Huang-Tz Ou ◽  
Shihchen Kuo

Abstract Background: We conducted a model-based economic analysis of sodium-glucose cotransporter-2 inhibitors (SGLT2is) versus dipeptidyl peptidase-4 inhibitors (DPP4is) in type 2 diabetes (T2D) patients with and without established cardiovascular disease (CVD) using 10-year real-world data. Methods: A Markov model was utilized to estimate healthcare costs and quality-adjusted life-years (QALYs) over a 10-year simulation time horizon from a healthcare sector perspective, with both costs and QALYs discounted at 3% annually. Model inputs were derived from analyses of Taiwan’s National Health Insurance Research Database or published studies of Taiwanese populations. The primary outcome measure was the incremental cost-effectiveness ratio (ICER). We then conducted a structured systematic review, incorporating our study findings, to synthesize updated evidence on the cost-effectiveness of SGLT2is versus DPP4is. Results: Over 10 years, use of SGLT2is versus DPP4is yielded ICERs of $3,244 and $4,186 per QALY gained for T2D patients with and without established CVD, respectively. Results were robust across a series of sensitivity and scenario analyses, showing ICERs between $-1,074 (cost-saving) and $8,467 per QALY gained for T2D patients with established CVD and between $369 and $37,122 per QALY gained for T2D patients without established CVD. The systematic review revealed a cost-effective or even cost-saving profile for the use of SGLT2is in T2D treatment. Conclusions: Use of SGLT2is versus DPP4is was highly cost-effective for T2D patients regardless of CVD history in real-world clinical practice. Our results extend current evidence by demonstrating that SGLT2is are an economically rational alternative to DPP4is for T2D treatment in routine care. Future research is warranted to explore the heterogeneous economic benefits of SGLT2is given diverse patient characteristics in clinical settings.
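The discounting and ICER arithmetic described in the Methods can be sketched in a few lines. All cost and QALY figures below are made-up illustrations, not the study's inputs:

```python
# Sketch of 3%-annual discounting and ICER computation (hypothetical
# inputs; the study's actual costs and utilities are not reproduced here).

def discounted_total(values_per_year, rate=0.03):
    """Sum of yearly values discounted at `rate` (3% as in the study)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values_per_year))

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical 10-year cost and QALY streams for two strategies
costs_sglt2 = [1200] * 10
costs_dpp4 = [1000] * 10
qalys_sglt2 = [0.80] * 10
qalys_dpp4 = [0.75] * 10

ratio = icer(discounted_total(costs_sglt2), discounted_total(qalys_sglt2),
             discounted_total(costs_dpp4), discounted_total(qalys_dpp4))
print(round(ratio))  # cost per QALY gained
```

With constant yearly streams the discount factors cancel in the ratio, so only the incremental cost and incremental QALYs matter.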

Hattie Whiteside ◽  
Emma Tang ◽  
Kumud Kantilal ◽  
Yoon Loke ◽  

This realist enquiry applying behavioural theory aimed to identify behavioural mechanisms and contexts that facilitate prescribers tapering opioids. We identified relevant opioid tapering interventions and services from a 2018 international systematic review and a 2019 England-wide survey, respectively. Interventions and services were eligible if they provided information about contexts and/or behavioural mechanisms influencing opioid tapering success. A stakeholder group (n=23) generated draft programme theories based around the 14 domains of the theoretical domains framework. We refined these using the trial and service data. From 71 articles and 21 survey responses, 56 and 16 respectively were included, representing primary care, hospital, specialist pain facilities and prison services. We identified six programme theories encompassing five behavioural mechanisms: prescribers' knowledge of how to taper; prescribers' beliefs about their capability to initiate tapering discussions and manage the psychological consequences of tapering; perceived professional role in tapering; an environmental context enabling referral to specialists; and positive social influence achieved by aligning patient and prescriber expectations of tapering. No existing intervention addresses all six mechanisms supportive of tapering. Work is required to operationalise the programme theories according to organisational structures and resources. One example of operationalisation is combining tapering guidelines with information about local excess opioid problems and endorsing these with organisational branding. Prescribers could be given the skills and confidence to initiate tapering discussions through training in cognitive-based interventions, with access to psychological and physical support incorporated into the patient pathway. Patients could be provided with leaflets about the tapering process and informed about the patient pathway.

2021 ◽  
pp. 107699862110520
Jin Liu ◽  
Robert A. Perera ◽  
Le Kang ◽  
Roy T. Sabo ◽  
Robert M. Kirkpatrick

This study proposes transformation functions and matrices between coefficients in the original and reparameterized parameter spaces for an existing linear-linear piecewise model to derive the interpretable coefficients directly related to the underlying change pattern. Additionally, the study extends the existing model to allow individual measurement occasions and investigates predictors for individual differences in change patterns. We present the proposed methods with simulation studies and a real-world data analysis. Our simulation study demonstrates that the method can generally provide an unbiased and accurate point estimate and appropriate confidence interval coverage for each parameter. The empirical analysis shows that the model can estimate the growth factor coefficients and path coefficients directly related to the underlying developmental process, thereby providing meaningful interpretation.
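The linear-linear piecewise trajectory and the idea of moving between parameter spaces can be sketched as follows. The transformation shown is one common reparameterization (mean slope and half-difference of slopes); the article derives the exact transformation functions and matrices for its model:

```python
# Minimal sketch of a linear-linear piecewise mean trajectory and an
# illustrative slope reparameterization (not the article's exact matrices).

def piecewise_linear(t, intercept, slope1, slope2, knot):
    """Mean trajectory: slope1 before the knot, slope2 after it."""
    if t <= knot:
        return intercept + slope1 * t
    return intercept + slope1 * knot + slope2 * (t - knot)

# (slope1, slope2) <-> (mean slope, half-difference) transformation pair
def to_reparameterized(slope1, slope2):
    return ((slope1 + slope2) / 2, (slope2 - slope1) / 2)

def to_original(mean_slope, half_diff):
    return (mean_slope - half_diff, mean_slope + half_diff)
```

The two transformations are exact inverses, which is what lets coefficients estimated in one space be interpreted directly in the other.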

Jairo Ortega ◽  
Dimitrios Rizopoulos ◽  
János Tóth ◽  
Tamás Péter

In the attempt to study Light Rail Transit (LRT) systems and their necessary underlying components, such as Park and Ride (P&R) sub-systems, this article aims to showcase the importance of land use as a criterion in the selection of trip starting locations (i.e., points) that can potentially be used as the basis for quantitative studies on LRT and P&R systems. To achieve this goal, a method is introduced for selecting locations that produce P&R mode trips based on the land-use attributes of sub-zones or neighborhoods, as they are included in Sustainable Urban Mobility Plans (SUMPs). Those land-use attributes are utilized as sub-criteria for the classification and valid selection of trip starting locations out of a broader dataset of available locations. As a second supporting technique, an algorithm is introduced that allows us to test the effectiveness of the method and the importance of land use as a criterion. The algorithm enables the calculation and comparison of the attributes of the trips to be followed by P&R mode users starting from selected trip starting locations in each zone of a city and having the several available P&R facilities as destinations. Results for the methods introduced in this article are showcased through a case study of the mid-sized city of Cuenca, Ecuador, in which several metrics, such as traveling times under different traffic scenarios, are examined for the potential P&R mode trips as they emerge from real-world data.
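The selection of trip starting locations by land-use sub-criteria can be sketched as a weighted-sum score over candidate zones. The sub-criteria names, weights, and attribute values below are hypothetical illustrations, not the article's actual criteria:

```python
# Hypothetical weighted-sum scoring of candidate trip starting locations
# by land-use sub-criteria (all names and numbers are illustrative only).

def score(location, weights):
    """Weighted sum of a location's land-use attribute values."""
    return sum(weights[k] * location.get(k, 0.0) for k in weights)

weights = {"residential": 0.5, "commercial": 0.3, "transit_access": 0.2}
candidates = {
    "zone_a": {"residential": 0.9, "commercial": 0.2, "transit_access": 0.8},
    "zone_b": {"residential": 0.3, "commercial": 0.9, "transit_access": 0.1},
}

# Select the highest-scoring zone as a trip starting location
selected = max(candidates, key=lambda z: score(candidates[z], weights))
print(selected)
```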

2021 ◽  
Xi Tom Zhang ◽  
Runpeng Harris Han

A massive number of transcriptomic profiles of blood samples from COVID-19 patients has been produced since the COVID-19 pandemic began; however, these big data from primary studies have not been well integrated by machine learning approaches. Taking advantage of modern machine learning algorithms, we collected and integrated single-cell RNA-seq (scRNA-seq) data from three independent studies, identified genes potentially useful for interpreting severity, and developed a high-performance deep learning-based deconvolution model, AImmune, that can predict the proportions of seven different immune cell types from bulk RNA-seq results of human peripheral blood mononuclear cells (PBMCs). This novel approach can be used for clinical blood testing of COVID-19, on the grounds that previous research shows mRNA alterations in blood-derived PBMCs may serve as a severity indicator. Assessed on real-world data sets, the AImmune model outperformed the widely recognized immune profiling model CIBERSORTx. The presented study showed that results obtained via the true scRNA-seq route can be consistently reproduced through the new approach, indicating AImmune's potential to replace the costly scRNA-seq technique in the analysis of circulating blood cells for both clinical and research purposes.
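The underlying deconvolution idea, expressing a bulk expression profile as a mixture of cell-type signatures and recovering the mixing proportions, can be sketched with least squares on a toy signature matrix. AImmune itself is a deep network, and all numbers below are made up; this assumes NumPy is available:

```python
# Toy expression deconvolution: bulk profile = signature matrix @ proportions.
# This is a least-squares stand-in for the deep model described above.
import numpy as np

# Signature matrix: 3 genes x 2 cell types (illustrative values)
S = np.array([[5.0, 1.0],
              [1.0, 4.0],
              [2.0, 2.0]])

true_props = np.array([0.7, 0.3])
bulk = S @ true_props  # synthetic bulk RNA-seq profile

# Ordinary least squares, then clip negatives and renormalize to proportions
est, *_ = np.linalg.lstsq(S, bulk, rcond=None)
est = np.clip(est, 0, None)
est = est / est.sum()
print(est)  # recovered cell-type proportions
```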

Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1621
Przemysław Juszczuk ◽  
Jan Kozak ◽  
Grzegorz Dziczkowski ◽  
Szymon Głowania ◽  
Tomasz Jach ◽  

In the era of the Internet of Things and big data, we are faced with the management of a flood of information. The complexity and amount of data presented to the decision-maker are enormous, and existing methods often fail to derive nonredundant information quickly. Thus, the selection of the most satisfactory set of solutions is often a struggle. This article investigates the possibilities of using the entropy measure as an indicator of data difficulty. To do so, we focus on real-world data covering various fields related to markets (the real estate market and financial markets), sports data, fake news data, and more. The problem is twofold: first, since we deal with unprocessed, inconsistent data, it is necessary to perform additional preprocessing. The second step of our research is therefore using the entropy-based measure to capture the nonredundant, noncorrelated core information from the data. Research is conducted using well-known algorithms from the classification domain to investigate the quality of solutions derived based on the initial preprocessing and the information indicated by the entropy measure. Finally, the best 25% of attributes (by the entropy measure) are selected, the whole classification procedure is performed once again, and the results are compared.
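The entropy-based attribute ranking described above can be sketched in a few lines: compute Shannon entropy per attribute, rank, and keep the top quarter. The toy dataset is illustrative; the article's preprocessing and classifiers are not reproduced:

```python
# Minimal sketch of entropy-based attribute selection (illustrative data).
from collections import Counter
from math import log2

def entropy(column):
    """Shannon entropy of a discrete attribute's value distribution."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Toy dataset: one value list per attribute
attributes = {
    "constant": ["a"] * 8,                 # entropy 0 bits
    "binary":   ["a", "b"] * 4,            # entropy 1 bit
    "varied":   ["a", "b", "c", "d"] * 2,  # entropy 2 bits
}

ranked = sorted(attributes, key=lambda k: entropy(attributes[k]), reverse=True)
top = ranked[: max(1, len(ranked) // 4)]   # keep the best 25% by entropy
print(ranked, top)
```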

2021 ◽  
Alberto Vera ◽  
Siddhartha Banerjee ◽  
Samitha Samaranayake

Motivated by the needs of modern transportation service platforms, we study the problem of computing constrained shortest paths (CSP) at scale via preprocessing techniques. Our work makes two contributions in this regard: 1) We propose a scalable algorithm for CSP queries and show how its performance can be parametrized in terms of a new network primitive, the constrained highway dimension. This development extends recent work that established the highway dimension as the appropriate primitive for characterizing the performance of unconstrained shortest-path (SP) algorithms. Our main theoretical contribution is deriving conditions relating the two notions, thereby providing a characterization of networks where CSP and SP queries are of comparable hardness. 2) We develop practical algorithms for scalable CSP computation, augmenting our theory with additional network clustering heuristics. We evaluate these algorithms on real-world data sets to validate our theoretical findings. Our techniques are orders of magnitude faster than existing approaches while requiring only limited additional storage and preprocessing.
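A baseline CSP query, shortest path by cost subject to a resource budget, can be sketched as Dijkstra over (node, resource-used) labels. The paper's contribution is the preprocessing and highway-dimension machinery that makes such queries scale; this unoptimized sketch (with a made-up toy graph) only shows the problem being solved:

```python
# Baseline constrained shortest path: Dijkstra over (node, resource) labels.
import heapq

def csp(graph, src, dst, budget):
    """graph[u] = list of (v, cost, resource). Min cost src->dst within budget."""
    best = {}  # (node, resource used) -> best cost seen
    heap = [(0, 0, src)]  # (cost, resource, node), ordered by cost
    while heap:
        cost, res, u = heapq.heappop(heap)
        if u == dst:
            return cost  # first pop of dst is the cheapest feasible path
        if best.get((u, res), float("inf")) < cost:
            continue  # stale entry
        for v, c, r in graph.get(u, []):
            nr = res + r
            if nr > budget:
                continue  # violates the resource constraint
            nc = cost + c
            if nc < best.get((v, nr), float("inf")):
                best[(v, nr)] = nc
                heapq.heappush(heap, (nc, nr, v))
    return None  # no feasible path

# Toy graph: cheap route a-b-d uses lots of resource, detour a-c-d little
g = {"a": [("b", 1, 5), ("c", 10, 1)], "b": [("d", 1, 5)], "c": [("d", 1, 1)]}
print(csp(g, "a", "d", 10))  # budget admits the cheap route: cost 2
print(csp(g, "a", "d", 5))   # budget forces the detour: cost 11
```

Tightening the budget changes the answer, which is exactly what makes CSP harder than the unconstrained SP problem the highway-dimension literature covers.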

2021 ◽  
Vol 11 (1) ◽  
Huang-Ming Hu ◽  
Hui-Jen Tsai ◽  
Hsiu-Ying Ku ◽  
Su-Shun Lo ◽  
Yan-Shen Shan ◽  

Abstract Chemotherapy is generally considered the main treatment for metastatic gastric adenocarcinoma. The role of gastrectomy for metastatic gastric cancer without obvious symptoms is controversial. The objective of this study is to investigate survival outcomes of treatment modalities in a real-world data setting. A retrospective cohort study was designed using the Taiwan Cancer Registry database. We identified the treatment modalities and used Kaplan–Meier estimates and Cox regressions to compare patient survival outcomes. From 2008 to 2015, 5599 gastric adenocarcinoma patients were diagnosed with metastatic disease (M1). Patients who received surgery plus chemotherapy had the longest median overall survival (OS), at 14.2 months. The median OS of patients who received chemotherapy alone or surgery alone was 7.0 and 3.9 months, respectively. Age at diagnosis, year of diagnosis, tumor grade, and treatment modality are prognostic factors for survival. In multivariable Cox regression analysis with chemotherapy alone as the reference, the hazard ratios for patients who received surgery plus chemotherapy, surgery alone, and supportive care were 0.47 (95% CI 0.44–0.51), 1.22 (95% CI 1.1–1.36), and 3.23 (95% CI 3.01–3.46), respectively. Chemotherapy plus surgery may confer a survival benefit for selected gastric adenocarcinoma patients with metastatic disease.
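The Kaplan–Meier estimation underlying the survival comparison can be sketched with a minimal pure-Python estimator. The follow-up times below are toy values in months, not the registry data:

```python
# Minimal Kaplan-Meier sketch (toy data; not the Taiwan Cancer Registry).

def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death, 0 = censored.
    Returns the survival curve as [(event time, survival probability)]."""
    # At tied times, process deaths before censorings (standard convention).
    ordered = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    at_risk, surv, curve = len(times), 1.0, []
    for t, event in ordered:
        if event:
            surv *= (at_risk - 1) / at_risk  # step down at each death
            curve.append((t, surv))
        at_risk -= 1  # deaths and censorings both leave the risk set
    return curve

# One death at months 3, 5, 7, 9; one censoring at month 7
print(kaplan_meier([3, 5, 7, 7, 9], [1, 1, 1, 0, 1]))
```

Comparing two such curves (e.g. surgery plus chemotherapy versus chemotherapy alone) and reading off where each crosses 0.5 gives the median OS figures quoted above.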

Algorithms ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 350
Nicola Ognibene Pietri ◽  
Xiaochen Chou ◽  
Dominic Loske ◽  
Matthias Klumpp ◽  
Roberto Montemanni

Online shopping is growing fast due to the increasingly widespread use of digital services. During the COVID-19 pandemic, the desire for contactless shopping has further changed consumer behavior and accelerated the acceptance of online grocery purchases. Consequently, traditional brick-and-mortar retailers are developing omnichannel solutions such as click-and-collect services to fulfill the increasing demand. In this work, we consider the Buy-Online-Pick-up-in-Store concept, in which online orders are collected by employees of the conventional stores. As labor is a major cost driver, we apply and discuss different optimization strategies for the picking and packing process based on real-world data from a German retailer. By comparing different methods, we estimate efficiency improvements in terms of time spent during the picking process. Additionally, the time spent on the packing process can be further decreased by applying a mathematical model that guides the employees on how to organize the articles in different shopping bags during the picking process. In general, we put forward effective strategies for the Buy-Online-Pick-up-in-Store paradigm that can be easily implemented by stores with different topologies.
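The bag-organization step can be illustrated with a simple first-fit-decreasing heuristic: sort articles by volume and place each into the first bag with room. This is a stand-in for the paper's mathematical model, with made-up item volumes:

```python
# First-fit-decreasing stand-in for assigning picked articles to shopping
# bags by volume (illustrative heuristic; not the paper's actual model).

def pack_bags(volumes, capacity):
    """Greedily pack item volumes into bags of the given capacity."""
    bags = []
    for v in sorted(volumes, reverse=True):  # largest items first
        for bag in bags:
            if sum(bag) + v <= capacity:
                bag.append(v)  # fits in an existing bag
                break
        else:
            bags.append([v])  # open a new bag
    return bags

items = [4, 8, 1, 4, 2, 1]  # hypothetical article volumes
print(pack_bags(items, 10))
```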
