Quantifiable excess of bone resorption in monoclonal gammopathy is an early symptom of malignancy: a prospective study of 87 bone biopsies

Blood ◽  
1996 ◽  
Vol 87 (11) ◽  
pp. 4762-4769 ◽  
Author(s):  
R Bataille ◽  
D Chappard ◽  
MF Basle

To determine if excessive osteoclastic-mediated bone resorption (BR) is an early tumor-induced event in multiple myeloma (MM), BR was assessed at first presentation on quantitative bone biopsy in 87 individuals evaluated for monoclonal gammopathy of undetermined significance (MGUS) and reinterpreted according to the presenting features and subsequent follow-up evaluation. As a reference population, 48 patients with previously untreated overt MM were evaluated under similar conditions. The median level of BR was significantly higher in the 48 overt MM versus the 87 MGUS patients (12.2% v 5.1% [normal level, <6%], P <.01). Indeed, 93% of overt MM patients had an excessive BR versus 45% of MGUS patients at presentation (P <.01). According to simple presenting parameters (> or <5% plasma cells within the bone marrow, presence or absence of mild anemia/neutropenia), 31 individuals were classified as low-risk MGUS, 32 as high-risk MGUS, and 24 as indolent MM. An excessive BR was observed in 16% of low-risk MGUS, 46% of high-risk MGUS (P <.01 v low-risk MGUS), 79% of indolent MM (P <.05 v high-risk MGUS), and 93% of overt MM patients. Of major interest, the level of BR in indolent MM (11.2%) was identical to that in overt MM (12.2%) but significantly higher than in both low-risk (4%, P <.01) and high-risk (5.6%, P <.01) MGUS. When considering the follow-up evaluation of MGUS patients, an excessive BR at presentation was observed in 52% of MGUS cases that turned out to be unstable or developed subsequent MM, but in only 4% of stable MGUS (P <.01). More precisely, the level of BR of low-risk MGUS that either turned out to be unstable or developed into MM was significantly higher at presentation than that of subsequently stable MGUS (4.4% v 2.9%, P <.05). The same difference was observed in both high-risk MGUS and indolent MM according to subsequent follow-up studies (8.1% v 3.4% and 11.7% v 6%, respectively, P <.05).
Of major interest, the level of BR in 11 stable high-risk MGUS cases actually fulfilling the diagnostic criteria of smoldering MM was very low (3.4%) and similar to that in stable low-risk MGUS (2.9%). We conclude that a quantifiable excess of BR in MGUS is significantly associated with progression and thus is an early symptom of malignancy in these individuals.

Hematology ◽  
2021 ◽  
Vol 2021 (1) ◽  
pp. 662-672
Author(s):  
Ola Landgren

Abstract In the 1960s, Dr Jan Waldenström argued that patients who had monoclonal proteins without any symptoms or evidence of end-organ damage represented a benign monoclonal gammopathy. In 1978, Dr Robert Kyle introduced the concept of "monoclonal gammopathy of undetermined significance" (MGUS) given that, at diagnosis, it was not possible with available methods (ie, serum protein electrophoresis to define the concentration of M-proteins and microscopy to determine the plasma cell percentage in bone marrow aspirates) to determine which patients would ultimately progress to multiple myeloma. The application of low-input whole-genome sequencing (WGS) technology has circumvented previous problems related to the volume of clonal plasma cells and contamination by normal plasma cells and allowed for the interrogation of the WGS landscape of MGUS. As discussed in this chapter, the distribution of genetic events reveals striking differences and the existence of 2 biologically and clinically distinct entities of asymptomatic monoclonal gammopathies. Thus, we already have genomic tools to identify "myeloma-defining genomic events," and consequently, it is reasonable to consider updating our preferred terminologies. When the clinical field is ready to move forward, we should be able to consolidate current terminologies—from the current 7 clinical categories: low-risk MGUS, intermediate-risk MGUS, high-risk MGUS, low-risk smoldering myeloma, intermediate-risk smoldering myeloma, high-risk smoldering myeloma, and multiple myeloma—to 3 future genomic-based categories: monoclonal gammopathy, early detection of multiple myeloma (in which myeloma-defining genomic events already have been acquired), and multiple myeloma (patients who are already progressing and clinically defined cases). Ongoing investigations will continue to advance the field.


RMD Open ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. e001524
Author(s):  
Nina Marijn van Leeuwen ◽  
Marc Maurits ◽  
Sophie Liem ◽  
Jacopo Ciaffi ◽  
Nina Ajmone Marsan ◽  
...  

Objectives To develop a prediction model to guide annual assessment of systemic sclerosis (SSc) patients, tailored to disease activity. Methods A machine learning approach was used to develop a model that can identify patients without disease progression. SSc patients included in the prospective Leiden SSc cohort and fulfilling the ACR/EULAR 2013 criteria were included. Disease progression was defined as progression in ≥1 organ system, and/or start of immunosuppression, or death. Using elastic-net regularisation, and including 90 independent clinical variables (100% complete), we trained the model on 75% of the patients and validated it on the remaining 25%, optimising on negative predictive value (NPV) to minimise the likelihood of missing progression. Probability cutoffs for low and high risk of disease progression were identified by expert assessment. Results Of the 492 SSc patients (follow-up range: 2–10 years), disease progression during follow-up was observed in 52% (median time 4.9 years). Performance of the model in the test set showed an AUC-ROC of 0.66. Probability score cutoffs were defined: low risk for disease progression (<0.197, NPV: 1.0; 29% of patients), intermediate risk (0.197–0.223, NPV: 0.82; 27%) and high risk (>0.223, NPV: 0.78; 44%). The relevant variables for the model were: previous use of cyclophosphamide or corticosteroids, start of immunosuppressive drugs, previous gastrointestinal progression, previous cardiovascular event, pulmonary arterial hypertension, modified Rodnan Skin Score, creatine kinase, and diffusing capacity for carbon monoxide. Conclusion Our machine-learning-assisted model for progression enabled us to classify 29% of SSc patients as 'low risk'. In this group, annual assessment programmes could be less extensive than indicated by international guidelines.
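The cutoff selection described above—calling a patient 'low risk' only below a probability threshold that preserves a perfect negative predictive value—can be illustrated with a short sketch. This is not the authors' code, and the probabilities and labels below are hypothetical; it only shows the mechanics of optimising a cutoff on NPV so that no progressor is classified as low risk.

```python
def npv_at_cutoff(probs, labels, cutoff):
    """NPV when patients with predicted probability < cutoff are called 'low risk'.

    labels: 1 = disease progression observed, 0 = no progression.
    """
    low_risk = [y for p, y in zip(probs, labels) if p < cutoff]
    if not low_risk:
        return None  # no patient falls below this cutoff
    return 1 - sum(low_risk) / len(low_risk)

def safest_cutoff(probs, labels):
    """Largest cutoff that still yields NPV = 1.0 (no progressor called low risk)."""
    best = None
    for c in sorted(set(probs)):
        if npv_at_cutoff(probs, labels, c) == 1.0:
            best = c
    return best

# Toy example (hypothetical model scores, not the Leiden cohort data):
probs  = [0.05, 0.10, 0.15, 0.20, 0.25, 0.60, 0.70]
labels = [0,    0,    0,    1,    1,    1,    1]
print(safest_cutoff(probs, labels))  # 0.2: below it, every patient is a true negative
```

Raising the cutoff any further admits a progressor into the low-risk group, which is exactly the trade-off the study's three-band stratification (NPV 1.0 / 0.82 / 0.78) reflects.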


2021 ◽  
Vol 24 (3) ◽  
pp. 680-690
Author(s):  
Michiel C. Mommersteeg ◽  
Stella A. V. Nieuwenburg ◽  
Wouter J. den Hollander ◽  
Lisanne Holster ◽  
Caroline M. den Hoed ◽  
...  

Abstract Introduction Guidelines recommend endoscopy with biopsies to stratify patients with gastric premalignant lesions (GPL) into high and low progression risk. High-risk patients are recommended to undergo surveillance. We aimed to assess the accuracy of guideline recommendations in identifying low-risk patients, who can safely be discharged from surveillance. Methods This study includes patients with GPL. Patients underwent at least two endoscopies with an interval of 1–6 years. Patients were defined 'low risk' if they fulfilled requirements for discharge, and 'high risk' if they fulfilled requirements for surveillance, according to European guidelines (MAPS-2012, updated MAPS-2019, BSG). Patients defined 'low risk' with progression of disease during follow-up (FU) were considered 'misclassified' as low risk. Results 334 patients (median age 60 years, IQR 11; 48.7% male) were included and followed for a median of 48 months. At baseline, 181/334 (54%) patients were defined low risk. Of these, 32.6% were 'misclassified', showing progression of disease during FU. If MAPS-2019 were followed, 169/334 (51%) patients were defined low risk, of which 32.5% were 'misclassified'. If BSG were followed, 174/334 (51%) patients were defined low risk, of which 32.2% were 'misclassified'. Seven patients developed gastric cancer (GC) or dysplasia; four of them had been 'misclassified' based on MAPS-2012, and three based on MAPS-2019 and BSG. By performing one additional endoscopy, 72.9% (95% CI 62.4–83.3) of high-risk patients and all patients who developed GC or dysplasia were identified. Conclusion One-third of patients who would have been discharged from GC surveillance appeared to be 'misclassified' as low risk. One additional endoscopy would reduce this risk by approximately 70%.


2021 ◽  
Vol 10 (7) ◽  
pp. 205846012110306
Author(s):  
Mine B Lange ◽  
Lars J Petersen ◽  
Michael B Nielsen ◽  
Helle D Zacho

Background The presence of malignant cells in bone biopsies is considered the gold standard to verify the occurrence of cancer, whereas a negative bone biopsy can represent a false negative, with a risk of increasing patient morbidity and mortality and creating misleading conclusions in cancer research. However, little literature documents the validity of a negative bone biopsy as an exclusion criterion for the presence of skeletal malignancies. Purpose To investigate the validity of a negative bone biopsy in bone lesions suspicious of malignancy. Materials and Methods A retrospective cohort of 215 consecutive targeted non-malignant skeletal biopsies from 207 patients (43% women, 57% men; median age 64 years, range 94 years), representing suspicious focal bone lesions collected from January 1, 2011, to July 31, 2013, was followed over a 2-year period to examine any additional biopsy, imaging, and clinical follow-up information to categorize the original biopsy as truly benign, malignant, or equivocal. Standard deviations and 95% confidence intervals were calculated. Results 210 of 215 biopsies (98%; 95% CI 0.94–0.99) proved to be truly benign 2 years after the initial biopsy. Two biopsies were false negatives (1%; 95% CI 0.001–0.03), and three were equivocal (lack of imaging description). Conclusion Our study documents a negative bone biopsy as a valid criterion for the absence of bone metastasis. Since only 28% of patients had a confirmed diagnosis of prior cancer and not all patients received adequately sensitive imaging, our results might not be applicable to all cancer patients with suspicious bone lesions.
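A confidence interval for a proportion such as 210/215 can be reproduced with a standard formula. The sketch below uses the Wilson score interval as an approximation; the paper does not state which method it used, and an exact (Clopper–Pearson) interval would be slightly wider, matching the reported 0.94–0.99.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion (z=1.96 -> 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 210 of 215 biopsies confirmed truly benign after 2 years of follow-up:
lo, hi = wilson_ci(210, 215)
print(f"{lo:.2f}-{hi:.2f}")  # 0.95-0.99
```

The Wilson interval is preferred over the simple normal approximation here because the proportion is close to 1, where the normal approximation can exceed 1.0.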


2021 ◽  
Author(s):  
Andreas Fritsche ◽  
Robert Wagner ◽  
Martin Heni ◽  
Kostantinos Kantartzis ◽  
Jürgen Machann ◽  
...  

Lifestyle intervention (LI) can prevent type 2 diabetes, but response to LI varies depending on risk subphenotypes. We tested whether prediabetic individuals with low risk benefit from conventional LI and individuals with high risk benefit from an intensification of LI in a multi-center randomized controlled intervention over 12 months with 2 years of follow-up. 1105 individuals with prediabetes according to ADA glucose criteria were stratified into a high- and a low-risk phenotype, based on previously described thresholds of insulin secretion, insulin sensitivity, and liver fat content. Low-risk individuals were randomly assigned to conventional LI according to the DPP protocol or control (1:1); high-risk individuals to conventional or intensified LI with doubling of required exercise (1:1). A total of 908 (82%) participants completed the study. In high-risk individuals, the difference between conventional and intensified LI in post-challenge glucose change was -0.29 mmol/l [CI: -0.54; -0.04], p=0.025. Liver fat (-1.34 percentage points [CI: -2.17; -0.50], p=0.002) and cardiovascular risk (-1.82 [CI: -3.13; -0.50], p=0.007) underwent larger reductions with intensified than with conventional LI. During a follow-up of 3 years, intensified compared with conventional LI had a higher probability of normalizing glucose tolerance (p=0.008). In conclusion, it is possible in high-risk individuals with prediabetes to improve glycemic and cardiometabolic outcomes by intensification of LI. Individualized, risk-phenotype-based LI may be beneficial for the prevention of diabetes.



Circulation ◽  
2015 ◽  
Vol 132 (suppl_3) ◽  
Author(s):  
Yoshitaka Ito ◽  
Kazuhiro Naito ◽  
Katsuhisa Waseda ◽  
Hiroaki Takashima ◽  
Akiyoshi Kurita ◽  
...  

Background: While anticoagulant therapy is standard management for atrial fibrillation (Af), dual antiplatelet therapy (DAPT) is needed after stent implantation for coronary artery disease. The HAS-BLED score estimates the risk of major bleeding for patients on anticoagulation to assess risk-benefit in Af care. However, little is known about the usefulness of the HAS-BLED score in Af patients treated with coronary stents requiring DAPT or DAPT plus warfarin (triple therapy: TT). The aim of this study was to evaluate the role of the HAS-BLED score on major bleeding in Af patients undergoing DAPT or TT. Methods: A total of 837 consecutive patients underwent PCI in our hospital from Jan. 2007 to Dec. 2010, and 66 patients had Af or paroxysmal Af at the time of PCI. Clinical events including major bleeding (cerebral or gastrointestinal bleeding) were investigated up to 3 years. Patients were divided into 2 groups based on HAS-BLED score (High-risk group: HAS-BLED score ≥4, n=19; Low-risk group: HAS-BLED score <4, n=47). DAPT was required for a minimum of 12 months after stent implantation, and warfarin was prescribed at the physicians' discretion. Management/change of antiplatelet and anticoagulant therapy during the follow-up period was also at the physicians' discretion. Results: Baseline characteristics were not different between the High-risk and Low-risk groups except for age. Overall, major bleeding was observed in 8 cases (12.1%) at 3 years of follow-up. Major bleeding events were significantly more frequent in the High-risk group than in the Low-risk group (31.6% vs. 4.3%, p=0.002). However, management of DAPT and TT was not different between the 2 groups. Among the components of the HAS-BLED score, renal dysfunction and bleeding history contributed most to higher scores. Conclusion: Major bleeding events were observed more frequently in the High-risk group than in the Low-risk group in patients with Af following DES implantation, regardless of antiplatelet/anticoagulant therapy.
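The HAS-BLED stratification used above (score ≥4 vs. <4) can be sketched as a simple tally of the score's risk factors. This is an illustrative sketch, not the study's code; the components follow the commonly published HAS-BLED definition (one point each, with renal/liver function and drugs/alcohol scored separately, for a maximum of 9), and the example patient is hypothetical.

```python
def has_bled(hypertension, abnormal_renal, abnormal_liver, stroke,
             bleeding_history, labile_inr, age_over_65, drugs, alcohol):
    """HAS-BLED bleeding-risk score: one point per risk factor present,
    total range 0-9 (renal and liver function, and drugs and alcohol,
    each contribute their own point)."""
    return sum([hypertension, abnormal_renal, abnormal_liver, stroke,
                bleeding_history, labile_inr, age_over_65, drugs, alcohol])

# Hypothetical 70-year-old with hypertension, renal dysfunction, and a prior bleed:
score = has_bled(True, True, False, False, True, False, True, False, False)
print(score, "-> high risk" if score >= 4 else "-> low risk")  # 4 -> high risk
```

The ≥4 threshold is the one this study used to split its cohort; other work often flags elevated bleeding risk at a score of ≥3.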


2020 ◽  
Vol 41 (Supplement_1) ◽  
Author(s):  
W Sun ◽  
B P Y Yan

Abstract Background We have previously demonstrated that unselected screening for atrial fibrillation (AF) in patients ≥65 years old in an out-patient setting yielded 1-2% new AF each time screen-negative patients underwent repeated screening at 12- to 18-month intervals. Selection criteria to identify high-risk patients for repeated AF screening may be more efficient than repeat screening of all patients. Aims This study aimed to validate the CHA2DS2VASC score as a predictive model to select the target population for repeat AF screening. Methods 17,745 consecutive patients underwent 24,363 index AF screenings (26.9% of patients underwent repeated screening) using a handheld single-lead ECG (AliveCor) from Dec 2014 to Dec 2017 (NCT02409654). Adverse clinical outcomes to be predicted included (i) new AF detection by repeated screening; (ii) new AF clinically diagnosed during follow-up; and (iii) ischemic stroke/transient ischemic attack (TIA) during follow-up. Performance evaluation and validation of the CHA2DS2VASC score as a prediction model were based on 15,732 subjects, 35,643 person-years of follow-up, and 765 outcomes. Internal validation was conducted by k-fold cross-validation (k = n = 15,732, i.e., leave-one-out cross-validation). Performance measures included the c-index for discriminatory ability and decision curve analysis for clinical utility. Risk groups were defined as ≤1, 2-3, or ≥4 for CHA2DS2VASC scores. Calibration was assessed by comparing proportions of actual observed events. Results CHA2DS2VASC scores achieved acceptable discrimination, with a c-index of 0.762 (95%CI: 0.746-0.777) for derivation and 0.703 for cross-validation. Decision curve analysis showed the use of CHA2DS2VASC to select patients for rescreening was superior to rescreening all or no patients in terms of net benefit across all reasonable threshold probabilities (Figure 1, left).
Predicted and observed probabilities of adverse clinical outcomes progressively increased with increasing CHA2DS2VASC score (Figure 1, right): 0.7% outcome events in the low-risk group (CHA2DS2VASC ≤1, predicted prob. ≤0.86%), 3.5% in the intermediate-risk group (CHA2DS2VASC 2-3, predicted prob. 2.62%-4.43%), and 11.3% in the high-risk group (CHA2DS2VASC ≥4, predicted prob. ≥8.50%). The odds ratios for outcome events were 4.88 (95%CI: 3.43-6.96) for intermediate-versus-low risk, and 17.37 (95%CI: 12.36-24.42) for high-versus-low risk. Conclusion Repeat AF screening of a high-risk population may be more efficient than rescreening all screen-negative individuals. CHA2DS2VASC scores may be used as a selection tool to identify high-risk patients to undergo repeat AF screening. [Abstract P9, Figure 1]
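The risk grouping above rests on the standard CHA2DS2-VASc tally, which can be sketched directly. This is an illustrative sketch rather than the study's code; the point values follow the commonly published definition (2 points for age ≥75 and for prior stroke/TIA, 1 point each for the remaining factors, maximum 9), and the example patients are hypothetical.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """CHA2DS2-VASc stroke-risk score (0-9): age >=75 and prior stroke/TIA
    score 2 points each; age 65-74 and every other factor score 1 point."""
    score = 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    score += 2 * stroke_or_tia
    score += chf + hypertension + diabetes + vascular_disease + female
    return score

# Hypothetical 68-year-old woman with hypertension:
s = cha2ds2_vasc(68, female=True, chf=False, hypertension=True,
                 diabetes=False, stroke_or_tia=False, vascular_disease=False)
print(s)  # 3 -> the study's intermediate-risk rescreening group (score 2-3)
```

Under the study's cutoffs, a patient scoring ≤1 would fall in the low-risk group (0.7% observed events) and one scoring ≥4 in the high-risk group (11.3% observed events).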


Gut ◽  
2018 ◽  
Vol 68 (4) ◽  
pp. 615-622 ◽  
Author(s):  
Joren R ten Hove ◽  
Shailja C Shah ◽  
Seth R Shaffer ◽  
Charles N Bernstein ◽  
Daniel Castaneda ◽  
...  

Objectives Surveillance colonoscopy is thought to prevent colorectal cancer (CRC) in patients with long-standing colonic IBD, but data regarding the frequency of surveillance and the findings thereof are lacking. Our aim was to determine whether consecutive negative surveillance colonoscopies adequately predict low neoplastic risk. Design A multicentre, multinational database of patients with long-standing IBD colitis without high-risk features and undergoing regular CRC surveillance was constructed. A 'negative' surveillance colonoscopy was predefined as a technically adequate procedure having no postinflammatory polyps, no strictures, no endoscopic disease activity and no evidence of neoplasia; a 'positive' colonoscopy was a technically adequate procedure that included at least one of these criteria. The primary endpoint was advanced colorectal neoplasia (aCRN), defined as high-grade dysplasia or CRC. Results Of 775 patients with long-standing IBD colitis, 44% (n=340) had >1 negative colonoscopy. Patients with consecutive negative surveillance colonoscopies were compared with those who had at least one positive colonoscopy. Both groups had similar demographics, disease-related characteristics, number of surveillance colonoscopies and time intervals between colonoscopies. No aCRN occurred in those with consecutive negative surveillance, compared with an incidence rate of 0.29 to 0.76/100 patient-years (P=0.02) in those having >1 positive colonoscopy, on follow-up of 6.1 (P25–P75: 4.6–8.2) years after the index procedure. Conclusion Within this large surveillance cohort of patients with colonic IBD and no additional high-risk features, having two consecutive negative colonoscopies predicted a very low risk of aCRN occurrence on follow-up. Our findings suggest that longer surveillance intervals in this selected population may be safe.
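The incidence rates quoted above (events per 100 patient-years) follow from a simple ratio of event counts to accumulated follow-up time. The sketch below illustrates the arithmetic with hypothetical counts; the event and person-year totals are invented for illustration, not taken from the study.

```python
def incidence_rate_per_100py(events, person_years):
    """Incidence rate expressed per 100 patient-years of follow-up."""
    return 100 * events / person_years

# Hypothetical illustration: 7 aCRN events observed over 1200 patient-years
rate = incidence_rate_per_100py(7, 1200)
print(round(rate, 2))  # 0.58, within the 0.29-0.76/100 patient-years range reported
```

Expressing risk per 100 patient-years, rather than as a raw percentage of patients, accounts for the fact that individual follow-up durations varied (here, a median of 6.1 years).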

