Clinical course and prognostic factors in acute low back pain: an inception cohort study in primary care practice

BMJ ◽  
1994 ◽  
Vol 308 (6928) ◽  
pp. 577-580 ◽  
Author(s):  
J Coste ◽  
G Delecoeuillerie ◽  
A C. de Lara ◽  
J M LeParc ◽  
J B Paolaggi

Spine ◽ 
2005 ◽  
Vol 30 (8) ◽  
pp. 976-982 ◽  
Author(s):  
Margreth Grotle ◽  
Jens I. Brox ◽  
Merit B. Veierød ◽  
Bredo Glomsrød ◽  
Jan H. Lønn ◽  
...  

2018 ◽  
Vol 27 (11) ◽  
pp. 2823-2830 ◽  
Author(s):  
Flávia Cordeiro Medeiros ◽  
Leonardo Oliveira Pena Costa ◽  
Indiara Soares Oliveira ◽  
Renan Kendy Oshima ◽  
Lucíola Cunha Menezes Costa

2006 ◽  
Vol 7 (1) ◽  
Author(s):  
Nicholas Henschke ◽  
Christopher G Maher ◽  
Kathryn M Refshauge ◽  
Robert D Herbert ◽  
Robert G Cumming ◽  
...  

2019 ◽  
Author(s):  
Riccardo Miotto ◽  
Bethany L. Percha ◽  
Benjamin S. Glicksberg ◽  
Hao-Chih Lee ◽  
Lisanne Cruz ◽  
...  

Background Acute and chronic low back pain (LBP) are different conditions with different treatments. However, they are coded in electronic health records with the same ICD-10 code (M54.5) and can be differentiated only by retrospective chart reviews. This prevents efficient definition of data-driven guidelines for billing and therapy recommendations, such as return-to-work options. Objective To solve this issue, we evaluate the feasibility of automatically distinguishing acute LBP episodes by analyzing free-text clinical notes. Methods We used a dataset of 17,409 clinical notes from different primary care practices; of these, 891 documents were manually annotated as “acute LBP” and 2,973 were generally associated with LBP via the recorded ICD-10 code. We compared different supervised and unsupervised strategies for automated identification: keyword search; topic modeling; logistic regression with bag-of-n-grams and manual features; and deep learning (ConvNet). We trained the supervised models using either manual annotations or ICD-10 codes as positive labels. Results ConvNet trained using manual annotations obtained the best results, with an AUC-ROC of 0.97 and F-score of 0.69. ConvNet’s results were also robust to reduction of the number of manually annotated documents. In the absence of manual annotations, topic models performed better than methods trained using ICD-10 codes, which were unsatisfactory for identifying LBP acuity. Conclusions This study uses clinical notes to delineate a potential path toward systematic learning of therapeutic strategies, billing guidelines, and management options for acute LBP at the point of care.
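For orientation, the following is a minimal sketch (not the authors' implementation) of the bag-of-n-grams plus logistic regression baseline described in the abstract, written with scikit-learn. The function name and the `notes` and `labels` inputs are hypothetical placeholders for the free-text clinical notes and their binary acute-LBP labels (from either manual annotations or ICD-10 codes).

```python
# Hedged illustration only: a unigram/bigram logistic-regression baseline of
# the kind compared in the study, reporting AUC-ROC and F-score on a held-out
# split. `notes` and `labels` are assumed inputs, not the study's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline


def evaluate_ngram_baseline(notes, labels):
    """Train a bag-of-n-grams classifier and return (AUC-ROC, F-score)."""
    X_train, X_test, y_train, y_test = train_test_split(
        notes, labels, test_size=0.2, stratify=labels, random_state=0
    )
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=5),  # bag of n-grams
        LogisticRegression(max_iter=1000, class_weight="balanced"),
    )
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    f1 = f1_score(y_test, model.predict(X_test))
    return auc, f1
```

Class weighting is one plausible way to handle the imbalance implied by 891 acute-LBP documents within 17,409 notes; the study itself may have used a different strategy.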


10.2196/16878 ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. e16878 ◽  
Author(s):  
Riccardo Miotto ◽  
Bethany L Percha ◽  
Benjamin S Glicksberg ◽  
Hao-Chih Lee ◽  
Lisanne Cruz ◽  
...  

Background Acute and chronic low back pain (LBP) are different conditions with different treatments. However, they are coded in electronic health records with the same International Classification of Diseases, 10th revision (ICD-10) code (M54.5) and can be differentiated only by retrospective chart reviews. This prevents an efficient definition of data-driven guidelines for billing and therapy recommendations, such as return-to-work options. Objective The objective of this study was to evaluate the feasibility of automatically distinguishing acute LBP episodes by analyzing free-text clinical notes. Methods We used a dataset of 17,409 clinical notes from different primary care practices; of these, 891 documents were manually annotated as acute LBP and 2973 were generally associated with LBP via the recorded ICD-10 code. We compared different supervised and unsupervised strategies for automated identification: keyword search, topic modeling, logistic regression with bag of n-grams and manual features, and deep learning (a convolutional neural network-based architecture [ConvNet]). We trained the supervised models using either manual annotations or ICD-10 codes as positive labels. Results ConvNet trained using manual annotations obtained the best results with an area under the receiver operating characteristic curve of 0.98 and an F score of 0.70. ConvNet’s results were also robust to reduction of the number of manually annotated documents. In the absence of manual annotations, topic models performed better than methods trained using ICD-10 codes, which were unsatisfactory for identifying LBP acuity. Conclusions This study uses clinical notes to delineate a potential path toward systematic learning of therapeutic strategies, billing guidelines, and management options for acute LBP at the point of care.
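As a rough sketch of the kind of model the abstract calls ConvNet (a convolutional neural network over word embeddings with a sigmoid output for the acute-LBP probability), the snippet below uses Keras; the vocabulary size, layer widths, and other hyperparameters are illustrative assumptions, not the published architecture.

```python
# Illustrative 1D-convolutional text classifier; hyperparameters are
# assumptions for the sketch, not the architecture reported in the paper.
import tensorflow as tf

VOCAB_SIZE = 20000  # assumed vocabulary size after tokenizing the notes


def build_convnet():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 128),            # word embeddings
        tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),                   # fixed-size summary
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),         # P(acute LBP)
    ])
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc_roc")],
    )
    return model
```

Global max pooling maps notes of widely varying length to a fixed-size representation, which suits free-text clinical documentation; training would then use sequences of token IDs with the manual annotations (or ICD-10 codes) as labels.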


BMJ ◽  
2008 ◽  
Vol 337 (jul07 1) ◽  
pp. a171-a171 ◽  
Author(s):  
N. Henschke ◽  
C. G Maher ◽  
K. M Refshauge ◽  
R. D Herbert ◽  
R. G Cumming ◽  
...  
