Novel Analgesic Index for Postoperative Pain Assessment Based on a Photoplethysmographic Spectrogram and Convolutional Neural Network: Observational Study

10.2196/23920 ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. e23920
Author(s):  
Byung-Moon Choi ◽  
Ji Yeon Yim ◽  
Hangsik Shin ◽  
Gyujeong Noh

Background Although commercially available analgesic indices based on biosignal processing have been used to quantify nociception during general anesthesia, their performance is low in conscious patients. Therefore, there is a need to develop a new analgesic index with improved performance to quantify postoperative pain in conscious patients. Objective This study aimed to develop a new analgesic index using photoplethysmogram (PPG) spectrograms and a convolutional neural network (CNN) to objectively assess pain in conscious patients. Methods PPGs were obtained from a group of surgical patients for 6 minutes both in the absence (preoperatively) and in the presence (postoperatively) of pain. Then, the PPG data of the latter 5 minutes were used for analysis. Based on the PPGs and a CNN, we developed a spectrogram–CNN index for pain assessment. The area under the curve (AUC) of the receiver-operating characteristic curve was measured to evaluate the performance of the 2 indices. Results PPGs from 100 patients were used to develop the spectrogram–CNN index. When there was pain, the mean (95% CI) spectrogram–CNN index value increased significantly—baseline: 28.5 (24.2-30.7) versus recovery area: 65.7 (60.5-68.3); P<.01. The AUC and balanced accuracy were 0.76 and 71.4%, respectively. The spectrogram–CNN index cutoff value for detecting pain was 48, with a sensitivity of 68.3% and specificity of 73.8%. Conclusions Although there were limitations to the study design, we confirmed that the spectrogram–CNN index can efficiently detect postoperative pain in conscious patients. Further studies are required to assess the spectrogram–CNN index’s feasibility and prevent overfitting to various populations, including patients under general anesthesia. Trial Registration Clinical Research Information Service KCT0002080; https://cris.nih.go.kr/cris/search/search_result_st01.jsp?seq=6638
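As a concrete illustration of the pipeline this abstract describes, the sketch below converts a PPG segment into a log spectrogram and scores it with a small CNN. The sampling rate, window settings, and architecture are assumptions for illustration; the authors' published model and training details are not reproduced here.

```python
# A minimal sketch of a spectrogram-CNN pain index; all settings are assumed.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

FS = 100  # assumed PPG sampling rate (Hz)

def ppg_to_spectrogram(ppg: np.ndarray) -> torch.Tensor:
    """Convert a 5-minute PPG segment into a log-magnitude spectrogram image."""
    _, _, sxx = spectrogram(ppg, fs=FS, nperseg=256, noverlap=128)
    return torch.from_numpy(np.log1p(sxx)).float().unsqueeze(0)  # (1, freq, time)

class PainCNN(nn.Module):
    """Small CNN mapping a spectrogram to a 0-100 index (untrained stand-in)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1))) * 100

model = PainCNN()  # untrained here, so the printed value is meaningless
index = model(ppg_to_spectrogram(np.random.randn(FS * 300)).unsqueeze(0))
print(f"spectrogram-CNN index: {index.item():.1f}")
```

A trained model of this shape would be thresholded at the reported cutoff of 48 to flag pain.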





2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Tao Gan ◽  
Yulin Yang ◽  
Shuaicheng Liu ◽  
Bing Zeng ◽  
Jinlin Yang ◽  
...  

Ancylostomiasis is a fairly common small-bowel parasitic disease identified by capsule endoscopy (CE), for which no computer-aided clinical detection method has been established. We sought to develop an artificial intelligence system with a convolutional neural network (CNN) to automatically detect hookworms in CE images. We trained a deep CNN system based on a YOLO-v4 (You Only Look Once, version 4) detector using 11,236 CE images of hookworms. We assessed its performance by calculating the area under the receiver operating characteristic curve and its sensitivity, specificity, and accuracy using an independent test set of 10,529 small-bowel images, including 531 images of hookworms. The trained CNN system required 403 seconds to evaluate the 10,529 test images. The area under the curve for the detection of hookworms was 0.972 (95% confidence interval (CI), 0.967-0.978). The sensitivity, specificity, and accuracy of the CNN system were 92.2%, 91.1%, and 91.2%, respectively, at a probability score cut-off of 0.485. We developed and validated a CNN-based system for detecting hookworms in CE images. By combining this high-accuracy, high-speed, and oversight-preventing system with other CNN systems, we hope it will become an important supplement for detecting intestinal abnormalities in CE images. This trial is registered with ChiCTR2000034546 (a clinical research of artificial-intelligence-aided diagnosis for hookworms in small intestine by capsule endoscope images).
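Once each test image carries a single confidence score (for example, the maximum box confidence from the detector), the reported image-level metrics follow mechanically. A minimal sketch with random placeholder scores, not the study's data, assuming the reported cutoff of 0.485:

```python
# Image-level evaluation at a fixed probability cutoff; scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_529)  # 1 = image contains a hookworm
scores = np.clip(labels * 0.5 + rng.normal(0.3, 0.2, labels.size), 0, 1)

CUTOFF = 0.485
pred = scores >= CUTOFF
tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))

print(f"AUC        : {roc_auc_score(labels, scores):.3f}")
print(f"sensitivity: {tp / (tp + fn):.3f}")
print(f"specificity: {tn / (tn + fp):.3f}")
print(f"accuracy   : {(tp + tn) / labels.size:.3f}")
```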



2021 ◽  
Vol 7 (2) ◽  
pp. 356-362
Author(s):  
Harry Coppock ◽  
Alex Gaskell ◽  
Panagiotis Tzirakis ◽  
Alice Baird ◽  
Lyn Jones ◽  
...  

Background Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution. Methods This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep neural network-based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings. Results Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set consisting of 355 crowdsourced participants, achieving an area under the curve of the receiver operating characteristics of 0.846 on the task of COVID-19 classification. Conclusion This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up research study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.
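A typical front end for this kind of audio classifier converts each recording into a log-mel spectrogram before a CNN scores it. The sketch below makes that concrete; the sample rate, feature settings, and tiny network are illustrative assumptions, not the authors' custom model.

```python
# A minimal sketch of a cough/breath audio classifier; all settings are assumed.
import numpy as np
import librosa
import torch
import torch.nn as nn

SR = 16_000  # assumed recording sample rate (Hz)

def audio_to_logmel(wave: np.ndarray) -> torch.Tensor:
    """Convert a waveform into a log-mel spectrogram tensor."""
    mel = librosa.feature.melspectrogram(y=wave, sr=SR, n_mels=64)
    return torch.from_numpy(librosa.power_to_db(mel)).float().unsqueeze(0)

class CoughNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.out = nn.Linear(16, 1)  # logit for COVID-19 positive

    def forward(self, x):
        return self.out(self.body(x).flatten(1))

net = CoughNet()  # untrained stand-in
prob = torch.sigmoid(net(audio_to_logmel(np.random.randn(SR * 3)).unsqueeze(0)))
print(f"P(COVID-19 positive) = {prob.item():.2f}")
```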



Author(s):  
Oguz Akbilgic ◽  
Liam Butler ◽  
Ibrahim Karabayir ◽  
Patricia P Chang ◽  
Dalane W Kitzman ◽  
...  

Abstract Aims Heart failure (HF) is a leading cause of death. Early intervention is the key to reducing HF-related morbidity and mortality. This study assesses the utility of electrocardiograms (ECGs) in HF risk prediction. Methods and results Data from the baseline visits (1987-89) of the Atherosclerosis Risk in Communities (ARIC) study were used. Incident hospitalized HF events were ascertained by ICD codes. Participants with good-quality baseline ECGs were included; participants with prevalent HF were excluded. An ECG-artificial intelligence (AI) model to predict HF was created as a deep residual convolutional neural network (CNN) utilizing the standard 12-lead ECG. The area under the receiver operating characteristic curve (AUC) was used to evaluate prediction models, including the CNN, light gradient boosting machines (LGBM), and Cox proportional hazards regression. A total of 14,613 participants (45% male, 73% white, mean age ± standard deviation of 54 ± 5) were eligible. A total of 803 (5.5%) participants developed HF within 10 years from baseline. The CNN utilizing solely the ECG achieved an AUC of 0.756 (0.717-0.795) on the hold-out test data. The ARIC and Framingham Heart Study (FHS) HF risk calculators yielded AUCs of 0.802 (0.750-0.850) and 0.780 (0.740-0.830), respectively. The highest AUC of 0.818 (0.778-0.859) was obtained when the ECG-AI model output, age, gender, race, body mass index, smoking status, prevalent coronary heart disease, diabetes mellitus, systolic blood pressure, and heart rate were used as predictors of HF within LGBM. The ECG-AI model output was the most important predictor of HF. Conclusions The ECG-AI model, based solely on information extracted from the ECG, independently predicts HF with accuracy comparable to the existing FHS and ARIC risk calculators.
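The best-performing configuration stacks the ECG-AI model output with clinical covariates inside a LightGBM classifier. A minimal sketch of that fusion step on synthetic data (the covariate list follows the abstract; everything else is an assumption):

```python
# Fusing a CNN output probability with tabular covariates via LightGBM.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
ecg_ai_output = rng.random(n)          # CNN-predicted HF probability (synthetic)
clinical = rng.normal(size=(n, 9))     # age, gender, race, BMI, smoking, CHD,
                                       # diabetes, systolic BP, heart rate
X = np.column_stack([ecg_ai_output, clinical])
y = (ecg_ai_output + 0.1 * rng.normal(size=n) > 0.7).astype(int)  # toy outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_tr, y_tr)
print(f"hold-out AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```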



2018 ◽  
Author(s):  
Rumeng Li ◽  
Baotian Hu ◽  
Feifan Liu ◽  
Weisong Liu ◽  
Francesca Cunningham ◽  
...  

BACKGROUND Bleeding events are common and critical and may cause significant morbidity and mortality. High incidences of bleeding events are associated with cardiovascular disease in patients on anticoagulant therapy. Prompt and accurate detection of bleeding events is essential to prevent serious consequences. As bleeding events are often described in clinical notes, automatic detection of bleeding events from electronic health record (EHR) notes may improve drug-safety surveillance and pharmacovigilance. OBJECTIVE We aimed to develop a natural language processing (NLP) system to automatically classify whether an EHR note sentence contains a bleeding event. METHODS We expert-annotated 878 EHR notes (76,577 sentences and 562,630 word-tokens) to identify bleeding events at the sentence level. This annotated corpus was used to train and validate our NLP systems. We developed an innovative hybrid convolutional neural network (CNN) and long short-term memory (LSTM) autoencoder (HCLA) model that integrates a CNN architecture with a bidirectional LSTM (BiLSTM) autoencoder to leverage large volumes of unlabeled EHR data. RESULTS HCLA achieved the best area under the receiver operating characteristic curve (0.957) and F1 score (0.938) in identifying whether a sentence contains a bleeding event, surpassing strong baselines including support vector machines and other CNN and autoencoder models. CONCLUSIONS By incorporating a supervised CNN model and a pretrained unsupervised BiLSTM autoencoder, the HCLA model achieved high performance in detecting bleeding events.
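The sketch below shows one way to wire a hybrid of the two components named here: a CNN branch over word embeddings concatenated with a BiLSTM sentence encoding (which the paper pretrains as an autoencoder on unlabeled notes). Dimensions, vocabulary size, and pooling choices are assumptions for illustration, not the HCLA specification.

```python
# A minimal hybrid CNN + BiLSTM sentence classifier sketch; sizes are assumed.
import torch
import torch.nn as nn

class HybridSentenceClassifier(nn.Module):
    def __init__(self, vocab_size=20_000, emb_dim=100, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(hidden + 2 * hidden, 1)  # CNN + BiLSTM features

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        e = self.emb(tokens)                    # (batch, seq, emb)
        c = torch.relu(self.conv(e.transpose(1, 2))).max(dim=2).values  # max-pool
        _, (h, _) = self.lstm(e)                # final forward/backward states
        r = torch.cat([h[0], h[1]], dim=1)
        return self.out(torch.cat([c, r], dim=1))  # logit: sentence has bleeding

clf = HybridSentenceClassifier()
logit = clf(torch.randint(0, 20_000, (2, 30)))  # two toy 30-token sentences
print(torch.sigmoid(logit).squeeze(1))
```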



2021 ◽  
Vol 8 ◽  
Author(s):  
Ann N. Allen ◽  
Matt Harvey ◽  
Lauren Harrell ◽  
Aren Jansen ◽  
Karlina P. Merkens ◽  
...  

Passive acoustic monitoring is a well-established tool for researching the occurrence, movements, and ecology of a wide variety of marine mammal species. Advances in hardware and data collection have exponentially increased the volumes of passive acoustic data collected, such that discoveries are now limited by the time required to analyze rather than collect the data. In order to address this limitation, we trained a deep convolutional neural network (CNN) to identify humpback whale song in over 187,000 h of acoustic data collected at 13 different monitoring sites in the North Pacific over a 14-year period. The model successfully detected 75 s audio segments containing humpback song with an average precision of 0.97 and average area under the receiver operating characteristic curve (AUC-ROC) of 0.992. The model output was used to analyze spatial and temporal patterns of humpback song, corroborating known seasonal patterns in the Hawaiian and Mariana Islands, including occurrence at remote monitoring sites beyond well-studied aggregations, as well as novel discovery of humpback whale song at Kingman Reef, at 5° North latitude. This study demonstrates the ability of a CNN trained on a small dataset to generalize well to a highly variable signal type across a diverse range of recording and noise conditions. We demonstrate the utility of active learning approaches for creating high-quality models in specialized domains where annotations are rare. These results validate the feasibility of applying deep learning models to identify highly variable signals across broad spatial and temporal scales, enabling new discoveries through combining large datasets with cutting edge tools.
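Operationally, detection at this scale amounts to sliding a scorer over consecutive 75 s segments of each recording. A minimal sketch, with `score_segment` as a placeholder for the trained CNN and an assumed sample rate:

```python
# Windowed scanning of a long recording; the scorer is a toy placeholder.
import numpy as np

SR = 10_000          # assumed sample rate (Hz)
WINDOW_S = 75        # segment length used in the study

def score_segment(segment: np.ndarray) -> float:
    """Placeholder for the trained humpback-song CNN."""
    return float(np.clip(np.abs(segment).mean() * 10, 0, 1))

def scan_recording(audio: np.ndarray, threshold: float = 0.5):
    hop = SR * WINDOW_S
    hits = []
    for start in range(0, len(audio) - hop + 1, hop):
        p = score_segment(audio[start:start + hop])
        if p >= threshold:
            hits.append((start / SR, p))  # (segment start in seconds, score)
    return hits

audio = np.random.randn(SR * 60 * 10) * 0.05                  # ten minutes of noise
audio[SR * 150 : SR * 225] += np.random.randn(SR * 75) * 0.2  # louder "song" segment
print(scan_recording(audio))
```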



2020 ◽  
pp. 147592172096544
Author(s):  
Aravinda S Rao ◽  
Tuan Nguyen ◽  
Marimuthu Palaniswami ◽  
Tuan Ngo

With the growing amount of aging infrastructure across the world, there is high demand for more effective inspection methods to assess its condition. Routine assessment of structural conditions is a necessity to ensure the safety and operation of critical infrastructure. However, the current practice of detecting structural damage, such as cracks, depends on human visual observation, which raises efficiency, cost, and safety concerns. In this article, we present an automated detection method, based on convolutional neural network models and a non-overlapping window-based approach, to detect crack/non-crack conditions of concrete structures from images. To this end, we construct a data set of crack/non-crack concrete structures, comprising 32,704 training patches, 2,074 validation patches, and 6,032 test patches. We evaluate the performance of our approach using 15 state-of-the-art convolutional neural network models in terms of the number of parameters required to train the models, area under the curve, and inference time. Our approach provides over 95% accuracy and over 87% precision in detecting cracks for most of the convolutional neural network models. We also show that our approach outperforms existing models in the literature in terms of accuracy and inference time. The best performance in terms of area under the curve was achieved by the Visual Geometry Group 16-layer (VGG-16) model (area under the curve = 0.9805), and the best inference time was achieved by AlexNet (0.32 s per image of size 256 × 256 × 3). Our evaluation shows that deeper convolutional neural network models have higher detection accuracies; however, they also require more parameters and have higher inference times. We believe that this study can act as a benchmark for real-time, automated crack detection for condition assessment of infrastructure.
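The non-overlapping window approach tiles each image into patches and classifies every patch independently. A minimal sketch with an assumed 64-pixel patch size and a tiny untrained stand-in for the evaluated CNNs:

```python
# Non-overlapping window crack detection; patch size and model are assumed.
import numpy as np
import torch
import torch.nn as nn

PATCH = 64  # assumed patch size in pixels

patch_cnn = nn.Sequential(            # stand-in for any of the 15 evaluated CNNs
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 1),
)

def crack_map(image: np.ndarray) -> np.ndarray:
    """Return a per-patch crack probability grid for an (H, W, 3) image."""
    h, w, _ = image.shape
    grid = np.zeros((h // PATCH, w // PATCH))
    for i in range(h // PATCH):
        for j in range(w // PATCH):
            patch = image[i*PATCH:(i+1)*PATCH, j*PATCH:(j+1)*PATCH]
            x = torch.from_numpy(patch).float().permute(2, 0, 1).unsqueeze(0) / 255
            grid[i, j] = torch.sigmoid(patch_cnn(x)).item()
    return grid

probs = crack_map(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
print((probs > 0.5).astype(int))      # 1 = patch flagged as containing a crack
```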





Symmetry ◽  
2018 ◽  
Vol 10 (11) ◽  
pp. 607 ◽  
Author(s):  
Jianwei Lu ◽  
Yixuan Xu ◽  
Mingle Chen ◽  
Ye Luo

Fundus vessel analysis is a significant tool for evaluating the development of retinal diseases such as diabetic retinopathy and hypertension in clinical practice. Hence, automatic fundus vessel segmentation is essential and valuable for medical diagnosis in ophthalmopathy and will allow identification and extraction of relevant symmetric and asymmetric patterns. Further, due to the uniqueness of fundus vessels, they can be applied in the field of biometric identification. In this paper, we recast fundus vessel segmentation as a pixel-wise classification task and propose a novel coarse-to-fine fully convolutional neural network (CF-FCN) to extract vessels from fundus images. Our CF-FCN aims to make full use of the original image information and to compensate for the coarse output of the neural network by harnessing the spatial relationships between pixels in fundus images. Together with the necessary pre-processing and post-processing operations, the efficacy and efficiency of our CF-FCN are corroborated through experiments on the DRIVE, STARE, HRF and CHASE DB1 datasets. It achieves a sensitivity of 0.7941, specificity of 0.9870, accuracy of 0.9634 and Area Under Receiver Operating Characteristic Curve (AUC) of 0.9787 on the DRIVE dataset, surpassing state-of-the-art approaches.
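The coarse-to-fine idea can be sketched as two fully convolutional passes: a coarse network produces an initial vessel map, and a refinement network sees the image together with that map. Both stages below are tiny stand-ins, not the authors' CF-FCN architecture:

```python
# A minimal coarse-to-fine segmentation sketch; both stages are toy networks.
import torch
import torch.nn as nn

def fcn(in_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),              # per-pixel vessel logit
    )

coarse_net = fcn(in_ch=1)                 # single-channel fundus input (assumed)
fine_net = fcn(in_ch=2)                   # original image + coarse map

image = torch.rand(1, 1, 128, 128)
coarse = torch.sigmoid(coarse_net(image))
fine = torch.sigmoid(fine_net(torch.cat([image, coarse], dim=1)))
print(fine.shape)                         # (1, 1, 128, 128) pixel probabilities
```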



BMJ Open ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. e045120
Author(s):  
Robert Arntfield ◽  
Blake VanBerlo ◽  
Thamer Alaifan ◽  
Nathan Phelps ◽  
Matthew White ◽  
...  

Objectives Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool but is challenged by user dependence and lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning (DL) techniques to match or exceed human-level diagnostic specificity among similar-appearing, pathological LUS images. Design A convolutional neural network (CNN) was trained on LUS images with B lines of different aetiologies. CNN diagnostic performance, as validated using a 10% data holdback set, was compared with surveyed LUS-competent physicians. Setting Two tertiary Canadian hospitals. Participants 612 LUS videos (121,381 frames) of B lines from 243 distinct patients with either (1) COVID-19 (COVID), (2) non-COVID acute respiratory distress syndrome (NCOVID) or (3) hydrostatic pulmonary edema (HPE). Results The trained CNN performance on the independent dataset showed an ability to discriminate between COVID (area under the receiver operating characteristic curve (AUC) 1.0), NCOVID (AUC 0.934) and HPE (AUC 1.0) pathologies. This was significantly better than physician ability (AUCs of 0.697, 0.704 and 0.967 for the COVID, NCOVID and HPE classes, respectively), p<0.01. Conclusions A DL model can distinguish similar-appearing LUS pathology, including COVID-19, that cannot be distinguished by humans. The performance gap between humans and the model suggests that subvisible biomarkers may exist within ultrasound images, and multicentre research is merited.
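The per-class AUCs reported here correspond to one-vs-rest comparisons of the network's softmax scores against the true labels. A minimal sketch with random placeholder scores standing in for the CNN's predictions:

```python
# One-vs-rest AUC per class for a 3-class classifier; scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
classes = ["COVID", "NCOVID", "HPE"]
y_true = rng.integers(0, 3, size=1000)
scores = rng.dirichlet(np.ones(3), size=1000)   # rows sum to 1, like softmax

for k, name in enumerate(classes):
    auc = roc_auc_score((y_true == k).astype(int), scores[:, k])
    # with random scores these AUCs hover near 0.5; a trained model's would not
    print(f"{name}: one-vs-rest AUC = {auc:.3f}")
```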


