A new analgesic index for postoperative pain assessment based on a photoplethysmographic spectrogram and convolutional neural network (Preprint)

2020 ◽  
Author(s):  
Byung-Moon Choi ◽  
Ji Yeon Yim ◽  
Hangsik Shin ◽  
Gyu-Jeong Noh

BACKGROUND Although commercially available analgesic indices based on biosignal processing have been used to quantify nociception during general anaesthesia, the performance of these indices is poor in awake patients. Therefore, a new analgesic index with improved performance is needed to quantify postoperative pain in awake patients. OBJECTIVE The aim of this study was to develop a new analgesic index using the spectrogram of the photoplethysmogram and a convolutional neural network to objectively assess pain in awake patients. METHODS Photoplethysmograms (PPGs) were obtained for 6 min both in the absence (preoperatively) and presence (postoperatively) of pain in a group of surgical patients. Excluding the first minute, the remaining 5 min of PPG data were used for analysis. Based on the PPG spectrogram and a convolutional neural network, we developed a spectrogram-CNN index (SCI) for pain assessment. The area under the curve (AUC) of the receiver-operating characteristic (ROC) curve was measured to evaluate the performance of the two indices. RESULTS PPGs from 100 patients were used to develop the SCI. When there was pain, the mean [95% confidence interval, CI] SCI value increased significantly (baseline: 28.5 [24.2 - 30.7] vs. recovery area: 65.7 [60.5 - 68.3]; P<0.01). The AUC of the ROC curve and the balanced accuracy were 0.76 and 71.4%, respectively. The cut-off value for detecting pain was 48 on the SCI, with a sensitivity of 68.3% and a specificity of 73.8%. CONCLUSIONS Although there were limitations to the study design, we confirmed that the SCI can efficiently detect postoperative pain in conscious patients. Further studies are needed to assess feasibility and prevent overfitting in various populations, including patients under general anaesthesia. CLINICALTRIAL KCT0002080
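As an illustration of the first processing step the abstract describes, the sketch below turns a 1-D signal into a spectrogram by short-time Fourier transform. This is not the authors' code: the window length, hop size, and the synthetic "PPG" waveform are all arbitrary assumptions.

```python
# Minimal STFT spectrogram sketch (pure Python); parameters are illustrative only.
import math

def stft_spectrogram(signal, win_len=64, hop=32):
    """Return a list of power spectra (one per frame) for a 1-D signal."""
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len]
        # Hann window reduces spectral leakage at the frame edges.
        windowed = [x * 0.5 * (1 - math.cos(2 * math.pi * n / (win_len - 1)))
                    for n, x in enumerate(frame)]
        spectrum = []
        for k in range(win_len // 2 + 1):  # keep non-negative frequencies only
            re = sum(x * math.cos(-2 * math.pi * k * n / win_len)
                     for n, x in enumerate(windowed))
            im = sum(x * math.sin(-2 * math.pi * k * n / win_len)
                     for n, x in enumerate(windowed))
            spectrum.append(re * re + im * im)  # power at bin k
        frames.append(spectrum)
    return frames

# Synthetic stand-in for a PPG: a 1.2 Hz pulse sampled at 64 Hz for 5 s.
fs = 64
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(fs * 5)]
spec = stft_spectrogram(sig)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])  # ~1 Hz bin
```

The resulting time-frequency matrix is the kind of 2-D input a CNN classifier can consume; in practice a library routine (e.g. an FFT-based one) would replace the naive DFT loop.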


10.2196/23920 ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. e23920
Author(s):  
Byung-Moon Choi ◽  
Ji Yeon Yim ◽  
Hangsik Shin ◽  
Gyujeong Noh

Background Although commercially available analgesic indices based on biosignal processing have been used to quantify nociception during general anesthesia, their performance is low in conscious patients. Therefore, there is a need to develop a new analgesic index with improved performance to quantify postoperative pain in conscious patients. Objective This study aimed to develop a new analgesic index using photoplethysmogram (PPG) spectrograms and a convolutional neural network (CNN) to objectively assess pain in conscious patients. Methods PPGs were obtained from a group of surgical patients for 6 minutes both in the absence (preoperatively) and in the presence (postoperatively) of pain. Then, the PPG data of the latter 5 minutes were used for analysis. Based on the PPGs and a CNN, we developed a spectrogram–CNN index for pain assessment. The area under the curve (AUC) of the receiver-operating characteristic curve was measured to evaluate the performance of the 2 indices. Results PPGs from 100 patients were used to develop the spectrogram–CNN index. When there was pain, the mean (95% CI) spectrogram–CNN index value increased significantly—baseline: 28.5 (24.2-30.7) versus recovery area: 65.7 (60.5-68.3); P<.01. The AUC and balanced accuracy were 0.76 and 71.4%, respectively. The spectrogram–CNN index cutoff value for detecting pain was 48, with a sensitivity of 68.3% and specificity of 73.8%. Conclusions Although there were limitations to the study design, we confirmed that the spectrogram–CNN index can efficiently detect postoperative pain in conscious patients. Further studies are required to assess the spectrogram–CNN index’s feasibility and prevent overfitting to various populations, including patients under general anesthesia. Trial Registration Clinical Research Information Service KCT0002080; https://cris.nih.go.kr/cris/search/search_result_st01.jsp?seq=6638
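The evaluation the abstract reports (AUC of the ROC curve, plus sensitivity and specificity at an index cutoff) can be sketched in pure Python. This is not the study's code, and the scores and labels below are invented for illustration.

```python
# Hedged sketch of the reported metrics; all data here are made up.
def roc_auc(scores, labels):
    """AUC = probability a random positive outscores a random negative (rank statistic)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when score >= cutoff is called 'pain'."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    return tp / labels.count(1), tn / labels.count(0)

scores = [20, 52, 45, 40, 55, 70, 35, 60, 25, 65]  # hypothetical index values
labels = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]            # 1 = postoperative pain
auc = roc_auc(scores, labels)
sens, spec = sens_spec(scores, labels, cutoff=48)
```

The rank formulation is equivalent to integrating the ROC curve, which is why no explicit curve needs to be constructed.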



2018 ◽  
Vol 10 (1) ◽  
pp. 57-64 ◽  
Author(s):  
Rizqa Raaiqa Bintana ◽  
Chastine Fatichah ◽  
Diana Purwitasari

Community-based question answering (CQA) platforms are formed to help people find the information they need through a community. When users cannot find the information they need, they post a new question; over time this inflates the CQA archive with duplicated questions. It is therefore an important problem to find questions in the CQA archive that are semantically similar to a new question. In this study, we use a convolutional neural network for semantic sentence modeling to obtain words that represent the content of the archived documents and the new question. Using the convolutional neural network to retrieve questions semantically similar to a new question (query) from the question-answer document archive, the mean average precision obtained was 0.422, whereas a vector space model, used as a comparison, obtained a mean average precision of 0.282. Index Terms—community-based question answering, convolutional neural network, question retrieval
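The mean average precision (MAP) figure quoted above can be computed as follows; this is a standard retrieval-metric sketch with invented queries, not the paper's code.

```python
# MAP sketch: average precision per query, then the mean over queries.
def average_precision(ranked, relevant):
    """AP over one ranked result list; precision is accumulated at each hit."""
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Two hypothetical queries: ranked archive questions and their relevant sets.
runs = [
    (["q7", "q3", "q9", "q1"], {"q3", "q1"}),  # hits at ranks 2 and 4
    (["q2", "q8", "q5"], {"q2"}),              # hit at rank 1
]
map_score = mean_average_precision(runs)       # (0.5 + 1.0) / 2
```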



2021 ◽  
Vol 18 (1) ◽  
pp. 172988142199332
Author(s):  
Xintao Ding ◽  
Boquan Li ◽  
Jinbao Wang

Indoor object detection is a demanding and important task for robot applications. Object knowledge, such as two-dimensional (2D) shape and depth information, may be helpful for detection. In this article, we focus on region-based convolutional neural network (CNN) detectors and propose a geometric property-based Faster R-CNN method (GP-Faster) for indoor object detection. GP-Faster incorporates geometric properties into Faster R-CNN to improve detection performance. In detail, we first use mesh grids that are the intersections of direct and inverse proportion functions to generate appropriate anchors for indoor objects. After the anchors are regressed to the regions of interest produced by a region proposal network (RPN-RoIs), we then use 2D geometric constraints to refine the RPN-RoIs, in which the 2D constraint for each class is a convex hull region enclosing the width and height coordinates of the ground-truth boxes in the training set. Comparison experiments are implemented on two indoor datasets, SUN2012 and NYUv2. Since depth information is available in NYUv2, we incorporate depth constraints into GP-Faster and propose a 3D geometric property-based Faster R-CNN (DGP-Faster) for NYUv2. The experimental results show that both GP-Faster and DGP-Faster improve mean average precision.
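As I read it, the 2D constraint keeps a proposal only if its (width, height) falls inside the convex hull of ground-truth box sizes for that class. A hedged sketch of that membership test follows; the hull vertices and the "chair" class are hypothetical, and building the hull from training boxes (e.g. with Andrew's monotone chain) is omitted for brevity.

```python
# Point-in-convex-polygon test used to filter implausible proposal shapes.
def inside_convex(point, hull):
    """True if point lies inside a convex polygon given in counter-clockwise order."""
    x, y = point
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        # For a CCW polygon, the cross product must be non-negative on every edge.
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

# Hypothetical hull of (width, height) pairs seen for class "chair" in training.
chair_hull = [(20, 30), (80, 30), (100, 120), (25, 110)]
keep = inside_convex((50, 60), chair_hull)   # plausible chair-sized proposal
drop = inside_convex((300, 10), chair_hull)  # implausible shape, filtered out
```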



2021 ◽  
Vol 7 (2) ◽  
pp. 356-362
Author(s):  
Harry Coppock ◽  
Alex Gaskell ◽  
Panagiotis Tzirakis ◽  
Alice Baird ◽  
Lyn Jones ◽  
...  

Background Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution. Methods This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep neural network based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings. Results Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set consisting of 355 crowdsourced participants, achieving an area under the curve of the receiver operating characteristics of 0.846 on the task of COVID-19 classification. Conclusion This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up research study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.



2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jingwei Liu ◽  
Peixuan Li ◽  
Xuehan Tang ◽  
Jiaxin Li ◽  
Jiaming Chen

Abstract Artificial neural networks (ANNs), which include deep learning neural networks (DNNs), have problems such as the local-minimum problem of the back-propagation neural network (BPNN), the instability of the radial basis function neural network (RBFNN), and the limited maximum precision of the convolutional neural network (CNN). The performance (training speed, precision, etc.) of BPNN, RBFNN, and CNN is expected to be improved. The main work is as follows. Firstly, based on the existing BPNN and RBFNN, a wavelet neural network (WNN) is implemented to obtain better performance for further improving CNN. WNN adopts the network structure of BPNN to obtain a faster training speed, and adopts a wavelet function as the activation function, whose form is similar to the radial basis function of RBFNN, to solve the local-minimum problem. Secondly, a WNN-based convolutional wavelet neural network (CWNN) method is proposed, in which the fully connected layers (FCL) of CNN are replaced by WNN. Thirdly, comparative simulations of BPNN, RBFNN, CNN, and CWNN on the MNIST and CIFAR-10 datasets are implemented and analyzed. Fourthly, a wavelet-based convolutional neural network (WCNN) is proposed, in which the wavelet transformation is adopted as the activation function in the convolutional pool neural network (CPNN) of CNN. Fifthly, simulations based on CWNN are implemented and analyzed on the MNIST dataset. The effects are as follows. Firstly, WNN can solve the problems of BPNN and RBFNN and has better performance. Secondly, the proposed CWNN can reduce the mean square error and the error rate of CNN, which means CWNN has better maximum precision than CNN. Thirdly, the proposed WCNN can reduce the mean square error and the error rate of CWNN, which means WCNN has better maximum precision than CWNN.
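The abstract does not name the wavelet family, so the sketch below uses the Mexican-hat (Ricker) wavelet as a stand-in activation for a single WNN-style neuron; the shift/scale parameters and the example inputs are assumptions, not the authors' implementation.

```python
# A WNN-style neuron: weighted sum passed through a shifted/scaled wavelet.
import math

def mexican_hat(t):
    """Ricker wavelet: proportional to the second derivative of a Gaussian."""
    return (1 - t * t) * math.exp(-t * t / 2)

def wavelet_neuron(inputs, weights, shift=0.0, scale=1.0):
    """Replace a sigmoid/ReLU with a wavelet activation, as in a WNN."""
    z = sum(w * x for w, x in zip(weights, inputs))
    return mexican_hat((z - shift) / scale)

out = wavelet_neuron([0.5, -0.2], [1.0, 2.0])  # z = 0.1, near the wavelet peak
```

Unlike a sigmoid, the wavelet is localized (it decays to zero away from its center), which is the property the abstract links to escaping local minima, similarly to RBF units.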



1983 ◽  
Vol 11 (1) ◽  
pp. 27-30 ◽  
Author(s):  
D. A. Pybus ◽  
B. E. D'Bras ◽  
G. Goulding ◽  
H. Liberman ◽  
T. A. Torda

Seventy patients undergoing haemorrhoidectomy under general anaesthesia were randomly allocated to one of five treatment groups in order to compare the effectiveness of various caudal agents in the control of postoperative pain. Four groups were given a caudal injection of either 2% lignocaine, 0.5% bupivacaine, 2% lignocaine + morphine sulphate 4 mg or normal saline + morphine sulphate 4 mg, while the fifth (control) group did not receive an injection. The number of patients requiring postoperative opiates was significantly higher in the lignocaine group than in the morphine (p <0.05) and morphine-lignocaine (p <0.05) groups. No agent significantly reduced the number requiring opiates compared with the control group. In those who received opiates, the mean analgesic period was 228 minutes in the control group, and was significantly longer following bupivacaine (577 min, p <0.01), morphine-lignocaine (637 min, p <0.05) and morphine (665 min, p <0.01). The mean analgesic period following lignocaine (349 min) was not significantly different from control. The incidence of catheterisation was lowest in those patients who did not receive caudal analgesia.



Author(s):  
David Baur ◽  
Richard Bieck ◽  
Johann Berger ◽  
Juliane Neumann ◽  
Jeanette Henkelmann ◽  
...  

Abstract Purpose This single-center study aimed to develop a convolutional neural network to segment multiple consecutive axial magnetic resonance imaging (MRI) slices of the lumbar spinal muscles of patients with lower back pain and automatically classify fatty muscle degeneration. Methods We developed a fully connected deep convolutional neural network (CNN) with a pre-trained U-Net model, trained on a dataset of 3,650 axial T2-weighted MRI images from 100 patients with lower back pain. We included MRI scans of all qualities; the exclusion criteria were fractures, tumors, infection, or spine implants. Training was performed using k-fold cross-validation (k = 10), and performance was evaluated using the dice similarity coefficient (DSC) and cross-sectional area error (CSA error). For clinical correlation, we used a simplified Goutallier classification (SGC) system with three classes. Results The mean DSC was high for overall muscle (0.91) and muscle tissue segmentation (0.83) but showed deficiencies in fatty tissue segmentation (0.51). The CSA error was small for the overall muscle area (8.42%), while fatty tissue segmentation showed a high mean CSA error of 40.74%. The SGC classification was correctly predicted in 75% of the patients. Conclusion Our fully connected CNN segmented overall muscle and muscle tissue with high precision and recall, as well as good DSC values. The mean predicted SGC values of all available patient axial slices showed promising results. With an overall error of 25%, further development is needed before clinical implementation. Larger datasets and training of other model architectures are required to segment fatty tissue more accurately.
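The two segmentation metrics reported above have simple definitions, sketched here in pure Python over flat binary masks; the masks are made up and this is not the study's pipeline.

```python
# Dice similarity coefficient and cross-sectional-area (CSA) error.
def dice(pred, truth):
    """DSC = 2|P ∩ T| / (|P| + |T|) over flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def csa_error(pred, truth):
    """Relative area error in percent: |area_pred - area_truth| / area_truth."""
    return abs(sum(pred) - sum(truth)) / sum(truth) * 100

truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 1, 0, 0, 0]  # one missed pixel, one false positive
dsc = dice(pred, truth)           # 2*3 / (4+4)
err = csa_error(pred, truth)      # areas are equal here, so 0 %
```

Note the two metrics capture different failures: the example prediction gets the total area exactly right (CSA error 0%) while still misplacing pixels (DSC 0.75), which is why the study reports both.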



Author(s):  
Caner Ediz ◽  
Serkan Akan ◽  
Neslihan Kaya Terzi ◽  
Aysenur Ihvan

Background: To discuss the necessity of a second prostate biopsy in patients with atypical small acinar proliferation (ASAP) and to develop a scoring system and risk table as new re-biopsy criteria. Methods: 2845 patients who underwent transrectal ultrasonography-guided prostate biopsy between January 2008 and May 2019 were evaluated; 128 patients whose data were available were enrolled in the study. Before the first and second biopsies, tPSA, fPSA, the f/tPSA ratio, and PSA density were assessed, and changes in these parameters between the two biopsies were recorded. The "ASAP Scoring System and risk table" (ASS-RT) was evaluated before the second biopsy. Results: The mean age of the 128 patients with ASAP was 62.9±7.8 years. The ASS-RT scores of patients with PCa were statistically significantly higher than those of patients without PCa (p = 0.001). In the ROC curve analysis of the ASS-RT, the area under the curve was 0.804 with a standard error of 0.04; the area under the ROC curve was significantly higher than 0.5 (p = 0.001). The cut-off ASS-RT score for the diagnosis of malignancy was ≥ 7, with a sensitivity of 60.8% and a specificity of 80.5%. Conclusions: A threshold ASS-RT score of 7 may be used, with a second biopsy performed immediately in patients above this value. A second biopsy may be unnecessary if the ASS-RT score is below 7 before the second biopsy (especially in the low-risk group).
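A cutoff such as "ASS-RT ≥ 7" is typically chosen from the ROC analysis by maximizing Youden's J = sensitivity + specificity − 1. The sketch below illustrates that selection on invented scores; it is not the study's analysis.

```python
# Scan candidate cutoffs and keep the one maximizing Youden's J statistic.
def best_cutoff(scores, labels):
    best_c, best_j = None, -1.0
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= c)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < c)
        j = tp / labels.count(1) + tn / labels.count(0) - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

scores = [3, 4, 5, 6, 7, 8, 9, 10, 6, 8]  # hypothetical ASS-RT scores
labels = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]   # 1 = prostate cancer on re-biopsy
cutoff, j = best_cutoff(scores, labels)
```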



2020 ◽  
Vol 10 (3) ◽  
pp. 78-84
Author(s):  
Seleno Glauber de Jesus-Silva ◽  
Ana Elisa Chaves ◽  
Caio Augusto Alves Maciel ◽  
Edson Eziel Ferreira Scotini ◽  
Pablo Girardelli Mendonça Mesquita ◽  
...  

Objectives: To assess the incidence of contrast-induced nephropathy (CIN) and determine the Mehran score's (MS) ability to predict CIN in patients undergoing digital angiography (DA) or computed tomography angiography (CTA). Methods: 252 medical records of inpatients who underwent DA or CTA over 28 months in a quaternary hospital were reviewed. CIN was defined as an increase in serum creatinine > 0.5 mg/dL or > 25% over baseline, 48 h after administration of iodinated contrast. The ROC curve and the area under the curve (AUC) were used to test the score. Results: The majority (159; 63.1%) were male, and the mean age was 60.4 years. Anemia, diabetes mellitus, and age > 75 years were the most prevalent risk factors. The incidence of CIN was 17.8% (n = 45). Mean creatinine decreased from pre to post among patients who did not develop CIN (1.38 ± 1.22 vs 1.19 ± 0.89; t = 3.433; p = 0.0007), while among patients who developed CIN the mean increase was 1.03 mg/dL (1.43 ± 1.48 vs 2.46 ± 2.35 mg/dL; t = 5.44; p = 0.117). ROC curve analysis identified a low correlation between the MS and the occurrence of CIN (AUC = 0.506). Conclusion: The incidence of CIN in hospitalized patients undergoing DA or CTA was high. The MS did not predict CIN.
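The CIN definition used in the study translates directly into code. This is a sketch of that criterion only (the example creatinine values are invented), not the study's analysis.

```python
# CIN per the stated criterion: creatinine rise > 0.5 mg/dL or > 25% of baseline at 48 h.
def has_cin(baseline_cr, cr_48h):
    """True if the 48-h creatinine meets either CIN threshold."""
    rise = cr_48h - baseline_cr
    return rise > 0.5 or rise > 0.25 * baseline_cr

a = has_cin(1.0, 1.2)  # +0.2 mg/dL and +20%: below both thresholds
b = has_cin(1.0, 1.3)  # +30% of baseline: meets the relative threshold
c = has_cin(2.0, 2.6)  # +0.6 mg/dL: meets the absolute threshold
```

Because the criterion is a disjunction, patients with high baseline creatinine can qualify via the absolute rise even when the relative rise stays under 25%, and vice versa.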



2020 ◽  
pp. 147592172096544
Author(s):  
Aravinda S Rao ◽  
Tuan Nguyen ◽  
Marimuthu Palaniswami ◽  
Tuan Ngo

With the growing amount of aging infrastructure across the world, there is high demand for a more effective inspection method to assess its condition. Routine assessment of structural condition is a necessity to ensure the safety and operation of critical infrastructure. However, the current practice for detecting structural damage, such as cracks, depends on human visual observation, which raises efficiency, cost, and safety concerns. In this article, we present an automated detection method, based on convolutional neural network models and a non-overlapping window-based approach, to detect crack/non-crack conditions of concrete structures from images. To this end, we construct a data set of crack/non-crack concrete structures, comprising 32,704 training patches, 2074 validation patches, and 6032 test patches. We evaluate the performance of our approach using 15 state-of-the-art convolutional neural network models in terms of the number of parameters required to train the models, area under the curve, and inference time. Our approach provides over 95% accuracy and over 87% precision in detecting cracks for most of the convolutional neural network models. We also show that our approach outperforms existing models in the literature in terms of accuracy and inference time. The best performance in terms of area under the curve was achieved by the visual geometry group-16 model (area under the curve = 0.9805), and the best inference time was provided by AlexNet (0.32 s per image of size 256 × 256 × 3). Our evaluation shows that deeper convolutional neural network models have higher detection accuracy; however, they also require more parameters and have higher inference times. We believe that this study will act as a benchmark for real-time, automated crack detection for condition assessment of infrastructure.
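The non-overlapping window step can be sketched as simple index arithmetic: the image is tiled into fixed-size patches, each classified independently as crack/non-crack. The geometry below (patch size matching the 256 × 256 image size mentioned above, border pixels dropped) is an assumption, not the authors' code.

```python
# Non-overlapping tiling: top-left (row, col) of every full patch in an image.
def tile_coords(height, width, patch=256):
    """Coordinates of non-overlapping patches; leftover border pixels are skipped."""
    return [(r, c)
            for r in range(0, height - patch + 1, patch)
            for c in range(0, width - patch + 1, patch)]

coords = tile_coords(1024, 768, patch=256)  # a 4 x 3 grid of patches
```

Each coordinate pair then indexes a `patch x patch` crop that is fed to the CNN; aggregating per-patch predictions yields the crack map for the whole image.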


