A tree-based approach for multi-class classification of surgical procedures using structured and unstructured data

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Tannaz Khaleghi ◽  
Alper Murat ◽  
Suzan Arslanturk

Abstract Background In surgical departments, CPT code assignment has been a complicated manual effort that entails significant domain knowledge and experience. While there are several studies using CPTs to make predictions in surgical services, the literature on predicting CPTs in surgical and other services using text features is very sparse. This study improves the prediction of CPTs by means of informative features and a novel re-prioritization algorithm. Methods The input data used in this study is composed of both structured and unstructured data. The ground truth labels (CPTs) are obtained from medical coding databases using relative value units, which indicate the major operational procedures in each surgery case. In the modeling process, we first utilize a Random Forest multi-class classification model to predict the CPT codes. Second, we extract key information such as label probabilities, feature importance measures, and medical term frequencies. Then, these factors are used in a novel algorithm to rearrange the alternative CPT codes in the list of potential candidates based on the calculated weights. Results To evaluate the performance of both phases, prediction and complementary improvement, we report the accuracy scores of multi-class CPT prediction tasks for datasets of 5 key surgery case specialties. The Random Forest model performs the classification task with 74–76% accuracy when predicting the primary CPT (accuracy@1) versus the CPT set (accuracy@2) with respect to two filtering conditions on CPT codes. The complementary algorithm improves the results from the initial step by 8% on average. Furthermore, the incorporated text features enhance the quality of the output by 20–35%. The model outperforms the state-of-the-art neural network model with respect to accuracy, precision, and recall. Conclusions We have established a robust framework based on a decision tree predictive model. We predict the surgical codes more accurately and robustly compared to state-of-the-art deep neural structures, which can help immensely with both surgery billing and scheduling in such units.
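To make the two-phase pipeline concrete, here is a minimal sketch in the spirit of the description above, not the authors' implementation: the TF-IDF text features, the `term_weights` dictionary, and the multiplicative re-weighting rule are hypothetical stand-ins for the paper's informative features and re-prioritization algorithm.

```python
# Illustrative sketch only, not the authors' code. The re-weighting rule
# below is a hypothetical stand-in for the paper's re-prioritization step.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

def train_and_rank(X_struct, notes, y, term_weights, top_k=2):
    # Combine structured features with TF-IDF features from surgical notes.
    tfidf = TfidfVectorizer(max_features=500)
    X = hstack([csr_matrix(X_struct), tfidf.fit_transform(notes)])

    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(X, y)

    # Phase 1: label probabilities for each surgery case.
    proba = rf.predict_proba(X)              # shape: (n_cases, n_classes)

    # Phase 2 (hypothetical): boost each candidate CPT by a weight
    # derived from medical-term frequency for that class.
    boost = np.array([term_weights.get(c, 1.0) for c in rf.classes_])
    scores = proba * boost                   # re-prioritized candidate scores
    order = np.argsort(-scores, axis=1)[:, :top_k]
    return rf.classes_[order]                # top-k CPT candidates per case
```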


Author(s):  
Anass Nouri ◽  
Christophe Charrier ◽  
Olivier Lezoray

This chapter concerns visual saliency and perceptual quality assessment of 3D meshes. First, the chapter proposes a definition of visual saliency and describes the state-of-the-art methods for its detection on 3D mesh surfaces. A focus is placed on a recent model of visual saliency detection for 3D colored and non-colored meshes, whose results are compared with ground-truth saliency as well as with methods from the literature. Since this model is able to estimate visual saliency on 3D colored meshes (termed colorimetric saliency), the chapter describes the construction of a 3D colored mesh database used to assess its relevance. The authors also describe three applications of the detailed model that address the problems of viewpoint selection, adaptive simplification, and adaptive smoothing. Second, two perceptual quality assessment metrics for 3D non-colored meshes are described, analyzed, and compared with state-of-the-art approaches.


2020 ◽  
Vol 34 (07) ◽  
pp. 12637-12644 ◽  
Author(s):  
Yibo Yang ◽  
Hongyang Li ◽  
Xia Li ◽  
Qijie Zhao ◽  
Jianlong Wu ◽  
...  

The panoptic segmentation task requires a unified result from semantic and instance segmentation outputs that may contain overlaps. However, current studies widely ignore modeling overlaps. In this study, we aim to model overlap relations among instances and resolve them for panoptic segmentation. Inspired by scene graph representation, we formulate the overlapping problem as a simplified case, named scene overlap graph. We leverage each object's category, geometry, and appearance features to perform relational embedding, and output a relation matrix that encodes overlap relations. In order to overcome the lack of supervision, we introduce a differentiable module to resolve the overlap between any pair of instances. The mask logits after removing overlaps are fed into per-pixel instance id classification, which leverages the panoptic supervision to assist in the modeling of overlap relations. In addition, we generate an approximate ground truth of overlap relations as weak supervision, to quantify the accuracy of the overlap relations predicted by our method. Experiments on COCO and Cityscapes demonstrate that our method accurately predicts overlap relations and outperforms state-of-the-art methods for panoptic segmentation. Our method also won the Innovation Award in the COCO 2019 challenge.
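The overlap-resolution step can be illustrated with a simplified sketch (not the paper's differentiable module): given binarized instance masks and a pairwise relation matrix where `relation[i, j] = 1` is assumed to mean instance i lies on top of instance j, contested pixels are suppressed for the occluded instance before per-pixel instance id classification.

```python
# Illustrative simplification of overlap resolution, not the paper's module.
import numpy as np

def resolve_overlaps(mask_logits, relation):
    """mask_logits: (n_instances, H, W) per-pixel instance logits (float).
    relation: (n_instances, n_instances), relation[i, j] = 1 if instance i
    is predicted to lie on top of instance j."""
    n = mask_logits.shape[0]
    masks = mask_logits > 0                      # binarized instance masks
    out = mask_logits.copy()
    for i in range(n):
        for j in range(n):
            if i != j and relation[i, j] > 0.5:
                overlap = masks[i] & masks[j]    # contested pixels
                out[j][overlap] = -np.inf        # j loses to i there
    # Per-pixel instance id classification over the cleaned logits.
    return out.argmax(axis=0)
```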


2013 ◽  
Vol 39 (4) ◽  
pp. 847-884 ◽  
Author(s):  
Emili Sapena ◽  
Lluís Padró ◽  
Jordi Turmo

This work focuses on machine learning research for coreference resolution. Coreference resolution is a natural language processing task that consists of determining the expressions in a discourse that refer to the same entity. The main contributions of this article are (i) a new approach to coreference resolution based on constraint satisfaction, using a hypergraph to represent the problem and solving it by relaxation labeling; and (ii) research towards improving coreference resolution performance using world knowledge extracted from Wikipedia. The developed approach is able to use an entity-mention classification model with more expressiveness than pair-based ones, and overcomes the weaknesses of previous state-of-the-art approaches, such as linking contradictions, classification without context, and lack of information when evaluating pairs. Furthermore, the approach allows the incorporation of new information by adding constraints, and research has been done on using world knowledge to improve performance. RelaxCor, the implementation of the approach, achieved state-of-the-art results and participated in the international competitions SemEval-2010 and CoNLL-2011, taking second place in CoNLL-2011.
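As a rough illustration of the relaxation labeling idea, independent of RelaxCor's actual constraint machinery, the toy loop below applies the classic update rule: each mention keeps a probability distribution over candidate entities, and labels receiving positive support from the (hyper)graph constraints gain probability mass until the assignment stabilizes. The `support_fn` abstraction is assumed, not taken from the paper.

```python
# Toy relaxation-labeling loop; the constraint machinery is abstracted away.
import numpy as np

def relaxation_labeling(init_probs, support_fn, n_iters=50):
    """init_probs: (n_mentions, n_entities) initial label probabilities.
    support_fn: maps the current probabilities to a same-shaped support
    matrix aggregated from the weighted (hyper)graph constraints."""
    probs = init_probs.copy()
    for _ in range(n_iters):
        s = support_fn(probs)                      # constraint support
        probs = probs * (1.0 + s)                  # reward supported labels
        probs = np.clip(probs, 1e-12, None)
        probs /= probs.sum(axis=1, keepdims=True)  # renormalize each mention
    return probs.argmax(axis=1)                    # entity id per mention
```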


Author(s):  
Ramgopal Kashyap

Decision makers require a versatile framework that responds and adjusts to ever-changing business conditions. An organization's transaction processing system can offer little support for decision making, since it is tied to individual transactions. In this situation, the decision support system (DSS) combines human abilities with the capabilities of computers to provide efficient management of data, reporting, analysis, modeling, and planning. A DSS distinguishes between structured, semi-structured, and unstructured data. Specifically, a DSS reduces the volume of data to a manageable, structured amount, on the basis of which decisions are made to support the manufacturing process. The objective of these systems is to prevent issues within the production process. This chapter gives an outline of the state-of-the-art literature on DSS and describes current DSS-related applications within manufacturing environments.


Author(s):  
Jin Chen ◽  
Defu Lian ◽  
Kai Zheng

One-class collaborative filtering (OCCF) problems are vital in many applications of recommender systems, such as news and music recommendation, but suffer from sparsity issues and a lack of negative examples. To address these problems, state-of-the-art methods assign smaller weights to unobserved samples and perform low-rank approximation. However, the ground-truth ratings of unobserved samples are usually set to zero even though they are ill-defined. In this paper, we propose a ranking-based implicit regularizer and provide a new general framework for OCCF that avoids assigning ground-truth ratings to unobserved samples. We then exploit the regularizer to regularize a ranking-based loss function and design efficient optimization algorithms to learn the model parameters. Finally, we evaluate them on three real-world datasets. The results show that the proposed regularizer significantly improves ranking-based algorithms and that the proposed framework outperforms state-of-the-art OCCF algorithms.
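To illustrate the general direction, learning from ranking comparisons instead of fitting zero ratings to unobserved samples, here is a sketch of one stochastic pairwise (BPR-style) update over matrix factorization parameters. The plain L2 penalty below is a stand-in, not the paper's ranking-based implicit regularizer.

```python
# Pairwise ranking sketch; the L2 term stands in for the paper's regularizer.
import numpy as np

def bpr_step(U, V, user, pos_item, neg_item, lr=0.05, reg=0.01):
    """One stochastic pairwise update: push the observed item above a
    sampled unobserved one, so no ground-truth rating is ever assigned
    to unobserved samples. U, V are user/item latent factor matrices."""
    u, i, j = U[user], V[pos_item], V[neg_item]
    x = u @ (i - j)                          # score margin
    g = 1.0 / (1.0 + np.exp(x))              # gradient of -log sigmoid(x)
    U[user]     += lr * (g * (i - j) - reg * u)
    V[pos_item] += lr * (g * u - reg * i)
    V[neg_item] += lr * (-g * u - reg * j)
```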


The crime rate is increasing over the years, and tracking convicted crimes remains a great challenge for the government. Each area exhibits a pattern in the types of crime that occur, and knowledge of these patterns is essential to controlling crime. Crimes occur in sequences that leave hidden patterns, so the crime data must be processed to find the underlying patterns; this project finds such patterns and insights in the crime data. Being largely unstructured, the data is preprocessed before predicting future values. Crimes convicted in a particular area (Indore, in our case) are collected and used for prediction with multi-class classification algorithms such as Random Forest; the future crimes expected in an area are predicted and visualizations are made accordingly. Several classification algorithms, including support vector machines, decision trees, and random forests, are used, and random forest shows the best accuracy. The input and output features are selected by visualizing the data with graphs and plots.
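A minimal sketch of the model comparison described above (dataset loading and feature engineering are omitted, and the feature/label arrays are assumed to be prepared):

```python
# Cross-validated comparison of the three model families named above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_classifiers(X, y):
    """Return mean 5-fold accuracy per model; in the project described
    here, the random forest scored highest."""
    models = {
        "svm": SVC(),
        "decision_tree": DecisionTreeClassifier(random_state=0),
        "random_forest": RandomForestClassifier(n_estimators=200,
                                                random_state=0),
    }
    return {name: cross_val_score(m, X, y, cv=5).mean()
            for name, m in models.items()}
```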


2019 ◽  
Author(s):  
Onur Can Uner ◽  
Ramazan Gokberk Cinbis ◽  
Oznur Tastan ◽  
A. Ercument Cicek

Abstract Drug failures due to unforeseen adverse effects at clinical trials pose health risks for the participants and lead to substantial financial losses. Side effect prediction algorithms have the potential to guide the drug design process. The LINCS L1000 dataset provides a vast resource of cell line gene expression data perturbed by different drugs and creates a knowledge base for context-specific features. The state-of-the-art approach that aims at using context-specific information relies only on the high-quality experiments in LINCS L1000 and discards a large portion of the experiments. In this study, our goal is to boost the prediction performance by utilizing this data to its full extent. We experiment with 5 deep learning architectures. We find that a multi-modal architecture produces the best predictive performance among multi-layer perceptron-based architectures when drug chemical structure (CS) and the full set of drug-perturbed gene expression profiles (GEX) are used as modalities. Overall, we observe that the CS is more informative than the GEX. A convolutional neural network-based model that uses only the SMILES string representation of the drugs achieves the best results and provides 13.0% macro-AUC and 3.1% micro-AUC improvements over the state-of-the-art. We also show that the model is able to predict side effect-drug pairs that are reported in the literature but were missing in the ground truth side effect dataset. DeepSide is available at http://github.com/OnurUner/DeepSide.
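As a hedged illustration of the best-performing configuration, a CNN over SMILES string representations, here is a small sketch of such a model; it is not DeepSide itself, and the vocabulary size, layer widths, and label count are assumed.

```python
# Illustrative 1-D CNN over one-hot encoded SMILES strings, not DeepSide.
import torch
import torch.nn as nn

class SmilesCNN(nn.Module):
    """Multi-label side-effect prediction from SMILES; dimensions assumed."""
    def __init__(self, vocab_size=64, n_side_effects=1000):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(vocab_size, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),           # pool over string length
        )
        self.head = nn.Linear(128, n_side_effects)

    def forward(self, x):                      # x: (batch, vocab, length)
        h = self.conv(x).squeeze(-1)           # (batch, 128)
        return self.head(h)                    # logits; use BCEWithLogitsLoss
```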


Author(s):  
Kuo-Liang Chung ◽  
Yu-Ling Tseng ◽  
Tzu-Hsien Chan ◽  
Ching-Sheng Wang

In this paper, we first propose a fast and effective region-based depth map upsampling method, and then propose a joint upsampling and location map-free reversible data hiding method, simply called the JUR method. In the proposed upsampling method, all the missing depth pixels are partitioned into three disjoint regions: the homogeneous, semi-homogeneous, and non-homogeneous regions. Then, we propose the depth copying, mean value, and bicubic interpolation approaches to quickly reconstruct the three kinds of missing depth pixels, respectively. The proposed JUR method embeds data without any location map overhead by using the neighboring ground truth depth pixels of each missing depth pixel, achieving substantial quality and embedding capacity merits. Comprehensive experiments have been carried out to justify not only the execution-time and quality merits of the depth maps upsampled by our method relative to the state-of-the-art methods, but also the embedding capacity and quality merits of our JUR method when compared with the state-of-the-art methods.
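The region-based reconstruction idea can be sketched as follows; this is an illustration under assumed inputs (precomputed boolean region masks at the target resolution), not the authors' pipeline, and bilinear interpolation stands in for the mean value approach.

```python
# Region-based depth upsampling sketch; region masks are assumed given.
import numpy as np
from scipy.ndimage import zoom

def upsample_depth(low_res, scale, homog_mask, semi_mask):
    """Homogeneous pixels copy the nearest low-res depth, semi-homogeneous
    pixels use a smooth (bilinear, here a stand-in for mean value)
    estimate, and the rest fall back to bicubic interpolation."""
    nearest = zoom(low_res, scale, order=0)     # depth copying
    smooth = zoom(low_res, scale, order=1)      # stand-in for mean value
    bicubic = zoom(low_res, scale, order=3)     # bicubic interpolation
    return np.where(homog_mask, nearest,
           np.where(semi_mask, smooth, bicubic))
```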


2018 ◽  
Vol 68 (5) ◽  
pp. 473-479 ◽  
Author(s):  
Divya Lakshmi Krishnan ◽  
Rajappa Muthaiah ◽  
Anand Madhukar Tapas ◽  
Krithivasan Kannan

Local features are key regions of an image suitable for applications such as image matching and fusion. Detection of targets under varying atmospheric conditions via aerial images is a typical defence application where multi-spectral correlation is essential. This study focuses on local features for the comparison of thermal and visual aerial images. The state-of-the-art differential and intensity-comparison-based features are evaluated over the dataset. An improved affine-invariant feature is proposed with a new saliency measure. The performances of the existing and the proposed features are measured with a ground truth transformation estimated for each of the image pairs. Among the state-of-the-art local features, Speeded Up Robust Features (SURF) exhibited the highest average repeatability, at 57 per cent. The proposed detector produces features with an average repeatability of 64 per cent. Future work includes the design of techniques for retrieval of corresponding regions.
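Repeatability against a ground truth transformation can be computed along these lines (an illustrative sketch assuming a 3x3 homography `H` and keypoint arrays of shape (N, 2)):

```python
# Repeatability of detected keypoints under a ground-truth transformation.
import numpy as np

def repeatability(kps_a, kps_b, H, tol=2.5):
    """Fraction of keypoints from image A that reappear within `tol`
    pixels in image B after warping by the 3x3 homography H."""
    pts = np.hstack([kps_a, np.ones((len(kps_a), 1))])  # homogeneous coords
    warped = (H @ pts.T).T
    warped = warped[:, :2] / warped[:, 2:3]             # back to 2-D
    d = np.linalg.norm(warped[:, None, :] - kps_b[None, :, :], axis=2)
    return (d.min(axis=1) <= tol).mean()                # repeatability score
```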

