CPT to RVU conversion improves model performance in the prediction of surgical case length

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nicholas Garside ◽  
Hamed Zaribafzadeh ◽  
Ricardo Henao ◽  
Royce Chung ◽  
Daniel Buckland

Abstract. Methods used to predict surgical case time often rely upon the current procedural terminology (CPT) code as a nominal variable to train machine-learned models; however, this limits the ability of the model to incorporate new procedures and adds complexity as the number of unique procedures increases. The relative value unit (RVU, a consensus-derived billing indicator) can serve as a proxy for procedure workload and could replace the CPT code as a primary feature for models that predict surgical case length. Using 11,696 surgical cases from Duke University Health System electronic health records data, we compared boosted decision tree models that predict individual case length, changing the method by which the model coded procedure type: CPT, RVU, and CPT–RVU combined. Performance of each model was assessed by inference time, MAE, and RMSE compared to the actual case length on a test set. Models were compared to each other and to the manual scheduler method currently in use. RMSE for the RVU model (60.8 min) was similar to that of the CPT model (61.9 min), both of which were lower than that of the scheduler (90.2 min). 65.2% of the RVU model’s predictions (compared to 43.2% from the current human scheduler method) fell within 20% of actual case time. Using RVUs reduced model prediction time ninefold and reduced the number of training features from 485 to 44. Replacing pre-operative CPT codes with RVUs maintains model performance while decreasing overall model complexity in the prediction of surgical case length.
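A minimal sketch of the feature-encoding comparison the abstract describes, using scikit-learn's gradient-boosted trees: one model is trained on one-hot CPT codes and another on a numeric RVU, and both are scored by RMSE and the fraction of predictions within 20% of actual case length. The synthetic data, column names, and model settings are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative comparison of CPT vs. RVU feature encodings for case-length
# prediction with boosted trees; all data below are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
cases = pd.DataFrame({
    "cpt_code": rng.choice([str(c) for c in range(27000, 27100)], size=n),
    "work_rvu": rng.uniform(1, 40, size=n),
})
cases["case_length_min"] = 15 + 6 * cases["work_rvu"] + rng.normal(0, 20, size=n)
y = cases["case_length_min"]

def evaluate(X, y):
    """Fit a boosted-tree regressor; report RMSE and the share within 20% of actual."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    pred = HistGradientBoostingRegressor().fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(np.mean((pred - y_te) ** 2))
    within_20pct = np.mean(np.abs(pred - y_te) <= 0.2 * y_te)
    return rmse, within_20pct

# CPT as a nominal variable: one-hot encoding yields many sparse features.
print("CPT:", evaluate(pd.get_dummies(cases["cpt_code"]), y))
# RVU as a numeric workload proxy: a single dense feature.
print("RVU:", evaluate(cases[["work_rvu"]], y))
```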


Author(s):  
Thorsten Meiser

Stochastic dependence among cognitive processes can be modeled in different ways, and the family of multinomial processing tree models provides a flexible framework for analyzing stochastic dependence among discrete cognitive states. This article presents a multinomial model of multidimensional source recognition that specifies stochastic dependence by a parameter for the joint retrieval of multiple source attributes together with parameters for stochastically independent retrieval. The new model is equivalent to a previous multinomial model of multidimensional source memory for a subset of the parameter space. An empirical application illustrates the advantages of the new multinomial model of joint source recognition. The new model allows for a direct comparison of joint source retrieval across conditions, avoids statistical problems due to inflated confidence intervals, and does not imply a conceptual imbalance between source dimensions. Model selection criteria that take model complexity into account corroborate the new model of joint source recognition.
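As a rough illustration of the modeling idea (the abstract does not give the exact parameterization, so this is not the author's model), the sketch below combines a joint-retrieval probability with independent per-dimension retrieval probabilities in a small multinomial tree and fits them by maximum likelihood; all parameter names and counts are hypothetical.

```python
# Illustrative sketch: joint retrieval with probability j, otherwise independent
# retrieval of each source dimension (dA, dB); fitted to hypothetical counts.
import numpy as np
from scipy.optimize import minimize

def category_probs(params):
    """P(both / only A / only B / neither) source dimensions retrieved."""
    j, dA, dB = params
    p_both    = j + (1 - j) * dA * dB
    p_only_a  = (1 - j) * dA * (1 - dB)
    p_only_b  = (1 - j) * (1 - dA) * dB
    p_neither = (1 - j) * (1 - dA) * (1 - dB)
    return np.array([p_both, p_only_a, p_only_b, p_neither])

def neg_log_lik(params, counts):
    p = np.clip(category_probs(params), 1e-12, 1.0)
    return -np.sum(counts * np.log(p))

counts = np.array([120, 40, 35, 55])          # hypothetical response counts
fit = minimize(neg_log_lik, x0=[0.3, 0.5, 0.5], args=(counts,),
               bounds=[(0.0, 1.0)] * 3)
print(fit.x)                                   # estimated j, dA, dB
```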


2006 ◽  
Vol 10 (3) ◽  
pp. 395-412 ◽  
Author(s):  
H. Kunstmann ◽  
J. Krause ◽  
S. Mayr

Abstract. Even in physically based distributed hydrological models, various remaining parameters must be estimated for each sub-catchment. This can involve tremendous effort, especially when the number of sub-catchments is large and the applied hydrological model is computationally expensive. Automatic parameter estimation tools can significantly facilitate the calibration process. Hence, we combined the nonlinear parameter estimation tool PEST with the distributed hydrological model WaSiM. PEST is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. WaSiM is a fully distributed hydrological model using physically based algorithms for most of the process descriptions. WaSiM was applied to the alpine/prealpine Ammer River catchment (southern Germany, 710 km²) at a 100 m × 100 m horizontal resolution. The catchment is heterogeneous in terms of geology, pedology and land use and shows a complex orography (the elevation difference is around 1600 m). Using the developed PEST-WaSiM interface, the hydrological model was calibrated by comparing simulated and observed runoff at eight gauges for the hydrologic year 1997 and validated for the hydrologic year 1993. For each sub-catchment, four parameters had to be calibrated: the recession constants of direct runoff and interflow, the drainage density, and the hydraulic conductivity of the uppermost aquifer. Additionally, five snowmelt-specific parameters were adjusted for the entire catchment. Altogether, 37 parameters had to be calibrated. Additional a priori information (e.g. from flood hydrograph analysis) narrowed the parameter space of the solutions and reduced the non-uniqueness of the fitted values. A reasonable quality of fit was achieved. Discrepancies between modelled and observed runoff were also due to the small number of meteorological stations and corresponding interpolation artefacts in the orographically complex terrain. Application of a 2-dimensional numerical groundwater model partly yielded a slight decrease in overall model performance when compared to a simple conceptual groundwater approach; increased model complexity therefore did not, in general, yield increased model performance. A detailed covariance analysis was performed, allowing confidence bounds to be derived for all estimated parameters. The correlation between the estimated parameters was in most cases negligible, showing that the parameters were estimated independently of each other.
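To make the calibration idea concrete (a sketch only, not the PEST or WaSiM code), the snippet below runs a Levenberg-Marquardt least-squares fit of a toy linear-reservoir model against synthetic runoff; the two parameters and the forcing are placeholders for the sub-catchment parameters and gauge records described above.

```python
# Gradient-based (Gauss-Marquardt-Levenberg) calibration sketch with a toy
# linear-reservoir model standing in for WaSiM; data and parameters are synthetic.
import numpy as np
from scipy.optimize import least_squares

rain = np.random.default_rng(0).gamma(2.0, 2.0, size=365)   # synthetic daily forcing

def toy_model(params, rain):
    """Single linear reservoir: storage drains with recession constant k."""
    k, runoff_coeff = params
    storage, runoff = 0.0, np.empty_like(rain)
    for t, p in enumerate(rain):
        storage += runoff_coeff * p
        runoff[t] = storage / k
        storage -= runoff[t]
    return runoff

observed = toy_model([8.0, 0.6], rain)                       # synthetic "observed" runoff

def residuals(params):
    return toy_model(params, rain) - observed                # squared sum is minimised

fit = least_squares(residuals, x0=[15.0, 0.3], method="lm")  # Levenberg-Marquardt search
print(fit.x)                                                 # recovers k ≈ 8, coeff ≈ 0.6
```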


2018 ◽  
Vol 22 (11) ◽  
pp. 5967-5985 ◽  
Author(s):  
Cédric Rebolho ◽  
Vazken Andréassian ◽  
Nicolas Le Moine

Abstract. The production of spatially accurate representations of potential inundation is often limited by the lack of available data as well as by model complexity. We present in this paper a new approach for rapid inundation mapping, MHYST, which is well adapted to data-scarce areas; it combines hydraulic geometry concepts for channels with DEM data for floodplains. Its originality lies in the fact that it does not work at the cross-section scale but computes effective geometrical properties to describe the reach scale. Combining reach-scale geometrical properties with 1-D steady-state flow equations, MHYST computes a topographically coherent relation between the “height above nearest drainage” and streamflow. This relation can then be used on a past or future event to produce inundation maps. The MHYST approach is tested here on an extreme flood event that occurred in France in May–June 2016. The results indicate that it has a tendency to slightly underestimate inundation extents, although efficiency criteria values are clearly encouraging. The spatial distribution of model performance is discussed, showing that the model can perform very well on most reaches but has difficulty modelling the more complex, urbanised reaches. MHYST should not be seen as a rival to detailed inundation studies, but as a first approximation able to rapidly provide inundation maps in data-scarce areas.
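A minimal sketch of the core idea (not the MHYST code): relate water depth above the channel to discharge with a Manning-type steady-flow equation for an effective reach geometry, then flag DEM cells whose height above nearest drainage (HAND) lies below that depth. The reach geometry, discharge and HAND grid below are illustrative assumptions.

```python
# Depth-discharge relation for an effective rectangular reach section, then a
# HAND-based inundation mask; all numbers are stand-ins for real reach data.
import numpy as np

def discharge_for_depth(h, width=30.0, slope=1e-3, manning_n=0.05):
    """Steady uniform flow: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    area = width * h
    wetted_perimeter = width + 2 * h
    hydraulic_radius = area / wetted_perimeter
    return area * hydraulic_radius ** (2 / 3) * slope ** 0.5 / manning_n

# Build a depth-discharge table, then invert it for a given flood discharge.
depths = np.linspace(0.01, 10.0, 500)
flows = discharge_for_depth(depths)
flood_depth = np.interp(450.0, flows, depths)        # depth for a 450 m3/s event

hand = np.random.default_rng(1).uniform(0, 15, size=(100, 100))  # stand-in HAND grid (m)
inundated = hand <= flood_depth                      # inundation mask for the reach
print(flood_depth, inundated.mean())
```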


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Cheng-Jian Lin ◽  
Chun-Hui Lin ◽  
Shyh-Hau Wang

Deep learning has achieved huge success in computer vision applications such as self-driving vehicles, facial recognition, and robot control. A growing need to deploy systems in resource-constrained environments such as smart cameras, autonomous vehicles, robots, smartphones, and smart wearable devices drives one of the current mainstream developments of convolutional neural networks: reducing model complexity while maintaining accuracy. In this study, the proposed efficient light convolutional neural network (ELNet) comprises three convolutional modules that require fewer computations, allowing it to be implemented on resource-constrained hardware. Classification tasks on the CIFAR-10 and CIFAR-100 datasets were used to verify model performance. According to the experimental results, ELNet reached accuracies of 92.3% and 69% on CIFAR-10 and CIFAR-100, respectively; moreover, ELNet effectively lowered the computational complexity and the number of parameters required in comparison with other CNN architectures.
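To illustrate what a lightweight CNN of this kind can look like, here is a generic PyTorch sketch built from three compact convolutional modules for CIFAR-size inputs. It is not the authors' ELNet architecture, whose exact modules are not described in the abstract; the depthwise-separable design and layer sizes are assumptions.

```python
# Generic lightweight CNN sketch (illustrative, not ELNet): three compact
# depthwise-separable convolutional modules followed by a linear classifier.
import torch
import torch.nn as nn

def conv_module(in_ch, out_ch):
    """Depthwise-separable convolution block to keep parameter count low."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1, bias=False),                          # pointwise
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class LightCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            conv_module(3, 32),
            conv_module(32, 64),
            conv_module(64, 128),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.features(x)               # 32x32 input -> 4x4 feature maps
        x = x.mean(dim=(2, 3))             # global average pooling
        return self.classifier(x)

model = LightCNN(num_classes=10)
print(sum(p.numel() for p in model.parameters()))   # total parameter count
```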


2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
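A minimal sketch of the neighbourhood-based ("fuzzy") comparison idea, assuming binary inundation masks: a simulated cell counts as a hit if an observed cell lies within a given length scale, relaxing strict pixel-pair matching. The masks and length scales below are illustrative, not the paper's data or exact metric definitions.

```python
# Fuzzy (neighbourhood-tolerant) agreement between simulated and observed
# flow-extent masks; larger length scales trade false negatives for resolution.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(2)
observed = rng.random((200, 200)) > 0.7        # stand-in observed flow extent
simulated = rng.random((200, 200)) > 0.7       # stand-in simulated extent

def fuzzy_hit_rate(simulated, observed, length_scale_cells):
    """Fraction of simulated cells with an observed cell within the length scale."""
    size = 2 * length_scale_cells + 1
    observed_near = maximum_filter(observed.astype(np.uint8), size=size) > 0
    hits = simulated & observed_near
    return hits.sum() / max(simulated.sum(), 1)

for length_scale in (0, 1, 3, 5):              # 0 reduces to strict pixel-pair matching
    print(length_scale, fuzzy_hit_rate(simulated, observed, length_scale))
```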


2021 ◽  
Author(s):  
Ladan Vahidi-Arbabi

The thermal performance of complex buildings like data centers is not easy to evaluate. Experimental investigation of the effects of energy conservation measures, or of any alteration in the hundreds of variables in a data center, would cost stakeholders time and money and might at times prove worthless. Building energy modelling is a well-established field of science, yet it has seen few applications in data centers. This study presents methods for developing a data center model based on an actual case study. Moreover, it identifies effective calibration strategies to increase the model's accuracy relative to a recorded dataset. A reliable energy model can assist data center operators and researchers in different ways. As a result, the calibrated energy model showed that Earth Rangers' data center can operate independently of heat pump or chiller use for most of the year, while the ground heat exchangers deliver excess heat to the ground as a heat sink.


Energies ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 2102 ◽  
Author(s):  
Vo-Nguyen Tuyet-Doan ◽  
Tien-Tung Nguyen ◽  
Minh-Tuan Nguyen ◽  
Jong-Ho Lee ◽  
Yong-Hwa Kim

Detecting, measuring, and classifying partial discharges (PDs) are important tasks for assessing the condition of insulation systems used in different electrical equipment. Because the phase-resolved PD (PRPD) pattern can be represented as a sequence input, an existing method for processing sequential data, the recurrent neural network with long short-term memory (LSTM), has been applied for fault classification. However, its performance is limited by the lack of support for parallel computation and the inability to recognize the relevance of all inputs. To overcome these two drawbacks, we propose a novel deep-learning model in this study based on a self-attention mechanism to classify the PD patterns in a gas-insulated switchgear (GIS). The proposed model uses a self-attention block, which offers the advantages of simultaneous computation and selective focusing on parts of the PRPD signals, and a classification block to finally classify faults in the GIS. Moreover, the combination of LSTM and self-attention is considered for comparison purposes. The experimental results show that the proposed method achieves superior performance compared with the previous neural networks, while the model complexity is significantly reduced.
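As a hedged sketch of the general idea (not the authors' exact network), the snippet below applies a single self-attention block over a PRPD-like sequence and pools the result into fault logits; the sequence length, embedding dimension, and number of fault classes are placeholders.

```python
# Self-attention over a PRPD-like sequence followed by a small classifier;
# all dimensions and class counts are illustrative placeholders.
import torch
import torch.nn as nn

class AttentionPDClassifier(nn.Module):
    def __init__(self, seq_len=128, dim=64, num_heads=4, num_classes=4):
        super().__init__()
        self.embed = nn.Linear(1, dim)                     # lift scalar PD amplitudes
        self.pos = nn.Parameter(torch.zeros(1, seq_len, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                                  # x: (batch, seq_len, 1)
        h = self.embed(x) + self.pos
        attended, _ = self.attn(h, h, h)                   # all positions attended in parallel
        h = self.norm(h + attended)                        # residual connection
        return self.head(h.mean(dim=1))                    # pooled fault logits

x = torch.randn(8, 128, 1)                                 # batch of PRPD-like sequences
print(AttentionPDClassifier()(x).shape)                    # torch.Size([8, 4])
```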

