Estimating profitability using a neural classification tool

Author(s):  
Dumitru I. Nastac ◽  
Irina M. Dragan ◽  
Alexandru Isaic-Maniu
2020 ◽  
Vol 32 (2) ◽  
pp. 022030
Author(s):  
R. Olsson ◽  
J. Powell ◽  
J. Frostevarg ◽  
A. F. H. Kaplan

2016 ◽  
Vol 25 (1) ◽  
pp. 41-50 ◽  
Author(s):  
Sharon B. Sams ◽  
Joshua A. Wisell

Discrepancies between intraoperative consultations with frozen section diagnosis and the final pathology report have the potential to alter treatment decisions and affect patient care. Monitoring these correlations is a key component of laboratory quality assurance; however, identifying specific areas for improvement can be difficult. Our goal was to develop a standardized method, using root cause analysis and a modified Eindhoven classification scheme, to identify the source of discrepancies and deferrals and subsequently guide performance improvement initiatives. A retrospective review of intraoperative consultations performed at a tertiary-level hospital and cancer center over a 6-month period identified deferrals and discrepancies between the intraoperative consult report and the final pathology report. We developed and applied a classification tool to identify the process errors and cognitive errors leading to discrepant results. A total of 48 (4.6%) discrepancies and 24 (2.3%) deferrals were identified from the 1042 frozen sections. Among the discrepancies, process errors (n = 26, 54.2%) were due to gross sampling (n = 16, 33.3%), histologic sampling (n = 8, 16.7%), and surgical sampling (n = 2, 4.2%). Interpretation errors (n = 22, 45.8%) included undercalls/false negatives (n = 8, 16.7%), overcalls/false positives (n = 10, 20.8%), and misclassification errors (n = 4, 8.3%). Application of our classification tool demonstrated that the root cause of discrepancies and deferrals varied both between organ systems and by specific organ, and that classification models may be used as a standardized method to identify focused areas for improvement.
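The counts above lend themselves to a simple tally-and-rate calculation. The sketch below (Python) is not the authors' actual tool; it only illustrates how a modified Eindhoven-style classification could be tabulated, with hypothetical per-case labels standing in for the reviewed cases.

```python
from collections import Counter

# Hypothetical per-case labels; in practice these would come from the
# retrospective review of each discrepant or deferred frozen section.
cases = [
    ("process", "gross sampling"),
    ("process", "histologic sampling"),
    ("process", "gross sampling"),
    ("interpretation", "overcall/false positive"),
    ("interpretation", "undercall/false negative"),
    ("interpretation", "misclassification"),
]
total_frozen_sections = 1042  # denominator for the overall discrepancy rate

by_type = Counter(error_type for error_type, _ in cases)
by_root_cause = Counter(root_cause for _, root_cause in cases)

print(f"discrepancy rate: {100 * len(cases) / total_frozen_sections:.1f}%")
for error_type, n in by_type.items():
    print(f"{error_type} errors: n={n} ({100 * n / len(cases):.1f}% of discrepancies)")
for root_cause, n in by_root_cause.most_common():
    print(f"  {root_cause}: n={n} ({100 * n / len(cases):.1f}%)")
```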


Author(s):  
Neha Thomas ◽  
Susan Elias

Abstract—Detection of fake reviews and reviewers is currently a challenging problem in cyberspace, primarily because of the dynamic nature of the methods used to fake reviews. Several aspects must be considered when analyzing reviews in order to classify them effectively as genuine or fake. Sentiment analysis, opinion mining and intent mining are fields of research that try to accomplish this goal through Natural Language Processing of the review's text content. In this paper, an approach that evaluates review ratings along a timeline is presented. An Amazon dataset comprising ratings for a wide range of products was used for the analysis presented here; the ratings of an electronic product were analyzed over a period of six years. The computed average rating helps to identify linear classifiers that define solution boundaries within the data space. This enables a product-specific classification of review ratings, from which suitable recommendations can also be generated automatically. The paper explains a methodology to evaluate average product ratings over time and presents the research outcomes using a novel classification tool. The proposed approach helps to determine the optimal point for distinguishing between fake and genuine ratings for each product. Index Terms: Fake reviews, Fake Ratings, Product Ratings, Online Shopping, Amazon Dataset.
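As an illustration only, and not the paper's exact method, the following Python sketch shows the general idea of aggregating a product's ratings along a timeline and fitting a linear classifier to separate suspect rating periods from genuine ones. The monthly data, the feature choices (deviation from the running average and rating volume) and the labels are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # a simple linear classifier

rng = np.random.default_rng(0)

# Hypothetical monthly ratings for one product over six years (72 months):
# mean star rating per month and number of ratings per month.
monthly_mean = np.clip(rng.normal(4.0, 0.2, 72), 1.0, 5.0)
monthly_count = rng.poisson(40, 72).astype(float)

# Inject a few synthetic "rating burst" months so both classes are present.
burst = rng.choice(72, size=6, replace=False)
monthly_mean[burst] = np.clip(monthly_mean[burst] + 0.8, 1.0, 5.0)
monthly_count[burst] += 120
y = np.zeros(72, dtype=int)
y[burst] = 1  # 1 = suspected fake-rating period (hypothetical labels)

# Features: deviation of each month from the running average rating,
# plus standardized rating volume.
running_avg = np.cumsum(monthly_mean) / np.arange(1, 73)
deviation = monthly_mean - running_avg
volume_z = (monthly_count - monthly_count.mean()) / monthly_count.std()
X = np.column_stack([deviation, volume_z])

clf = LogisticRegression().fit(X, y)
print("linear boundary:", clf.coef_, clf.intercept_)
print("flagged months:", np.flatnonzero(clf.predict(X)))
```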


Authorea ◽  
2020 ◽  
Author(s):  
Caroline Lohoff ◽  
Patrick Buchholz ◽  
Marilize Le Roes Hill ◽  
Juergen Pleiss

Author(s):  
Francesc López Seguí ◽  
Ricardo Ander Egg Aguilar ◽  
Gabriel de Maeztu ◽  
Anna García-Altés ◽  
Francesc García Cuyàs ◽  
...  

Background: The primary care service in Catalonia has operated an asynchronous teleconsulting service between GPs and patients since 2015 (eConsulta), which has generated some 500,000 messages. New developments in big data analysis tools, particularly those involving natural language, can be used to accurately and systematically evaluate the impact of the service. Objective: The study was intended to examine the predictive potential of eConsulta messages through different combinations of vector representations of text and machine learning algorithms, and to evaluate their performance. Methodology: 20 machine learning algorithms (based on 5 types of algorithms and 4 text representation techniques) were trained using a sample of 3,559 messages (169,102 words) corresponding to 2,268 teleconsultations (1.57 messages per teleconsultation) in order to predict the three variables of interest (avoiding the need for a face-to-face visit, increased demand, and type of use of the teleconsultation). The performance of the various combinations was measured in terms of precision, sensitivity, F-value and the ROC curve. Results: The best-trained algorithms are generally effective, proving more robust when approximating the two binary variables "avoiding the need for a face-to-face visit" and "increased demand" (precision = 0.98 and 0.97, respectively) than the variable "type of query" (precision = 0.48). Conclusion: To the best of our knowledge, this study is the first to investigate a machine learning strategy for text classification using primary care teleconsultation datasets. The study illustrates the possible capacities of text analysis using artificial intelligence. With validation on more data, the development of a robust text classification tool appears feasible, potentially making it useful for decision support for health professionals.
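A generic sketch of this kind of evaluation grid is shown below, assuming scikit-learn. It crosses two text representations with three classifiers (the study itself used four representations and five algorithm types) and scores each combination with precision, recall (sensitivity) and F1. The toy messages and labels are placeholders, since the eConsulta data are not public.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

# Placeholder corpus: message text plus a binary label such as
# "face-to-face visit avoided" (1) or not (0); replace with real data.
texts = [
    "patient asks about blood test results, no visit needed",
    "symptoms are getting worse, please book an appointment",
    "prescription renewal handled through the platform",
    "persistent pain, needs a physical examination",
] * 10
labels = [1, 0, 1, 0] * 10

representations = {
    "counts": CountVectorizer(),
    "tfidf": TfidfVectorizer(),
}
classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "linear_svm": LinearSVC(),
}

train_txt, test_txt, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels)

# Cross every representation with every classifier, as in the study's grid.
for rep_name, vectorizer in representations.items():
    X_train = vectorizer.fit_transform(train_txt)
    X_test = vectorizer.transform(test_txt)
    for clf_name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        pred = clf.predict(X_test)
        print(f"{rep_name:>6} + {clf_name:<20}"
              f" precision={precision_score(y_test, pred):.2f}"
              f" recall={recall_score(y_test, pred):.2f}"
              f" F1={f1_score(y_test, pred):.2f}")
```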


2011 ◽  
Vol 2011 ◽  
pp. 1-10 ◽  
Author(s):  
Timur Düzenli ◽  
Nalan Özkurt

The performance of wavelet transform-based features for the speech/music discrimination task has been investigated. In order to extract wavelet-domain features, discrete and complex orthogonal wavelet transforms have been used. The performance of the proposed feature set has been compared with a feature set constructed from the most common time, frequency and cepstral domain features, such as the number of zero crossings, spectral centroid, spectral flux, and Mel cepstral coefficients. Artificial neural networks have been used as the classification tool. Principal component analysis has been applied to eliminate correlated features before the classification stage. For the discrete wavelet transform, considering the number of vanishing moments and orthogonality, the best performance is obtained with the Daubechies 8 wavelet among the members of the Daubechies family. The dual-tree wavelet transform has also demonstrated successful performance in terms of both accuracy and time consumption. Finally, a real-time discrimination system has been implemented using the Daubechies 8 wavelet, which has the best accuracy.
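As a rough sketch of such a pipeline, assuming the PyWavelets and scikit-learn libraries, the code below extracts subband log-energies from a Daubechies 8 decomposition, decorrelates them with PCA and trains a small neural network. The toy frames, the feature definition and the network topology are illustrative and not those of the paper (the complex/dual-tree transform is also omitted).

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def dwt_energy_features(signal, wavelet="db8", level=5):
    """Log-energy of each DWT subband (approximation + details)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

# Toy stand-ins for framed audio: noise-like "speech" vs tonal "music" frames.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2048, endpoint=False)
speech_frames = [rng.normal(size=2048) for _ in range(40)]
music_frames = [np.sin(2 * np.pi * 440 * t) + 0.1 * rng.normal(size=2048)
                for _ in range(40)]

X = np.array([dwt_energy_features(f) for f in speech_frames + music_frames])
y = np.array([0] * 40 + [1] * 40)  # 0 = speech-like, 1 = music-like

# Decorrelate features with PCA, then classify with a small neural network.
model = make_pipeline(
    PCA(n_components=4),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```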


PLoS ONE ◽  
2014 ◽  
Vol 9 (5) ◽  
pp. e91929 ◽  
Author(s):  
Claire Hoede ◽  
Sandie Arnoux ◽  
Mark Moisset ◽  
Timothée Chaumier ◽  
Olivier Inizan ◽  
...  
