Evaluation of Large-scale Data to Detect Irregularity in Payment for Medical Services

2016, Vol 55 (03), pp. 284-291
Author(s): Junghyun Park, Seokjoon Yoon, Minki Kim

Summary
Background: Sophisticated anti-fraud systems for the healthcare sector have been built on several statistical methods. Although existing methods can detect fraud in the healthcare sector, these algorithms consume considerable time and cost, and lack a theoretical basis for handling large-scale data.
Objectives: Building on mathematical theory, this study proposes a new approach to using Benford’s Law in which we closely examine individual-level data to identify specific fees for in-depth analysis.
Methods: We extended the mathematical theory to demonstrate how large-scale data conform to Benford’s Law. We then empirically tested its applicability using actual large-scale healthcare data from Korea’s Health Insurance Review and Assessment (HIRA) National Patient Sample (NPS). For the Benford test, we used the mean absolute deviation (MAD) statistic on the large-scale data.
Results: We studied 32 diseases, comprising 25 representative diseases and 7 DRG-regulated diseases. An empirical test on the 25 representative diseases showed the applicability of Benford’s Law to large-scale data in the healthcare industry. For the seven DRG-regulated diseases, we examined individual-level data to identify specific fees for in-depth analysis. Across the eight categories of medical costs, we assessed the strength of irregularities based on the details of each DRG-regulated disease.
Conclusions: Using the degree of abnormality, we propose priority actions for government health departments and private insurance institutions to bring unnecessary medical expenses under control. However, detecting deviations from Benford’s Law requires relatively high contamination ratios at conventional significance levels.
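A minimal sketch of a first-digit Benford test using the mean absolute deviation (MAD) statistic mentioned above. The fee values, function names, and conformity cut-offs are illustrative assumptions, not figures or code from the paper.

```python
# Illustrative first-digit Benford test with the MAD statistic.
# Fee values and conformity cut-offs below are assumptions for demonstration.
import math
from collections import Counter

# Expected first-digit proportions under Benford's Law: P(d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x: float) -> int:
    """Return the leading significant digit of a non-zero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_mad(values) -> float:
    """Mean absolute deviation between observed and Benford first-digit proportions."""
    digits = [first_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    observed = {d: counts.get(d, 0) / n for d in range(1, 10)}
    return sum(abs(observed[d] - BENFORD[d]) for d in range(1, 10)) / 9

if __name__ == "__main__":
    # Hypothetical claimed fee amounts; in the study these would be large-scale
    # HIRA NPS claim records for a given disease.
    fees = [1200, 1835, 2410, 13200, 1190, 3020, 1570, 9800, 2240, 1480,
            18750, 1110, 2960, 1330, 4470, 1015, 2580, 1740, 1260, 3890]
    print(f"first-digit MAD = {benford_mad(fees):.4f}")
    # Commonly cited first-digit MAD cut-offs (Nigrini; an assumption, the paper
    # may use different thresholds): <=0.006 close conformity, <=0.012 acceptable,
    # <=0.015 marginal, >0.015 nonconformity.
```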

2020, Vol 33 (3-4), pp. 160-174
Author(s): Jacy L. Young

In the late 19th century, the questionnaire was one means of taking the case study into the multitudes. This article engages with Forrester’s idea of thinking in cases as a means of interrogating questionnaire-based research in early American psychology. Questionnaire research was explicitly framed by psychologists as a practice involving both natural historical and statistical forms of scientific reasoning. At the same time, questionnaire projects failed to successfully enact the latter aspiration in terms of synthesizing masses of collected data into a coherent whole. Difficulties in managing the scores of descriptive information that questionnaires generated ensured the continuing presence of individuals in the results of this research, as the individual case was excerpted and discussed alongside a cast of others. As a consequence, questionnaire research embodied an amalgam of case, natural historical, and statistical thinking. Ultimately, large-scale data collection undertaken with questionnaires failed in its aim to construct composite exemplars or ‘types’ of particular kinds of individuals, that is, to produce the singular from the multitudes.


F1000Research, 2019, Vol 7, pp. 620
Author(s): Parashkev Nachev, Geraint Rees, Richard Frackowiak

Translation in cognitive neuroscience remains beyond the horizon, brought no closer by supposed major advances in our understanding of the brain. Unless our explanatory models descend to the individual level—a cardinal requirement for any intervention—their real-world applications will always be limited. Drawing on an analysis of the informational properties of the brain, here we argue that adequate individualisation needs models of far greater dimensionality than has been usual in the field. This necessity arises from the widely distributed causality of neural systems, a consequence of the fundamentally adaptive nature of their developmental and physiological mechanisms. We discuss how recent advances in high-performance computing, combined with collections of large-scale data, enable the high-dimensional modelling we argue is critical to successful translation, and urge its adoption if the ultimate goal of impact on the lives of patients is to be achieved.


2020, Vol 8 (3), pp. 305-319
Author(s): Dániel Hegedűs

The web 2.0 phenomenon and social media have, without question, reshaped our everyday experiences. The changes they have generated affect how we consume, communicate and present ourselves, to name just a few aspects of life, and have also opened up new perspectives for sociology. Though many social practices persist in a somewhat altered form, brand new types of entities have emerged on different social media platforms: one of them is the video blogger. These actors have gained great visibility through so-called micro-celebrity practices and have become potential large-scale distributors of ideas, values and knowledge. Celebrities, in this case micro-celebrities (video bloggers), may disseminate such cognitive patterns through their constructed discourse, which is objectified in the online space through a peculiar digital face (a social media profile) where fans can react, share and comment according to the affordances of the digital space. Most importantly, all of these interactions are accessible to scholars examining the fan and celebrity practices of our era. This research attempts to reconstruct these discursive interactions on the Facebook pages of ten top Hungarian video bloggers. All findings are based on a large-scale data collection using the Netvizz application. A further consideration in interpreting the results was that celebrity discourses may act as a sort of disciplinary force in (post)modern society, normalizing the individual to some extent by providing adequate schemas of attitude, mentality and ways of consumption.


2017, Vol 24 (4), pp. 799-805
Author(s): Jean Louis Raisaro, Florian Tramèr, Zhanglong Ji, Diyue Bu, Yongan Zhao, ...

The Global Alliance for Genomics and Health (GA4GH) created the Beacon Project as a means of testing the willingness of data holders to share genetic data in the simplest technical context—a query for the presence of a specified nucleotide at a given position within a chromosome. Each participating site (or “beacon”) is responsible for assuring that genomic data are exposed through the Beacon service only with the permission of the individual to whom the data pertains and in accordance with the GA4GH policy and standards. While recognizing the inference risks associated with large-scale data aggregation, and the fact that some beacons contain sensitive phenotypic associations that increase privacy risk, the GA4GH adjudged the risk of re-identification based on the binary yes/no allele-presence query responses as acceptable. However, recent work demonstrated that, given a beacon with specific characteristics (including relatively small sample size and an adversary who possesses an individual’s whole genome sequence), the individual’s membership in a beacon can be inferred through repeated queries for variants present in the individual’s genome. In this paper, we propose three practical strategies for reducing re-identification risks in beacons. The first two strategies manipulate the beacon such that the presence of rare alleles is obscured; the third strategy budgets the number of accesses per user for each individual genome. Using a beacon containing data from the 1000 Genomes Project, we demonstrate that the proposed strategies can effectively reduce re-identification risk in beacon-like datasets.
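A minimal, illustrative sketch of a beacon that combines two of the ideas above: obscuring rare alleles and budgeting queries per user. The class name, thresholds, and data layout are assumptions for illustration and do not reproduce the authors' implementation or the GA4GH Beacon API.

```python
# Toy beacon illustrating rare-allele masking and a per-user query budget.
# Thresholds, budget size and data layout are illustrative assumptions.
from collections import defaultdict

class ToyBeacon:
    def __init__(self, allele_counts, min_allele_count=5, query_budget=100):
        # allele_counts: {(chromosome, position, alt_allele): number of carriers}
        self.allele_counts = allele_counts
        self.min_allele_count = min_allele_count  # mask alleles rarer than this
        self.query_budget = query_budget          # allowed queries per user
        self.queries_used = defaultdict(int)

    def query(self, user_id, chrom, pos, alt):
        """Answer 'is this allele present?' with masking and budgeting applied."""
        if self.queries_used[user_id] >= self.query_budget:
            raise PermissionError("query budget exhausted for this user")
        self.queries_used[user_id] += 1
        carriers = self.allele_counts.get((chrom, pos, alt), 0)
        # Answer "no" for alleles carried by fewer than min_allele_count people,
        # so rare variants cannot be used to single out one genome.
        return carriers >= self.min_allele_count

# Usage: one common variant and one carried by a single individual.
beacon = ToyBeacon({("1", 123456, "A"): 42, ("2", 987654, "T"): 1},
                   min_allele_count=5, query_budget=3)
print(beacon.query("alice", "1", 123456, "A"))  # True: common allele reported
print(beacon.query("alice", "2", 987654, "T"))  # False: rare allele masked
```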


2014, Vol 602-605, pp. 3265-3267
Author(s): Zhan Kun Zhao

With the development of computer and network technology, the storage and analysis of complex, large-scale data have become feasible. To meet this requirement, this paper designs a large-scale data management system based on a client/server (C/S) architecture that provides centralized collection, integration, security management, in-depth analysis and report generation for field data. Applying the system can raise the level of scientific management in enterprises, enhance employee productivity, ensure the security, integrity and accuracy of data, and provide important inputs for performance reviews and enterprise development decisions, thereby improving economic efficiency.
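A minimal sketch of the client/server pattern described above, assuming a simple HTTP transport: field clients push records to a central server, which concentrates them and returns a trivial aggregate as a stand-in for report generation. The port, record fields, and endpoint names are hypothetical and do not reflect the paper's system.

```python
# Toy client/server (C/S) field-data service: clients POST records to a central
# server; a GET returns a trivial per-site count as a stand-in for reporting.
# Ports, record fields and endpoints are hypothetical; the toy handler ignores
# the URL path and treats any POST as an upload and any GET as a report request.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

STORE = []                 # central in-memory store standing in for a database
LOCK = threading.Lock()

class FieldDataHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        record = json.loads(self.rfile.read(length))
        with LOCK:
            STORE.append(record)          # concentrate field data centrally
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        with LOCK:
            counts = {}
            for r in STORE:
                site = r.get("site", "unknown")
                counts[site] = counts.get(site, 0) + 1
        body = json.dumps(counts).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):         # keep the demo output quiet
        pass

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 8765), FieldDataHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client side: two field stations pushing measurements to the central server.
    for site, value in [("plant-A", 3.2), ("plant-A", 2.9), ("plant-B", 5.1)]:
        req = Request("http://127.0.0.1:8765/records",
                      data=json.dumps({"site": site, "value": value}).encode(),
                      headers={"Content-Type": "application/json"})
        urlopen(req).read()

    # Client side: request the aggregated report.
    print(urlopen("http://127.0.0.1:8765/report").read().decode())
    server.shutdown()
```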


2009, Vol 28 (11), pp. 2737-2740
Author(s): Xiao ZHANG, Shan WANG, Na LIAN
