4. Big Data from a Health Insurance Company’s Point of View

Author(s):
Thomas Brunner
L. Shkulipa

This article investigates the importance of blockchain technology in the economy and predicts its development from an accounting point of view. The study's methods are based on an analysis of the literature on disclosure issues and a description of existing blockchain claims on the world stage. On this basis, a predictive assessment was made of the further development of blockchain technology in the economy and its impact on accounting and the accounting profession. The findings include the positive and negative effects of blockchain technology on the medical and banking sectors, information technology, the financial sector, and accounting. Blockchain is considered within the hype cycle, a pattern that all new technologies undergo before either stabilizing or disappearing. Based on a review of the best-known projects combining blockchain with Big Data, the study assesses the development of blockchain and Big Data technologies in finance. It suggests considering blockchain technology as (1) a new way of sending and processing invoices, documents, contracts, and payments that reduces errors, costs, and transaction time; (2) a path to financial equality through affordability; (3) a channel for investment in local economies that could help developing countries grow significantly; (4) a means of modernizing the currency market and the international monetary and financial transaction system; and (5) a major breakthrough in the economy when combined with Big Data technology.


2020
Author(s):
Seung-Hyun Jeong
Tae Rim Lee
Jung Bae Kang
Mun-Taek Choi

BACKGROUND Early detection of childhood developmental delays is very important for the treatment of disabilities. OBJECTIVE To investigate the possibility of detecting childhood developmental delays that lead to disabilities before clinical registration by analyzing big data from a health insurance database. METHODS In this study, data on children aged up to 13 years (n=2412) from the Sample Cohort 2.0 DB of the Korea National Health Insurance Service were organized by age range. Using 6 categories (no disability, physical disability, brain lesion, visual impairment, hearing impairment, and other conditions), features were selected in order of importance with a tree-based model. We used multiple classification algorithms to find the best model for each age range. The earliest age range with clinically significant performance indicated the age at which conditions can be detected early. RESULTS The disability detection model showed that disabilities could be detected with significant accuracy as early as age 4 years, about a year earlier than the mean diagnostic age of 4.99 years. CONCLUSIONS Using big data analysis, we found that disabilities can potentially be detected earlier than by clinical diagnosis, which would allow appropriate action to be taken to prevent them.
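The pipeline the abstract describes (tree-based feature ranking followed by comparing several classifiers) can be sketched as follows. This is an illustrative sketch, not the authors' code: the data, feature count, and 6-class labels are synthetic stand-ins for the insurance-cohort variables.

```python
# Illustrative sketch of the abstract's methodology: rank features with a
# tree-based model, then compare classifiers on the selected features.
# All data and labels below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, d = 400, 12
X = rng.normal(size=(n, d))
# 6 hypothetical categories: 0 = no disability ... 5 = other conditions
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int) * rng.integers(1, 6, n)

# Step 1: rank features by importance with a tree-based model
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranked = np.argsort(forest.feature_importances_)[::-1]
top = ranked[:5]  # keep the most informative features

# Step 2: compare candidate classifiers on the selected features
models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
scores = {name: cross_val_score(m, X[:, top], y, cv=3).mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In the study this selection-and-comparison loop would be repeated per age range, with the earliest range reaching clinically significant accuracy marking the detectable age.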


Health Policy
2017
Vol 121 (6)
pp. 708-714
Author(s):
Giora Kaplan
Yael Shahar
Orna Tal

2020
pp. 1564-1619
Author(s):  
Jeremy Horne

In the last half century, we have gone from storing data on 5¼-inch floppy diskettes to the cloud, and we now use fog computing. But one should ask why so much data is being collected. Part of the answer is simple in light of scientific projects, but why is there so much data on us? We then ask about its "interface" through fog computing. Such questions prompt this article on the philosophy of big data and fog computing. After some background on definitions, origins, and contemporary applications, the main discussion begins by considering modern data collection, management, and applications from a complexity standpoint. Big data is turned into knowledge, but knowledge is extrapolated from the past and used to manage the future. Yet it is questionable whether humans have the capacity to manage contemporary technological and social complexity, as evidenced by a world in crisis and possibly on the brink of extinction. This calls for a new way of studying societies from a scientific point of view. We are at the center of the observation from which big data emerge and are manipulated, the overall human project being not only to create an artificial brain with an attendant mind, but a society that might be able to survive what "natural" humans cannot.


Author(s):
Vala Ali Rohani
Sedigheh Moghavvemi
Tiago Pinho
Paulo Caldas

Due to the COVID‐19 pandemic, most countries are exposed to unprecedented social problems in the current global situation. According to official reports, the pandemic caused a dramatic 44% increase in the graduate unemployment rate in Portugal. Moreover, from a human-resource point of view, Europe as a whole is expected to face a shortage of 925,000 data professionals by 2025. Against this backdrop, DataPro aims to provide a national-level reskilling solution in big data to mitigate both social problems: graduate unemployment and the shortage of data professionals in Portugal. The DataPro project consists of four dimensions, including an online portal for hiring companies and unemployed graduates, along with a web-based analytics talent upskilling (ATU) platform powered by an artificial-intelligence recommender system that matches reskilled data professionals with hiring companies.
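The matching idea behind such a recommender can be illustrated with a minimal skill-vector sketch. This is purely hypothetical and not the DataPro system: the candidate names, skill vocabulary, and cosine-similarity ranking are all invented for illustration.

```python
# Purely illustrative skill-based matching sketch (not the DataPro system):
# represent candidates and a job opening as binary skill vectors and rank
# candidates by cosine similarity to the opening. All names are hypothetical.
import numpy as np

skills = ["python", "sql", "statistics", "visualization"]

def vectorize(profile):
    """Binary skill vector over the shared skill vocabulary."""
    return np.array([1.0 if s in profile else 0.0 for s in skills])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

candidates = {"ana": {"python", "sql"}, "rui": {"statistics", "visualization"}}
opening = {"python", "sql", "statistics"}

job_vec = vectorize(opening)
ranking = sorted(candidates,
                 key=lambda c: cosine(vectorize(candidates[c]), job_vec),
                 reverse=True)
print(ranking)  # candidates ordered by fit to the opening
```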


Author(s):
Dimitar Christozov
Katia Rasheva-Yordanova

The article shares the authors' experience in training bachelor-level students to explore Big Data applications for solving contemporary problems. It discusses curriculum issues and pedagogical techniques for developing Big Data competencies. The following objectives are targeted: the importance and impact of making rational, data-driven decisions in the Big Data era; the complexity of developing and deploying a Big Data application to solve real-life problems; learning skills for adopting and exploring emerging technologies; and the knowledge and skills to interpret and communicate the results of data analysis by combining domain knowledge with systems expertise. The curriculum covers: the two general uses of Big Data analytics applications, which are well distinguished from the end user's point of view (presenting and visualizing data via aggregation and summarization [data warehousing: data cubes, dashboards, etc.] and learning from data [data mining techniques]); the organization of data sources, in particular the distinction between master data and operational data; the extract-transform-load (ETL) process; and informing vs. misinforming, including the issue of over-trusting vs. under-trusting analytical results.
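The ETL process named in the curriculum can be shown with a minimal, stdlib-only sketch, assuming invented field names and toy data rather than any real course material: extract raw rows, transform them (type coercion and validation), and load them into an aggregated summary of the kind a warehouse dashboard would present.

```python
# Minimal illustrative ETL sketch (stdlib only; data and field names invented):
# extract raw CSV rows, transform (type conversion + filtering of bad rows),
# and load the cleaned records into an aggregated summary table.
import csv
import io
from collections import defaultdict

raw = """region,amount
north,100
south,not_a_number
north,50
"""

# Extract: parse raw rows from the source
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: coerce types and drop malformed records
clean = []
for r in rows:
    try:
        clean.append({"region": r["region"], "amount": float(r["amount"])})
    except ValueError:
        continue  # skip rows that fail validation

# Load: aggregate into a summary table (region -> total amount)
summary = defaultdict(float)
for r in clean:
    summary[r["region"]] += r["amount"]

print(dict(summary))  # -> {'north': 150.0}
```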

