Collaboration between Government and Research Community to Respond to COVID-19: Israel’s Case

2021 · Vol 7 (4) · pp. 208
Author(s):  
Mor Peleg ◽  
Amnon Reichman ◽  
Sivan Shachar ◽  
Tamir Gadot ◽  
Meytal Avgil Tsadok ◽  
...  

Triggered by the COVID-19 crisis, Israel’s Ministry of Health (MoH) held a virtual datathon based on deidentified governmental data. Organized by a multidisciplinary committee, Israel’s research community was invited to offer insights to help solve COVID-19 policy challenges. The Datathon was designed to develop operationalizable data-driven models to address COVID-19 health policy challenges. Specific relevant challenges were defined, and diverse, reliable, up-to-date, deidentified governmental datasets were extracted and tested. Secure remote-access research environments were established. Registration was open to all citizens. Around a third of the applicants were accepted, and they were grouped into teams that balanced areas of expertise and represented all sectors of the community. Anonymous surveys were distributed to participants and mentors to assess usefulness, points for improvement, and retention for future datathons. The Datathon included 18 multidisciplinary teams, mentored by 20 data scientists, 6 epidemiologists, 5 presentation mentors, and 12 judges. The insights developed by the three winning teams are currently being considered by the MoH as potential data science methods relevant to national policies. Based on participants’ feedback, the process for future data-driven regulatory responses to health crises was improved. Participants expressed increased trust in the MoH and readiness to work with the government on these or future projects.

RSC Advances · 2016 · Vol 6 (37) · pp. 30928-30936
Author(s):  
Hugh F. Wilson ◽  
Amanda S. Barnard

We demonstrate an approach that uses data science methods to search for high-stability atomic structures in ab initio simulation, via the analysis of a large set of candidate structures.
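As a rough illustration of this kind of data-driven screening (not the authors’ actual pipeline, whose details are not given in the abstract), one common step is to rank a set of relaxed candidate structures by relative energy per atom and shortlist those inside a stability window. The column names, toy energies, and the 50 meV/atom window below are assumptions for demonstration only.

```python
# Hypothetical screening sketch: shortlist candidate structures by relative
# energy per atom. All values and names here are illustrative assumptions.
import pandas as pd

candidates = pd.DataFrame({
    "structure_id": ["s001", "s002", "s003", "s004"],
    "energy_per_atom_eV": [-5.412, -5.388, -5.401, -5.250],  # from ab initio relaxations
})

e_min = candidates["energy_per_atom_eV"].min()
candidates["delta_e_meV"] = (candidates["energy_per_atom_eV"] - e_min) * 1000.0

# Keep structures within 50 meV/atom of the lowest-energy candidate.
stable = candidates[candidates["delta_e_meV"] <= 50.0].sort_values("delta_e_meV")
print(stable[["structure_id", "delta_e_meV"]])
```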


2020
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science applied to emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains span a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.
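To make the surveyed model class concrete, the sketch below shows one typical “hybrid” architecture of the kind the review groups together: a convolutional front end feeding an LSTM for next-step forecasting of a price series. The layer sizes and the synthetic input are assumptions for illustration, not a model taken from any of the surveyed papers.

```python
# Illustrative CNN-LSTM hybrid for one-step-ahead forecasting (PyTorch).
# Sizes and data are arbitrary; this only demonstrates the architecture pattern.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, channels: int = 16, hidden: int = 32):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=3, padding=1)  # local pattern extractor
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)       # longer-range temporal dependencies
        self.head = nn.Linear(hidden, 1)                              # next-step forecast

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, 1) -> conv over the time axis -> LSTM -> last hidden state
        z = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, channels, seq_len)
        out, _ = self.lstm(z.transpose(1, 2))          # (batch, seq_len, hidden)
        return self.head(out[:, -1, :])                # (batch, 1)

model = CNNLSTM()
window = torch.randn(8, 30, 1)    # 8 synthetic 30-step price windows
print(model(window).shape)        # torch.Size([8, 1])
```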


Information · 2021 · Vol 12 (5) · pp. 204
Author(s):  
Charlyn Villavicencio ◽  
Julio Jerison Macrohon ◽  
X. Alphonse Inbaraj ◽  
Jyh-Horng Jeng ◽  
Jer-Guang Hsieh

A year into the COVID-19 pandemic and one of the longest recorded lockdowns in the world, the Philippines received its first delivery of COVID-19 vaccines on 1 March 2021 through WHO’s COVAX initiative. A month into the inoculation of frontline health professionals and other priority groups, the authors of this study gathered data on the sentiment of Filipinos regarding the Philippine government’s efforts, using the social networking site Twitter. Natural language processing techniques were applied to understand the general sentiment, which can help the government analyze its response. The sentiments were annotated and used to train a Naïve Bayes model that classifies English and Filipino tweets into positive, neutral, and negative polarities, implemented in the RapidMiner data science software. The results yielded an accuracy of 81.77%, which exceeds the accuracy of recent sentiment analysis studies using Twitter data from the Philippines.
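For readers who want to reproduce the general workflow, the sketch below trains a Naïve Bayes sentiment classifier over bag-of-words features in Python with scikit-learn; the study itself used RapidMiner, and the toy tweets and labels here are invented, not the annotated dataset.

```python
# Analogous Naive Bayes sentiment pipeline (the study used RapidMiner, not scikit-learn).
# The three example tweets and their labels are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "Vaccine rollout is going well, thank you frontliners",
    "Still waiting for my schedule, no update yet",
    "Long lines and no vaccines at the site today",
]
labels = ["positive", "neutral", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, labels)

print(model.predict(["thank you for the quick vaccination today"]))  # likely ['positive']
```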


2021 · Vol 8 (1)
Author(s):  
Syed Iftikhar Hussain Shah ◽  
Vassilios Peristeras ◽  
Ioannis Magnisalis

The public sector, private firms, the business community, and civil society are generating data that is high in volume, veracity, and velocity and comes from a diversity of sources. This kind of data is known as big data. Public Administrations (PAs) pursue big data as the “new oil” and implement data-centric policies to transform data into knowledge, to promote good governance, transparency, innovative digital services, and citizens’ engagement in public policy. From the above, the Government Big Data Ecosystem (GBDE) emerges. Managing big data throughout its lifecycle becomes a challenging task for governmental organizations. Despite the vast interest in this ecosystem, appropriate big data management is still a challenge. This study intends to fill this gap by proposing a data lifecycle framework for data-driven governments. Through a systematic literature review, we identified and analysed 76 data lifecycle models and used them to propose a data lifecycle framework for data-driven governments (DaliF). In this way, we contribute to the ongoing discussion around big data management, which attracts researchers’ and practitioners’ interest.


2021 · Vol 11 (7) · pp. 3110
Author(s):  
Karina Gibert ◽  
Xavier Angerri

In this paper, the results of the INSESS-COVID19 project are presented, part of a special call to help address the COVID-19 crisis in Catalonia. The technological infrastructure and methodology developed in this project allow quick screening of a territory for a rapid and reliable diagnosis of an unexpected situation, providing relevant information to support informed decision-making and strategy and policy design. One of the challenges of the project was to extract valuable information from direct participatory processes in which specific target profiles of citizens are consulted, and to distribute the participation across the whole territory. Having many variables and a moderate number of citizens involved (in this case about 1000) implies a risk of violating statistical secrecy when multivariate relationships are analyzed, putting at risk the anonymity of the participants as well as their safety when vulnerable populations are involved, as is the case in INSESS-COVID19. In this paper, the entire data-driven methodology developed in the project is presented, and the handling of small population subgroups to preserve statistical secrecy is described. The methodology is reusable with any other underlying questionnaire, since the data science and reporting parts are fully automated.
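One standard building block for the statistical-secrecy handling described above is primary cell suppression: any cross-tabulated cell whose count falls below a disclosure threshold is withheld before reporting. The sketch below is a generic illustration of that idea, not the INSESS-COVID19 procedure; the threshold, column names, and toy data are assumptions.

```python
# Generic small-cell suppression sketch (not the project's actual method).
import pandas as pd

K_MIN = 2  # disclosure threshold; real deployments often use 3-5, kept small for this toy table

def safe_crosstab(df: pd.DataFrame, row_var: str, col_var: str) -> pd.DataFrame:
    """Cross-tabulate two variables and suppress cells below the threshold."""
    table = pd.crosstab(df[row_var], df[col_var])
    return table.where(table >= K_MIN, other="suppressed")

responses = pd.DataFrame({
    "municipality": ["A", "A", "A", "B", "B", "C"],
    "vulnerability": ["high", "high", "low", "low", "low", "high"],
})
print(safe_crosstab(responses, "municipality", "vulnerability"))
```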


2021 · Vol ahead-of-print (ahead-of-print)
Author(s):  
Huimin Li ◽  
Limin Su ◽  
Jian Zuo ◽  
Xiaowei An ◽  
Guanghua Dong ◽  
...  

Purpose: Unbalanced bidding can seriously prevent the government from obtaining the best value for taxpayers’ money in public procurement, since it increases the owner’s cost and decreases the fairness of the competitive bidding process. Detecting an unbalanced bid is a challenging task faced by researchers and practitioners alike. This study aims to develop an identification method for unbalanced bidding in the construction industry.
Design/methodology/approach: The identification of unbalanced bidding is treated as a multi-criteria decision-making (MCDM) problem. A data-driven unit price database built from historical bidding documents provides reference unit prices as benchmarks. In the proposed extended TOPSIS method, the data-driven unit price is chosen as the positive ideal solution, and the unit price with the furthest absolute distance from it as the negative ideal solution. The concept of relative distance is introduced to measure the distances between each bidding unit price and the positive and negative ideal solutions. The degree of unbalanced bidding is ranked by means of this relative distance.
Findings: The proposed model can be used for the quantitative evaluation of unbalanced bidding from a decision-making perspective. The identification process follows the decision-making process. The findings show that the model supports owners in efficiently and effectively identifying unbalanced bidding at the bid evaluation stage.
Originality/value: The data-driven reference unit prices improve the accuracy of the benchmark used to evaluate unbalanced bidding. The extended TOPSIS model is applied to identify unbalanced bidding, so owners can make objective decisions to identify and prevent unbalanced bidding at the procurement stage.
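The sketch below gives one plausible reading of the relative-distance idea (the paper’s exact formulas and normalization may differ): the data-driven reference unit price per item serves as the positive ideal, the submitted price farthest from it as the negative ideal, and each bidder’s unbalance degree is the relative distance between the two. All prices are invented for illustration.

```python
# Hedged sketch of an extended-TOPSIS-style unbalance score; not the paper's exact model.
import numpy as np

reference = np.array([100.0, 50.0, 80.0])            # data-driven reference unit prices per bill item
bids = {
    "bidder_A": np.array([101.0, 52.0, 79.0]),        # close to the reference -> balanced
    "bidder_B": np.array([140.0, 20.0, 95.0]),        # front-loaded -> likely unbalanced
}

# Negative ideal per item: the submitted unit price farthest from the reference.
all_bids = np.vstack(list(bids.values()))
farthest = np.argmax(np.abs(all_bids - reference), axis=0)
neg_ideal = all_bids[farthest, np.arange(reference.size)]

for name, prices in bids.items():
    d_pos = np.linalg.norm(prices - reference)        # distance to the positive ideal
    d_neg = np.linalg.norm(prices - neg_ideal)        # distance to the negative ideal
    unbalance = d_pos / (d_pos + d_neg + 1e-9)        # 0 = balanced, 1 = most unbalanced
    print(f"{name}: unbalance degree = {unbalance:.3f}")
```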


2021
Author(s):  
MUTHU RAM ELENCHEZHIAN ◽  
VAMSEE VADLAMUDI ◽  
RASSEL RAIHAN ◽  
KENNETH REIFSNIDER

Our community has widespread knowledge of the damage tolerance and durability of composites, developed over the past few decades through various experimental and computational efforts. Several methods have been used to understand damage behavior and hence predict material states such as residual strength (damage tolerance) and life (durability) of these material systems. Electrochemical Impedance Spectroscopy (EIS) and Broadband Dielectric Spectroscopy (BbDS) are such methods, which have been proven to identify damage states in composites. Our previous work using the BbDS method has proven to serve as a precursor for identifying damage levels, indicating the onset of the end of life of the material. As a change in a material state variable is triggered by damage development, the rate of change of these states indicates the rate of damage interaction and can effectively predict impending failure. The Data-Driven Discovery of Models (D3M) [1] program aims to develop model discovery systems, enabling users with domain knowledge but no data science background to create empirical models of real, complex processes. These D3M methods have been developed extensively over the years across various applications, and their use for real-time prediction of complex parameters such as material states in composites needs to be grounded in physics and domain knowledge to be trusted. In this research work, we propose the use of data-driven methods combined with BbDS and progressive damage analysis to identify and hence predict material states in composites subjected to fatigue loads.
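As a toy illustration of the rate-of-change argument above (not the BbDS experiments themselves), the sketch below tracks a synthetic dielectric state variable over fatigue cycles and raises an alert once its rate of change exceeds a multiple of the early-life rate; the synthetic curve and the factor of 5 are assumptions.

```python
# Synthetic rate-of-change alert on a material state variable; values are illustrative only.
import numpy as np

cycles = np.arange(0, 10000, 500, dtype=float)
state = 1.0 + 2e-5 * cycles + 1e-12 * cycles**3      # synthetic dielectric state variable vs. fatigue cycles

rate = np.gradient(state, cycles)                    # d(state)/d(cycle)
baseline_rate = np.median(rate[: rate.size // 2])    # "healthy" rate from the first half of life
threshold = 5 * baseline_rate                        # alert factor (assumed)

alerts = cycles[rate > threshold]
if alerts.size:
    print(f"Rate-of-change alert first raised at cycle {int(alerts[0])}")
```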


Author(s):  
Ihor Ponomarenko ◽  
Oleksandra Lubkovska

The subject of the research is the possibility of using data science methods in the field of health care for integrated data processing and analysis in order to optimize economic and specialized processes. The purpose of this article is to address issues related to the specifics of using data science methods in health care on the basis of comprehensive information obtained from various sources. Methodology. The research methodology comprises system-structural and comparative analyses (to study the application of BI systems when working with large data sets); the monographic method (to study various software solutions on the business intelligence market); and economic analysis (to assess the possibility of using business intelligence systems to strengthen the competitive position of companies). The scientific novelty lies in identifying the main sources of data on key processes in the medical field. Examples of innovative methods of collecting information in health care, which are becoming widespread in the context of digitalization, are presented. The main sources of health care data used in data science are revealed. The specifics of applying machine learning methods in health care under conditions of increasing competition between market participants and growing demand for relevant products from the population are presented. Conclusions. The intensification of the integration of data science into the medical field is due to the increase in digitized data (statistics, textual information, visualizations, etc.). Through the use of machine learning methods, doctors and other health professionals gain new opportunities to improve the efficiency of the health care system as a whole. Key words: data science, efficiency, information, machine learning, medicine, Python, healthcare.


2021
Author(s):  
Karen Triep ◽  
Alexander Benedikt Leichtle ◽  
Martin Meister ◽  
Georg Martin Fiedler ◽  
Olga Endrich

BACKGROUND: The criteria for the diagnosis of kidney disease outlined in the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines are based on a patient’s current, historical, and baseline data. The diagnosis of acute (AKI), chronic (CKD), and acute-on-chronic kidney disease requires past measurements of creatinine, back-calculation, and the interpretation of several laboratory values over a certain period. Diagnosis may be hindered by an unclear definition of the individual creatinine baseline and by rough ranges of normal values set without adjustment for age, ethnicity, comorbidities, and treatment. Correct classification of the diagnosis and sufficient staging improve coding, data quality, reimbursement, the choice of therapeutic approach, and the patient’s outcome.
OBJECTIVE: With the help of a complex rule engine, a data-driven approach is applied to assign the diagnoses of acute, chronic, and acute-on-chronic kidney disease.
METHODS: Real-time and retrospective data from the hospital’s Clinical Data Warehouse, covering inpatient and outpatient cases treated between 2014 and 2019, are used. Delta serum creatinine, baseline values, and admission and discharge data are analyzed. A KDIGO-based SQL algorithm applies specific ICD diagnosis codes to inpatient stays. To measure the effect on diagnosis, text mining of discharge documentation is conducted.
RESULTS: We show that this approach yields an increased number of diagnoses as well as higher precision in documentation and coding (the unspecific diagnosis ICD N19* as a percentage of generated N19 codes fell from 17.8 in 2016 to 3.3 in 2019).
CONCLUSIONS: Our data-driven method supports the process and reliability of diagnosis and staging and improves the quality of documentation and data. Measuring patients’ outcomes will be the next step of the project.
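For orientation, the sketch below encodes the two creatinine-based KDIGO AKI triggers in Python (a rise of at least 0.3 mg/dL within 48 hours, or a value of at least 1.5 times the individual baseline); the hospital’s actual implementation is an SQL rule engine over the Clinical Data Warehouse, and the data layout, baseline handling, and example values here are assumptions.

```python
# Hedged sketch of the creatinine-based KDIGO AKI criteria; not the hospital's SQL rule engine.
from datetime import datetime, timedelta

def flags_aki(measurements, baseline):
    """measurements: chronologically sorted (timestamp, serum creatinine in mg/dL) pairs."""
    for i, (t_i, crea_i) in enumerate(measurements):
        # Criterion: creatinine >= 1.5 x individual baseline
        # (the KDIGO 7-day window for this rise is omitted here for brevity).
        if crea_i >= 1.5 * baseline:
            return True
        # Criterion: absolute rise >= 0.3 mg/dL within any 48-hour window.
        for t_j, crea_j in measurements[:i]:
            if t_i - t_j <= timedelta(hours=48) and crea_i - crea_j >= 0.3:
                return True
    return False

series = [
    (datetime(2019, 5, 1, 8), 0.9),
    (datetime(2019, 5, 2, 8), 1.1),
    (datetime(2019, 5, 3, 8), 1.5),
]
print(flags_aki(series, baseline=0.9))  # True: +0.4 mg/dL within 48 h and 1.5 x baseline reached
```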


2021 · pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.

