Big Data, Semantics, and Policy-Making

Author(s):  
Lamyaa El Bassiti

At the heart of all policy design and implementation lies the need to understand how well decisions are made. The quality of decision making depends significantly on the quality of the analyses and advice provided to the actors involved. For decades, organizations have been diligent in gathering and processing vast amounts of data, but they have placed less emphasis on how these data can be used in policy argument. With the arrival of big data, attention has turned to whether it can be used to inform policy-making. This chapter aims to bridge this gap: to understand how big data could yield usable evidence, and how policymakers can make better use of that evidence in policy choices. It presents an integrated and holistic look at how complex problems could be solved on the basis of semantic technologies and big data.

2018 ◽  
Vol 18 (03) ◽  
pp. e23 ◽  
Author(s):  
María José Basgall ◽  
Waldo Hasperué ◽  
Marcelo Naiouf ◽  
Alberto Fernández ◽  
Francisco Herrera

The volume of data in today's applications has meant a change in the way Machine Learning issues are addressed. Indeed, the Big Data scenario involves scalability constraints that can only be met through intelligent model design and the use of distributed technologies. In this context, solutions based on the Spark platform have established themselves as a de facto standard. In this contribution, we focus on a very important task within Big Data analytics, namely classification with imbalanced datasets. The main characteristic of this problem is that one of the classes is underrepresented, and it is therefore usually more complex to find a model that identifies it correctly. For this reason, it is common to apply preprocessing techniques such as oversampling to balance the distribution of examples across classes. In this work we present SMOTE-BD, a fully scalable preprocessing approach for imbalanced classification in Big Data. It is based on one of the most widespread preprocessing solutions for imbalanced classification, namely the SMOTE algorithm, which creates new synthetic instances according to the neighborhood of each example of the minority class. The novelty of our development is that it is independent of the number of partitions or processes created, which allows a higher degree of efficiency. Experiments conducted on different standard and Big Data datasets show the quality of the proposed design and implementation.
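The interpolation step the abstract describes can be sketched in a few lines. This is a minimal single-node illustration of the basic SMOTE idea (pick a minority point, find its k nearest minority neighbours, interpolate), not the distributed SMOTE-BD implementation of the paper; the function name and parameters are illustrative.

```python
import math
import random

def smote_sketch(minority, n_synthetic, k=3, seed=0):
    """Generate n_synthetic points by interpolating between each chosen
    minority example and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: math.dist(x, p),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        # New point lies on the segment between x and its neighbour
        synthetic.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote_sketch(minority, n_synthetic=4)
```

Because each synthetic point is a convex combination of two minority examples, it always falls inside the region spanned by the minority class, which is what makes the technique safer than naive duplication.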


Web Services ◽  
2019 ◽  
pp. 1430-1443
Author(s):  
Louise Leenen ◽  
Thomas Meyer

Governments, military forces and other organisations responsible for cybersecurity deal with vast amounts of data that must be understood in order to support intelligent decision making. Given the volume of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and to understand the significance of detected patterns, is essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by providing support for the processing and understanding of the huge amounts of information in the cyber environment. The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technology is a knowledge representation paradigm in which the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules, which are generally not sophisticated enough to deal with the complexity of the decisions that must be made. Incorporating semantic information allows for increased understanding and sophistication in cyber defence systems. This paper argues that both big data analytics and semantic technologies are necessary to provide countermeasures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.
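The contrast the abstract draws between syntactic pattern matching and semantic reasoning can be illustrated with a toy example: facts stored as subject-predicate-object triples (the representation used by semantic technologies such as RDF), plus one inference rule that derives a conclusion no single raw log line contains. All identifiers here (hosts, predicates, the blocklist) are invented for illustration.

```python
# Facts as subject-predicate-object triples; meaning lives in the
# predicates, not in the surface form of any one log line.
facts = {
    ("host42", "communicates_with", "ip_203"),
    ("ip_203", "listed_in", "botnet_blocklist"),
    ("host17", "communicates_with", "ip_150"),
}

def infer_flags(facts):
    """Rule: if X communicates with Y and Y is on the blocklist,
    derive that X is a possible botnet member."""
    derived = set()
    for (s, p, o) in facts:
        if p == "communicates_with" and (o, "listed_in", "botnet_blocklist") in facts:
            derived.add((s, "flagged_as", "possible_botnet_member"))
    return derived

flags = infer_flags(facts)
```

No syntactic rule over either fact in isolation flags `host42`; the conclusion only follows from combining the two triples, which is the kind of sophistication the paper argues semantic approaches add.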


Author(s):  
Nerea Almeda ◽  
Carlos R. García-Alonso ◽  
José A. Salinas-Pérez ◽  
Mencía R. Gutiérrez-Colosía ◽  
Luis Salvador-Carulla

Mental health services and systems (MHSS) are characterized by their complexity. Causal modelling is a tool for decision-making based on identifying critical variables and their causal relationships. In the last two decades, great efforts have been made to provide integrated and balanced mental health care, but there is no clear systematization of causal links among MHSS variables. This study aims to review the empirical background of causal modelling applications (Bayesian networks and structural equation modelling) for MHSS management. The study followed the PRISMA guidelines (PROSPERO: CRD42018102518). The quality of the studies was assessed using a new checklist based on MHSS structure, target population, resources, outcomes, and methodology. Seven out of 1847 studies fulfilled the inclusion criteria. The selected papers showed very different objectives and subjects of study, which suggests that causal modelling has the potential to be broadly relevant for decision-making. The main findings provided information about the complexity of the analyzed systems, distinguishing whether a single MHSS or a group of MHSSs was analyzed. The discriminative power of the checklist for quality assessment was evaluated, with positive results. This review identified relevant strategies for policy-making. Causal modelling can be used to better understand MHSS behavior, identify service performance factors, and improve evidence-informed policy-making.
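To make the Bayesian-network idea concrete, here is a toy two-node network, with all variable names and probabilities invented for illustration (they are not taken from the review): availability of community care influences the probability of hospital readmission, and the network can be queried in both causal and diagnostic directions by direct enumeration.

```python
# Toy Bayesian network: CommunityCare -> Readmission.
p_care = 0.6                           # P(community care available)
p_readmit = {True: 0.20, False: 0.45}  # P(readmission | care availability)

# Causal query: marginal probability of readmission, summing over
# both states of the parent variable.
p_r = p_care * p_readmit[True] + (1 - p_care) * p_readmit[False]

# Diagnostic query via Bayes' rule: given that a readmission occurred,
# how likely was community care available?
p_care_given_r = p_care * p_readmit[True] / p_r
```

Even this two-node sketch shows what makes the approach attractive for MHSS management: the same fitted model answers both "what happens if we change care availability?" and "what does an observed outcome tell us about the system?".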


2019 ◽  
Vol 19 (149) ◽  
pp. 1
Author(s):  

A technical assistance (TA) mission was conducted by the IMF's Regional Technical Assistance Center for Southern Africa (AFS) during February 25–March 8, 2019 to assist Statistics Botswana (SB) in improving the quality of the national accounts statistics. This was the first mission on national accounts conducted by AFS to SB since January 2015. Reliable national accounts are essential for informed economic policy-making by the authorities. They also provide the private sector, foreign investors, rating agencies, donors and the public in general with important inputs for their decision-making, while informing economic analysis and IMF surveillance. Rebasing the national accounts is recommended every five years; rebasing exercises require comprehensive surveys and, ideally, Supply and Use Tables (SUTs) to support coherence checking of the data.
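The coherence checking that SUTs support rests on an accounting identity: for each product, total supply (output plus imports, margins and taxes) must equal total use (intermediate plus final use). A minimal sketch of such a balance check, with invented product codes and figures, might look like this:

```python
# Total supply and total use by product, in the same currency units.
# Figures are illustrative only.
supply = {"maize": 120.0, "textiles": 80.0}
use    = {"maize": 118.5, "textiles": 80.0}

def imbalances(supply, use, tolerance=0.0):
    """Return the supply-minus-use discrepancy for every product
    where the identity fails beyond the given tolerance."""
    return {p: supply[p] - use[p]
            for p in supply
            if abs(supply[p] - use[p]) > tolerance}

gaps = imbalances(supply, use)
```

In practice a compiler would investigate each nonzero gap (here, `maize` is out of balance by 1.5) and trace it back to the source surveys, which is exactly the data-quality discipline the mission recommends.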


Author(s):  
Maryam Ebrahimi

Big Data is transforming industries such as healthcare, financial services and banking, insurance, pharmacy, and telecommunication. Big Data concerns datasets that are not only large, but also high in variety and velocity, which makes them difficult to manage with traditional tools and techniques. Big Data brings a multitude of benefits to industries, including improved marketing and sales, fraud detection, competitive advantage, risk reduction, and, ultimately, better decision making and policy making. Due to the rapid growth of such data, methodologies and conceptual architectures need to be studied and provided in order to handle these data and extract value and knowledge from them. The purpose of this chapter is to study Big Data benefits, characteristics, methodologies, and conceptual architectures in five different industries. Finally, based on these studies, a comprehensive methodology and architecture are proposed that might be applicable in the service sector, with improved public policy as one of the potential outcomes.


Web Services ◽  
2019 ◽  
pp. 185-203
Author(s):  
Maryam Ebrahimi



Author(s):  
Yazeed Alkatheeri ◽  
Ali Ameen ◽  
Osama Isaac ◽  
Mohammed Nusari ◽  
Balaganesh Duraisamy ◽  
...  
Keyword(s):  
Big Data ◽  

Author(s):  
Graciela Bensusan ◽  
Ilan Bizberg

This chapter analyses two public policy cases: the most recent labour and educational reforms in Mexico. It focuses on these two cases because they show the interplay and decision making of social and political actors framed within a corporatist arrangement, and the consequences for the design and implementation of public policies. The chapter is organized as follows. First, it presents an analysis of each of the abovementioned public policies and their institutional changes. It then studies the political processes through which the decisions were made, taking into account what was at stake, the actors involved, the scenarios and the rules of the game. Finally, it discusses the way in which all these factors influenced the quality of policy analysis, the implementation of the policies adopted, and the manner in which corporatism diminished their effectiveness, credibility and permanence.


2014 ◽  
Vol 1 (1) ◽  
pp. 137-143 ◽  
Author(s):  
Crystal C. Hall ◽  
Martha M. Galvez ◽  
Isaac M. Sederbaum

Assumptions about decision making and consumer preferences guide programs and products intended to help low-income households achieve healthy outcomes and financial stability. Despite their importance to service design and implementation, these assumptions are rarely stated explicitly or empirically tested. Some key assumptions may reflect ideas carried over from an earlier era of social-service delivery, or research on decision making by higher-income populations that does not hold, or has not been tested, in a low-income context. This disconnect between assumptions and evidence potentially results in less effective policy design and implementation, at substantial financial and social cost. This piece examines how insights from psychology can help policymakers analyze the core assumptions about behavior that underlie policy outcomes. Three policy areas serve as case studies to examine implicit and explicit assumptions about how low-income individuals make decisions under public and nonprofit assistance: banking, nutrition, and housing. Research on preferences and decision making is used to evaluate these foundational assumptions. This perspective provides a unique and under-utilized framework to explain some behavioral puzzles, examine and predict the actions of individuals living in poverty, and understand what are often disappointing program outcomes. Recommendations suggest how psychology and behavioral decision making can inform policy research and design.


2014 ◽  
Vol 2 (2) ◽  
pp. 57-71 ◽  
Author(s):  
Michael Howlett ◽  
Ishani Mukherjee

Public policies are the result of efforts made by governments to alter aspects of behaviour—both that of their own agents and of society at large—in order to carry out some end or purpose. They are comprised of arrangements of policy goals and policy means matched through some decision-making process. These policy-making efforts can be more, or less, systematic in attempting to match ends and means in a logical fashion or can result from much less systematic processes. “Policy design” implies a knowledge-based process in which the choice of means or mechanisms through which policy goals are given effect follows a logical process of inference from known or learned relationships between means and outcomes. This includes both design in which means are selected in accordance with experience and knowledge and that in which principles and relationships are incorrectly or only partially articulated or understood. Policy decisions can be careful and deliberate in attempting to best resolve a problem or can be highly contingent and driven by situational logics. Decisions stemming from bargaining or opportunism can also be distinguished from those which result from careful analysis and assessment. This article considers both modes and formulates a spectrum of policy formulation types between “design” and “non-design” which helps clarify the nature of each type and the likelihood of each unfolding.

