Black Boxes and the Role of Modeling in Environmental Policy Making

2021 ◽  
Vol 9 ◽  
Author(s):  
Eduardo Eiji Maeda ◽  
Päivi Haapasaari ◽  
Inari Helle ◽  
Annukka Lehikoinen ◽  
Alexey Voinov ◽  
...  

Modeling is essential for modern science, and science-based policies are directly affected by the reliability of model outputs. Artificial intelligence has improved the accuracy and capability of model simulations, but often at the expense of a rational understanding of the systems involved. The lack of transparency in black box models, including those based on artificial intelligence, can potentially erode trust in science-driven policy making. Here, we suggest that a broader discussion is needed to address the implications of black box approaches for the reliability of scientific advice used in policy making. We argue that participatory methods can bridge the gap between increasingly complex scientific methods and the people affected by their interpretations.

Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 102
Author(s):  
Mohammad Reza Davahli ◽  
Waldemar Karwowski ◽  
Krzysztof Fiok ◽  
Thomas Wan ◽  
Hamid R. Parsaei

In response to the need to address the safety challenges in the use of artificial intelligence (AI), this research aimed to develop a framework for a safety controlling system (SCS) to address the AI black-box mystery in the healthcare industry. The main objective was to propose safety guidelines for implementing AI black-box models to reduce the risk of potential healthcare-related incidents and accidents. The system was developed by adopting the multi-attribute value theory (MAVT) approach, which comprises four symmetrical parts: extracting attributes, generating weights for the attributes, developing a rating scale, and finalizing the system. On the basis of the MAVT approach, three layers of attributes were created. The first level contained six key dimensions, the second level included 14 attributes, and the third level comprised 78 attributes. The key first-level dimensions of the SCS included safety policies, incentives for clinicians, clinician and patient training, communication and interaction, planning of actions, and control of such actions. The proposed system may provide a basis for detecting AI utilization risks, preventing incidents from occurring, and developing emergency plans for AI-related risks. This approach could also guide and control the implementation of AI systems in the healthcare industry.
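The MAVT-style aggregation described in the abstract (attributes, weights, a rating scale, a final score) can be sketched as a weighted additive value model. The dimension names below follow the abstract's first level, but the weights and ratings are purely illustrative assumptions, not values from the paper:

```python
# Minimal sketch of a weighted additive value model (MAVT-style).
# Weights and ratings are illustrative, not the paper's actual values.

def mavt_score(ratings, weights):
    """Aggregate: sum over attributes of weight * rating (ratings on a 0-1 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[a] * ratings[a] for a in weights)

# First-level SCS dimensions from the abstract, with made-up weights
weights = {
    "safety_policies": 0.25,
    "clinician_incentives": 0.10,
    "training": 0.20,
    "communication": 0.15,
    "action_planning": 0.15,
    "action_control": 0.15,
}
# Hypothetical ratings for one AI deployment under review
ratings = {
    "safety_policies": 0.8,
    "clinician_incentives": 0.5,
    "training": 0.7,
    "communication": 0.6,
    "action_planning": 0.9,
    "action_control": 0.4,
}

print(round(mavt_score(ratings, weights), 3))  # → 0.675
```

In the paper's full system the same aggregation would run over three layers (6, 14, and 78 attributes), with each layer's scores feeding the one above.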


2001 ◽  
Vol 1 (3) ◽  
pp. 11-30 ◽  
Author(s):  
Thomas Princen

If social scientists are going to make a contribution to environmental policy-making that is commensurate with the severity of biophysical trends, they must develop analytic tools that go beyond marginal improvement and a production focus where key actors escape responsibility via distanced commerce and the black box of consumer sovereignty. One means is to construct an ecologically informed “consumption angle” on economic activity. The first approach is to retain the prevailing supply-demand dichotomy and address the externalities of consumption and the role of power in consuming. The second approach is to construe all economic activity as “consuming,” as “using up.” This approach construes material provisioning in the context of hunter/gathering, cultivation, and manufacture and then develops three interpretive layers of excess consumption: background consumption, overconsumption, and misconsumption. An example from timbering illustrates how, by going up and down the decision chain, the consumption angle generates questions about what is consumed and what is put at risk. Explicit assignment of responsibility for excess throughput becomes more likely.


2021 ◽  
Author(s):  
J. Eric T. Taylor ◽  
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.
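The "experimenting with machines" idea, treating an opaque model as an experimental subject, can be sketched as a one-factor-at-a-time design: hold a baseline stimulus fixed, vary one input factor across levels, and record the model's responses. The stand-in model, baseline, and levels below are all illustrative assumptions:

```python
# Sketch: probing a black box with a controlled experiment.
# The experimenter only observes inputs and outputs, never the internals.

def black_box(x):
    # Stand-in for an opaque model; its rule is hidden from the experimenter.
    return 1 if (0.3 * x[0] + 0.7 * x[1]) > 0.5 else 0

def run_experiment(model, factor, levels, baseline):
    """Vary one input factor across levels while holding the others fixed."""
    responses = []
    for level in levels:
        stimulus = list(baseline)
        stimulus[factor] = level
        responses.append((level, model(stimulus)))
    return responses

baseline = [0.2, 0.6]
levels = [0.0, 0.25, 0.5, 0.75, 1.0]
for factor in (0, 1):
    print(factor, run_experiment(black_box, factor, levels, baseline))
```

Comparing where the response flips for each factor reveals that the model is more sensitive to factor 1 than factor 0, an inference drawn purely from behavior, which is the experimental stance the paper advocates.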


Author(s):  
Kacper Sokol ◽  
Peter Flach

Understanding data, models and predictions is important for machine learning applications. Due to the limitations of our spatial perception and intuition, analysing high-dimensional data is inherently difficult. Furthermore, black-box models achieving high predictive accuracy are widely used, yet the logic behind their predictions is often opaque. Use of textualisation -- a natural language narrative of selected phenomena -- can tackle these shortcomings. When extended with argumentation theory we could envisage machine learning models and predictions arguing persuasively for their choices.


Author(s):  
Evren Daglarli

Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the internet of things (IoT) and cyber-physical systems (CPS), which are important components of Industry 4.0, are increasingly intensified. However, current deep learning models have important shortcomings. These artificial neural network based models are black boxes that generalize from the data transmitted to them and learn from those data. As a result, the relational link between input and output is not observable. For these reasons, serious efforts are needed on the explainability and interpretability of black box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems will affect a high level of virtualization and simulation infrastructure, real-time supply chains, cyber factories with smart machines communicating over the internet, maximization of production efficiency, and analysis of service quality and competition level.


2021 ◽  
pp. 279-292
Author(s):  
Sonam Tshering ◽  
Nima Dorji

This chapter reflects on Bhutan’s response to the Covid-19 pandemic. The people’s trust and confidence in the leadership of His Majesty the King, their government, strong Buddhist values to help each other, and the conscience of unity and solidarity proved their foremost strength in containing this pandemic as a nation. The king’s personal involvement helped guide, motivate, and encourage compliance with and support for the government’s response. However, Bhutan faced several challenges during the pandemic. Though most of the people are united, there are outliers who took advantage of the situation; there are reported cases of drug smuggling and one case of a person who escaped from quarantine. The government responded by increasing border patrols. In the long run, other solutions could be considered: installing a smart wall—using drones, sensors, and artificial intelligence patrols—would give Bhutan more control over its borders in the context of another epidemic while also enabling the government to better control smuggling.


2018 ◽  
Vol 14 (2) ◽  
pp. 165
Author(s):  
Mohammed Salman Tayie ◽  
Ibrahim Mohammad Dashti

Objective: The study discusses the role of parliament in foreign policy-making. The role of parliament has grown over time in light of the complexities and intertwined interests among countries and the emergence of globalization, which has changed the nature and reality of international relations. The need to deepen cooperation among the parliamentarians of different countries and to expand it into various fields has increased, and hence the role of parliament in foreign policy-making emerged, with public diplomacy acting alongside government diplomacy.

Method: The study is based on the institutional approach, one of the oldest methods used in political analysis. It stems from the study of political institutions in terms of composition and competencies. The institutional approach originates in the traditional constitutional school of the study of political systems, which conflated the concept of the state with the political system, the latter being seen as the system of government as defined by constitutional law, i.e. the set of rules and laws governing public authorities and defining their terms of reference and functions.

Results: The study found that parliamentary diplomacy has become a complement to official diplomacy, contributing side by side to defusing war crises and pursuing peaceful diplomatic solutions. Public diplomacy is a tributary of support for official diplomacy when the two are coordinated, and its value increases as the goals and orientations of the country's foreign policy truthfully express the values and aspirations of the people.

Conclusion: The study concluded that parliaments - especially in democratic systems - play an important role in the process of foreign policy-making, and that external and internal factors affect the effectiveness of parliament's role in foreign policy. The Kuwaiti parliamentary experience reflects this development of parliamentary diplomacy and its role in foreign policy-making.


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently. Many researchers are approaching the subject from different dimensions, and interesting results have come out. However, we are still at the beginning of the road to understanding these types of models. The forthcoming years are expected to be years in which the openness of deep learning models is discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These deep learning methods can yield highly effective results depending on the data set size, data set quality, the methods used in feature extraction, the hyperparameter set used in deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models still have important shortcomings. These artificial neural network-based models are black boxes that generalize from the data transmitted to them and learn from those data. Therefore, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, it is necessary to make serious efforts on the explainability and interpretability of black box models.


Author(s):  
Marco J. Nathan

Textbooks and other popular venues commonly present science as a progressive “brick-by-brick” accumulation of knowledge and facts. Despite its hallowed history and familiar ring, this depiction is nowadays rejected by most specialists. Then why are books and articles, written by these same experts, actively promoting such a distorted characterization? The short answer is that no better alternative is available. There are currently two competing models of the scientific enterprise: reductionism and antireductionism. Neither provides an accurate depiction of the productive interaction between knowledge and ignorance that could supplant the old metaphor of the “wall” of knowledge. This book explores an original conception of the nature and advancement of science. The proposed shift brings attention to a prominent, albeit often neglected, construct—the black box—which underlies a well-oiled technique for incorporating a productive role of ignorance and failure into the acquisition of empirical knowledge. What is a black box? How does it work? How is it constructed? How does one determine what to include and what to leave out? What role do boxes play in contemporary scientific practice? By detailing some fascinating episodes in the history of biology, psychology, and economics, Nathan revisits foundational questions about causation, explanation, emergence, and progress, showing how the insights of both reductionism and antireductionism can be reconciled into a fresh and exciting approach to science.


2020 ◽  
Vol 12 (12) ◽  
pp. 226
Author(s):  
Laith T. Khrais

The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalizing and customizing products to meet specific demands. This research finds that, despite the contribution of AI systems to e-commerce, their ethical soundness is a contentious issue, especially regarding the concept of explainability. The study used word cloud analysis, voyance analysis, and concordance analysis to gain a detailed understanding of how the idea of explainability has been used by researchers in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate explainable artificial intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggests that, to deploy XAI systems, ML models should be improved to make them interpretable and comprehensible.
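The concordance analysis the study applies to the term "explainability" amounts to a keyword-in-context (KWIC) listing: every occurrence of the keyword shown with a few words of surrounding context. A minimal sketch, using an illustrative two-sentence corpus rather than the study's data:

```python
# Minimal keyword-in-context (KWIC) concordance sketch.
# The corpus below is illustrative, not the study's actual corpus.

def concordance(text, keyword, window=3):
    """List each occurrence of keyword with `window` words of context on each side."""
    tokens = text.lower().split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok.strip(".,;") == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}")
    return lines

corpus = ("Explainability is central to trustworthy AI. "
          "Without explainability, black box decisions resist audit.")
for line in concordance(corpus, "explainability"):
    print(line)
```

Reading down such lines shows the collocates a term attracts ("trustworthy", "black box", "audit"), which is the kind of evidence the study's corpus analysis draws on.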

