AI and the Future of Cyber Competition

2021 ◽  
Author(s):  
Wyatt Hoffman

As states turn to AI to gain an edge in cyber competition, the cat-and-mouse game between cyber attackers and defenders will change. Embracing machine learning systems for cyber defense could drive more aggressive and destabilizing engagements between states. Wyatt Hoffman writes that cyber competition already has the ingredients needed for escalation to real-world violence, even if these ingredients have yet to come together under the right conditions.

2021 ◽  
Vol 17 (2) ◽  
pp. 1-20
Author(s):  
Zheng Wang ◽  
Qiao Wang ◽  
Tingzhang Zhao ◽  
Chaokun Wang ◽  
Xiaojun Ye

Feature selection, an effective technique for dimensionality reduction, plays an important role in many machine learning systems, and supervised knowledge can significantly improve its performance. However, as new concepts emerge rapidly, existing supervised methods can suffer from the scarcity and limited validity of labeled training data. In this paper, the authors study the problem of zero-shot feature selection, i.e., building a feature selection model that generalizes well to “unseen” concepts given limited training data for “seen” concepts. Specifically, they adopt class-semantic descriptions (i.e., attributes) as supervision for feature selection, so as to exploit the supervised knowledge transferred from the seen concepts. To obtain more reliable discriminative features, they further propose the center-characteristic loss, which encourages the selected features to capture the central characteristics of seen concepts. Extensive experiments on various real-world datasets demonstrate the effectiveness of the method.
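The abstract does not spell the loss out in closed form, but a minimal sketch of one plausible reading, a center-loss-style penalty computed over a soft feature-selection mask, might look like the following (the sigmoid mask parameterization, the loss form, and the toy shapes are illustrative assumptions, not the authors' implementation):

```python
import torch

def center_characteristic_loss(feats, labels, mask_logits, n_classes):
    """Hypothetical sketch: pull each sample's *selected* features toward
    the mean (center) of its seen class, so that the selected subset
    captures the class-central characteristics the abstract describes."""
    mask = torch.sigmoid(mask_logits)      # soft feature-selection mask in (0, 1)
    selected = feats * mask                # mask every sample's features
    loss = feats.new_zeros(())
    for c in range(n_classes):
        members = selected[labels == c]
        if len(members) == 0:
            continue
        center = members.mean(dim=0)       # class center in the selected subspace
        loss = loss + ((members - center) ** 2).sum(dim=1).mean()
    return loss / n_classes

# toy usage: 32 samples, 100 candidate features, 4 seen classes
feats = torch.randn(32, 100)
labels = torch.randint(0, 4, (32,))
mask_logits = torch.zeros(100, requires_grad=True)  # learned selection scores
center_characteristic_loss(feats, labels, mask_logits, n_classes=4).backward()
```

In the paper's full setting, a term of this kind would presumably be combined with the attribute-based supervision transferred from the seen concepts.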


2021 ◽  
Author(s):  
Chih-Kuan Yeh ◽  
Been Kim ◽  
Pradeep Ravikumar

Understanding complex machine learning models such as deep neural networks through explanations is crucial in many applications. Many explanations stem from the model's perspective and may not communicate why the model makes its predictions at the right level of abstraction. For example, importance weights on individual pixels can only express which parts of a particular image are important to the model, whereas humans may prefer an explanation framed in terms of concepts. In this work, we review the emerging area of concept-based explanations. We start by introducing concept explanations, including the class of Concept Activation Vectors (CAVs), which characterize concepts using vectors in appropriate spaces of neural activations, and discuss properties of useful concepts and approaches to measuring the usefulness of concept vectors. We then discuss approaches to automatically extract concepts and ways to address some of their caveats. Finally, we discuss case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
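As a concrete anchor for the CAV idea: a concept activation vector is typically obtained by training a linear probe to separate activations of concept examples from activations of random counterexamples and taking the normal to its decision boundary; concept sensitivity is then the directional derivative of a class logit along that vector. A minimal sketch with synthetic activations (the layer width, example counts, and stand-in gradient are assumptions for illustration, not a full TCAV pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Fit a linear probe separating concept vs. random activations;
    the CAV is the unit-normalized normal to its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_[0]
    return cav / np.linalg.norm(cav)

def concept_sensitivity(logit_grad, cav):
    """Directional derivative of a class logit along the CAV; positive
    values mean the concept pushes the prediction toward the class."""
    return float(np.dot(logit_grad, cav))

# toy usage with synthetic activations from a layer of width 512
rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(50, 512))  # e.g. "striped" images
random_acts = rng.normal(0.0, 1.0, size=(50, 512))   # random counterexamples
cav = compute_cav(concept_acts, random_acts)
logit_grad = rng.normal(size=512)  # stand-in for d(class logit)/d(activations)
print(concept_sensitivity(logit_grad, cav))
```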


Author(s):  
Jessica Taylor ◽  
Eliezer Yudkowsky ◽  
Patrick LaVictoire ◽  
Andrew Critch

This chapter surveys eight research areas organized around one question: as learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of their operators? The chapter focuses on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective function, and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even when the objective function does not line up perfectly with the intentions of the designers. The questions surveyed include the following: How can we train reinforcement learners to take actions that are more amenable to meaningful assessment by intelligent overseers? What kinds of objective functions incentivize a system to “not have an overly large impact” or “not have many side effects”? The chapter discusses these questions, related work, and potential directions for future research, with the goal of highlighting relevant research topics in machine learning that appear tractable today.
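As a toy illustration of the "low impact" question: one family of proposals in the literature subtracts from the task reward a penalty on how far the agent has moved the world from some baseline, such as the state that inaction would have produced. A minimal sketch (the vector state encoding, L1 distance, and penalty weight are illustrative assumptions, not a specific proposal from the chapter):

```python
def shaped_reward(task_reward, state, baseline_state, impact_weight=1.0):
    """Toy low-impact objective: subtract from the task reward a penalty
    on how far the agent has pushed the world away from a baseline state
    (e.g., the state inaction would have produced)."""
    deviation = sum(abs(s - b) for s, b in zip(state, baseline_state))
    return task_reward - impact_weight * deviation

# An action that earns reward 1.0 but disturbs two state variables
# nets a negative shaped reward: 1.0 - (0 + 1 + 1) = -1.0.
print(shaped_reward(1.0, state=(3, 5, 0), baseline_state=(3, 4, 1)))
```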


Never Trump ◽  
2020 ◽  
pp. 240-248
Author(s):  
Robert P. Saldin ◽  
Steven M. Teles

This concluding chapter highlights how the Republican Party has been substantially transformed by the experience of having Donald Trump at its head. The president's reelection in 2020 would only deepen that transformation. Deep sociological forces—in particular, a Republican Party base that is increasingly white, working class, Christian, less formally educated, and older—will lead the party to go where its voters are. What Trump started, his Republican successors will finish. Just as parties of the right across the Western world have become more populist and nationalist, so will the Republicans. That, of course, bodes poorly for most of the Never Trumpers, who combined a deep distaste for Trump personally with a professional interest in a less populist governing style and a disinclination to see their party go ideologically where he wanted to take it. Ultimately, the future is unwritten because it will be shaped by the choices of individuals. Never Trump will have failed comprehensively in its founding mission, which was to prevent the poison of Donald Trump from entering the nation's political bloodstream. However, it is likely to be seen, in decades to come, as the first foray into a new era of American politics.


2020 ◽  
Author(s):  
Ben Buchanan ◽  
John Bansemer ◽  
Dakota Cary ◽  
Jack Lucas ◽  
Micah Musser

Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and what strategies attackers are likely or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.


Author(s):  
Fred Powell

This chapter explores the political context of human rights and how it is shaping the future. It argues that human rights constitute the very substance of democracy by conferring a universal set of rights on the citizen, and that Hannah Arendt’s famous phrase ‘the right to have rights’ captures the complex relationship between democracy, human rights and civil society. It discusses how human rights, embracing both individual liberty and social justice, have been historically contested, and critically assesses the state of human rights in today’s world along with the potential threats and opportunities for their development. The chapter concludes that the restoration of a universal welfare state, as the embodiment of human rights in a globalised world, should arguably be the priority for the future of democracy in the twenty-first century.


2021 ◽  
Author(s):  
Cor Steging ◽  
Silja Renooij ◽  
Bart Verheij

The justification of an algorithm’s outcomes is important in many domains, and in particular in the law. However, previous research has shown that machine learning systems can make the right decisions for the wrong reasons: despite high accuracies, not all of the conditions that define the domain of the training data are learned. In this study, we investigate what the system does learn, using state-of-the-art explainable AI techniques. Using SHAP and LIME, we show which features impact the decision-making process and how that impact changes with different distributions of the training data. However, our results also show that even high accuracy and good relevant-feature detection are no guarantee of a sound rationale. Hence these state-of-the-art explainable AI techniques cannot be used to fully expose unsound rationales, underscoring the need for a separate method of rationale evaluation.
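To make the probe concrete: the kind of check the study performs can be sketched on a synthetic domain whose true rationale is known by construction. The sketch below uses SHAP on a tree ensemble (the dataset, model, and threshold rule are stand-ins, not the study's legal domain; a LIME probe of individual instances would be analogous):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic domain with a known rationale: the label depends only on
# features 0 and 1; features 2-4 are pure noise. A model with a sound
# rationale should give the noise features near-zero importance.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributions per feature. Depending on the shap version,
# shap_values is a list with one array per class or a single array.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
print(np.abs(vals).mean(axis=0))  # features 2-4 should score near zero
```

As the paper cautions, near-correct attributions of this kind still do not establish that the model has learned the underlying rule itself.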


2017 ◽  
Author(s):  
Michael Veale

Presented as a talk at the 4th Workshop on Fairness, Accountability and Transparency in Machine Learning (FAT/ML 2017), Halifax, Nova Scotia, Canada.

Machine learning systems are increasingly used to support public sector decision-making across a variety of sectors. Given concerns around accountability in these domains, and amidst accusations of intentional or unintentional bias, there have been increased calls for transparency of these technologies. Few, however, have considered how logics and practices concerning transparency are understood by those involved with the machine learning systems already being piloted and deployed in public bodies today. This short paper distils insights about transparency on the ground from interviews with 27 such actors, largely public servants and relevant contractors, across five OECD countries. Considering transparency and opacity in relation to trust and buy-in, better decision-making, and the avoidance of gaming, it seeks to provide useful insights for those hoping to develop socio-technical approaches to transparency that might be useful to practitioners on the ground.


2021 ◽  
Author(s):  
Zachary Arnold ◽  
Helen Toner

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief describes how trends we already see today—both in newly deployed artificial intelligence systems and in older technologies—show how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce these risks.

