Automating Cyber Attacks

2020 ◽  
Author(s):  
Ben Buchanan ◽  
John Bansemer ◽  
Dakota Cary ◽  
Jack Lucas ◽  
Micah Musser

Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and which strategies attackers are more or less likely to adopt. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.

2021 ◽  
Author(s):  
Zachary Arnold ◽  
Helen Toner

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief describes how trends we already see today—both in newly deployed artificial intelligence systems and in older technologies—show how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce these risks.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Muhammad Javed Iqbal ◽  
Zeeshan Javed ◽  
Haleema Sadia ◽  
Ijaz A. Qureshi ◽  
Asma Irshad ◽  
...  

Artificial intelligence (AI) is the use of mathematical algorithms to mimic human cognitive abilities and to address difficult healthcare challenges, including complex biological abnormalities such as cancer. The exponential growth of AI over the last decade has shown it to be a potential platform for optimal decision-making by super-intelligent systems, in settings where the human mind is limited in processing huge volumes of data in a narrow time frame. Cancer is a complex and multifaceted disorder with thousands of genetic and epigenetic variations. AI-based algorithms hold great promise for identifying these genetic mutations and aberrant protein interactions at a very early stage. Modern biomedical research is also focused on bringing AI technology to the clinic safely and ethically. AI-based assistance to pathologists and physicians could be a great leap forward in predicting disease risk, diagnosis, prognosis, and treatment. Clinical applications of AI and machine learning (ML) in cancer diagnosis and treatment are the future of medical guidance, pointing toward faster mapping of a new treatment for every individual. Using AI-based systems, researchers can collaborate in real time and share knowledge digitally, potentially to heal millions. In this review, we present this game-changing technology of the future in the clinic, connecting biology with artificial intelligence and explaining how AI-based assistance helps oncologists deliver precise treatment.


Author(s):  
Petar Radanliev ◽  
David De Roure ◽  
Kevin Page ◽  
Max Van Kleek ◽  
Omar Santos ◽  
...  

Multiple governmental agencies and private organisations have made commitments to the colonisation of Mars. Such colonisation requires complex systems and infrastructure that could be very costly to repair or replace in the event of cyber-attacks. This paper surveys deep learning algorithms, IoT cyber security and risk models, and established mathematical formulas to identify the best approach for developing a dynamic and self-adapting system for predictive cyber risk analytics, supported by Artificial Intelligence and Machine Learning and real-time intelligence in edge computing. The paper presents a new mathematical approach for integrating concepts for cognition engine design, edge computing, and Artificial Intelligence and Machine Learning to automate anomaly detection. This engine instigates a step change by applying Artificial Intelligence and Machine Learning embedded at the edge of IoT networks, to deliver safe and functional real-time intelligence for predictive cyber risk analytics. This will enhance capacities for risk analytics and assist in creating a comprehensive and systematic understanding of the opportunities and threats that arise when edge computing nodes are deployed, and when Artificial Intelligence and Machine Learning technologies are migrated to the periphery of the internet and into local IoT networks.
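The paper does not specify a particular detection model, but the idea of automated anomaly detection on resource-constrained edge nodes can be illustrated with a deliberately lightweight stand-in: a rolling z-score detector that flags sensor readings deviating sharply from recent history. The class name and thresholds below are assumptions for illustration only, not the authors' cognition engine.

```python
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Hypothetical minimal stand-in for ML-based anomaly detection
    at an IoT edge node: flags readings far from the rolling mean."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings only
        self.threshold = threshold          # z-score cutoff

    def observe(self, value):
        # A reading is anomalous if it lies more than `threshold`
        # standard deviations from the rolling mean. We require a
        # minimum history before judging, to avoid cold-start noise.
        is_anomaly = False
        if len(self.window) >= 10:
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly

detector = EdgeAnomalyDetector()
# 40 normal readings around 10, then one spike
readings = [10.0 + 0.1 * (i % 5) for i in range(40)] + [99.0]
flags = [detector.observe(r) for r in readings]
print(flags[-1])  # prints: True (the spike is flagged)
```

A fixed-size deque keeps memory bounded, which is the kind of constraint that matters when detection logic runs on the edge rather than in the cloud.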


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 18
Author(s):  
Pantelis Linardatos ◽  
Vasilis Papastefanopoulos ◽  
Sotiris Kotsiantis

Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, such as healthcare, where their value could be immense. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), which is concerned with developing new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, it presents a literature review and taxonomy of these methods, along with links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
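One family of model-agnostic interpretability methods covered by such surveys is permutation importance: a feature is scored by how much shuffling its values degrades predictive accuracy. The sketch below is a minimal pure-Python illustration of that idea; the toy model and data are assumptions for demonstration, not taken from the survey.

```python
import random

# Toy "model": predicts 1 when feature 0 exceeds a threshold.
# Feature 1 is ignored by the model, so permuting it should not
# hurt accuracy at all.
def model(row):
    return 1 if row[0] > 0.5 else 0

random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [model(row) for row in X]  # labels match the model by design

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    # Shuffle one column and measure the resulting drop in accuracy.
    col = [r[feature] for r in X]
    random.shuffle(col)
    X_perm = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
    return accuracy(X, y) - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # large drop: feature 0 matters
print(permutation_importance(X, y, 1))  # prints: 0.0 (feature 1 is irrelevant)
```

Because it only needs predictions, not model internals, the same procedure applies unchanged to any black-box classifier, which is exactly what makes it model-agnostic.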


Author(s):  
Dirk Beerbaum ◽  
Julia Margarete Puaschunder

Technological improvement in the information age has expanded the possibilities for controlling innocent social media users or penalizing private investors, and for profiting from their presence through hidden persuasion and discrimination. This chapter takes as its case the transparency technology XBRL (eXtensible Business Reporting Language), which is meant to make data more accessible and usable for private investors. Drawing on theoretical literature and field research, it shows that a representation issue exists for principles-based accounting taxonomies, which intelligent machines applying artificial intelligence (AI) nudge to facilitate decision usefulness. The chapter conceptualizes the ethical questions arising from taxonomy engineering based on machine learning systems and advocates a democratization of information, education, and transparency about nudges and coding rules.


AI Magazine ◽  
2017 ◽  
Vol 38 (4) ◽  
pp. 99-106
Author(s):  
Jeannette Bohg ◽  
Xavier Boix ◽  
Nancy Chang ◽  
Elizabeth F. Churchill ◽  
Vivian Chu ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2017 Spring Symposium Series, held Monday through Wednesday, March 27–29, 2017 on the campus of Stanford University. The eight symposia held were Artificial Intelligence for the Social Good (SS-17-01); Computational Construction Grammar and Natural Language Understanding (SS-17-02); Computational Context: Why It's Important, What It Means, and Can It Be Computed? (SS-17-03); Designing the User Experience of Machine Learning Systems (SS-17-04); Interactive Multisensory Object Perception for Embodied Agents (SS-17-05); Learning from Observation of Humans (SS-17-06); Science of Intelligence: Computational Principles of Natural and Artificial Intelligence (SS-17-07); and Wellbeing AI: From Machine Learning to Subjectivity Oriented Computing (SS-17-08). This report, compiled from organizers of the symposia, summarizes the research that took place.


2018 ◽  
Vol 4 (5) ◽  
pp. 443-463
Author(s):  
Jim Shook ◽  
Robyn Smith ◽  
Alex Antonio

Businesses and consumers increasingly use artificial intelligence (“AI”)—and specifically machine learning (“ML”) applications—in their daily work. ML is often used as a tool to help people perform their jobs more efficiently, but increasingly it is becoming a technology that may eventually replace humans in performing certain functions. An AI recently beat humans in a reading comprehension test, and there is an ongoing race to replace human drivers with self-driving cars and trucks. Tomorrow there is the potential for much more, as AI is even learning to build its own AI. As the use of AI technologies continues to expand, and especially as machines begin to act more autonomously with less human intervention, important questions arise about how we can best integrate this new technology into our society, particularly within our legal and compliance frameworks. The questions raised are different from those that we have already addressed with other technologies because AI is different. Most previous technologies functioned as a tool, operated by a person, and for legal purposes we could usually hold that person responsible for actions that resulted from using that tool. For example, an employee who used a computer to send a discriminatory or defamatory email could not have done so without the computer, but the employee would still be held responsible for creating the email. While AI can function as merely a tool, it can also be designed to act after making its own decisions, and in the future it will act even more autonomously. As AI becomes more autonomous, it will be more difficult to determine who—or what—is making decisions and taking actions, and to determine the basis and responsibility for those actions. These are the challenges that must be overcome to ensure AI’s integration for legal and compliance purposes.


2021 ◽  
Author(s):  
Wyatt Hoffman

As states turn to AI to gain an edge in cyber competition, it will change the cat-and-mouse game between cyber attackers and defenders. Embracing machine learning systems for cyber defense could drive more aggressive and destabilizing engagements between states. Wyatt Hoffman writes that cyber competition already has the ingredients needed for escalation to real-world violence, even if these ingredients have yet to come together in the right conditions.

