Artificial Intelligence

Author(s):  
Bistra Konstantinova Vassileva

In recent years, artificial intelligence (AI) has gained attention from policymakers, universities, researchers, companies and businesses, the media, and the wider public. The growing importance and relevance of AI to humanity are undisputed: AI assistants and recommendations, for instance, are increasingly embedded in our daily lives. The chapter starts with a critical review of AI definitions, since terms such as “artificial intelligence,” “machine learning,” and “data science” are often used interchangeably, yet they are not the same. The first section covers AI capabilities and AI research clusters, and presents a basic categorisation of AI. The increasing societal relevance of AI and its sometimes controversial expansion into our daily lives are discussed in the second section. The chapter ends with conclusions and recommendations aimed at the responsible future development of AI.

2021 ◽  
Author(s):  
Hazel Darney

<p>With the rapid uptake of machine learning artificial intelligence in our daily lives, we are beginning to realise the risks involved in implementing this technology in high-stakes decision making. This risk arises because machine learning decisions are based on human-curated datasets, meaning these decisions are not bias-free. Machine learning datasets put women at a disadvantage due to factors including (but not limited to) the historical exclusion of women from data collection, research, and design, as well as the low participation of women in artificial intelligence fields. These factors mean that applications of machine learning may fail to treat the needs and experiences of women as equal to those of men. Research into understanding gender biases in machine learning frequently occurs within the computer science field. This has often resulted in research where bias is inconsistently defined and proposed techniques do not engage with relevant literature outside of the artificial intelligence field. This research proposes a novel, interdisciplinary approach to the measurement and validation of gender biases in machine learning. This approach translates methods of human-based gender bias measurement from psychology, forming a gender bias questionnaire for use on a machine rather than a human. The final output system of this research, as a proof of concept, demonstrates the potential of a new approach to gender bias investigation. This system takes advantage of the qualitative nature of language to provide a new way of understanding gender data biases by outputting both quantitative and qualitative results. These results can then be meaningfully translated into their real-world implications.</p>
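The abstract above describes posing human-style bias probes to a machine and reporting both quantitative and qualitative results. A minimal sketch of that general idea, assuming toy hand-made word vectors in place of a real trained language model (the words, vectors, and scoring rule here are illustrative inventions, not the chapter's actual questionnaire):

```python
import numpy as np

# Hypothetical embedding vectors; in practice these would come from a
# trained language model, not be hand-written.
vectors = {
    "engineer": np.array([0.9, 0.1]),
    "nurse":    np.array([0.2, 0.8]),
    "he":       np.array([1.0, 0.0]),
    "she":      np.array([0.0, 1.0]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    """Quantitative score (positive = closer to 'he', negative = closer
    to 'she') paired with a qualitative label."""
    score = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    label = "male-associated" if score > 0 else "female-associated"
    return score, label

for word in ("engineer", "nurse"):
    score, label = gender_association(word)
    print(f"{word}: {score:+.2f} ({label})")
```

The point of the paired output is the one the abstract makes: the number supports measurement and comparison, while the label (in a real system, richer generated language) supports qualitative interpretation of the bias.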


2018 ◽  
Vol 15 (3) ◽  
pp. 497-498 ◽  
Author(s):  
Ruth C. Carlos ◽  
Charles E. Kahn ◽  
Safwan Halabi

2021 ◽  
Author(s):  
Neeraj Mohan ◽  
Ruchi Singla ◽  
Priyanka Kaushal ◽  
Seifedine Kadry

2020 ◽  
pp. 87-94
Author(s):  
Pooja Sharma

Artificial intelligence and machine learning, two iterations of automation, are based on data, small or large. The larger the data, the more effective an AI or machine learning tool will be; the smaller the data, the less effective the tool. With a larger pool of data, large businesses and multinational corporations have effectively been building, developing, and adopting refined AI and machine learning based decision systems. The contention of this chapter is to explore whether small businesses, with only small data in hand, are well placed to use and adopt AI and machine learning based tools for their day-to-day business operations.


Author(s):  
Peter R Slowinski

The core of artificial intelligence (AI) applications is software of one sort or another. But while available data and computing power are important for the recent quantum leap in AI, there would not be any AI without computer programs or software. Therefore, the rise in importance of AI forces us to take—once again—a closer look at software protection through intellectual property (IP) rights, but it also offers us a chance to rethink this protection, and while perhaps not undoing the mistakes of the past, at least to adapt the protection so as not to increase the dysfunctionality that we have come to see in this area of law in recent decades. To be able to establish the best possible way to protect—or not to protect—the software in AI applications, this chapter starts with a short technical description of what AI is, with readers referred to other chapters in this book for a deeper analysis. It continues by identifying those parts of AI applications that constitute software to which legal software protection regimes may be applicable, before outlining those protection regimes, namely copyright and patents. The core part of the chapter analyses potential issues regarding software protection with respect to AI using specific examples from the fields of evolutionary algorithms and of machine learning. Finally, the chapter draws some conclusions regarding the future development of IP regimes with respect to AI.


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 193 ◽  
Author(s):  
Sebastian Raschka ◽  
Joshua Patterson ◽  
Corey Nolet

Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that are driving it, from processing the massive piles of data generated each day to learning from them and taking useful action. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
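The pattern the survey highlights, clean high-level APIs backed by low-level libraries, can be illustrated with a minimal sketch (NumPy is used here as one representative example of that pattern, not as the survey's specific subject): a single high-level call delegates the work to optimized compiled routines while remaining as readable as the pure-Python equivalent.

```python
import numpy as np

x = np.arange(1000, dtype=np.float64)

# High-level API call: dispatched to compiled (C/BLAS) code under the hood.
fast = float(np.dot(x, x))

# The equivalent pure-Python computation, spelled out explicitly.
slow = sum(v * v for v in x)

print(fast)
```

Both paths compute the same sum of squares; the high-level call simply expresses it in one line and executes it in compiled code, which is a large part of the productivity-plus-performance combination the abstract describes.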


Author(s):  
James E. Dobson

This book seeks to develop an answer to the major question arising from the adoption of sophisticated data-science approaches within humanities research: are existing humanities methods compatible with computational thinking? Data-based and algorithmically powered methods present both new opportunities and new complications for humanists. This book takes as its founding assumption that the exploration and investigation of texts and data with sophisticated computational tools can serve the interpretative goals of humanists. At the same time, it assumes that these approaches cannot and will not render other existing interpretive frameworks obsolete. Research involving computational methods, the book argues, should be subject to humanistic modes of critique that deal with questions of power and infrastructure, directed toward the field’s assumptions and practices. Arguing for a methodologically and ideologically self-aware critical digital humanities, the author contextualizes the digital humanities within the larger neo-liberalizing shifts of the contemporary university in order to resituate the field within a theoretically informed tradition of humanistic inquiry. Bringing the resources of critical theory to bear on computational methods enables humanists to construct an array of compelling humanistic interpretations across multiple dimensions: from the ideological biases informing many commonly used algorithms to the complications of historicist text mining, and from examining the range of feature selection for sentiment analysis to the fantasies of human-subjectless analysis activated by machine learning and artificial intelligence.


2021 ◽  
Vol 12 ◽  
Author(s):  
Supraja Sankaran ◽  
Chao Zhang ◽  
Henk Aarts ◽  
Panos Markopoulos

Applications using Artificial Intelligence (AI) have become commonplace and embedded in our daily lives. Much of our communication has transitioned from human–human interaction to human–technology or technology-mediated interaction. As technology is handed control and streamlines choices and decision-making in different contexts, people are increasingly concerned about a potential threat to their autonomy. In this paper, we explore autonomy perception when interacting with AI-based applications in everyday contexts using a design fiction-based survey with 328 participants. We probed whether providing users with explanations of “why” an application made certain choices or decisions influenced their perception of autonomy or their reactance regarding the interaction with the applications. We also looked at changes in perception when users were aware of AI's presence in an application. In the social media context, we found that people perceived greater reactance and a lower sense of autonomy, perhaps owing to the personal and identity-sensitive nature of the application context. Providing explanations of “why” in the navigation context contributed to enhancing autonomy perception and reducing reactance, since it influenced the users' subsequent actions based on the recommendation. We discuss our findings and the implications they have for the future development of everyday AI applications that respect human autonomy.

