Conversational Interface as a mediating technology of organization

Author(s):  
Mercedes Bunz

Conversational interfaces such as Apple’s Siri or Amazon’s Alexa allow technology companies to reach deeper into the social fabric of our societies than they previously could. For centuries we used media to speak with each other; now speaking directly to a device has become normal. To understand this reorganisation further, this chapter explores the technology that drives it, the new Artificial Intelligence driven by machine learning, and links it back to social organization: to the bias conversational interfaces learn from the data they are trained on, and to bots that begin to converse in their own human-like language.

2020 ◽  
Vol 17 (11) ◽  
pp. 5105-5108
Author(s):  
Rubika Walia ◽  
Neelam Oberoi ◽  
Sakshi Sachdeva

The year 2020 emerged as a menace and threat to human beings, affecting both social and professional life. From a global perspective, human lives have been disrupted and large numbers of deaths have occurred. In this research work, Artificial Intelligence is implemented together with machine learning so that overall outcomes and predictive mining can be achieved with a higher degree of performance. The work integrates COVID patient datasets with benchmark characteristics and uses them to generate predictions for upcoming tests, and in this way overall prediction can be performed.
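The abstract does not specify which model the authors use, so the following is only a minimal sketch of the general idea of predicting a test outcome from patient characteristics. The feature set (age, days of fever, oxygen saturation), the toy nearest-neighbour rule, and all data values are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: a toy 1-nearest-neighbour classifier over
# hypothetical patient features (age, fever_days, spo2). The real work
# would use a trained ML model on integrated COVID datasets.

def predict(train, query):
    """Return the label of the training record whose features are
    closest (squared Euclidean distance) to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda rec: dist(rec[0], query))
    return label

# Hypothetical training data: ((age, fever_days, spo2), test_positive)
train = [
    ((34, 1, 98), 0),
    ((67, 5, 91), 1),
    ((45, 3, 95), 0),
    ((72, 6, 89), 1),
]

print(predict(train, (70, 5, 90)))  # nearest record is positive -> 1
```

A production pipeline would of course validate on held-out data and use a richer model, but the sketch shows the prediction step the abstract describes.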


AI Magazine ◽  
2017 ◽  
Vol 38 (4) ◽  
pp. 99-106
Author(s):  
Jeannette Bohg ◽  
Xavier Boix ◽  
Nancy Chang ◽  
Elizabeth F. Churchill ◽  
Vivian Chu ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2017 Spring Symposium Series, held Monday through Wednesday, March 27–29, 2017 on the campus of Stanford University. The eight symposia held were Artificial Intelligence for the Social Good (SS-17-01); Computational Construction Grammar and Natural Language Understanding (SS-17-02); Computational Context: Why It's Important, What It Means, and Can It Be Computed? (SS-17-03); Designing the User Experience of Machine Learning Systems (SS-17-04); Interactive Multisensory Object Perception for Embodied Agents (SS-17-05); Learning from Observation of Humans (SS-17-06); Science of Intelligence: Computational Principles of Natural and Artificial Intelligence (SS-17-07); and Wellbeing AI: From Machine Learning to Subjectivity Oriented Computing (SS-17-08). This report, compiled from organizers of the symposia, summarizes the research that took place.


2021 ◽  
Vol 5 (2) ◽  
pp. 113
Author(s):  
Youngseok Lee ◽  
Jungwon Cho

In the near future, as artificial intelligence (AI) and computing network technology develop, collaboration with AI will become important. In an AI society, the ability of people to communicate and collaborate is an important element of talent. To do this, it is necessary to understand how AI based on computer science works. AI is being rapidly applied across industries and is developing into a core technology for a society led by knowledge and information. An AI education focused on problem solving and learning is efficient for computer science education. Thus, the time has come to prepare AI education alongside existing software education so that learners can adapt to the social and job changes enabled by AI. In this paper, we explain a classification method for AI machine learning models and propose an AI education model using teachable machines. Using AI education tools, non-computer majors can understand the importance of data and the concept of an AI model through specific cases, and can understand and experiment with AI even without knowledge of mathematics, using languages such as Python only if necessary. Through the application of a machine learning model, AI can be smoothly utilized in their field of interest. If such an AI education model is activated, it will be possible to suggest a direction for AI education for collaboration with AI experts through the application of AI technology.


2020 ◽  
Author(s):  
Fenwick McKelvey

Harold Lasswell, quoted in a 1961 issue of Harper’s Magazine, described the Simulmatics Corporation as the “A-bomb of the social sciences.” Simulmatics had attracted his attention after publicizing its use of computer modeling to predict public opinion for the 1960 Kennedy presidential campaign. Coming from a preeminent figure in American academia, Lasswell’s quote reflects the long promise of “artificial intelligence,” in a broad sense, as a technology to better know politics and populations. Simulmatics was one application of a research agenda developed at MIT along with Project Cambridge. These under-studied cases are a needed counterpoint for theorizing the contemporary applications of machine learning and deep learning to political management as popularized by the defunct psychographics firm Cambridge Analytica. Building on the pre-conference’s periodization of AI from rule-based systems to today’s temporal flows of classifications, I distinguish modern AI epistemology (machine learning and deep learning) from its predecessors through two key applications at MIT: the Simulmatics Corporation and its academic equivalent, Project Cambridge. Drawing on archival research, I analyze the constitutive discourses that formulated the problems to be solved and the artifacts of code that actualized these projects. The Simulmatics Corporation and Project Cambridge marked an important passage point of the cyborg sciences into politics and governance, integrating behaviouralism with mathematical modeling in hopes of rendering populations more knowable and manageable. In doing so, these other analytics at Cambridge erased the boundaries between artificial intelligence and political intelligence, an erasure necessary for AI to be seen as a political epistemology today.


Societies ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 23 ◽  
Author(s):  
Aspen Lillywhite ◽  
Gregor Wolbring

Artificial intelligence (AI) and machine learning (ML) advancements increasingly impact society, and AI/ML ethics and governance discourses have emerged. Various countries have established AI/ML strategies. “AI for good” and “AI for social good” are just two discourses that focus on using AI/ML in a positive way. Disabled people are impacted by AI/ML in many ways: as potential therapeutic and non-therapeutic users of AI/ML-advanced products and processes, and through the changing societal parameters enabled by AI/ML advancements. They are also impacted by AI/ML ethics and governance discussions and by discussions around the use of AI/ML for good and social good. Using identity, role, and stakeholder theories as our lenses, the aim of our scoping review is to identify and analyze to what extent, and how, AI/ML-focused academic literature, Canadian newspapers, and Twitter tweets engage with disabled people. Performing manifest coding of the presence of the terms “AI”, “artificial intelligence”, or “machine learning” in conjunction with the term “patient”, “disabled people”, or “people with disabilities”, we found that the term “patient” was used 20 times more often than the terms “disabled people” and “people with disabilities” together to identify disabled people within the AI/ML literature covered. As to the 1540 downloaded academic abstracts, 234 full-text Canadian English-language newspaper articles, and 2879 tweets containing at least one of 58 terms used to depict disabled people (excluding the term “patient”) and the three AI terms, we found that health was one major focus, that the social good/for good discourse was not mentioned in relation to disabled people, that the tone of AI/ML coverage was mostly techno-optimistic, and that disabled people were mostly engaged with in their role as therapeutic or non-therapeutic users of AI/ML-influenced products.
Problems with AI/ML were mentioned in relation to the user having a bodily problem, the usability of AI/ML-influenced technologies, and the problems disabled people face in accessing such technologies. Problems caused for disabled people by AI/ML advancements, such as changing occupational landscapes, were not mentioned. Disabled people were not covered as knowledge producers or influencers of AI/ML discourses, including AI/ML governance and ethics discourses. Our findings suggest that AI/ML coverage must change if disabled people are to become meaningful contributors to, and beneficiaries of, discussions around AI/ML.


2021 ◽  
Vol 3 (1) ◽  
pp. 96-108
Author(s):  
Matt Bartlett

Serious challenges are raised by the way in which technology companies like Facebook and Google harvest and process user data. Companies in the modern data economy mine troves of data with sophisticated algorithms to produce valuable behavioural predictions. These data-driven predictions give companies a powerful capacity to influence and manipulate users, and the risks are increasing with the explosive growth of ‘Big Data’ and machine-learning artificial intelligence. This article analyses the extent to which these challenges are met by existing regimes such as Australia’s and New Zealand’s respective privacy acts and the European Union’s General Data Protection Regulation. While these laws protect certain privacy interests, I argue that users have a broader set of interests in their data meriting protection. I explore three of these novel interests: the social dimension of data, control of and access to predictions mined from data, and the economic value of data. This article shows how existing frameworks fail to recognise or protect these novel interests. In light of this failure, lawmakers urgently need to frame new legal regimes to protect against the worst excesses of the data economy.


Author(s):  
Mike Berrell

Advanced technologies including artificial intelligence, robotics, and machine learning (smart machines) impact understandings about the nature of work. For professionals, semi-professionals, and ancillary workers supplying healthcare and legal services, for example, smart machines change the social relations of work and subvert the notions of status and hierarchy that come with occupational groups such as doctors or lawyers. As smart machines continue to disrupt employment, job advertisements might soon carry the warning that humans need not apply. Facing the prospect of a new world of work, people require additional knowledge, skills, and attitudes to cope with a future where smart machines radically alter the nature of work in settings where some people work anywhere and anytime while others work nowhere. In any future, people require skills and attitudes to cope with uncertainty. Ideas about multiple intelligences, emotional intelligence, critical thinking, creativity, and problem-solving will help employees cope with any of the futures of work predicted in the literature.


1998 ◽  
Vol 43 (1) ◽  
pp. 16-18
Author(s):  
Kathryn C. Oleson ◽  
Robert M. Arkin