Sociology of Expectations
Recently Published Documents

TOTAL DOCUMENTS: 19 (five years: 11)
H-INDEX: 5 (five years: 2)

Author(s): Gabrielle Samuel, Rosie Sims

The UK’s National Health Service (NHS) COVID-19 contact tracing app was announced to the British public on 12th April 2020. The UK government endorsed the app as a public health intervention that would improve public health, protect the NHS and ‘save lives’. On 5th May 2020 the technology was released for trial on the Isle of Wight. However, the trial was halted in June 2020, reportedly due to technological issues. The app was later remodelled and launched to the public in September 2020. The rapid development, trial and discontinuation of the app within the space of a few months meant that the mobilisation and effect of the discourses associated with it could be traced relatively easily. In this paper we explore how these discourses were constructed in the media and how they affected actors – in particular, those who developed and those who trialled the app. Promissory discourses were prevalent, and their trajectory aligned with theories developed in the sociology of expectations. We describe this trajectory and then interpret its implications for infectious disease public health practices and responsibilities.


2021, Vol 3
Author(s): Per-Anders Langendahl

Although farming practices are essentially situated in rural locations, they are also developing in urban environments, and multiple rationalities underpin such initiatives. Urban farming practices are, among other things, recognized for their recreational and wellbeing effects (e.g., allotments) as well as for increasing biodiversity and mitigating flooding. More recently, food produced in digitally augmented, contained environments has become increasingly established in cities across the globe, such as Stockholm, London, and New York. These ICT-enabled farming practices differ from non-smart, outdoor farming. Specifically, indoor farming practices are founded on the view that they can produce fresh food in urban settings all year round using fewer resources (e.g., land, water, and chemicals) and with reduced food miles. Since such knowledge claims may shape and structure the development and uptake of smart farming practices in urban environments, they must be scrutinized. This paper begins to address this need for research by investigating the politics of smart farming expectations in relation to urban environments. Exploratory case study research was conducted on early formations of smart farming initiatives in Sweden. Drawing on the sociology of expectations, the paper explores the politics of knowledge claims embedded in smart farming initiatives at project level and examines the performativity of these knowledge claims in envisioning more sustainable urban futures. The findings suggest that smart farming at the level of individual projects gives the appearance of change but, at the same time, produces more of the same.


2021, pp. 016224392110300
Author(s): Jascha Bareis, Christian Katzenbach

How to integrate artificial intelligence (AI) technologies in the functioning and structures of our society has become a concern of contemporary politics and public debates. In this paper, we investigate national AI strategies as a peculiar form of co-shaping this development, a hybrid of policy and discourse that offers imaginaries, allocates resources, and sets rules. Conceptually, the paper is informed by sociotechnical imaginaries, the sociology of expectations, myths, and the sublime. Empirically we analyze AI policy documents of four key players in the field, namely China, the United States, France, and Germany. The results show that the narrative construction of AI strategies is strikingly similar: they all establish AI as an inevitable and massively disrupting technological development by building on rhetorical devices such as a grand legacy and international competition. Having established this inevitable, yet uncertain, AI future, national leaders proclaim leadership intervention and articulate opportunities and distinct national pathways. While this narrative construction is quite uniform, the respective AI imaginaries are remarkably different, reflecting the vast cultural, political, and economic differences of the countries under study. As governments endow these imaginary pathways with massive resources and investments, they contribute to coproducing the installment of these futures and, thus, yield a performative lock-in function.


2020, Vol 6, pp. 591
Author(s): Guillaume Dandurand, François Claveau, Jean-François Dubé, Florence Millerand

Public discourse typically blurs the boundary between what artificial intelligence (AI) actually achieves and what it could accomplish in the future. The sociology of expectations teaches us that such elisions play a performative role: they encourage heterogeneous actors to partake, at various levels, in innovation activities. This article explores how optimistic expectations for AI concretely motivate and mobilize actors, how much heterogeneity hides behind the seeming congruence of optimistic visions, and how the expected technological future is in fact difficult to enact as planned. Our main theoretical contribution is to examine the role of heterogeneous expertises in shaping the social dynamics of expectations, thereby connecting the sociology of expectations with the study of expertise and experience. In our case study of a humanitarian organization, we deploy this theoretical contribution to illustrate how heterogeneous specialists negotiate the realization of contending visions of “digital humanitarianism.”


2020, Vol ahead-of-print (ahead-of-print)
Author(s): Lilla Vicsek

Purpose – What is the future of work going to look like? The aim of this paper is to show how the sociology of expectations (SE) – which deals with the power of visions – can make important contributions to thinking about this issue by critically evaluating the dominant expert positions in the debate on the future of employment and artificial intelligence (AI).
Design/methodology/approach – After a literature review of SE, an SE-based approach is applied to interpret the dominant ideal-type expert positions in the future-of-work debate, illustrating the value of this perspective.
Findings – Dominant future scripts focus on the effects of AI technology in ways that give agency to technology and to the future, involve hyped expectations with polarized frames, and obscure uncertainty. It is argued that these expectations can have significant consequences: they contribute to closing off alternative pathways to the future by making some conversations possible while hindering others. To advance understanding, more sophisticated theorizing is needed that goes beyond these positions and takes uncertainty and the mutual shaping of technology and society into account – including the role expectations play.
Research limitations/implications – The study asserts that the dominant positions contain problematic assumptions and makes suggestions for moving beyond these framings of the debate theoretically. It also argues that scenario building and backcasting are two tools that could help move thinking about the future of work forward, especially if they build strongly on SE.
Practical implications – The arguments presented here enhance sense-making in relation to the future-of-work debate and can contribute to policy development.
Originality/value – The role of visions related to AI and their consequences has not been adequately explored. This paper addresses this gap by applying an SE approach and emphasizing the performative force of visions.


Author(s): Aphra Kerr, Marguerite Barry, John Kelleher

This article draws on the sociology of expectations to examine the construction of expectations of ‘ethical AI’ and considers the implications of these expectations for communication governance. We first analyse a range of public documents in the EU, the UK and Ireland to identify the key actors, mechanisms and issues which structure societal expectations around AI and an emerging discourse on ethics. We then explore expectations of AI and ethics through a survey of members of the public. We conclude that discourses of ‘ethical AI’ are generically performative, but to become more effective in practice we need to acknowledge the limitations of contemporary AI and the requirement for extensive human labour to deploy AI in specific societal contexts. An effective ethics of AI requires domain appropriate AI tools, updated professional practices, dignified places of work and robust regulatory and accountability frameworks.


2020, pp. 208-239
Author(s): Andreu Belsunces Gonçalves, Grace Polifroni Turtle, Antonio Calleja, Raul Nieves Pardo, Bani Brusadin, ...

Data Control Wars seeks to explore the development of different futures regarding the extraction, management and exploitation of data and its political, economic and cultural consequences. It has been designed as a research-action device based on play, generative conflict, collaborative fiction and performance, with three specific objectives: to observe social expectations regarding the relationship between industry, democracy, citizenship and data; to stimulate social imagination through the simulation of sociotechnical scenarios, thus decolonising imaginaries captured by techno-capitalist logic; and to rehearse transition strategies towards technological sovereignty. This article presents the Data Control Wars case study and explains how it works. It also sets out the theoretical scaffolding that supports it – ranging from post-human philosophy to critical design by way of the sociology of expectations – and presents some of the results. After three activations in three different contexts, Data Control Wars has proven useful as an educational tool for addressing the potential positive and negative effects of using data; as a space for testing transition design strategies; as a method for identifying some of the myths that shape social perceptions of the technological industry and of the agency we hold over it; and, finally, as a device for questioning techno-capitalist cultural hegemony through the construction of other stories about what the technosocial body can be.


Journalism, 2020, pp. 146488492094753
Author(s): J Scott Brennen, Philip N Howard, Rasmus K Nielsen

Drawing on scholarship in journalism studies and the sociology of expectations, this article demonstrates how news media shape, mediate, and amplify expectations surrounding artificial intelligence in ways that influence their potential to intervene in the world. Through a critical discourse analysis of news content, this article describes and interrogates the persistent expectation concerning the widescale social integration of AI-related approaches and technologies. In doing so, it identifies two techniques through which news outlets mediate future-oriented expectations surrounding AI: choosing sources and offering comparisons. Finally, it demonstrates how in employing these techniques, outlets construct the expectation of a pseudo-artificial general intelligence: a collective of technologies capable of solving nearly any problem.


Author(s): Juho Pääkkönen, Salla-Maaria Laaksonen, Mikko Jauho

Social media analytics is a burgeoning new field associated with high promises of societal relevance and business value but also methodological and practical problems. In this article, we build on the sociology of expectations literature and research on expertise in the interaction between humans and machines to examine how analysts and clients make their expectations about social media analytics credible in the face of recognized problems. To investigate how this happens in different contexts, we draw on thematic interviews with 10 social media analytics and client companies. In our material, social media analytics appears as a field facing both hopes and skepticism – toward data, analysis methods, or the users of analytics – from both the clients and the analysts. In this setting, the idea of automated analysis through algorithmic methods emerges as a central notion that lends credibility to expectations about social media analytics. Automation is thought to, first, extend and make expert interpretation of messy social media data more rigorous; second, eliminate subjective judgments from measurement on social media; and, third, allow for coordination of knowledge management inside organizations. Thus, ideas of automation importantly work to uphold the expectations of the value of analytics. Simultaneously, they shape what kinds of expertise, tools, and practices come to be involved in the future of analytics as knowledge production.


2020, Vol 7 (1), pp. 205395172091593
Author(s): Aphra Kerr, Marguerite Barry, John D Kelleher

This article draws on the sociology of expectations to examine the construction of expectations of ‘ethical AI’ and considers the implications of these expectations for communication governance. We first analyse a range of public documents to identify the key actors, mechanisms and issues which structure societal expectations around artificial intelligence (AI) and an emerging discourse on ethics. We then explore expectations of AI and ethics through a survey of members of the public. Finally, we discuss the implications of our findings for the role of AI in communication governance. We find that, despite societal expectations that we can design ethical AI, and public expectations that developers and governments should share responsibility for the outcomes of AI use, there is a significant divergence between these expectations and the ways in which AI technologies are currently used and governed in large scale communication systems. We conclude that discourses of ‘ethical AI’ are generically performative, but to become more effective we need to acknowledge the limitations of contemporary AI and the requirement for extensive human labour to meet the challenges of communication governance. An effective ethics of AI requires domain appropriate AI tools, updated professional practices, dignified places of work and robust regulatory and accountability frameworks.

