The Modified Imitation Game: A Method for Measuring Interactional Expertise

2021
Vol 12
Author(s):
Güler Arsal
Joel Suss
Paul Ward
Vivian Ta
Ryan Ringer
...  

The study of the sociology of scientific knowledge distinguishes between contributory and interactional experts. Contributory experts have practical expertise: they can “walk the walk.” Interactional experts have internalized the tacit components of expertise: they can “talk the talk” but are not able to reliably “walk the walk.” Interactional expertise permits effective communication between contributory experts and others (e.g., laypeople), which in turn facilitates working jointly toward shared goals. Interactional expertise is attained through long-term immersion in the expert community in question. To assess interactional expertise, researchers developed the imitation game, a variant of the Turing test, to test whether a person or a particular group possesses the interactional expertise of another group. The imitation game, which has been used mainly in sociology to study the social nature of knowledge, may also be a useful tool for researchers who focus on cognitive aspects of expertise. In this paper, we introduce a modified version of the imitation game and apply it to examine interactional expertise in the context of blindness. Specifically, we examined blind and sighted individuals’ ability to imitate each other in a street-crossing scenario. In Phase I, blind and sighted individuals provided verbal reports of their thought processes associated with crossing a street: once while imitating the other group (i.e., as a pretender) and once responding genuinely (i.e., as a non-pretender). In Phase II, transcriptions of the reports were judged as either genuine or imitated responses by a different set of blind and sighted participants, who also provided the reasoning for their decisions. The judges comprised blind individuals, sighted orientation-and-mobility specialists, and sighted individuals with infrequent socialization with blind individuals. Decision data were analyzed using probit mixed models for signal-detection-theory indices. Reasoning data were analyzed using natural-language-processing (NLP) techniques. The results revealed evidence that interactional expertise (i.e., relevant tacit knowledge) can be acquired by immersion in the group that possesses and produces the expert knowledge. The modified imitation game can be a useful research tool for measuring interactional expertise within a community of practice and evaluating practitioners’ understanding of true experts.
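For illustration, here is a minimal sketch (not the authors' analysis code) of the signal-detection-theory indices that underlie such decision data, assuming a hypothetical table in which each row is one judgment with columns is_pretender (ground truth) and called_pretender (the judge's call); the paper itself fitted probit mixed models rather than computing raw indices:

```python
# Minimal sketch, not the paper's code: basic signal-detection-theory indices
# (hit rate, false-alarm rate, d-prime, criterion) from imitation-game judgments.
# The column names and the example data are assumptions for illustration.
import pandas as pd
from scipy.stats import norm

def sdt_indices(df: pd.DataFrame) -> dict:
    """Treat 'called pretender when the report was a pretender's' as a hit."""
    hits = ((df.is_pretender == 1) & (df.called_pretender == 1)).sum()
    misses = ((df.is_pretender == 1) & (df.called_pretender == 0)).sum()
    fas = ((df.is_pretender == 0) & (df.called_pretender == 1)).sum()
    crs = ((df.is_pretender == 0) & (df.called_pretender == 0)).sum()

    # Log-linear correction keeps z-scores finite when a rate hits 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return {"hit_rate": hit_rate, "fa_rate": fa_rate,
            "d_prime": d_prime, "criterion": criterion}

# Example with made-up judgments:
judgments = pd.DataFrame({
    "is_pretender":     [1, 1, 1, 0, 0, 0, 1, 0],
    "called_pretender": [1, 0, 1, 0, 1, 0, 1, 0],
})
print(sdt_indices(judgments))
```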

2018
Vol 28 (5)
pp. 31-52

The principle of reflexivity is a stumbling block for David Bloor’s “strong program” in the sociology of scientific knowledge, the program that gave rise to alternative projects in the field now called science and technology studies (STS). The principle of reflexivity would require that the empirical sociology of scientific knowledge must itself be subject to the same kind of causal, impartial, and symmetrical investigation that empirical sociology applies to the natural sciences. However, applying reflexivity to empirical sociology would mean that sociologists of science fall into the trap of the “interpretive flexibility of facts,” just as natural scientists do when they try to build theories upon facts, as the empirical sociology of scientific knowledge has itself discovered. Is there a way to overcome this regress in the empirical sociology of knowledge? Yes, but it lies in the philosophical rather than the empirical plane. However, the philosophical “plane” is not flat, because philosophy is accustomed to inquiring into its own foundations. In the case of STS, this inquiry takes us back to the empirical “plane,” which is also not flat because it requires philosophical reflection and philosophical ontology. This article considers the attempt by Harry Collins to bypass the principle of reflexivity by turning to philosophical ontology, a manoeuvre that the empirical sociology of science would deem “illegal.” The “third wave of science studies” proposed by Collins is interpreted as a philosophical justification for STS. It is argued that Collins formulates an ontology of nature and society that underlies his proposed concepts of “interactional expertise” and “tacit knowledge,” which are keys to understanding the methodology of third-wave STS. Collins’ ontology begins by questioning the reality of expert knowledge and ends (to date) with a “social Cartesianism” that asserts a dualism between the physical and the mental (or social).


Author(s):  
Tyler J. Renshaw
Nathan A. Sonnenfeld
Matthew D. Meyers

Alan Turing developed the imitation game, now known as the Turing Test, in which an interrogator must distinguish between two hidden subjects by asking a series of questions and, based on their responses, correctly identify each one. Applying this concept to the discrimination of reality from virtual reality becomes essential as simulation technology progresses toward a virtual era in which we experience equal or greater presence in virtuality than in reality. It is important to explore the conceptual and theoretical underpinnings of the Turing Test in order to avoid possible issues when adapting the test for virtual reality. This requires an understanding of how users judge virtual and real environments, and how these environments influence their judgement. Turing-type tests, the constructs of reality judgement and presence, and measurement methods for each are explored. Following this brief review, the researchers contribute a theoretical foundation for the future development of a Turing-type test for virtual reality, based on the universal experience of the mundane.


Author(s):  
Harry Collins
Robert Evans

The research programme known as Studies of Expertise and Experience (SEE), often referred to as the “Third Wave of Science Studies,” treats expertise as real and as the property of social groups. This chapter explains the foundations of SEE and sets out the theoretical and methodological innovations created using this approach. These include the development of a new classification of expertise, which identifies a new kind of expertise called “interactional expertise,” and the creation of a new research method known as the Imitation Game designed to explore the content and distribution of interactional expertise. It concludes by showing how SEE illuminates a number of contemporary issues such as the challenges of interdisciplinary working and the role of experts in a “post-truth” society.


Author(s):  
Dawei Wang
Kai Chen
Wei Wang

Smart speakers, such as Google Home and Amazon Echo, have become popular. They execute user voice commands via their built-in functionalities together with various third-party voice-controlled applications, called skills. Malicious skills pose significant security and privacy threats to users. As a countermeasure, only skills that pass a strict vetting process can be released onto the markets. However, malicious skills have been reported on these markets, indicating that the vetting process can be bypassed. This paper aims to demystify the skill-vetting process on the main markets in order to discover its weaknesses and better protect the markets. To probe the vetting process, we carefully design numerous skills, perform the Turing test, a test for machine intelligence, to determine whether humans or machines carry out the vetting, and leverage natural language processing techniques to analyze the vetters' behavior. Based on our comprehensive experiments, we gain a good understanding of the vetting process (e.g., machine versus human testers and skill-exploration strategies) and discover several weaknesses. We then design three types of attacks to verify our findings and show that an attacker can embed sensitive behaviors in skills and bypass the strict vetting process. Accordingly, we also propose countermeasures against these attacks and weaknesses.
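The abstract does not give the probing code; purely as a hypothetical illustration of the kind of heuristic that could be applied to a probe skill's request logs, one might look at timing regularity and utterance diversity to guess whether a vetting session is scripted (machine-like) or exploratory (human-like). The log fields and thresholds below are assumptions, not the authors' method:

```python
# Hypothetical sketch, not taken from the paper: crude heuristics over a probe
# skill's request log to guess whether a vetting session looks automated.
# Field names ("timestamp", "utterance") and thresholds are assumptions.
from statistics import pstdev
from typing import Dict, List

def looks_machine_vetted(log: List[Dict]) -> bool:
    """Near-constant request timing plus low utterance diversity suggests a
    scripted test harness rather than a human tester."""
    if len(log) < 3:
        return False
    gaps = [b["timestamp"] - a["timestamp"] for a, b in zip(log, log[1:])]
    timing_regular = pstdev(gaps) < 1.0              # seconds; arbitrary cutoff
    unique_utterances = {entry["utterance"].lower().strip() for entry in log}
    low_diversity = len(unique_utterances) / len(log) < 0.5
    return timing_regular and low_diversity

# Example with fabricated log entries:
session = [
    {"timestamp": 0.0, "utterance": "help"},
    {"timestamp": 2.0, "utterance": "help"},
    {"timestamp": 4.0, "utterance": "help"},
    {"timestamp": 6.0, "utterance": "help"},
    {"timestamp": 8.0, "utterance": "stop"},
]
print(looks_machine_vetted(session))  # True for this regular, repetitive session
```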


2020
Vol 59 (S 02)
pp. e64-e78
Author(s):  
Antje Wulff
Marcel Mast
Marcus Hassler
Sara Montag
Michael Marschollek
...  

Abstract Background Merging disparate and heterogeneous datasets from clinical routine into a standardized and semantically enriched format, so that the data can be used for multiple purposes, also means incorporating unstructured data such as medical free texts. Although the extraction of structured data from texts, known as natural language processing (NLP), has been researched extensively, at least for the English language, producing structured output in just any format is not enough. NLP techniques need to be used together with clinical information standards such as openEHR so that data which are still unstructured can be reused and exchanged sensibly. Objectives The aim of the study is to automatically extract crucial information from medical free texts and to transform this unstructured clinical data into a standardized and structured representation by designing and implementing an exemplary pipeline for the processing of pediatric medical histories. Methods We constructed a pipeline that allows reusing medical free texts such as pediatric medical histories in a structured and standardized way by (1) selecting and modeling appropriate openEHR archetypes as standard clinical information models, (2) defining a German dictionary with crucial text markers serving as the expert knowledge base for an NLP pipeline, and (3) creating mapping rules between the NLP output and the archetypes. The approach was evaluated in a first pilot study using 50 manually annotated medical histories from the pediatric intensive care unit of Hannover Medical School. Results We successfully reused 24 existing international archetypes to represent the most crucial elements of unstructured pediatric medical histories in a standardized form. The self-developed NLP pipeline was constructed by defining 3,055 text marker entries, 132 text events, 66 regular expressions, and a text corpus consisting of 776 entries for the automatic correction of spelling mistakes. A total of 123 mapping rules were implemented to transform the extracted snippets into an openEHR-based representation so that they can be stored together with other structured data in an existing openEHR-based data repository. In the first evaluation, the NLP pipeline yielded 97% precision and 94% recall. Conclusion The use of NLP and openEHR archetypes was demonstrated to be a viable approach for extracting and representing important information from pediatric medical histories in a structured and semantically enriched format. We designed a promising approach with the potential to be generalized, and implemented a prototype that is extensible and reusable for other use cases concerning German medical free texts. In the long term, this will make unstructured clinical data available for further research purposes such as the design of clinical decision support systems. Together with structured data already integrated in openEHR-based representations, we aim to develop an interoperable openEHR-based application that is capable of automatically assessing a patient's risk status based on the patient's medical history at the time of admission.
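As a toy sketch of the general idea (not the authors' pipeline), the following combines a tiny German text-marker dictionary, regex-based extraction, and mapping of the extracted snippets onto archetype-like paths; the marker patterns and paths are invented for illustration, whereas the real pipeline used 3,055 marker entries, 66 regular expressions, and 123 mapping rules against modeled openEHR archetypes:

```python
# Toy sketch only: a miniature German text-marker dictionary, regex extraction,
# and mapping of extracted values onto archetype-like paths. The patterns and
# paths below are invented for illustration and are not the study's artifacts.
import re
from typing import Dict

TEXT_MARKERS = {
    # marker name -> regex capturing the value that follows the marker
    "gestational_age_weeks": re.compile(r"geboren in der (\d+)\. SSW", re.IGNORECASE),
    "birth_weight_g": re.compile(r"Geburtsgewicht[:\s]+(\d+)\s*g", re.IGNORECASE),
}

MAPPING_RULES = {
    # marker name -> illustrative, archetype-like target path (hypothetical)
    "gestational_age_weeks": "pregnancy_summary/gestational_age/weeks",
    "birth_weight_g": "body_weight/birth_weight/grams",
}

def extract_and_map(free_text: str) -> Dict[str, str]:
    """Extract marker values from German free text and map them to paths."""
    mapped = {}
    for marker, pattern in TEXT_MARKERS.items():
        match = pattern.search(free_text)
        if match:
            mapped[MAPPING_RULES[marker]] = match.group(1)
    return mapped

history = "Kind geboren in der 32. SSW, Geburtsgewicht: 1850 g, bisher unauffällig."
print(extract_and_map(history))
# {'pregnancy_summary/gestational_age/weeks': '32', 'body_weight/birth_weight/grams': '1850'}
```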


1992
Vol 36 (4)
pp. 438-442
Author(s):  
H. McIlvaine Parsons

Interactive computer programs and human participants competed in a Turing Test at the Boston Computer Museum last November in the first year of a competition to determine, ultimately, whether such programs can be indistinguishable from humans in dialogue. The test is named for the British mathematician and computer pioneer who proposed it in 1950. This paper describes the competition, its preparation, and the problems that await resolution in future Turing Tests, which may culminate in a $100,000 award. The 1991 test was “restricted” in its rules and procedures lest a full test disadvantage the computer programs too severely. The contest posed issues concerning dialogue domains, language processing, the inclusion of cognitive tasks, and other features. Since a Turing Test can be interpreted as involving “thinking” and “intelligence” (though Turing had little use for such terms), future tests should intrigue the human factors community.


2021
pp. 16-32
Author(s):  
Simone Natale

The relationship between AI and deception was initially explored by Alan Turing, who famously proposed in 1950 a practical test addressing the question “Can machines think?” This chapter argues that Turing’s proposal of the Imitation Game, now more commonly called the Turing test, located the prospects of AI not just in improvements of hardware and software but also in a more complex scenario emerging from the interaction between humans and computers. The Turing test, by placing humans at the center of its design as judges and as conversation agents alongside computers, created a space to imagine and experiment with AI technologies in terms of their credibility to human users. This entailed the discovery that AI was to be achieved not only through the development of more complex and functional computing technologies but also through the use of strategies and techniques exploiting humans’ liability to illusion and deception.


Author(s):  
Alan Turing

Together with ‘On Computable Numbers’, ‘Computing Machinery and Intelligence’ forms Turing’s best-known work. This elegant and sometimes amusing essay was originally published in 1950 in the leading philosophy journal Mind. Turing’s friend Robin Gandy (like Turing a mathematical logician) said that ‘Computing Machinery and Intelligence’. . . was intended not so much as a penetrating contribution to philosophy but as propaganda. Turing thought the time had come for philosophers and mathematicians and scientists to take seriously the fact that computers were not merely calculating engines but were capable of behaviour which must be accounted as intelligent; he sought to persuade people that this was so. He wrote this paper—unlike his mathematical papers—quickly and with enjoyment. I can remember him reading aloud to me some of the passages— always with a smile, sometimes with a giggle. The quality and originality of ‘Computing Machinery and Intelligence’ have earned it a place among the classics of philosophy of mind. ‘Computing Machinery and Intelligence’ contains Turing’s principal exposition of the famous ‘imitation game’ or Turing test. The test first appeared, in a restricted form, in the closing paragraphs of ‘Intelligent Machinery’ (Chapter 10). Chapters 13 and 14, dating from 1951 and 1952 respectively, contain further discussion and amplification; unpublished until 1999, this important additional material throws new light on how the Turing test is to be understood. The imitation game involves three participants: a computer, a human interrogator, and a human ‘foil’. The interrogator attempts to determine, by asking questions of the other two participants, which of them is the computer. All communication is via keyboard and screen, or an equivalent arrangement (Turing suggested a teleprinter link). The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (So the computer might answer ‘No’ in response to ‘Are you a computer?’ and might follow a request to multiply one large number by another with a long pause and a plausibly incorrect answer.) The foil must help the interrogator to make a correct identification.
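A toy sketch of the three-party protocol described above, with trivially canned responders standing in for the computer and the foil (assumptions for illustration only); the point is only the structure: two hidden players answer the interrogator's questions, the machine may deceive, the foil answers helpfully, and the interrogator must then say which label hides the machine:

```python
# Toy illustration of the imitation game's structure, not a serious AI.
# The canned responses are placeholders; only the protocol shape matters.
import random

def machine_player(question: str) -> str:
    # The computer is permitted to deceive, e.g. denying that it is a computer,
    # or pausing and giving a plausibly incorrect arithmetic answer.
    if "computer" in question.lower():
        return "No, I am not a computer."
    return "Give me a moment... I am fairly sure the answer is 57."

def human_foil(question: str) -> str:
    # The foil tries to help the interrogator make the correct identification.
    return f"Speaking as the human here: {question} Ask me anything personal."

def play_round(questions):
    roles = [machine_player, human_foil]
    random.shuffle(roles)                        # hide which label is which
    players = {"A": roles[0], "B": roles[1]}
    transcript = {label: [respond(q) for q in questions]
                  for label, respond in players.items()}
    return transcript                            # the interrogator studies this

transcript = play_round(["Are you a computer?", "What is 7 times 8?"])
for label, answers in transcript.items():
    print(label, answers)
```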


2007
pp. 25-43
Author(s):  
O. Koshovets

The article investigates the processes of capitalization and ideologization of economic science as preconditions for the establishment and reproduction of expert knowledge. The author considers some basic problems in the functioning of the economic expert community in Russia and its relationship with fundamental theoretical economic science, taking as an example the technology of Foresight (based on the collection and analysis of expert judgments), which is very popular in contemporary forecasting research in Russia.

