The iconicity toolbox: empirical approaches to measuring iconicity

2019 ◽  
Vol 11 (02) ◽  
pp. 188-207 ◽  
Author(s):  
YASAMIN MOTAMEDI ◽  
HANNAH LITTLE ◽  
ALAN NIELSEN ◽  
JUSTIN SULIK

Abstract Growing evidence from across the cognitive sciences indicates that iconicity plays an important role in a number of fundamental language processes, spanning learning, comprehension, and online use. One benefit of this recent upsurge in empirical work is the diversification of methods available for measuring iconicity. In this paper, we provide an overview of methods in the form of a ‘toolbox’. We lay out empirical methods for measuring iconicity at a behavioural level, in the perception, production, and comprehension of iconic forms. We also discuss large-scale studies that look at iconicity on a system-wide level, based on objective measures of similarity between signals and meanings. We give a detailed overview of how different measures of iconicity can better address specific hypotheses, providing greater clarity when choosing testing methods.
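To make the system-wide approach concrete, the sketch below illustrates one common objective measure of the kind the abstract describes: a Mantel-style correlation between pairwise distances among word forms and pairwise distances among their meanings. The toy lexicon, the edit-distance measure of form similarity, and the two-dimensional meaning vectors are illustrative assumptions, not the authors' materials or their exact procedure.

```python
# Minimal sketch of a system-wide iconicity (form-meaning systematicity) measure.
# Assumptions: a toy lexicon, Levenshtein distance for forms, Euclidean distance
# for meanings; positive correlation = similar forms tend to have similar meanings.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr


def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (form dissimilarity)."""
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    dp[:, 0] = np.arange(len(a) + 1)
    dp[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i, j] = min(dp[i - 1, j] + 1,
                           dp[i, j - 1] + 1,
                           dp[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return int(dp[len(a), len(b)])


# Hypothetical lexicon: forms paired with meaning vectors (e.g., semantic features).
lexicon = {
    "bouba":  np.array([0.9, 0.1]),   # round, soft
    "maluma": np.array([0.8, 0.2]),
    "kiki":   np.array([0.1, 0.9]),   # spiky, sharp
    "takete": np.array([0.2, 0.8]),
}

words = list(lexicon)
form_dists, meaning_dists = [], []
for w1, w2 in combinations(words, 2):
    form_dists.append(edit_distance(w1, w2))
    meaning_dists.append(np.linalg.norm(lexicon[w1] - lexicon[w2]))

# Rank correlation between the two distance sets quantifies how systematically
# the whole lexicon maps forms onto meanings.
rho, p = spearmanr(form_dists, meaning_dists)
print(f"form-meaning correlation: rho = {rho:.2f}, p = {p:.3f}")
```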

Author(s):  
Pauline Jacobson

This chapter examines the currently fashionable notion of ‘experimental semantics’ and argues that most work in natural language semantics has always been experimental. The oft-cited dichotomy between ‘theoretical’ (or ‘armchair’) and ‘experimental’ work is bogus and should be dropped from the discourse. The same holds for dichotomies like ‘intuition-based’ (or ‘thought experiments’) vs. ‘empirical’ work (and ‘real experiments’). The so-called new ‘empirical’ methods are often nothing more than collecting large-scale ‘intuitions’, or doing multiple thought experiments. Of course, the use of multiple subjects could well allow for a better experiment than the more traditional single- or few-subject methodologies. But whether or not this is the case depends entirely on the question at hand. In fact, the chapter considers several multiple-subject studies and shows that the particular methodology in those cases does not necessarily provide important insights, and it argues that some of its claimed benefits are incorrect.


2020 ◽  
Author(s):  
Igor Grossmann ◽  
Nic M. Weststrate ◽  
Monika Ardelt ◽  
Justin Peter Brienza ◽  
Mengxi Dong ◽  
...  

Interest in wisdom in the cognitive sciences, psychology, and education has been paralleled by conceptual confusion about its nature and assessment. To clarify these issues and promote consensus in the field, wisdom researchers met in Toronto in July 2019, resolving disputes through discussion. Guided by a survey of scientists who study wisdom-related constructs, we established a common wisdom model, observing that empirical approaches to wisdom converge on the morally grounded application of metacognition to reasoning and problem-solving. After outlining the function of relevant metacognitive and moral processes, we critically evaluate existing empirical approaches to measurement and offer recommendations for best practices. In the subsequent sections, we use the common wisdom model to selectively review evidence about the role of individual differences in the development and manifestation of wisdom, approaches to wisdom development and training, and cultural, subcultural, and social-contextual differences. We conclude by discussing wisdom’s conceptual overlap with a host of other constructs and outline unresolved conceptual and methodological challenges.


Author(s):  
Ashlynn M. Keller ◽  
Holly A. Taylor ◽  
Tad T. Brunyé

Abstract Navigating an unfamiliar city almost certainly brings with it uncertainty about getting from place to place. This uncertainty, in turn, triggers information gathering. While navigational uncertainty is common, little is known about what type of information people seek when they are uncertain. The primary types of environmental information include landmarks (distal or local), landmark configurations (relations between two or more landmarks), and, at least for some environments, a distinct geometry. Uncertainty could make individuals more likely to seek one of these information types. Extant research informs both predictions about and empirical work exploring this question. This review covers the relevant cognitive literature and then suggests empirical approaches to better understand the information-seeking actions triggered by uncertainty. Notably, we propose that examining continuous navigation data can provide important insights into information seeking. The benefits of continuous data are elaborated through one paradigm, spatial reorientation, which intentionally induces uncertainty through disorientation and cue conflict. While this and other methods have been used previously, the data collected have primarily reflected only the final choice. Continuous behavior during a task can better reveal the cognition-action loop contributing to spatial learning and decision making.


2010 ◽  
Vol 3 (2) ◽  
pp. 195-204 ◽  
Author(s):  
W. G. Moravia ◽  
A. G. Gumieri ◽  
W. L. Vasconcelos

Nowadays lightweight concrete is used on a large scale for structural purposes and to reduce the self-weight of structures. Specific gravity, compressive strength, strength/weight ratio and modulus of elasticity are important factors in the mechanical behavior of structures. This work studies these properties in lightweight aggregate concrete (LWAC) and normal-weight concrete (NWC), comparing them. Specific gravity was evaluated in the fresh and hardened states. Four mixture proportions were adopted to evaluate compressive strength. For each proposed mixture proportion of the two concretes, cylindrical specimens were molded and tested at ages of 3, 7 and 28 days. The modulus of elasticity of the NWC and LWAC was analyzed by static, dynamic and empirical methods. The results show a larger strength/weight ratio for LWAC, although this concrete presented lower compressive strength.
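For readers unfamiliar with the empirical method mentioned above, the sketch below shows one widely used empirical relation of that kind, an ACI 318-style expression estimating the modulus of elasticity from density and compressive strength, together with the strength/weight ratio the abstract highlights. The density and strength values used are illustrative assumptions, not measurements reported in this study.

```python
# Minimal sketch of an empirical modulus-of-elasticity estimate (ACI 318-style):
# E_c = 0.043 * w_c**1.5 * sqrt(f_c) in MPa, with w_c the concrete density in
# kg/m^3 and f_c the compressive strength in MPa. Input values are hypothetical.

def empirical_modulus_mpa(density_kg_m3: float, strength_mpa: float) -> float:
    """Empirical estimate of the modulus of elasticity, in MPa."""
    return 0.043 * density_kg_m3 ** 1.5 * strength_mpa ** 0.5


def strength_to_weight(strength_mpa: float, density_kg_m3: float) -> float:
    """Strength/weight ratio in MPa per (kg/m^3); higher favors lightweight mixes."""
    return strength_mpa / density_kg_m3


# Hypothetical 28-day values for a normal-weight and a lightweight aggregate mix.
for label, density, strength in [("NWC", 2400.0, 35.0), ("LWAC", 1800.0, 28.0)]:
    modulus = empirical_modulus_mpa(density, strength)
    ratio = strength_to_weight(strength, density)
    print(f"{label}: E ~ {modulus / 1000:.1f} GPa, "
          f"strength/weight = {ratio:.4f} MPa per kg/m^3")
```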


2013 ◽  
pp. 133-151 ◽  
Author(s):  
Hanne Andersen

This paper focuses on Thomas S. Kuhn's work on taxonomic concepts and how it relates to empirical work from the cognitive sciences on categorization and conceptual development. I shall first review the basic features of Kuhn's family resemblance account and compare it with work from the cognitive sciences. I shall then show how Kuhn's account can be extended to cover the development of new taxonomies in science, and I shall illustrate this with a detailed case study that Kuhn himself mentioned only briefly in his own work, namely the discovery of X-rays and radioactivity.


Author(s):  
Paul W. Glimcher

In the early twentieth century, neoclassical economic theorists began to explore mathematical models of maximization. The theories of human behavior that they produced explored how optimal human agents, who were subject to no internal computational resource constraints of any kind, should make choices. During the second half of the twentieth century, empirical work laid bare the limitations of this approach. Human decision makers were often observed to fail to achieve maximization in domains ranging from health to happiness to wealth. Psychologists responded to these failures by largely abandoning holistic theory in favor of large-scale multi-parameter models that retained many of the key features of the earlier models. Over the last two decades, scholars combining neurobiology, psychology, economics, and evolutionary approaches have begun to examine alternative theoretical approaches. Their data suggest explanations for some of the failures of neoclassical approaches and reveal new theoretical avenues for exploration. While neurobiologists have largely validated the economic and psychological assumption that decision makers compute and represent a single decision variable for every option considered during choice, their data also make clear that the human brain faces severe computational resource constraints, which force it to rely on very specific modular approaches to the processes of valuation and choice.


Author(s):  
Bruce MacLennan

This chapter considers the question of whether a robot could feel pain or experience other emotions and proposes empirical methods for answering this question. After a review of the biological functions of emotion and pain, the author argues that autonomous robots have similar functions that need to be fulfilled, which require systems analogous to emotion and pain. Protophenomenal analysis, which involves parallel reductions in the phenomenological and neurological domains, is explained and applied to the “hard problem” of robot emotion and pain. The author outlines empirical approaches to answering the fundamental questions on which the possibility of robot consciousness in general depends. The author then explains the importance of sensors distributed throughout a robot's body for the emergence of coherent emotional phenomena in its awareness. Overall, the chapter elucidates the issue of robot pain and emotion and outlines an approach to resolving it empirically.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
M. Obaidul Hamid ◽  
Ian Hardy ◽  
Vicente Reyes

Abstract Although language test-takers have been the focus of much theoretical and empirical work in recent years, this work has been mainly concerned with their attitudes to test preparation and test-taking strategies, giving insufficient attention to their views on broader socio-political and ethical issues. This article examines test-takers’ perceptions and evaluations of the fairness, justice and validity of global tests of English, with a particular focus on the International English Language Testing System (IELTS). Based on relevant literature and theorizing about such tests, and on self-reported test experience data gathered from test-takers (N = 430) from 49 countries, we demonstrate how test-takers experienced fairness and justice in complex ways that problematized the purported technical excellence and validity of IELTS. Even though there was some evidence of support for the test as a fair measure of students’ English capacity, the extent to which it actually reflected their language capabilities was open to question. At the same time, the participants expressed concerns about whether IELTS was a vehicle for raising revenue and for justifying immigration policies, thus raising questions about the justness of the test. The research foregrounds the importance of focusing attention on the socio-political and ethical circumstances that currently attend large-scale, standardized English language testing.


2019 ◽  
Vol 79 (5) ◽  
pp. 883-910 ◽  
Author(s):  
Spyros Konstantopoulos ◽  
Wei Li ◽  
Shazia Miller ◽  
Arie van der Ploeg

This study discusses quantile regression methodology and its usefulness in education and social science research. First, quantile regression is defined and its advantages vis-à-vis ordinary least squares regression are illustrated. Second, specific comparisons are made between ordinary least squares and quantile regression methods. Third, the applicability of quantile regression to empirical work is demonstrated by estimating intervention effects using education data from a large-scale experiment. The estimation of quantile treatment effects at various quantiles in the presence of dropouts is also discussed. Quantile regression is especially suitable for examining predictor effects at various locations of the outcome distribution (e.g., the lower and upper tails).
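As an illustration of the contrast the abstract draws, the sketch below fits quantile regressions of a simulated outcome on a treatment indicator at several quantiles and compares them with the single OLS estimate. The simulated data, variable names, and effect sizes are assumptions for exposition, not the study's experimental data.

```python
# Minimal sketch: quantile treatment effects vs. a single OLS average effect.
# Assumes simulated data in which the intervention effect grows in the upper tail.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n)                  # randomized intervention indicator
# Outcome whose treatment effect is larger toward the upper tail of the distribution.
score = 50 + 5 * treat + (2 + 3 * treat) * rng.normal(0, 1, n)
data = pd.DataFrame({"score": score, "treat": treat})

# OLS gives one average effect ...
ols = smf.ols("score ~ treat", data=data).fit()
print(f"OLS average effect: {ols.params['treat']:.2f}")

# ... while quantile regression estimates the effect at chosen quantiles,
# revealing how the intervention behaves at different outcome locations.
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    qr = smf.quantreg("score ~ treat", data=data).fit(q=q)
    print(f"q = {q:.2f}: treatment effect = {qr.params['treat']:.2f}")
```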


Author(s):  
Sayyed Mahdi Ziaei

Purpose: This paper constitutes the first empirical work investigating the effects of US unconventional monetary policy shocks on Islamic equities. Design/methodology/approach: The authors used the spread between sovereign (term spread) and corporate (corporate spread) yields as proxies for unconventional monetary policy during the periods when the Fed implemented successive rounds of large-scale asset purchase programs. Findings: The paper demonstrates that monetary policy shocks have significant effects on Islamic equities. The analysis provides substantial evidence that corporate spread innovations were reflected as a positive signal in Islamic equity markets and had a larger impact on Islamic low-leverage equities than term spread innovations. Originality/value: The paper sheds light on the effects of US unconventional monetary policy on low-leverage financial assets. Unconventional monetary policy and zero-bound interest rates have been in place in the US economy since November 2008, yet the strength of these policies' effects on Islamic financial products has remained unidentified.
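As a rough illustration of the kind of specification implied above, the sketch below regresses simulated Islamic equity returns on term-spread and corporate-spread proxies. The simulated series, variable names, and the simple OLS setup are assumptions for exposition; they are not the paper's data or its identification of policy shocks.

```python
# Minimal sketch: Islamic equity returns regressed on term-spread and
# corporate-spread proxies for unconventional monetary policy (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120  # ten years of hypothetical monthly observations
df = pd.DataFrame({
    "term_spread": rng.normal(1.5, 0.5, n),   # 10y minus 3m sovereign yield, in %
    "corp_spread": rng.normal(2.0, 0.7, n),   # corporate minus sovereign yield, in %
})
# Simulate returns that respond more strongly to the corporate spread,
# echoing the direction of the finding summarized above.
df["islamic_return"] = (
    0.5 + 0.8 * df["corp_spread"] - 0.3 * df["term_spread"] + rng.normal(0, 1.5, n)
)

model = smf.ols("islamic_return ~ term_spread + corp_spread", data=df).fit()
print(model.params.round(3))
```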

