Constructing and Deconstructing Concepts

Author(s):  
Charles A. Doan ◽  
Ronaldo Vigo

Abstract. Several empirical investigations have explored whether observers prefer to sort sets of multidimensional stimuli into groups by employing one-dimensional or family-resemblance strategies. Although one-dimensional sorting strategies have been the prevalent finding for these unsupervised classification paradigms, several researchers have provided evidence that the choice of strategy may depend on the particular demands of the task. To account for this disparity, we propose that observers extract relational patterns from stimulus sets that facilitate the development of optimal classification strategies for assigning category membership. We conducted a novel constrained categorization experiment to empirically test this hypothesis by instructing participants to either add or remove objects from presented categorical stimuli. We employed generalized representational information theory (GRIT; Vigo, 2011b, 2013a, 2014) and its associated formal models to predict and explain how human beings chose to modify these categorical stimuli. Additionally, we compared model performance to predictions made by a leading prototypicality measure in the literature.

Author(s):  
Robert C. Stalnaker

A mental state is luminous if and only if being in a state of that kind always puts one in a position to know that one is in the state. This chapter is a critique of Timothy Williamson’s margin-of-error argument that no nontrivial states are luminous in this sense. While I agree with Williamson’s rejection of a Cartesian internalist conception of the mind, I argue that an externalist conception (one based on information theory) can be reconciled with the luminosity of intentional mental states such as knowledge. My argument, which uses an artificial and simplified model of knowledge, is not a direct rebuttal to his argument as applied to a more realistic notion of the knowledge of human beings, but I argue that it shows that a luminosity assumption is compatible with externalism about knowledge, and it suggests an intuitively plausible strategy for resisting his argument.


Atmosphere ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 345 ◽  
Author(s):  
Paula Doubrawa ◽  
Domingo Muñoz-Esparza

Recent computational and modeling advances have led a diverse modeling community to experiment with atmospheric boundary layer (ABL) simulations at subkilometer horizontal scales. Accurately parameterizing turbulence at these scales is a complex problem. The modeling solutions proposed to date are still in the development phase and remain largely unvalidated. This work assesses the performance of methods currently available in the Weather Research and Forecasting (WRF) model to represent ABL turbulence at a gray-zone grid spacing of 333 m. We consider three one-dimensional boundary layer parameterizations (MYNN, YSU, and Shin-Hong) and coarse large-eddy simulations (LES). The reference dataset consists of five real-case simulations performed with WRF-LES nested down to 25 m. Results reveal that users should refrain from coarse LES and favor the scale-aware Shin-Hong parameterization over traditional one-dimensional schemes. Overall, the spread in model performance is large for the cellular convection regime corresponding to the majority of our cases, with coarse LES overestimating turbulent energy across scales and YSU underestimating it and failing to reproduce its horizontal structure. Despite yielding the best results, the Shin-Hong scheme overestimates the effect of grid dependence on turbulent transport, highlighting the outstanding need for improved solutions to seamlessly parameterize turbulence across scales.


2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Jennifer M. Gómez

In this essay, I detail how homogenizing appraisals of diverse faculty women during COVID-19 are harmful to all, including myself. I highlight how academic demands to be “talking heads” and not full human beings, though not new, are especially harmful in the current era. As a Black woman faculty member dealing with the double pandemic of COVID-19 and anti-Black racism, the one-dimensional appraisals of women faculty exclude me: I am not a mother dealing with sexist overburden in household responsibilities that interferes with my work. Instead, I am dealing with isolation and loneliness, which I sublimate through work productivity. Though this brings shame, I also realize that universities could operate differently, recognizing women scholars for their diversity in identities, backgrounds, responsibilities, work styles, and personalities during the pandemic and beyond. Given that work productivity is not synonymous with well-being, I hope my colleagues know that, in this moment, I am not okay.


10.29007/k855 ◽  
2018 ◽  
Author(s):  
Sara Alonso ◽  
Elena Ridolfi ◽  
Chiara Biscarini ◽  
Leonardo Alfonso

Accurate flood propagation and inundation models are crucial in flood risk assessments. For fast-flowing rivers such as the Magdalena River (Colombia), with high vulnerability and exposure rates, they are even more essential. Indeed, floods in the Magdalena River account for 90% of the damages and 70% of the casualties in Colombia. River cross-sectional information (i.e., the number and spacing of cross-sections) must be optimally selected to properly capture the river’s hydraulic behaviour. Optimization is a powerful tool for making such a selection, which is often necessary to increase the efficiency of fieldwork and decrease model simulation time. A methodology based on the entropy concept provides interesting results in agreement with those proposed in the literature. The optimization method uses two concepts from information theory: joint entropy and total correlation. Total correlation quantifies the redundancy of cross-sections; joint entropy provides their information content. This approach is applied to a reach of the Magdalena River. This study analyses the interrelation between the location of the optimal set of cross-sections and the hydraulic behaviour of the Middle-Magdalena River. Further work considers the evaluation of model performance with the optimized cross-sections, where no negative impacts on the reliability of flood profiles with respect to the original model are expected.
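The two information-theoretic quantities driving the selection can be sketched in a few lines of pure Python. This is a minimal illustration, not the paper’s implementation: the toy stage records and their discretization are invented for the example, and the abstract does not specify the estimator used.

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits) of a sequence of hashable symbols."""
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in Counter(symbols).values())

def joint_entropy(series):
    """Joint entropy of aligned discretized series (one per cross-section)."""
    return entropy(list(zip(*series)))

def total_correlation(series):
    """Redundancy among cross-sections: sum of marginal entropies minus joint entropy."""
    return sum(entropy(s) for s in series) - joint_entropy(series)

# Toy discretized water-level records at three candidate cross-sections
xs1 = [0, 0, 1, 1, 2, 2, 1, 0]
xs2 = [0, 0, 1, 1, 2, 2, 1, 0]   # duplicates xs1: fully redundant
xs3 = [0, 1, 0, 1, 0, 1, 0, 1]   # adds largely independent information

print(joint_entropy([xs1, xs2]), total_correlation([xs1, xs2]))
print(joint_entropy([xs1, xs3]), total_correlation([xs1, xs3]))
```

A set of cross-sections that maximizes joint entropy while keeping total correlation low carries the most non-redundant information about the reach, which is the trade-off the optimization exploits.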


2013 ◽  
Vol 10 (2) ◽  
pp. 2029-2065 ◽  
Author(s):  
S. V. Weijs ◽  
N. van de Giesen ◽  
M. B. Parlange

Abstract. When inferring models from hydrological data or calibrating hydrological models, we might be interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory (AIT) to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction, and learning (understanding is compression). The analysis is performed on time series of a set of catchments, searching for the mechanisms behind compressibility. We discuss the deeper foundations in algorithmic information theory, some practical results, and the inherent difficulties in answering the question: "How much information is contained in this data?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) Information about which unknown quantities? (2) What is your current state of knowledge/beliefs about those quantities? Quantifying the information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in the current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively, without specifying prior beliefs. These beliefs are related to the maximum complexity one is willing to accept as a law and what is considered as random.
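The "understanding is compression" analogy can be made concrete with a general-purpose compressor as a crude upper bound on information content. This is only an illustrative sketch, assuming a particular text encoding of the series and using synthetic data; it is not the authors' analysis pipeline.

```python
import random
import zlib

def compressed_bits(values, precision=1):
    """Proxy for information content: size in bits of the zlib-compressed series."""
    text = ",".join(f"{v:.{precision}f}" for v in values)
    return 8 * len(zlib.compress(text.encode(), 9))

random.seed(0)
# A strongly structured (seasonal) series vs. an unpredictable one of equal length
seasonal = [10.0 if (t // 180) % 2 == 0 else 2.0 for t in range(3600)]
noisy = [random.uniform(0.0, 12.0) for _ in range(3600)]

print(compressed_bits(seasonal))  # few bits: highly predictable
print(compressed_bits(noisy))     # many bits: close to incompressible
```

The structured series compresses far better, i.e. there is less to learn from each new value, which is exactly the sense in which compressibility measures how much a time series can teach us.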


2019 ◽  
Vol 7 (6) ◽  
pp. 1040-1047
Author(s):  
Rajesh K ◽  
Rajasekaran V

Purpose of the study: The present study examines the limitations of normative ethics and analyzes the anthropocentrism in Kim Stanley Robinson’s 2312 based on the actions or duties of the characters. Methodology: The article uses normative ethics as its methodology. Normative ethics is the study of ethical action, setting out rules about how we ought to act and decide. Accordingly, this study draws on three normative ethical theories, the utilitarian approach, Kantian ethics, and virtue ethics, to judge which duties are right and wrong. Main Findings: Normative ethics amounts to a one-dimensional approach: each of the three theories applies its own specific code. Utilitarianism focuses on good outcomes, Kantian ethics attends to good rules grounded in duty, and virtue ethics focuses on good people; yet all three share the objective of attending only to human beings (sentient entities) while omitting other entities (plants and animals). All normative ethics thus have certain limitations and prescribe duties without regard to consequences and situations. In conclusion, this code of normative ethics proves to be anthropocentric. Moreover, Swan’s actions and rational behavior cause her to fail miserably on Mercury through the construction of the biome and the creation of quantum computers, which is why, in the end, the space dwellers seek to move from space to Earth to rebuild the biome. Applications of this study: The study analyses normative ethics in detail under the utilitarian approach, Kantian ethics, and virtue ethics. These philosophical domains can benefit researchers, especially in the Humanities and Social Sciences, in practicing and implementing them during the research process.
Novelty/Originality of this study: The study analyzed the anthropocentric attitude of the character Swan in 2312 based on her actions or duties through the code of normative ethics (utilitarianism, Kantian ethics, and virtue ethics).


Author(s):  
Jonathan Readshaw ◽  
Stefano Giani

Abstract. This work presents a convolutional neural network for the prediction of next-day stock fluctuations using company-specific news headlines. Experiments to evaluate model performance using various configurations of word embeddings and convolutional filter widths are reported. The total number of convolutional filters used is far fewer than is common, reducing the dimensionality of the task without loss of accuracy. Furthermore, multiple hidden layers with decreasing dimensionality are employed. A classification accuracy of 61.7% is achieved using pre-learned embeddings that are fine-tuned during training to represent the specific context of this task. Multiple filter widths are also implemented to detect the different-length phrases that are key for classification. Trading simulations are conducted using the presented classification results. Initial investments are more than tripled over an 838-day testing period using the optimal classification configuration and a simple trading strategy. Two novel methods are presented to reduce the risk of the trading simulations. Adjustment of the sigmoid class threshold and re-labelling headlines using multiple classes form the basis of these methods. A combination of these approaches is found to more than double the Average Trade Profit achieved during baseline simulations.
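The multi-width convolutional front end described above can be sketched in pure Python: each filter of width w slides over the sequence of word embeddings and max pooling keeps its strongest response. All dimensions and the random stand-ins for the pre-learned embeddings and trained filters are illustrative, not the paper’s actual configuration.

```python
import random

random.seed(1)
EMB_DIM, FILTER_WIDTHS, N_FILTERS = 8, (3, 4, 5), 2  # deliberately few filters

def conv_max_pool(embeddings, kernels, width):
    """Slide each kernel over the token embeddings; keep the max response (pooling)."""
    feats = []
    for k in kernels:
        responses = [
            sum(k[i][d] * embeddings[t + i][d]
                for i in range(width) for d in range(EMB_DIM))
            for t in range(len(embeddings) - width + 1)
        ]
        feats.append(max(responses))
    return feats

# A 12-token headline as random stand-in embeddings
headline = [[random.gauss(0, 1) for _ in range(EMB_DIM)] for _ in range(12)]

features = []
for w in FILTER_WIDTHS:
    kernels = [[[random.gauss(0, 1) for _ in range(EMB_DIM)] for _ in range(w)]
               for _ in range(N_FILTERS)]
    features.extend(conv_max_pool(headline, kernels, w))

print(len(features))  # one pooled feature per filter: 3 widths x 2 filters = 6
```

The pooled feature vector would then feed the decreasing-dimensionality hidden layers and the sigmoid output whose class threshold the risk-reduction method adjusts.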


2005 ◽  
Vol 17 (5) ◽  
pp. 996-1009 ◽  
Author(s):  
Jens Christian Claussen

A new family of self-organizing maps, the winner-relaxing Kohonen algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. The magnification behavior is calculated analytically. For the original variant, a magnification exponent of 4/7 is derived; the generalized version allows steering the magnification in the wide range from exponent 1/2 to 1 in the one-dimensional case, thus providing optimal mapping in the sense of information theory. The winner-relaxing algorithm requires minimal extra computations per learning step and is conveniently easy to implement.


2011 ◽  
Vol 11 (1-2) ◽  
pp. 49-59
Author(s):  
Setareh Mohsenifar ◽  
Mohsen Arjmand ◽  
Habibollah Ghassemzadeh ◽  
Shabnam Salimi

Abstract. The ability to categorize is known as one of the most important cognitive abilities in human beings. When it comes to the type of categorization, different people appear to select differently: some categorize on the basis of similarity judgments and some on the basis of a uni-dimensional rule. The present study evaluates the tendency toward a specific type of categorization in a voluntary group of medical students in Iran. Most studies of categorization have been conducted in the Western world, with some involving East Asian participants. To the best of our knowledge, this is the first categorization study conducted in Iran. The results suggest that Iranians, like East Asian participants, tend to categorize mostly on the basis of similarity. There was no relationship between the participants’ IQ scores and their type of categorization. We also examined the implications of the words “similarity” and “belonging to” as translated into Persian.
