numerical representation
Recently Published Documents

TOTAL DOCUMENTS: 368 (last five years: 110)
H-INDEX: 27 (last five years: 3)

2022 ◽  
Author(s):  
Bo Gao ◽  
Ethan T. Coon

Abstract. Permafrost degradation within a warming climate poses a significant environmental threat through both the permafrost carbon feedback and damage to human communities and infrastructure. Understanding this threat relies on a better understanding and numerical representation of thermo-hydrological permafrost processes, and on the subsequent accurate prediction of permafrost dynamics. All models include simplifying assumptions, implying a tradeoff between model complexity and prediction accuracy. The main purpose of this work is to investigate this tradeoff when applying the following commonly made assumptions: (1) assuming equal density of ice and liquid water in frozen soil; (2) neglecting the effect of cryosuction in unsaturated freezing soil; and (3) neglecting advective heat transport during soil freezing and thaw. This study designed a set of 62 numerical experiments using the Advanced Terrestrial Simulator (ATS v1.2) to evaluate the effects of these choices on permafrost hydrological outputs, including both integrated and pointwise quantities. Simulations were conducted under different climate conditions and soil properties from three different sites, in both column- and hillslope-scale configurations. Results showed that, amongst the three physical assumptions, soil cryosuction is the most crucial yet most commonly ignored process. Neglecting cryosuction can, on average, cause 10–20 % error in predicted evaporation, 50–60 % error in discharge, 10–30 % error in thaw depth, and 10–30 % error in soil temperature at 1 m below the surface. The prediction error for subsurface temperature and water saturation is more pronounced at hillslope scales due to the presence of lateral flux. By comparison, assuming equal ice and liquid water density has a minor impact on most hydrological variables but significantly affects soil water saturation, with an average 5–15 % error. Neglecting advective heat transport introduces the smallest errors, 5 % or much lower, in most variables for a typical Arctic tundra system, and can decrease the simulation time at hillslope scales by 40–80 %. By challenging these commonly made assumptions, this work provides permafrost hydrology modelers with important context for choosing the appropriate process representation for a given modeling experiment.
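For context on assumption (1): ice is roughly 8–9 % less dense than liquid water, so an equal-density model ignores the volume expansion that accompanies pore-water freezing. A back-of-envelope sketch in Python, using standard handbook densities rather than any values from this study:

```python
# Illustrative only: quantifies the volume change neglected when ice and
# liquid water are assumed to have equal density (assumption 1 above).
RHO_WATER = 1000.0  # kg/m^3, liquid water near 0 degC (handbook value)
RHO_ICE = 917.0     # kg/m^3, ice Ih near 0 degC (handbook value)

mass = 1.0  # kg of pore water that freezes
volume_liquid = mass / RHO_WATER
volume_ice = mass / RHO_ICE

expansion = (volume_ice - volume_liquid) / volume_liquid
print(f"Volume expansion on freezing: {expansion:.1%}")  # ~9.1 %
```

Neglecting this expansion leaves the total mass balance intact but shifts the simulated pore-ice volume, consistent with the saturation errors reported above.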


Author(s):  
Tongtong Wang ◽  
Anand Kumar Sharma ◽  
Christian Wolfrum

Abstract. When normalized to volume, adipose tissue is composed mainly of large lipid-metabolizing and lipid-storing cells called adipocytes. Strikingly, in terms of numerical representation, non-adipocytes, a wide variety of cell types that make up the so-called stromal vascular fraction (SVF), far outnumber adipocytes. Besides its function in energy storage, adipose tissue has emerged as a versatile organ that regulates systemic metabolism and has therefore become an attractive target for the treatment of metabolic diseases. Recent high-resolution single-cell/single-nucleus RNA-seq data reveal an intriguingly profound diversity of both adipocytes and SVF cells across all adipose depots, and the current data, while limited, demonstrate the significance of intra-tissue cell composition in shaping the overall functionality of this tissue. Due to the complexity of adipose tissue, our understanding of the biological relevance of this heterogeneity and plasticity remains fragmentary. Establishing atlases of adipose tissue cell heterogeneity is therefore the first step towards understanding these functionalities. In this review, we describe the current knowledge of adipose tissue cell composition and the heterogeneity revealed by single-cell RNA sequencing, including the technical limitations.


2021 ◽  
pp. 095001702110359
Author(s):  
Maryam Aldossari ◽  
Sara Chaudhry ◽  
Ahu Tatli ◽  
Cathrine Seierstad

Extending tokenism theory and Kanter's work on numerical representation within organisations, we emphasise the societal context of gender inequality in order to understand token women's lived experiences at work. Based on an analysis of 29 in-depth interviews in a multinational corporation (MNC) situated in the distinctive socio-institutional setting of Saudi Arabia, the article expands Kanter's typology of roles to capture token assimilation in a context-embedded way. In particular, we explore the interaction of a seemingly Western MNC espousing liberal values, rules and norms with the enduring patriarchal and traditional context of Saudi Arabia. Further adding texture to Kanter's theory, this study reveals that the organisational context cannot be seen as fundamentally neutral: it inevitably interacts with the societal context, resulting in unique manifestations of tokenism.


2021 ◽  
Vol 9 (1) ◽  
pp. 121-132
Author(s):  
Anis Haron ◽  
Wong Chee Onn ◽  
Hew Soon Hin

Timbre is commonly described using semantic descriptors such as 'dark', 'bright' and 'warm'. Such descriptors are useful and widely employed by trained individuals in music-related industries. They are, however, subjective, as they can be interpreted differently by different individuals depending on factors such as training and exposure. Semantic descriptors also lack granularity, in the sense that a descriptor does not indicate the amount or intensity of the quality it describes. A numerical representation of timbre addresses these issues. Computational approaches to the numerical measurement of timbre are currently under study by music technology researchers. Such studies require benchmark data for viability testing. To provide a dataset that can be used for benchmarking, a survey on auditory perception and semantic descriptors of musical timbres was conducted. Using a normative survey methodology, the survey investigates whether a general consensus can be observed in the semantic description of musical timbres. This article reviews the survey, presenting its approach, results and findings.
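A concrete example of such a numerical measure, added here for illustration (the surveyed study does not prescribe it): the spectral centroid, a standard correlate of perceived 'brightness'. A minimal sketch with numpy:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    # Magnitude-weighted mean frequency: a numerical "brightness" measure.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# A 440 Hz tone with a strong third harmonic sounds "brighter" than a
# pure tone, and its centroid is correspondingly higher.
sr = 44100
t = np.arange(sr) / sr
pure = np.sin(2 * np.pi * 440 * t)
bright = pure + 0.8 * np.sin(2 * np.pi * 1320 * t)

print(f"pure tone centroid:   {spectral_centroid(pure, sr):6.1f} Hz")   # ~440
print(f"bright tone centroid: {spectral_centroid(bright, sr):6.1f} Hz")  # ~830
```

Unlike 'bright' as a word, the centroid is graded, giving exactly the granularity that semantic descriptors lack.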


2021 ◽  
Vol 12 (6) ◽  
pp. 7249-7266

A topological index is a numerical representation of a chemical structure. From such indices, the physicochemical properties, thermodynamic behavior, chemical reactivity, and biological activity of chemical compounds can be estimated. Acetaminophen is an essential drug used to prevent or treat fever in various illnesses, including malaria, flu, dengue, SARS, and even COVID-19. This paper computes the sum and multiplicative versions of various topological indices, such as the general Zagreb, general Randić, general OGA, AG, ISI, SDD, and Forgotten indices, from M-polynomials of Acetaminophen. To the best of our knowledge, these indices have not previously been computed for Acetaminophen.
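For readers unfamiliar with these quantities, the two classic examples are easy to state: the first Zagreb index sums squared vertex degrees, and the Randić index sums (deg(u)·deg(v))^(-1/2) over all edges uv. A minimal sketch using networkx on a toy ring graph (illustrative only; this is not the Acetaminophen molecular graph, and the paper's M-polynomial derivations are not reproduced):

```python
import networkx as nx

def first_zagreb(G):
    # M1(G) = sum over vertices v of deg(v)^2
    return sum(d ** 2 for _, d in G.degree())

def randic(G):
    # R(G) = sum over edges uv of 1 / sqrt(deg(u) * deg(v))
    return sum((G.degree(u) * G.degree(v)) ** -0.5 for u, v in G.edges())

G = nx.cycle_graph(6)   # 6-ring skeleton; every vertex has degree 2
print(first_zagreb(G))  # 6 * 2^2 = 24
print(randic(G))        # 6 * (1 / sqrt(4)) = 3.0
```

Both functions map a graph to a single number, which is exactly what makes topological indices usable as predictors of physicochemical properties.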


2021 ◽  
Vol 15 (1-2) ◽  
pp. 1-29
Author(s):  
Axel Pichler ◽  
Nils Reiter

Abstract. The present article discusses and reflects on possible ways of operationalizing the terminology of traditional literary studies for use in computational literary studies. By »operationalization«, we mean the development of a method for tracing a (theoretical) term back to text-surface phenomena; this is done explicitly and in a rule-based manner, involving a series of substeps. This procedure is presented in detail using as a concrete example Norbert Altenhofer's »model interpretation« (Modellinterpretation) of Heinrich von Kleist's The Earthquake in Chile. In the process, we develop a multi-stage operation – reflected upon throughout in terms of its epistemological implications – that is based on a rational-hermeneutic reconstruction of Altenhofer's interpretation, which focuses on »mysteriousness« (Rätselhaftigkeit), a concept from everyday language.

As we go on to demonstrate, when trying to operationalize this term, one encounters numerous difficulties, owing to the fact that Altenhofer's use of it is underspecified in a number of ways. Thus, for instance, and contrary to Altenhofer's suggestion, Kleist's sentences containing »relativizing or perspectivizing phrases such as ›it seemed‹ or ›it was as if‹« (Altenhofer 2007, 45) by no means, when analyzed linguistically, suggest a questioning or challenge of the events narrated, since the unreal quality of those German sentences relates only to the comparison in the subordinate clause, not to the respective main clause. Another indicator central to Altenhofer's ascription of »mysteriousness« is his concept of a »complete facticity« (lückenlose Faktizität) which »does not seem to leave anything ›open‹« (Altenhofer 2007, 45). Again, the precise designation of what exactly qualifies facticity as »complete« is left open, since Kleist's novella does indeed select for portrayal certain phenomena and actions within the narrated world (and not others). The degree of factuality in Kleist's text may be higher than it is in other texts, but it is by no means »complete«. In the context of Altenhofer's interpretation, »complete facticity« may be taken to mean a narrative mode in which terrible events are reported using conspicuously sober and at times drastic language.

Following the critical reconstruction of Altenhofer's use of terminology, the central terms and their relationship to one another are first explicated (in natural language), which already necessitates intensive conceptual work. We do so by implementing a hierarchical understanding of the terms discussed: the definition of one term uses other terms which also need to be defined and operationalized. In accordance with the requirements of computational text analysis, this hierarchy of terms should end in »directly measurable« terms, i.e., in terms that can be clearly identified on the surface of the text. This, however, leads to the question of whether (and, if so, on the basis of which theoretical assumptions) the terminology of literary studies may be traced back in this way to text-surface phenomena.

Following the pragmatic as well as the theoretical discussion of this complex of questions, we indicate ways by which such definitions may be converted into manual or automatic recognition. In the case of manual recognition, the paradigm of annotation – as established and methodologically reflected in (computational) linguistics – will be useful, and a well-controlled annotation process will help to further clarify the terms in question. The primary goal, however, is to establish a recognition rule by which individuals may intersubjectively and reliably identify instances of the term in question in a given text. While it is true that in applying this method to literary studies new challenges arise – such as the question of the validity and reliability of the annotations – these challenges are at present being researched intensively in the field of computational literary studies, which has produced a large and growing body of research to draw on.

In terms of computer-aided recognition, we examine, by way of example, two distinct approaches: 1) Operationalization guided by prior definitions and annotation rules benefits from the fact that each of its steps is transparent, may be validated and interpreted, and that existing tools from computational linguistics can be integrated into the process. In the scenario used here, these would be tools for recognizing and assigning character speech, for the resolution of coreference, and for the assessment of events; all of these, in turn, may be based on machine learning, prescribed rules or dictionaries. 2) In recent years, so-called end-to-end systems have become popular which, with the help of neural networks, »infer« target terms directly from a numerical representation of the data. These systems achieve superior results in many areas, but their lack of transparency raises new questions, especially with regard to the interpretation of results.

Finally, we discuss options for quality assurance and draw a first conclusion. Since numerous decisions have to be made in the course of operationalization, and these, in practice, are often pragmatically justified, the question quickly arises as to how »good« a given operationalization actually is. Since the tools borrowed from computational linguistics (especially the so-called inter-annotator agreement) can be transferred only partially to computational literary studies, and objective standards for the quality of a given implementation are difficult to find, it ultimately falls to the community of researchers and scholars to decide, based on their research standards, which operationalizations they accept.

At the same time, operationalization is the central link between the computer sciences and literary studies, as well as a necessary component for a large part of the research done in computational literary studies. The advantage of a conscious, deliberate and reflective operationalization practice lies not only in the fact that it can be used to achieve reliable quantitative results (or that a certain lack of reliability at least is a known factor); it also lies in its facilitation of interdisciplinary cooperation: in the course of operationalization, concrete sets of data are discussed, as are the methods for analysing them, which taken together minimizes the risk of misunderstandings, »false friends«, and of an unproductive exchange more generally.
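To make the borrowed inter-annotator agreement concrete, here is a minimal Cohen's kappa computation for two annotators labelling sentences as 'mysterious' (1) or not (0); the labels are hypothetical, not data from the study:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected for
    # the agreement expected by chance from each annotator's label rates.
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations of ten sentences for "mysteriousness".
annotator_1 = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
annotator_2 = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(annotator_1, annotator_2):.2f}")  # 0.60
```

The caveat in the abstract applies here: an agreement threshold tuned for linguistic categories may transfer only partially to interpretive literary ones.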


Author(s):  
Alexander Demidovskij ◽  
Eduard Babkin

Introduction: The construction of integrated neurosymbolic systems is an urgent and challenging task. Building neurosymbolic decision support systems requires new approaches to represent knowledge about a problem situation and to express symbolic reasoning at the subsymbolic level. Purpose: Development of neural network architectures and methods for effective distributed knowledge representation and subsymbolic reasoning in decision support systems, focusing on algorithms for aggregating fuzzy expert evaluations to select among alternative solutions. Methods: Representation of fuzzy and uncertain linguistic assessments in a distributed form using tensor representations; construction of a trainable neural network architecture for the subsymbolic aggregation of linguistic assessments. Results: The study proposes two new methods for representing linguistic assessments in a distributed form. The first approach exploits the fact that an arbitrary linguistic assessment can be converted into a numerical representation; this number is then encoded as a bit string, and a matrix is formed that stores the distributed representation of the whole aggregation expression. The second approach represents the linguistic assessment as a tree and encodes this tree using the method of tensor representations, thereby avoiding the intermediate numerical form and ensuring a lossless transition between symbolic and subsymbolic representations of linguistic assessments. The structural elements of a linguistic assessment are treated as fillers bound to their respective positional roles. A new subsymbolic method for aggregating linguistic assessments is proposed, implemented as a trainable neural network module in the form of a Neural Turing Machine. Practical relevance: The results demonstrate how a symbolic algorithm for aggregating linguistic evaluations can be implemented by connectionist (subsymbolic) mechanisms, an essential requirement for building distributed neurosymbolic decision support systems.
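The filler/role encoding described here is in the spirit of Smolensky-style tensor product representations: each filler vector is bound to its role vector by an outer product, and the bindings are superposed by summation. A minimal numpy sketch under that assumption (the vectors and the two-word assessment are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dim_filler, dim_role = 8, 4

# Fillers: structural elements of a linguistic assessment, e.g. "very good".
fillers = {"very": rng.standard_normal(dim_filler),
           "good": rng.standard_normal(dim_filler)}

# Orthonormal positional role vectors make unbinding exact.
roles, _ = np.linalg.qr(rng.standard_normal((dim_role, 2)))
role_1, role_2 = roles[:, 0], roles[:, 1]

# Bind each filler to its role with an outer product, then superpose.
structure = (np.outer(fillers["very"], role_1)
             + np.outer(fillers["good"], role_2))

# Unbinding: projecting onto a role vector recovers the bound filler.
recovered = structure @ role_2
print(np.allclose(recovered, fillers["good"]))  # True: lossless round trip
```

The lossless round trip is the property the abstract highlights: the symbolic structure can be recovered exactly from the distributed matrix.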


PLoS Biology ◽  
2021 ◽  
Vol 19 (10) ◽  
pp. e3001402
Author(s):  
Alexander Kroll ◽  
Martin K. M. Engqvist ◽  
David Heckmann ◽  
Martin J. Lercher

The Michaelis constant KM describes the affinity of an enzyme for a specific substrate and is a central parameter in studies of enzyme kinetics and cellular physiology. As measurements of KM are often difficult and time-consuming, experimental estimates exist for only a minority of enzyme–substrate combinations even in model organisms. Here, we build and train an organism-independent model that successfully predicts KM values for natural enzyme–substrate combinations using machine and deep learning methods. Predictions are based on a task-specific molecular fingerprint of the substrate, generated using a graph neural network, and on a deep numerical representation of the enzyme’s amino acid sequence. We provide genome-scale KM predictions for 47 model organisms, which can be used to approximately relate metabolite concentrations to cellular physiology and to aid in the parameterization of kinetic models of cellular metabolism.
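For context on how such predictions feed into physiology: KM enters through the Michaelis–Menten rate law v = Vmax·[S]/(KM + [S]), so a predicted KM and a measured metabolite concentration directly give the fractional enzyme saturation. A minimal sketch (the numbers are hypothetical, not predictions from the paper):

```python
def saturation(substrate_conc_mM, km_mM):
    # Michaelis-Menten: v / Vmax = [S] / (KM + [S])
    return substrate_conc_mM / (km_mM + substrate_conc_mM)

km = 0.5  # mM, a hypothetical predicted Michaelis constant
for s_mM in (0.05, 0.5, 5.0):
    print(f"[S] = {s_mM:4.2f} mM -> saturation = {saturation(s_mM, km):.0%}")
# [S] far below KM -> ~9 %  (enzyme mostly idle)
# [S] equal to KM  -> 50 %  (by definition of KM)
# [S] far above KM -> ~91 % (enzyme near capacity)
```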


Author(s):  
Andrea Adriano ◽  
Luca Rinaldi ◽  
Luisa Girelli

Abstract. The visual mechanisms underlying approximate numerical representation are still intensely debated because numerosity information is often confounded with continuous sensory cues (e.g., texture density, area, convex hull). However, numerosity is underestimated when a few items are connected by illusory contour (IC) lines without changing other physical cues, suggesting in turn that numerosity processing may rely on discrete visual input. Yet, in these previous works, ICs were generated by black-on-gray inducers producing an illusory brightness enhancement, which could represent a further continuous sensory confound. To rule out this possibility, we tested participants in a numerical discrimination task in which we manipulated the alignment of 0, 2, or 4 pairs of open/closed inducers and their contrast polarity. In Experiment 1, aligned open inducers had only one polarity (all black or all white), generating IC lines brighter or darker than the gray background. In Experiment 2, open inducers always had opposite contrast polarity (one black and one white inducer), generating ICs without strong brightness enhancement. In Experiment 3, reverse-contrast inducers were aligned but closed with a line preventing IC completion. Results showed that the underestimation triggered by IC lines was independent of inducer contrast polarity in both Experiment 1 and Experiment 2, whereas no underestimation was found in Experiment 3. Taken together, these results suggest that mere brightness enhancement is not the primary cause of the numerosity underestimation induced by IC lines. Rather, a boundary formation mechanism insensitive to contrast polarity may drive the effect, providing further support to the idea that numerosity processing exploits discrete inputs.


Plaridel ◽  
2021 ◽  
Author(s):  
Michael Prieler ◽  
Vannak Dom

This study analyzes 157 unduplicated Cambodian television advertisements for differences in gender representation. The findings indicate gender differences for several variables, including the degree of dress (more men than women were fully dressed and more women than men were suggestively dressed), the setting (more women than men were at home and more men than women were in the workplace), voiceovers (male voiceovers clearly outnumbered female ones), and product categories (women were featured in advertisements for body care/toiletries/cosmetics/beauty products, and men were in advertisements for alcoholic drinks and automotive/vehicles/transportation/accessories products). Most of these gender differences were expected in the patriarchal society of Cambodia, where there are traditionally strict codes of conduct for men and women. However, some results (equal numerical representation, age) ran counter to most previous research. The potential effects of such representations on audiences are discussed based on social cognitive theory and cultivation theory.

