intended interpretation
Recently Published Documents


TOTAL DOCUMENTS: 32 (five years: 12)
H-INDEX: 5 (five years: 1)

2021 · Vol 9 (1) · Author(s): Frank Goldhammer, Carolin Hahnel, Ulf Kroehne, Fabian Zehner

Abstract: International large-scale assessments such as PISA or PIAAC have started to provide public or scientific use files for log data; that is, events, event-related attributes and timestamps of test-takers’ interactions with the assessment system. Log data and the process indicators derived from it can be used for many purposes. However, the intended uses and interpretations of process indicators require validation, which here means a theoretical and/or empirical justification that inferences about (latent) attributes of the test-taker’s work process are valid. This article reviews and synthesizes measurement concepts from various areas, including the standard assessment paradigm, the continuous assessment approach, the evidence-centered design (ECD) framework, and test validation. Based on this synthesis, we address the questions of how to ensure the valid interpretation of process indicators by means of an evidence-centered design of the task situation, and how to empirically challenge the intended interpretation of process indicators by developing and implementing correlational and/or experimental validation strategies. For this purpose, we explicate the process of reasoning from log data to low-level features and process indicators as the outcome of evidence identification. In this process, contextualizing information from log data is essential in order to reduce interpretative ambiguities regarding the derived process indicators. Finally, we show that empirical validation strategies can be adapted from classical approaches investigating the nomothetic span and construct representation. Two worked examples illustrate possible validation strategies for the design phase of measurements and their empirical evaluation.
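To make the step from raw log events to low-level features and process indicators concrete, here is a minimal Python sketch. It is an illustration under assumed event names and fields (item_start, response_change, item_end), not code from the article or from any PISA/PIAAC toolchain.

```python
# A minimal sketch (not from the article): turning hypothetical log events
# into low-level features that could feed a process indicator. Event names,
# fields, and the two features are assumptions chosen for illustration.
from dataclasses import dataclass

@dataclass
class LogEvent:
    timestamp: float   # seconds since test start
    event_type: str    # e.g. "item_start", "response_change", "item_end"
    item_id: str

def time_on_task(events, item_id):
    """Low-level feature: time between the first and last event of an item."""
    times = [e.timestamp for e in events if e.item_id == item_id]
    return max(times) - min(times) if times else 0.0

def answer_changes(events, item_id):
    """Low-level feature: number of response changes logged for an item."""
    return sum(1 for e in events
               if e.item_id == item_id and e.event_type == "response_change")

# Contextualising information (here, the item an event belongs to) is kept
# with each event so that the derived features remain interpretable.
log = [LogEvent(0.0, "item_start", "R1"),
       LogEvent(12.4, "response_change", "R1"),
       LogEvent(30.1, "response_change", "R1"),
       LogEvent(41.7, "item_end", "R1")]
print(time_on_task(log, "R1"), answer_changes(log, "R1"))  # 41.7 2
```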


2021 · Vol 11 (3) · pp. 432-440 · Author(s): Bahya Alfitri, Issy Yuliasri

This study aims to analyze the use of cohesive devices and the chain interaction of cohesive devices to achieve coherence in argumentative essays of Universitas Negeri Semarang graduate students. The study employed a descriptive qualitative research design focused on cohesion and coherence analysis of the students’ writing. The findings showed that all types of cohesive devices, such as reference, substitution, ellipsis, and reiteration, were found in the students’ essays. These cohesive devices contribute to the coherence of the text through their semantic relations, which create two kinds of cohesive chains: identity chains and similarity chains. The interaction between the two chain types gives explicit signals that guide readers toward the intended interpretation of the text; the result of this chain interaction is known as cohesive harmony. Most of the students’ essays achieve coherence because more than 50% of their tokens enter into chain interaction. Unfortunately, the students overuse certain types of cohesive devices, such as repetition, in creating the chains.
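A back-of-the-envelope rendering of the coherence criterion mentioned above (more than 50% of tokens entering chain interaction) could look like the Python sketch below; the token counts and threshold handling are assumptions for illustration, not the study’s analysis procedure.

```python
# Minimal sketch (not from the study): a cohesive-harmony ratio, i.e. the
# share of an essay's tokens that enter chain interaction, checked against
# the 50% threshold mentioned above. The numbers are invented.
def cohesive_harmony_ratio(tokens_in_chain_interaction: int, total_tokens: int) -> float:
    """Proportion of an essay's tokens that participate in interacting chains."""
    return tokens_in_chain_interaction / total_tokens if total_tokens else 0.0

def is_coherent(ratio: float, threshold: float = 0.5) -> bool:
    """An essay counts as coherent when the ratio exceeds the threshold."""
    return ratio > threshold

ratio = cohesive_harmony_ratio(tokens_in_chain_interaction=310, total_tokens=540)
print(f"{ratio:.2f}", is_coherent(ratio))  # 0.57 True
```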


2021 · Vol 5 (1) · Author(s): Melanie Hawkins, Gerald R. Elsworth, Sandra Nolte, Richard H. Osborne

Abstract
Background: Contrary to common usage in the health sciences, the term “valid” refers not to the properties of a measurement instrument but to the extent to which data-derived inferences are appropriate, meaningful, and useful for intended decision making. The aim of this study was to determine how validity testing theory (the Standards for Educational and Psychological Testing) and methodology (Kane’s argument-based approach to validation) from education and psychology can be applied to validation practices for patient-reported outcomes that are measured by instruments that assess theoretical constructs in health.
Methods: The Health Literacy Questionnaire (HLQ) was used as an example of a theory-based self-report assessment for the purposes of this study. Kane’s five inferences (scoring, generalisation, extrapolation, theory-based interpretation, and implications) for theoretical constructs were applied to the general interpretive argument for the HLQ. Existing validity evidence for the HLQ was identified and collated (as per the Standards recommendation) through a literature review and mapped to the five inferences. Evaluation of the evidence was not within the scope of this study.
Results: The general HLQ interpretive argument was built to demonstrate Kane’s five inferences (and associated warrants and assumptions) for theoretical constructs, which connect raw data to the intended interpretation and use of the data. The literature review identified 11 HLQ articles from which 57 sources of validity evidence were extracted and mapped to the general interpretive argument.
Conclusions: Kane’s five inferences and associated warrants and assumptions were demonstrated in relation to the HLQ. However, the process developed in this study is likely to be suitable for validation planning for other measurement instruments. Systematic and transparent validation planning, and the generation (or, as in this study, collation) of relevant validity evidence, supports developers and users of PRO instruments in determining the extent to which inferences about data are appropriate, meaningful and useful (i.e., valid) for intended decisions about the health and care of individuals, groups and populations.
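The mapping of collated evidence onto the five inferences lends itself to simple bookkeeping. The Python sketch below is illustrative only; the inference names follow the abstract, while the evidence entries are hypothetical placeholders rather than any of the 57 sources identified in the study.

```python
# Illustrative sketch (not from the article): a minimal structure for mapping
# collated validity evidence to Kane's five inferences in an interpretive
# argument. The evidence entries added below are invented placeholders.
from collections import defaultdict

KANE_INFERENCES = ["scoring", "generalisation", "extrapolation",
                   "theory-based interpretation", "implications"]

evidence_map = defaultdict(list)

def add_evidence(inference: str, source: str) -> None:
    """Attach a piece of validity evidence to one of the five inferences."""
    if inference not in KANE_INFERENCES:
        raise ValueError(f"unknown inference: {inference}")
    evidence_map[inference].append(source)

# Hypothetical entries for illustration only
add_evidence("scoring", "item response options reviewed by expert panel")
add_evidence("generalisation", "internal consistency reported across samples")

for inference in KANE_INFERENCES:
    print(f"{inference}: {len(evidence_map[inference])} source(s)")
```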


Author(s): Julien Murzi, Brett Topey

Abstract: On a widespread naturalist view, the meanings of mathematical terms are determined, and can only be determined, by the way we use mathematical language—in particular, by the basic mathematical principles we’re disposed to accept. But it’s mysterious how this can be so, since, as is well known, minimally strong first-order theories are non-categorical and so are compatible with countless non-isomorphic interpretations. As for second-order theories: though they typically enjoy categoricity results—for instance, Dedekind’s categoricity theorem for second-order arithmetic and Zermelo’s quasi-categoricity theorem for second-order set theory—these results require full second-order logic. So appealing to these results seems only to push the problem back, since the principles of second-order logic are themselves non-categorical: those principles are compatible with restricted interpretations of the second-order quantifiers on which Dedekind’s and Zermelo’s results are no longer available. In this paper, we provide a naturalist-friendly, non-revisionary solution to an analogous but seemingly more basic problem—Carnap’s Categoricity Problem for propositional and first-order logic—and show that our solution generalizes, giving us full second-order logic and thereby securing the categoricity or quasi-categoricity of second-order mathematical theories. Briefly, the first-order quantifiers have their intended interpretation, we claim, because we’re disposed to follow the quantifier rules in an open-ended way. As we show, given this open-endedness, the interpretation of the quantifiers must be permutation-invariant and so, by a theorem recently proved by Bonnay and Westerståhl, must be the standard interpretation. Analogously for the second-order case: we prove, by generalizing Bonnay and Westerståhl’s theorem, that the permutation invariance of the interpretation of the second-order quantifiers, guaranteed once again by the open-endedness of our inferential dispositions, suffices to yield full second-order logic.
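As background for the key notion in the argument, permutation invariance of a quantifier interpretation can be stated as follows; this is a standard formulation given for orientation, not a statement quoted from the paper.

```latex
% Permutation invariance, standard formulation (background, not quoted from
% the paper). Let Q be an interpretation of a type <1> quantifier over a
% domain D, taken as a property of subsets of D.
\[
  Q \text{ is permutation invariant} \quad\text{iff}\quad
  Q(X) \Longleftrightarrow Q(\pi[X])
  \quad\text{for every permutation } \pi \text{ of } D
  \text{ and every } X \subseteq D,
\]
\[
  \text{where } \pi[X] = \{\pi(x) : x \in X\}.
\]
% The abstract appeals to a theorem of Bonnay and Westerst{\aa}hl to pass from
% permutation invariance (secured by open-ended quantifier rules) to the
% standard interpretation of the quantifiers.
```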


Author(s): Wachara Fungwacharakorn, Ken Satoh

Since legal rules cannot be perfect, we previously proposed Legal Debugging, a method for handling counterintuitive consequences caused by imperfections in the law. Legal Debugging consists of two steps. First, it interacts with a judge, as an oracle that gives the intended interpretation of the law, to collaboratively locate a legal rule called a culprit, which is identified as a root cause of the counterintuitive consequences. Second, it determines possible resolutions for the culprit. The resolution we have proposed uses extra facts that have not been considered in the legal rules to describe the exceptional situation of the case. However, the resulting resolution is usually too specific, and no generalization of it is provided. In this paper, we therefore introduce a rule generalization step into Legal Debugging. Specifically, we reorganize Legal Debugging into four steps: culprit detection, exception invention, fact-based induction, and rule-based induction. Across these four steps, a newly introduced rule is specific at first and then becomes more general. The new step allows a user to draw on existing legal concepts from the background knowledge to revise and generalize legal rules.
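The four-step workflow can be caricatured in a few lines of code. The Python sketch below is a loose illustration under invented rule and predicate names; it is not the authors’ system, which operates on legal rules in a logic-programming setting.

```python
# Minimal sketch under assumptions (not the authors' implementation): a
# culprit rule is first patched with a case-specific exception and then
# generalized to an existing legal concept. All names are invented.

# rule head -> (body conditions, exception conditions)
rule_base = {
    "obligation_to_pay": (["valid_contract"], []),
}

case_facts = {"valid_contract", "contract_signed_under_duress"}

def holds(head, facts):
    """Naive evaluation: the body is satisfied and no exception applies."""
    body, exceptions = rule_base[head]
    return all(b in facts for b in body) and not any(e in facts for e in exceptions)

# Culprit detection: the judge (oracle) deems this conclusion counterintuitive,
# so the rule for "obligation_to_pay" is the culprit.
assert holds("obligation_to_pay", case_facts)

# Exception invention: block the culprit with a fact specific to this case.
rule_base["obligation_to_pay"][1].append("contract_signed_under_duress")
assert not holds("obligation_to_pay", case_facts)

# Fact-based / rule-based induction: replace the case-specific exception with
# a broader legal concept from the background knowledge, e.g. "duress"; with a
# background rule deriving duress from contract_signed_under_duress, the
# generalized exception still covers this case and similar future ones.
rule_base["obligation_to_pay"] = (["valid_contract"], ["duress"])
```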


2020 · Vol 0 (0) · Author(s): Liljana Mitkovska, Eleni Bužarovska

Abstract: It is common for languages crosslinguistically to employ the same verb form in several diathetic constructions distinguished by different degrees of agent suppression. In South Slavic languages, the so-called ‘quasi-passive reflexive se-constructions’ (QRCs) encode a number of non-factual situations, expressing an array of semantically close meanings unified by modal semantics. The paper argues that QRCs in South Slavic languages represent a gradient category comprising potential, normative and generalizing situation types. The difference between these subclasses depends on the degree to which the agent is implicated in the construction: the agent is indirectly evoked in the potential type, its presence can be felt in the normative type, and a non-referring agent is present in the generalizing constructions. The intended interpretation of QRCs is obtained through the predicate-participant relation and pragmatic factors; in shaping the setting, the latter may trigger overlap between the subclasses. The goal of the paper is to prove that QRCs supply the cognitive link between anticausative reflexive constructions (coding autonomous events) and passive reflexive constructions (coding agent-defocusing situations): the potential type is closer to anticausatives, while the generalizing type shows affinity with passives. Such a scalar analysis of QRCs may contribute to a better understanding of the typology of reflexive constructions.


Axioms · 2020 · Vol 9 (3) · pp. 100 · Author(s): Henrique Antunes, Walter Carnielli, Andreas Kapsner, Abilio Rodrigues

In this paper, we propose Kripke-style models for the logics of evidence and truth LETJ and LETF. These logics extend, respectively, Nelson’s logic N4 and the logic of first-degree entailment (FDE) with a classicality operator ∘ that recovers classical logic for formulas in its scope. According to the intended interpretation here proposed, these models represent a database that receives information as time passes, and such information can be positive, negative, non-reliable, or reliable, while a formula ∘A means that the information about A, either positive or negative, is reliable. This proposal is in line with the interpretation of N4 and FDE as information-based logics, but adds to the four scenarios expressed by them two new scenarios: reliable (or conclusive) information (i) for the truth and (ii) for the falsity of a given proposition.
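The database reading of these models can be illustrated with a toy structure. The Python sketch below is an assumption-laden caricature, not the Kripke-style semantics defined in the paper; it only mimics the idea that information arrives over time, is positive or negative, and is reliable or not, with circ(A) standing in for ∘A.

```python
# Rough illustration (assumptions only, not the paper's models): a database
# that accumulates positive/negative information over time, each item flagged
# as reliable or not. circ(A) holds when the stored information about A,
# positive or negative, is reliable.
from dataclasses import dataclass, field

@dataclass
class Database:
    positive: dict = field(default_factory=dict)   # atom -> reliable? (bool)
    negative: dict = field(default_factory=dict)   # atom -> reliable? (bool)

    def add(self, atom: str, polarity: str, reliable: bool) -> None:
        """Record a new piece of information as time passes."""
        store = self.positive if polarity == "+" else self.negative
        store[atom] = store.get(atom, False) or reliable

    def circ(self, atom: str) -> bool:
        """circ(A): the information about A (positive or negative) is reliable."""
        return self.positive.get(atom, False) or self.negative.get(atom, False)

db = Database()
db.add("p", "+", reliable=False)   # non-reliable evidence for p
db.add("q", "-", reliable=True)    # conclusive evidence against q
print(db.circ("p"), db.circ("q"))  # False True
```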


2020 · Vol 2 (2) · pp. 1-17 · Author(s): Mashael Alrajhi

Thematization serves to focus readers’ attention on the focal aspects of a text in order to deliver its intended interpretation. The cohesion of texts relies on the structure of messages; consequently, the way in which messages are constructed as the text unfolds contributes to its cohesion. Since the probability of making mistakes in writing is higher in nonnative texts, whose writers are not using their mother tongue, the present study compares medical articles written by native and nonnative writers to shed light on the similarities and differences between them. Due to the scientific nature of medical texts, writers might face difficulties in the interconnectedness of ideas within the text; therefore, medical-field texts are inspected to check their correspondence with texts in other fields. The Hallidayan systemic-functional (SFL) approach was utilized to conduct the analysis. The results show that the distribution of Theme types and thematic progression patterns is consistent across native and nonnative writers. In addition, the findings on the dominance of the topical Theme and the constant Theme pattern in medical texts are in alignment with the results of studies in other fields such as academia.
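A very small illustration of the kind of counting involved: the Python sketch below tallies Theme types and adjacent constant-Theme repetitions for clauses that have already been annotated. The clause annotations are invented, and the sketch is not the study’s SFL analysis procedure.

```python
# Minimal sketch (not the study's instrument): tally Theme types and count
# adjacent clause pairs that repeat the same Theme (constant-Theme pattern).
# The clause annotations below are invented examples.
from collections import Counter

clauses = [
    ("The patients", "topical"),
    ("They", "topical"),
    ("They", "topical"),
    ("However", "textual"),
]

theme_type_counts = Counter(theme_type for _, theme_type in clauses)

def constant_theme_runs(themes):
    """Count adjacent clause pairs whose Theme is repeated verbatim."""
    return sum(1 for a, b in zip(themes, themes[1:]) if a.lower() == b.lower())

themes = [theme for theme, _ in clauses]
print(theme_type_counts)            # Counter({'topical': 3, 'textual': 1})
print(constant_theme_runs(themes))  # 1
```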


Author(s): Thiago Nascimento, Umberto Rivieccio, João Marcos, Matthew Spinks

Abstract: Besides the better-known Nelson logic ($\mathcal{N}3$) and paraconsistent Nelson logic ($\mathcal{N}4$), in 1959 David Nelson introduced, with motivations of realizability and constructibility, a logic called $\mathcal{S}$. The logic $\mathcal{S}$ was originally presented by means of a calculus (crucially lacking the contraction rule) with infinitely many rule schemata and no semantics (other than the intended interpretation into Arithmetic). We look here at the propositional fragment of $\mathcal{S}$, showing that it is algebraizable (in fact, implicative), in the sense of Blok and Pigozzi, with respect to a variety of three-potent involutive residuated lattices. We thus introduce the first known algebraic semantics for $\mathcal{S}$ as well as a finite Hilbert-style calculus equivalent to Nelson’s presentation; this also allows us to clarify the relation between $\mathcal{S}$ and the other two Nelson logics $\mathcal{N}3$ and $\mathcal{N}4$.
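For orientation, the defining residuation condition of a residuated lattice (the algebraic setting mentioned above) is the standard one below; this is textbook background rather than a definition taken from the paper, and the additional "involutive" and "three-potent" conditions are only glossed informally.

```latex
% Residuation law for a (commutative) residuated lattice: the monoid
% operation * is residuated by the implication -> (standard background,
% not quoted from the paper).
\[
  x * y \le z \quad\Longleftrightarrow\quad x \le y \to z
  \qquad\text{for all } x, y, z .
\]
% "Involutive" adds a negation satisfying \neg\neg x = x; "three-potent"
% imposes a further identity collapsing higher powers of x under *
% (stated loosely here; the exact form is fixed in the paper).
```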


2019 · Vol 35 (3) · Author(s): Dam Ha Thuy

The paper attempts to explain English native speakers’ use of the discourse marker yeah from a relevance-theoretic perspective (Sperber & Wilson, 1995). As a discourse marker, yeah normally functions as a continuer, an agreement marker, a turn-taking marker, or a disfluency marker. However, according to Relevance Theory, yeah can also be considered a procedural expression and is therefore expected to help yield necessary constraints on the context, facilitating understanding in human communication by encoding one of the three contextual effects (contextual implication, strengthening, or contradiction) or by reorienting the audience to certain assumptions that lead to the intended interpretation. Analyses of examples taken from conversations with a native speaker of English suggest that each use of yeah as a discourse marker places a certain type of constraint on the relevance of the accompanying utterance. These initial analyses serve as a foundation for further research to confirm the multi-functionality of yeah as a procedural expression when examined within the framework of Relevance Theory.

