How to Account for Information

Author(s):  
Luciano Floridi

In Floridi (2005), I argued that a definition of semantic information in terms of alethically neutral content – that is, strings of well-formed and meaningful data that can be additionally qualified as true or untrue (false, for the classicists among us), depending on supervening evaluations – provides only necessary but insufficient conditions: if some content is to qualify as semantic information, it must also be true. One speaks of false information in the same way as one qualifies someone as a false friend, i.e. not a friend at all. According to this definition, semantic information is, strictly speaking, inherently truth-constituted and not a contingent truth-bearer, exactly like knowledge but unlike propositions or beliefs, for example, which are what they are independently of their truth values and then, because of their truth-aptness, may be further qualified alethically.

2020, Vol. 34 (05), pp. 8131-8138
Author(s):  
Anne Lauscher ◽  
Goran Glavaš ◽  
Simone Paolo Ponzetto ◽  
Ivan Vulić

Distributional word vectors have recently been shown to encode many of the human biases, most notably gender and racial biases, and models for attenuating such biases have consequently been proposed. However, existing models and studies (1) operate on under-specified and mutually differing bias definitions, (2) are tailored for a particular bias (e.g., gender bias) and (3) have been evaluated inconsistently and non-rigorously. In this work, we introduce a general framework for debiasing word embeddings. We operationalize the definition of a bias by discerning two types of bias specification: explicit and implicit. We then propose three debiasing models that operate on explicit or implicit bias specifications and that can be composed towards more robust debiasing. Finally, we devise a full-fledged evaluation framework in which we couple existing bias metrics with newly proposed ones. Experimental findings across three embedding methods suggest that the proposed debiasing models are robust and widely applicable: they often completely remove the bias both implicitly and explicitly without degradation of semantic information encoded in any of the input distributional spaces. Moreover, we successfully transfer debiasing models, by means of cross-lingual embedding spaces, and remove or attenuate biases in distributional word vector spaces of languages that lack readily available bias specifications.
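Debiasing from an explicit bias specification is often done by projecting word vectors off an estimated bias direction. The following sketch illustrates that general idea with toy vectors; it is in the spirit of projection-based debiasing, not a reproduction of the paper's specific models, and all names and values here are illustrative assumptions.

```python
import numpy as np

# Toy 4-dimensional embeddings; real vectors would come from a trained
# distributional space (the paper evaluates across three embedding methods).
emb = {
    "he":       np.array([ 0.6, 0.1, 0.2, 0.0]),
    "she":      np.array([-0.6, 0.1, 0.2, 0.0]),
    "man":      np.array([ 0.5, 0.3, 0.1, 0.1]),
    "woman":    np.array([-0.5, 0.3, 0.1, 0.1]),
    "engineer": np.array([ 0.3, 0.4, 0.5, 0.2]),
}

def bias_direction(pairs, emb):
    """Estimate a single bias direction from explicit definitional word pairs."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def debias(vec, direction):
    """Remove the component of vec that lies along the bias direction."""
    return vec - np.dot(vec, direction) * direction

d = bias_direction([("he", "she"), ("man", "woman")], emb)
engineer_db = debias(emb["engineer"], d)
print(abs(np.dot(engineer_db, d)))  # component along the bias direction is now 0.0
```

The remaining coordinates of the vector are untouched, which is the sense in which such projections aim to remove bias "without degradation of semantic information".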


Author(s):  
Galina Shvetsova-Vodka

The author examines the concept of the noosphere as applied in document studies. In the 1990s, Kim Gelman-Vinogradov introduced into document studies the concepts of the 'noospheric document environment' and 'noospheric document memory'. The author also demonstrates how Gelman-Vinogradov's ideas relate to A. V. Sokolov's interpretation of the documentosphere and his definition of the document. The concept of nooinformation is examined as applied to the works of Yury Stolyarov and Roman Moltulsky, along with its relation to documents as the object of library studies and to the concept of social (semantic) information. In the works of the bibliography theorists Galina Gordukalova, V. Fokeev, and Alexandra Kumanova, the concept of the noosphere is used to characterize document flows, document information, information modeling, and the bibliographic compression of information. Arkady Sokolov's proposal to develop noospherology, and his idea of the libraries' role in promoting the noospheric future and educating homo noospheric, is examined. The author also suggests that the noospheric approach will become a methodological instrument in documentospheric knowledge and will be of practical use in defining the role of libraries in modern society.


1959, Vol. 14, pp. 95-107
Author(s):  
Sigekatu Kuroda

The V-system T(V) is defined in §2 by using §1, and its consistency is proved in §3. The definition of T(V) is given in such a way that the consistency proof of T(V) in §3 shows a typical way to prove the consistency of some subsystems of UL. Otherwise we could define T(V) more simply by using truth values. After T(V)-sets are treated in §4, it is proved in §5 as a T(V)-theorem that T(V)-sets are all equal to V.


1998, Vol. 63 (4), pp. 1201-1217
Author(s):  
Norman Feldman

In this paper we consider the three-valued logic used by Kleene [6] in the theory of partial recursive functions. This logic has three truth values: true (T), false (F), and undefined (U). One interpretation of U is as follows: Suppose we have two partial recursive predicates P(x) and Q(x) and we want to know the truth value of P(x) ∧ Q(x) for a particular x0. If x0 is in the domain of definition of both P and Q, then P(x0) ∧ Q(x0) is true if both P(x0) and Q(x0) are true, and false otherwise. But what if x0 is not in the domain of definition of P, but is in the domain of definition of Q? There are several choices, but the one chosen by Kleene is that if Q(x0) is false, then P(x0) ∧ Q(x0) is also false, and if Q(x0) is true, then P(x0) ∧ Q(x0) is undefined. What arises is the question of whether or not we know that x0 is in the domain of definition of P. Is there an effective procedure to determine this? If not, then we can interpret U as unknown. If there is an effective procedure, then our decision on the truth value of P(x) ∧ Q(x) is based on the knowledge that x0 is not in the domain of definition of P. In this case, U can be interpreted as undefined. In either case, we base the truth value of P(x) ∧ Q(x) on the truth value of Q(x0).
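Kleene's choice described above – a false conjunct makes the conjunction false even when the other conjunct is undefined – is his strong three-valued conjunction. A minimal sketch, using Python's `None` to stand in for U (an encoding choice, not anything from the paper):

```python
# Truth values: True (T), False (F), and None for U (undefined/unknown).
def k3_and(p, q):
    """Kleene's strong three-valued conjunction."""
    if p is False or q is False:
        return False   # one false conjunct suffices, even if the other is U
    if p is None or q is None:
        return None    # otherwise any undefined conjunct makes the result U
    return True

# The two cases from the abstract, with P(x0) undefined:
print(k3_and(None, False))  # False: Q(x0) false forces the conjunction false
print(k3_and(None, True))   # None:  Q(x0) true leaves the conjunction undefined
```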


2010, Vol. 53 (1), pp. 109-123
Author(s):  
Drago Djuric

In De Interpretatione 9, Aristotle considers the use of predicates in combination with subjects to form propositions, each of which is necessarily either true or false. This necessity was later named the principle of bivalence (of truth values). Although he grants the truth or falsity of propositions about past and present events, propositions about the future seem problematic. If a proposition about tomorrow is true (or false) today, then the future event it describes will happen (or not happen) necessarily, which leads to (logical) determinism. Aristotle attempts to avoid this. His solution was to maintain that the disjunction is necessarily true today even though neither of its disjuncts is: it is necessary that either tomorrow's event will occur or it will not, but it is neither necessary that it will occur nor necessary that it will not occur. Because, for Aristotle, the principle of bivalence does not hold for propositions about the future, the principle of plenitude does not hold for him either. On the other side, according to his Master Argument and his definitions of the modalities, for Diodorus Cronus the possible is that which is or will be. In opposition to Aristotle, for Diodorus there exists no non-actualized possibility; in some sense he implicitly respects the principle of bivalence, and this is compatible with the principle of plenitude, which he also respects. Aristotle attempts to preserve the difference between the modal categories (necessity and possibility) and tries to reject logical determinism; under Diodorus Cronus's definition of possibility, this difference collapses into determinism.
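Aristotle's move can be stated in standard modal notation (a reconstruction for clarity, not the paper's own formalism):

```latex
% The disjunction is necessary, yet neither disjunct is:
\Box\,(p \lor \neg p), \qquad \neg\Box\, p \;\land\; \neg\Box\,\neg p
% Diodorus' definition of possibility, which leaves no non-actualized possibility:
\Diamond p \;\equiv\; \text{$p$ is or will be the case}
```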


2019, Vol. 4 (1), pp. 19-46
Author(s):  
Abdulmalik Sugow

With the proliferation of peer-to-peer networks as a source of information, concerns about the accuracy of information shared have been raised, necessitating attempts by governments to regulate fake news. Kenya's Computer Misuse and Cybercrimes Act, for instance, criminalises the intentional dissemination of false or misleading data. However, such regulation has resulted in a different set of concerns, particularly its potential to impose undue limitations on the freedom of expression. In appraising the approach taken in Kenya of imposing liability on perpetrators, and that taken in some jurisdictions of imposing intermediary liability, the article posits that similar difficulties are faced in regulating fake news – the freedom of expression could be curtailed. This is fuelled by ambiguity in the definition of 'fake news'. Consequently, this article seeks to find out whether it is indeed possible to regulate fake news while preserving the freedom of expression in Kenya. Further, the article delves into some of the effects the proliferation of fake news has had on the democratic process in Kenya, thereby requiring regulation. In doing so, it tackles fake news under two general conceptions: fake news as calculated disinformation campaigns by individuals for certain purposes, and fake news as an overarching culture of misinformation that enables the spread of false information. Regarding the former, it finds that legislative measures may prove sufficient. The latter, however, requires a combination of non-legislative measures such as collaborative measures, awareness initiatives and fact-checking.


Author(s):  
Bente Kalsnes

Fake news is not new, but the American presidential election in 2016 placed the phenomenon squarely onto the international agenda. Manipulation, disinformation, falseness, rumors, conspiracy theories—actions and behaviors that are frequently associated with the term—have existed as long as humans have communicated. Nevertheless, new communication technologies have allowed for new ways to produce, distribute, and consume fake news, which makes it harder to differentiate what information to trust. Fake news has typically been studied along four lines: Characterization, creation, circulation, and countering. How to characterize fake news has been a major concern in the research literature, as the definition of the term is disputed. By differentiating between intention and facticity, researchers have attempted to study different types of false information. Creation concerns the production of fake news, often produced with either a financial, political, or social motivation. The circulation of fake news refers to the different ways false information has been disseminated and amplified, often through communication technologies such as social media and search engines. Lastly, countering fake news addresses the multitude of approaches to detect and combat fake news on different levels, from legal, financial, and technical aspects to individuals’ media and information literacy and new fact-checking services.


Author(s):  
Yingkui Wang ◽  
Di Jin ◽  
Katarzyna Musial ◽  
Jianwu Dang

Network contents, including node contents and edge contents, can be utilized for community detection in social networks. Thus, the topic of each community can be extracted as its semantic information. A plethora of models integrating topic models and network topology have been proposed. However, a key problem remains unresolved: the semantic division of a community. Since the definition of community is based on topology, a community might involve several topics. To ach


2014, Vol. 1
Author(s):  
Allison Klempka ◽  
Arielle Stimson

Internet trolls are an online subculture whose members post upsetting or shocking content, harass users, and spread false information for their own enjoyment. As of the time of this study, research is limited on trolling culture, the perception of trolls, and trolling behavior. The researchers have investigated trolling culture, as well as conducted a study in which subjects were asked to relay their emotional reactions to a selection of online comments, and to mark the comments they considered to be trolling behavior. The results were meant to discover whether subjects of different age generations differed in their perception and definition of trolls. The results clarified that trolling was frequently associated with poor behavior, although the degree of disapproval and definitions of trolling varied between age groups.


2021
Author(s):  
Uyiosa Omoregie

Misinformation propagation in its current form is a global problem that requires urgent solutions. Historically, instances of publicly propagated misinformation can be found as far back as the sixth century AD. Scholars and researchers have generally settled on a definition of 'information disorder' that comprises three variants: misinformation, disinformation and malinformation. What should be of paramount importance, in the fight against information disorders, is the potential of false information to cause harm. The 'harm principle' was proposed by the British philosopher John Stuart Mill in 1859 and needs an upgrade for the social media age. One such upgrade is proposed by Cass Sunstein.

