Online content moderation and the Dark Web: Policy responses to radicalizing hate speech and malicious content on the Darknet

First Monday ◽  
2019 ◽  
Author(s):  
Eric Jardine

De-listing, de-platforming, and account bans are just some of the increasingly common steps taken by major Internet companies to moderate their online content environments. Yet these steps are not without unintended effects. This paper proposes a surface-to-Dark Web content cycle. In this process, malicious content is initially posted on the surface Web and is then moderated by platforms. Moderated content does not necessarily disappear when major Internet platforms crack down; it simply shifts to the Dark Web, from which malicious informational content can percolate back to the surface Web through three pathways. The implication of this cycle is that managing the online information environment requires careful attention to the whole system, not just to content hosted on surface Web platforms. Both government and private-sector actors can more effectively manage the surface-to-Dark Web content cycle through discrete practices and policies implemented at each stage of the wider process.

2021 ◽  
pp. 50-71
Author(s):  
Shakeel Ahmed ◽  
Shubham Sharma ◽  
Saneh Lata Yadav

Information retrieval is the task of finding material of an unstructured nature within large collections stored on computers. The surface web consists of indexed content accessible through traditional browsers, whereas deep or hidden web content cannot be found with traditional search engines and requires a password or network permissions. Within the deep web, the dark web is also growing, as new tools make it easier to navigate hidden content, which is accessible only with special software such as Tor. According to a study published in Nature, Google indexes no more than 16% of the surface web and misses all of the deep web. Any given search turns up just 0.03% of the information that exists online, so a key part of the hidden web remains inaccessible to users. This chapter poses several research questions in this area. Detailed definitions and analogies are explained, related work is discussed, and the advantages and limitations of existing approaches proposed by researchers are put forward. The chapter identifies the need for a system that will process both surface and hidden web data and return integrated results to the users.
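A minimal sketch of the kind of integrated retrieval system the chapter calls for might look like the following Python code. Everything here is an illustrative assumption rather than the chapter's design: both search endpoints are hypothetical, and the hidden-web request presumes a local Tor client listening on its default SOCKS port 9050, with SOCKS support installed for requests (pip install requests[socks]).

```python
# Illustrative sketch: merge results from a surface-web search endpoint
# and a Tor hidden-service endpoint into one list. Endpoints, the JSON
# response shape, and the proxy port are all assumptions.
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # assumes a local Tor client
    "https": "socks5h://127.0.0.1:9050",  # socks5h lets Tor resolve .onion names
}

def search_surface(query: str) -> list[str]:
    # Hypothetical surface-web search API returning JSON results.
    resp = requests.get("https://example-search.com/api", params={"q": query}, timeout=10)
    resp.raise_for_status()
    return [hit["url"] for hit in resp.json().get("results", [])]

def search_hidden(query: str) -> list[str]:
    # Hypothetical hidden-service search API, reachable only over Tor.
    resp = requests.get(
        "http://exampleonionsearchxxxxxxxx.onion/api",  # placeholder address
        params={"q": query},
        proxies=TOR_PROXIES,
        timeout=30,
    )
    resp.raise_for_status()
    return [hit["url"] for hit in resp.json().get("results", [])]

def integrated_search(query: str) -> list[str]:
    # Return a de-duplicated union of surface and hidden results.
    seen, merged = set(), []
    for url in search_surface(query) + search_hidden(query):
        if url not in seen:
            seen.add(url)
            merged.append(url)
    return merged
```

A real system would also need ranking across the two result sets, which is one of the open problems such a chapter would need to address.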


2021 ◽  
Vol 2021 ◽  
pp. 1-21
Author(s):  
Randa Basheer ◽  
Bassel Alkhatib

From proactive detection of cyberattacks to the identification of key actors, analyzing the contents of the Dark Web plays a significant role in deterring cybercrimes and understanding criminal minds. Research on the Dark Web has proved to be an essential step in fighting cybercrime, whether as a standalone investigation of the Dark Web or as an integrated one that also includes content from the Surface Web and the Deep Web. In this review, we probe recent studies in the field of analyzing Dark Web content for Cyber Threat Intelligence (CTI), introducing a comprehensive analysis of their techniques, methods, tools, approaches, and results, and discussing their possible limitations. We demonstrate the significance of studying the contents of different platforms on the Dark Web, leading new researchers through state-of-the-art methodologies. Furthermore, we discuss the technical challenges, ethical considerations, and future directions in the domain.
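As a concrete illustration of one technique that recurs in the literature such reviews survey (and not the authors' own pipeline), the following toy Python sketch classifies forum posts as threat-relevant using TF-IDF features and a linear model. The tiny labeled corpus is invented purely for demonstration.

```python
# Toy sketch of supervised CTI text classification: label Dark Web
# forum posts as threat-relevant (1) or benign (0). The corpus below
# is fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "selling fresh dumps cvv fullz bulk discount",   # threat-relevant
    "zero-day exploit for sale, escrow accepted",    # threat-relevant
    "looking for book recommendations on privacy",   # benign
    "how do i configure my mail client",             # benign
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a linear classifier; real CTI systems surveyed
# in such reviews add richer signals (metadata, author graphs, embeddings).
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["ransomware builder kit available, pm me"]))  # likely [1]
```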


Author(s):  
Ramanujam Elangovan

The deep web (also called the deepnet, the invisible web, or the hidden web) refers to World Wide Web content that is not part of the surface web, the portion indexed by standard search engines. The more familiar surface web contains only a small fraction of the information available on the Internet. The deep web holds much of the valuable data on the web but is largely invisible to standard web crawling techniques. Besides being a huge source of information, it also provides a platform for cybercrime, for example by hosting download links for movies, music, games, and other media that infringe copyright. This article aims to provide context and policy recommendations pertaining to the dark web. The dark web's history, from its creation to the latest incidents, along with the ways to access it and its subforums, is briefly discussed from the user's perspective.


Author(s):  
Punam Bedi ◽  
Neha Gupta ◽  
Vinita Jindal

The World Wide Web is a part of the Internet that provides a data dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed in response to users' search queries. These contents, which can be easily retrieved using Web browsers and search engines, comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines; though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software: the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.
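To make the "special software" requirement concrete, here is a minimal sketch (an illustration, not taken from the chapter) of how a program reaches the Tor network. Ordinary HTTP clients cannot resolve .onion names, so traffic must be routed through the local Tor client's SOCKS proxy; the sketch assumes Tor is running on its default port 9050 and that requests has SOCKS support installed (pip install requests[socks]).

```python
# Minimal sketch: route an HTTP request through a local Tor client.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",  # socks5h: DNS/.onion resolution happens inside Tor
}

# Without the proxy, a .onion request would fail: .onion is not a real
# TLD and no public DNS server can resolve it. The Tor Project's
# connectivity check is used here to confirm the circuit works.
resp = requests.get(
    "https://check.torproject.org/api/ip",
    proxies=TOR_PROXY,
    timeout=30,
)
print(resp.json())  # expected shape: {"IsTor": true, "IP": "..."} when routed via Tor
```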


The Dark Web ◽  
2018 ◽  
pp. 114-137
Author(s):  
Dilip Kumar Sharma ◽  
A. K. Sharma

Web crawlers specialize in downloading, analyzing, and indexing content from the surface web, which consists of interlinked HTML pages. They have limitations, however, when the data lies behind a query interface, where the response depends on the querying party's context and the crawler must engage in a dialogue to negotiate for the information. In this paper, the authors discuss deep web searching techniques. A survey of the technical literature on deep web searching contributes to the development of a general framework. The frameworks and mechanisms of existing web crawlers are taxonomically classified into four steps and analyzed to find their limitations in searching the deep web.
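The surface-web baseline that such frameworks extend can be sketched in a few lines of Python (an illustrative toy, not the authors' framework): a breadth-first crawler that follows hyperlinks and, as the paper's analysis highlights, never reaches data sitting behind a query interface.

```python
# Toy breadth-first surface-web crawler. Its deep-web blind spot is
# deliberate: content reachable only by filling in a query form is
# never discovered by link-following alone.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed: str, max_pages: int = 50) -> dict[str, str]:
    """Fetch up to max_pages documents reachable from seed via hyperlinks."""
    index: dict[str, str] = {}            # url -> page text (a toy "index")
    frontier, seen = deque([seed]), {seed}
    while frontier and len(index) < max_pages:
        url = frontier.popleft()
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            continue                      # skip unreachable pages
        soup = BeautifulSoup(resp.text, "html.parser")
        index[url] = soup.get_text(" ", strip=True)
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if link.startswith("http") and link not in seen:
                seen.add(link)
                frontier.append(link)
        # Deep-web gap: <form> elements with query fields are ignored;
        # surfacing that content requires form analysis and automatic
        # query generation, the focus of deep-web crawlers.
    return index
```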


2020 ◽  
Vol 11 ◽  
Author(s):  
Friederike Hendriks ◽  
Elisabeth Mayweg-Paus ◽  
Mark Felton ◽  
Kalypso Iordanou ◽  
Regina Jucks ◽  
...  

Many urgent problems that societies currently face, from climate change to a global pandemic, require citizens to engage with scientific information, both as members of democratic societies and in order to solve problems in their personal lives. Most often, to achieve their epistemic aims (aims directed at achieving knowledge and understanding) regarding such socio-scientific issues, individuals search for information online, where there exists a multitude of possibly relevant and highly interconnected sources offering different perspectives and sometimes conflicting information. This paper reviews the literature to identify (a) the constraints and affordances that scientific knowledge and the online information environment entail and (b) the cognitive and motivational processes that have been found to hinder, or conversely support, practices of engagement (such as critical information evaluation or two-sided dialogue). In doing so, it introduces a conceptual framework for understanding and fostering what we call online engagement with scientific information, conceived as consisting of individual engagement (engaging on one's own in the search, selection, evaluation, and integration of information) and dialogic engagement (engaging in discourse with others to interpret, articulate, and critically examine scientific information). In turn, the paper identifies individual and contextual conditions for individuals' goal-directed and effortful online engagement with scientific information.


Hand ◽  
2019 ◽  
Art. 155894471987883
Author(s):  
Shuting Zhong ◽  
Gabriella E. Reed ◽  
Loree K. Kalliainen

Background: People with tetraplegia lack awareness of, and subsequently underutilize, reconstructive surgery to improve upper extremity function; this is a topic of international discussion. To bridge the information gap, proposed mandates encourage providers to discuss surgical options with all tetraplegic patients. Outside of the clinical setting, little is known about the information available to patients and caregivers, particularly online. The purpose of this study is to evaluate online content on surgical options for improving upper extremity function in people with tetraplegia. Methods: A sample of online content was generated using common search engines and 2 categories of key words and phrases, general and specific. Articles on the first 2 search pages were evaluated for content and audience. Results: A total of 76 different search results appeared on the first 2 pages using 8 unique search phrases. Of the articles generated from general phrases, only 5% mentioned tendon or nerve transfers in tetraplegia. When more specific key search phrases were used, the proportion of lay articles increased to 71%. Conclusions: Based on these initial results, general online information on the management of tetraplegia largely excludes discussion of upper limb reconstruction and its well-known benefits. Unless patients, their caregivers, and nonsurgical health care providers have baseline knowledge of tendon and/or nerve transfers, they are unlikely to gain de novo awareness of surgical options through self-initiated searches. Thus, the challenge and opportunity are to revise the online dialogue to include upper extremity surgery as a fundamental tenet of tetraplegia care.


Author(s):  
Andrey Aleksandrov ◽  
Andrey Safronov

The article examines the concept, essence, specificity, and structural elements of the surface network ("surface web") as well as the so-called deep Internet ("deep web"). It discusses the peculiarity of the deep Internet, whose content is available only through connections created with the help of special software. The article describes the type of network, separated from the rest of public content, that forms the Darknet; networks of this kind existed as far back as ARPANET (the Advanced Research Projects Agency Network), before the civilian Internet known to us today was separated from it. The creators of the Darknet did not foresee all of its applications. The paper lists the software products used to connect to the Darknet; such software is intended to give its users maximum anonymity, complicating attempts to trace their identity, IP address, and location in the network. The study reveals the main types of Darknet crimes and outlines ways to improve law enforcement activities to tackle them. In addition, it identifies the problem of the development and increasing use of the dark web for criminal purposes.

