Detecting Network Anomalies In ISP Network Using DNS And NetFlow

2019 ◽  
Vol 2 (3) ◽  
pp. 238-242
Author(s):  
Andreas Tedja ◽  
Charles Lim ◽  
Heru Purnomo Ipung

The Internet has become the biggest medium for people to communicate with other people all around the world. However, the Internet is also home to hackers with malicious purposes. This poses a problem for Internet Service Providers (ISPs) and their users, since their networks may be compromised and damage may be done. Many types of malware currently exist on the Internet; one growing type is the botnet. A botnet can infect a system and turn it into a zombie machine capable of carrying out distributed attacks under the command of a botmaster. To make botnet detection more difficult, botmasters often deploy fast flux, which shuffles the IP addresses behind a malicious server's domain, making tracking and detection much harder. There are still numerous ways to detect fast flux, however; one of them is by analysing DNS data. The Domain Name System (DNS) is a crucial part of the Internet: it translates domain names into their associated IP addresses. DNS is often exploited by hackers for malicious activities, one of which is deploying fast flux. Because the characteristics of fast flux differ significantly from those of normal Internet traffic, it is possible to distinguish fast flux from normal traffic using its DNS information. One must be cautious, however, since a few legitimate Internet services have characteristics almost identical to those of fast-flux services. This research detects the existence of fast-flux services in an ISP network. The result is that fast flux mostly retains the characteristics reported in previous research; the current trend, however, is to use cloud hosting services.
The reason behind this is that cloud hosting services tend to perform better than typical zombie machines. Moreover, hosting providers appear to have taken no specific measures to prevent this, making cloud hosting services a convenient medium for hosting botnets and fast-flux services.
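The DNS-based distinction described above can be sketched as a simple heuristic over passive DNS records: fast-flux domains tend to return many distinct IP addresses with short TTLs, while stable services do not. The thresholds, record format `(domain, ip, ttl)`, and data below are illustrative assumptions, not the paper's actual detection method:

```python
from collections import defaultdict

def detect_fast_flux(records, min_unique_ips=10, max_ttl=300):
    """Flag domains whose A records rotate across many IPs with short TTLs.

    `records` is an iterable of (domain, ip, ttl) tuples, e.g. from passive
    DNS collection. Thresholds are illustrative, not tuned values.
    """
    ips = defaultdict(set)
    ttls = defaultdict(list)
    for domain, ip, ttl in records:
        ips[domain].add(ip)
        ttls[domain].append(ttl)
    flagged = []
    for domain in ips:
        avg_ttl = sum(ttls[domain]) / len(ttls[domain])
        if len(ips[domain]) >= min_unique_ips and avg_ttl <= max_ttl:
            flagged.append(domain)
    return flagged

# A rotating, short-TTL domain versus a stable one:
records = [("bad.example", f"10.0.{i}.1", 60) for i in range(12)]
records += [("cdn.example", "10.1.0.1", 86400)] * 5
print(detect_fast_flux(records))  # only "bad.example" trips the heuristic
```

As the abstract notes, legitimate services such as CDNs and round-robin load balancers can look similar under this heuristic, which is why additional features are needed in practice.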

Author(s):  
Torsten Bettinger

Although the Internet has no cross-organizational, financial, or operational management responsible for the entire Internet, certain administrative tasks are coordinated centrally. Among the most important organizational tasks that require global regulation is the management of Internet Protocol (IP) addresses and their corresponding domain names. The IP address consists of a 32-bit (IPv4) or 128-bit (IPv6) sequence of digits and is the actual physical network address by which routing on the Internet takes place and which ensures that data packets reach the correct host computer.
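The two address sizes mentioned above can be checked directly with Python's standard `ipaddress` module; this is a small illustration, not part of the chapter:

```python
import ipaddress

# IPv4 addresses are 32-bit, IPv6 addresses 128-bit integers under the hood.
v4 = ipaddress.ip_address("192.0.2.1")    # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # documentation-range IPv6 address

print(v4.max_prefixlen)  # 32
print(v6.max_prefixlen)  # 128
print(int(v4))           # the underlying integer routers compare against
```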


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Mariano Di Martino ◽  
Peter Quax ◽  
Wim Lamotte

Zero-rating is a technique where internet service providers (ISPs) allow consumers to utilize a specific website without charging their internet data plan. Implementing zero-rating requires an accurate website identification method that is also efficient and reliable enough to be applied to live network traffic. In this paper, we examine existing website identification methods with the objective of applying zero-rating. Furthermore, we demonstrate the ineffectiveness of these methods against modern encryption protocols such as Encrypted SNI and DNS over HTTPS, and therefore show that ISPs will not be able to maintain current zero-rating approaches in the near future. To address this concern, we present “Open-Knock,” a novel approach that accurately identifies a zero-rated website, thwarts free-riding attacks, and is sustainable on the increasingly encrypted web. In addition, our approach does not require plaintext protocols or preprocessed fingerprints upfront. Finally, our experimental analysis shows that we are able to convert each IP address to the correct domain name for each website in the Tranco top 6000 websites list with an accuracy of 50.5%, and therefore outperform the current state-of-the-art approaches.
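For context, the naive baseline that an IP-to-domain approach must improve upon can be sketched as resolving a candidate list (such as the Tranco top sites) and inverting the result. Everything here, including the injected `resolve` callback, is an illustrative assumption and not the Open-Knock technique; the sketch mainly shows why shared hosting makes the mapping ambiguous:

```python
def ip_to_domains(ip, candidates, resolve):
    """Return every candidate domain whose resolution set includes `ip`.

    `resolve` is injected (domain -> set of IP strings) so the sketch needs
    no network access; a real system would query DNS here.
    """
    return [d for d in candidates if ip in resolve(d)]

# Toy resolution table standing in for live DNS answers:
table = {"a.example": {"203.0.113.5"}, "b.example": {"203.0.113.9"}}
print(ip_to_domains("203.0.113.5", list(table), lambda d: table.get(d, set())))
# ['a.example']
```

When many domains sit behind one CDN address, this baseline returns them all, which is one reason the paper's single-digit-to-50.5% accuracy figures are hard-won.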


Author(s):  
Andrew Ward ◽  
Brian Prosser

In the last decade of the twentieth century, with the advent of computers networked through Internet Service Providers and the declining cost of such computers, the traditional topography of secondary and post-secondary education has begun to change. Where before students were required to travel to a geographically central location in order to receive instruction, this is often no longer the case. In this connection, Todd Oppenheimer writes in The Atlantic Monthly that one of the principal arguments used to justify increasing the presence of computer technology in educational settings is that “[W]ork with computers – particularly using the Internet – brings students valuable connections with teachers, other schools and students, and a wide network of professionals around the globe.”1 This shift from the traditional to the “virtual” classroom2 has been welcomed by many. As Gary Goettling writes, “[D]istance learning is offered by hundreds, if not thousands, of colleges and universities around the world, along with a rapidly growing number of corporate and private entities.”3 Goettling’s statement echoes an earlier claim by the University of Idaho School of Engineering that one of the advantages of using computers in distance education is that they “increase access. Local, regional, and national networks link resources and individuals, wherever they might be.”4


Author(s):  
Torsten Bettinger ◽  
Mike Rodenbaugh

Since its creation in 1998, the Internet Corporation for Assigned Names and Numbers (ICANN) has been responsible for ensuring free trade and marketplace competition in the sale and regulation of domain names, as well as overseeing the stability of the Domain Name System (DNS) and the creation of consistent, functional policies. Therefore, its responsibilities include assessing when, and to what degree, additional generic top-level domains (gTLDs) are needed in order to ensure the proper functioning of the DNS. In order to make such a determination, ICANN relied on the input of interested Internet stakeholders as mandated through its multi-stakeholder model, which involves interested business entities, individuals, and governments from around the world.


1994 ◽  
Vol 41 (5) ◽  
pp. 276-281
Author(s):  
M. G. (Peggy) Kelly ◽  
James H. Wiebe

Throughout the Curriculum and Evaluation Standards for School Mathematics (NCTM 1989) the notion of students and teachers as critical thinkers, information seekers, and problem solvers is a priority. The Internet, an electronic highway connected by gateways from one computer network to another, furnishes a telecommunications link around the world. The Internet enables students and teachers to access authentic, real-time data for critical analysis. With access to such Internet service providers as a university computer network or a commercial service like CompuServe, Prodigy, or AppleLink, students and teachers become active information seekers.


Author(s):  
Elisabeth Jay Friedman

This chapter offers an alternative account of the invention of the internet. It tells the story of how social justice-oriented web enthusiasts built the internet as we know it today – a network of networks – because they wanted to ensure access for activist counterpublics around the world. They concretized their goals with the formation of the Association for Progressive Communications (APC), a network of civil society-based Internet Service Providers. Within this global project, feminist communication activists carved out a space for women’s organizing through the APC’s Women’s Networking Support Programme. From their early efforts to today, such activists have contested the gendering of internet technology as the province of men. In doing so, they have also subverted the West’s domination over the internet by extending resources to women from the Global South, particularly Latin America, to nurture their own counterpublics.


Author(s):  
Mohammed Al-Drees ◽  
Marwah M. Almasri ◽  
Mousa Al-Akhras ◽  
Mohammed Alawairdhi

Background: The Domain Name System (DNS) is considered the phone book of the Internet. Its main goal is to translate a domain name to an IP address that the computer can understand. However, DNS can be vulnerable to various kinds of attacks, such as DNS poisoning attacks and DNS tunneling attacks. Objective: The main objective of this paper is to allow researchers to identify DNS tunnel traffic using machine-learning algorithms. Training machine-learning algorithms to detect DNS tunnel traffic and to determine which protocol was used will help the community speed up the detection of such attacks. Method: In this paper, we consider the DNS tunneling attack and discuss how attackers can exploit this protocol to exfiltrate data from the network. The attack starts by encoding data inside DNS queries sent to the outside of the network. The malicious DNS server receives the small chunks of data, decodes the payloads, and reassembles them at the server. The main concern is that DNS is a fundamental service that is not usually blocked by a firewall and receives less attention from systems administrators due to its vast amount of traffic. Results: This paper investigates how this type of attack happens using a DNS tunneling tool, by setting up an environment consisting of a compromised DNS server and a compromised host with the Iodine tool installed on both machines. The generated dataset contains the traffic of the HTTP, HTTPS, SSH, SFTP, and POP3 protocols over DNS. No features were removed from the dataset, so that researchers can utilize all features in the dataset. Conclusion: DNS tunneling remains a critical attack that needs more attention to address. The DNS-tunneled environment allows us to understand how such an attack happens. We built the appropriate dataset by simulating various attack scenarios using different protocols.
The created dataset contains PCAP, JSON, and CSV files to allow researchers to use different methods to detect tunnel traffic.
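The query-encoding step described in the Method section can be sketched in a few lines: payload bytes are base32-encoded and split into DNS labels (at most 63 bytes each) under an attacker-controlled domain, and the receiving server reverses the process. The domain name below is illustrative, and real tools such as Iodine add framing, sequencing, and a downstream channel; this is a minimal sketch of the idea only:

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_queries(payload, domain="tunnel.example"):
    """Encode payload bytes as a sequence of DNS query names."""
    # base32 keeps the data within DNS's case-insensitive character set.
    text = base64.b32encode(payload).decode().rstrip("=").lower()
    chunks = [text[i:i + MAX_LABEL] for i in range(0, len(text), MAX_LABEL)]
    return [f"{chunk}.{domain}" for chunk in chunks]

def decode_queries(queries, domain="tunnel.example"):
    """Reassemble the payload from received query names (server side)."""
    text = "".join(q[:-len(domain) - 1] for q in queries).upper()
    pad = "=" * (-len(text) % 8)  # restore stripped base32 padding
    return base64.b32decode(text + pad)

queries = encode_queries(b"secret data")
print(queries[0].endswith(".tunnel.example"))  # True
print(decode_queries(queries))                 # b'secret data'
```

The sketch ignores the 253-byte limit on a full domain name and any sequencing of out-of-order queries, both of which real tunneling tools must handle; it also illustrates why such traffic stands out to a detector, since ordinary DNS queries rarely carry long high-entropy labels.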


2020 ◽  
Vol 338 ◽  
pp. 111-122
Author(s):  
Domenica Bagnato

The NIS Directive [1] defines critical infrastructures and operators of essential services. It also calls for organizational measures to ensure these infrastructures are protected from cybercrime and terrorism, including the establishment of a national framework for emergency response. The list of essential services in Annex II does contain certain elements of Internet infrastructure, such as Domain Name Servers and Internet Exchange Points. However, in a truly remarkable omission, the Directive does not include Internet Service Providers (ISPs) [2]. Since operators of essential services are subject to stringent security requirements, it would be helpful to include ISPs among them. This seems even more appropriate as many other Annex II infrastructures, such as banking, health, and transport, rely heavily on a working Internet infrastructure, which is largely dependent on ISPs. This paper discusses the NIS Directive's omission of ISPs, as well as its incomplete treatment of two codependent registries, namely the IP address space registry and the Autonomous System registry, and their necessity in supporting the root Domain Name System.


2021 ◽  
pp. 27-35
Author(s):  
Kieron O’Hara

This chapter presents the history of the Internet and associated applications. The Internet grew out of the ARPANET, founded on network engineering ideas such as packet switching and the end-to-end principle. The chapter describes the development of TCP/IP to connect networks by Cerf and Kahn, creating the modern Internet as a permissionless open system which anyone can join without a gatekeeper, allowing it to scale up. The evolution of the governance system of Internet Service Providers (ISPs) and Internet Exchange Points (IXPs) is presented. The chapter also describes the development of applications that sit on the Internet platform, including the World Wide Web, linked data, cloud computing, and social media.


ADALAH ◽  
2020 ◽  
Vol 4 (2) ◽  
Author(s):  
Munadhil Abdul Muqsith

Abstract: The internet first developed in Indonesia in the early 1990s. Starting from the paguyuban (community) network, it has since expanded without boundaries. A survey conducted by the Indonesian Internet Service Providers Association (APJII) found that the number of internet users in Indonesia in 2012 reached 63 million people, or 24.23 percent of the country's total population. The following year, that figure was predicted to increase by close to 30 percent to 82 million users, and to continue growing to 107 million in 2014 and 139 million, or 50 percent of the total population, in 2015. This has also affected political communication conducted through internet media, often called cyber politics. Cyber politics in Indonesia has grown over recent years, supported by many platforms such as Facebook, Twitter, mailing lists, YouTube, and others. Keywords: Cyberpolitics, Internet

