A Review of the Common DDoS Attack: Types and Protection Approaches Based on Artificial Intelligence

2021 ◽  
pp. 08-14
Author(s):  
Nafea Ali Majeed ◽  
Khalid Hameed Zaboon ◽  
...  

Recently, technology has become an important part of our lives; it works hand in hand with medicine, space science, agriculture, industry, and more, and storing information on servers and in the cloud has become a necessity. The Web is a global force that has transformed people's lives, with a wide range of web applications serving billions of users every day. However, many types of attack can target the internet, and there is a need to recognize, classify, and protect against them. Given its important global role, it has become essential to ensure that web applications are secure, accurate, and of high quality. One of the fundamental problems found on the Web is the DDoS attack. In this work, the review classifies and delineates attack types, test characteristics, evaluation techniques, evaluation methods, and the test data sets used in the proposed strategic methodology. Finally, this work offers guidance and possible targets in the fight against one of the most dangerous types of cyber-attack: the DDoS attack.

GEOMATICA ◽  
2020 ◽  
Author(s):  
Françoise Bahoken ◽  
Grégoire Le Campion ◽  
Marion Maisonobe ◽  
Laurent Jégou ◽  
Étienne Côme

Analysing the dynamics of urban areas or metropolises, delineating their functional areas, and comparing their spatio-temporal patterns is often limited by the lack of open relational data (on links between entities) and the absence, until recently, of dedicated analysis and geo-visualization frameworks. Beyond the questions of opening (geo)digital data, we propose a panorama of the geoweb, the process of creating maps in the context of Web 2.0, specific to flows and localized networks. The insights provided on current mapping practices reveal three main families of web applications, as well as the need of a small but dynamic community to freely analyze its own data sets.


Author(s):  
Kieron O’Hara ◽  
Harith Alani ◽  
Yannis Kalfoglou ◽  
Nigel Shadbolt

There are certain features that distinguish killer apps from other ordinary applications. This chapter examines those features in the context of the Semantic Web, in the hope that a better understanding of the characteristics of killer apps might encourage their consideration when developing Semantic Web applications. Killer apps are highly transformative technologies that create new e-Commerce venues and widespread patterns of behaviour. Information Technology generally, and the Web in particular, has benefited from killer apps that create new networks of users and increase its value. The Semantic Web community, on the other hand, is still awaiting a killer app that proves the superiority of its technologies. The authors hope that this chapter will help to highlight some of the common ingredients of killer apps in e-Commerce and discuss how such applications might emerge in the Semantic Web.


Author(s):  
H. Inbarani ◽  
K. Thangavel

Recommender systems represent a prominent class of personalized Web applications, which particularly focus on the user-dependent filtering and selection of relevant information. Recommender systems have been a subject of extensive research in Artificial Intelligence over the last decade, but with today's increasing number of e-commerce environments on the Web, the demand for new approaches to intelligent product recommendation is higher than ever. There are more online users, more online channels, more vendors, more products, and, most importantly, increasingly complex products and services. These recent developments in the area of recommender systems have generated new demands, in particular with respect to interactivity, adaptivity, and user preference elicitation. These challenges, however, are also the focus of general Web page recommendation research. The goal of this chapter is to develop robust techniques to model noisy data sets containing an unknown number of overlapping categories and to apply them to Web personalization and mining. In this chapter, rough set-based clustering approaches are used to discover Web user access patterns; these approaches determine the number of clusters automatically from Web log data using statistical techniques. The suitability of rough clustering approaches for Web page recommendation is measured using predictive accuracy metrics.
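As a concrete illustration of the rough clustering idea described above, the following minimal Python sketch implements a simplified rough k-means in the spirit of Lingras and West's rough clustering of web sessions: unambiguous sessions enter a cluster's lower approximation, while sessions close to several centroids enter only the upper approximations. The weights, threshold, and toy session vectors are illustrative assumptions, not the chapter's parameters.

```python
# Simplified rough k-means sketch: objects near one centroid join its lower
# approximation; ambiguous objects join the upper approximations of all
# sufficiently close clusters. Centroids blend lower- and upper-approximation
# means. All parameter values are illustrative assumptions.
import numpy as np

def rough_kmeans(X, k=2, w_lower=0.7, threshold=1.25, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lower = [[] for _ in range(k)]
        upper = [[] for _ in range(k)]
        for x in X:
            d = np.linalg.norm(centroids - x, axis=1)
            near = int(d.argmin())
            close = [j for j in range(k)
                     if j != near and d[j] <= threshold * d[near]]
            upper[near].append(x)
            if close:                     # boundary object: several clusters
                for j in close:
                    upper[j].append(x)
            else:                         # unambiguous: lower approximation
                lower[near].append(x)
        for j in range(k):
            lo = np.mean(lower[j], axis=0) if lower[j] else centroids[j]
            up = np.mean(upper[j], axis=0) if upper[j] else centroids[j]
            centroids[j] = w_lower * lo + (1 - w_lower) * up
    return centroids, lower, upper

# Toy "session" vectors; a real run would use features from Web log data.
sessions = np.array([[0, 0], [0.2, 0.1], [5, 5], [5.1, 4.9], [2.5, 2.5]])
centroids, lower, upper = rough_kmeans(sessions)
```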


10.29007/gjh5 ◽  
2018 ◽  
Author(s):  
Thierry Sans ◽  
Iliano Cervesato

Web applications (webapps) are very popular because they are easy to prototype and because they can use other webapps, supplied by third parties, as building blocks. Yet, writing correct webapps is complex because developers are required to reason about distributed computation and to write code using heterogeneous languages, often not originally designed with distributed computing in mind. Testing is the common way to catch bugs, as current technologies provide limited support. There are doubts that this can scale up to meet the expectations of more sophisticated web applications. In this paper, we propose an abstraction that provides simple primitives to manage the two main forms of distributed computation found on the web: remote procedure calls (code executed on a server on behalf of a client) and mobile code (server code executed on a client). We embody this abstraction in a type-safe language with localized static typechecking that we call QWeSST, for which we have implemented a working prototype. We use it to express interaction patterns commonly found on the Web, as well as more sophisticated forms that are beyond current web technologies.
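To make the two forms of distributed computation concrete, here is a minimal, hedged Python sketch using only the standard library. It is not QWeSST itself, whose typed primitives the paper defines; the endpoint names and the shipped snippet are invented for illustration.

```python
# Contrast of the two patterns: a remote procedure call (runs server-side)
# versus mobile code (server-supplied source that the client runs locally).
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def tax(price):
    # Remote procedure call: this body executes on the server
    # on behalf of the client that invokes it.
    return round(price * 0.07, 2)

# Mobile code: source the server ships to the client for local execution.
MOBILE_SNIPPET = "def discount(price): return price * 0.9"

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(tax, "tax")
server.register_function(lambda: MOBILE_SNIPPET, "fetch_discount_code")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://localhost:8000")
print(proxy.tax(100.0))                # executed server-side -> 7.0
ns = {}
exec(proxy.fetch_discount_code(), ns)  # server code executed client-side
print(ns["discount"](100.0))           # -> 90.0
```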


Author(s):  
Amit Sharma

Distributed Denial of Service (DDoS) attacks are a significant threat to web applications and web services today. These attacks are moving toward the application layer in order to acquire and waste as many CPU cycles as possible. By requesting resources from web services in huge volumes through rapid-fire requests, an attacker's automated programs exhaust the processing capacity of a single-server application or a distributed application environment. The scheme is executed in two phases: user-behaviour monitoring and detection. In the first phase, information about user behaviour is gathered, each individual user's trust score is computed, and the entropy of that user's requests is calculated. HTTP Unbearable Load King (HULK) attacks are also evaluated. Building on the first phase, the detection phase observes variations in entropy and identifies malicious clients. A rate limiter is additionally introduced to stop, or scale down, service to malicious clients. This paper introduces the FAÇADE layer for detecting and blocking unauthorized users attacking the system.
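A minimal sketch of the entropy-based detection stage described above: Shannon entropy is computed over each client's per-window request distribution, and clients that fire many requests at few endpoints are flagged. The window size, thresholds, and URLs are illustrative assumptions, not the paper's values.

```python
# Entropy-based flagging: floods tend to hammer few endpoints at high rate,
# so a high request count combined with low URL entropy is suspicious.
import math
from collections import Counter

def request_entropy(urls: list[str]) -> float:
    """Shannon entropy of the URLs requested by one client in a window."""
    counts = Counter(urls)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_suspicious(urls: list[str], rate_limit: int = 100,
                  entropy_floor: float = 1.0) -> bool:
    # Both thresholds are illustrative; a deployment would tune them.
    return len(urls) > rate_limit and request_entropy(urls) < entropy_floor

# Example: a client hammering one resource vs. a normal browsing session.
flood = ["/index.html"] * 500
normal = ["/", "/about", "/blog", "/contact", "/blog/post-1"] * 4
print(is_suspicious(flood), is_suspicious(normal))  # True False
```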


2018 ◽  
Vol 48 (3) ◽  
pp. 84-90 ◽  
Author(s):  
E. A. Lapchenko ◽  
S. P. Isakova ◽  
T. N. Bobrova ◽  
L. A. Kolpakova

It is shown that the application of Internet technologies is relevant to the selection of crop production technologies and the formation of a rational composition of the machine-and-tractor fleet, taking into account the conditions and production resources of a particular agricultural enterprise. The work gives a short description of the web applications "ExactFarming", "Agrivi" and "AgCommand", which make it possible to select technologies and technical means of soil treatment, and of their functions. "ExactFarming" allows users to collect and store information about temperature, precipitation, and weather forecasts for certain areas, keep records of crop information, and make technological maps using expert templates. "Agrivi" stores and provides access to weather information for fields with certain crops; it has algorithms to detect and warn about risks related to diseases and pests, and it provides economic calculations of crop profitability and crop planning. "AgCommand" tracks the position of machinery and equipment in the fields and provides data on the weather situation in order to plan the use of agricultural machinery. The web applications presented above do not relate the technologies applied to the agro-climatic features of the farm's location zone, nor do they take into account the phytosanitary conditions of previous years or the relief and contours of the fields when drawing up technological maps or selecting the machine-and-tractor fleet. The Siberian Physical-Technical Institute of Agrarian Problems of the Siberian Federal Scientific Center of AgroBioTechnologies of the Russian Academy of Sciences has developed the PIKAT software complex to support machine agrotechnologies for the production of spring wheat grain at an agricultural enterprise; on its basis there is a plan to develop a web application that will consider all the main factors limiting the yield of cultivated crops.


2021 ◽  
Vol 13 (2) ◽  
pp. 50
Author(s):  
Hamed Z. Jahromi ◽  
Declan Delaney ◽  
Andrew Hines

Content is a key influencing factor in Web Quality of Experience (QoE) estimation. A web user's satisfaction can be influenced by how long it takes to render and visualize the visible parts of a web page in the browser, referred to as the Above-the-fold (ATF) time. SpeedIndex (SI) has been widely used to estimate the perceived loading speed of ATF content and as a proxy metric for Web QoE estimation. Web application developers have been actively introducing innovative interactive features, such as animated and multimedia content, aiming to capture users' attention and improve the functionality and utility of web applications. However, the literature shows that, for websites with animated content, the ATF time estimated using state-of-the-art metrics may not accurately match the completed ATF time as perceived by users. This study introduces a new metric, Plausibly Complete Time (PCT), that estimates ATF time for a user's perception of websites with and without animations. PCT can be integrated with SI and web QoE models. The accuracy of the proposed metric is evaluated on two publicly available datasets. The proposed metric holds a high positive Spearman's correlation (rs = 0.89) with the perceived ATF reported by users for websites with and without animated content. This study demonstrates that using PCT as a KPI in QoE estimation models improves the robustness of QoE estimation in comparison to using the state-of-the-art ATF time metric. Furthermore, experimental results showed that estimating SI using PCT improves the robustness of SI for websites with animated content. The PCT estimation allows web application designers to identify where poor design has significantly increased ATF time and to refactor their implementation before it impacts the end-user experience.
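For background, SpeedIndex integrates visual incompleteness over time, SI = ∫ (1 − VC(t)) dt, where VC(t) is the fraction of the viewport rendered at time t. The short Python sketch below computes SI from sampled completeness values; the frame timings are made-up illustrations, and PCT itself is defined in the paper, not reproduced here.

```python
# SpeedIndex from sampled visual completeness: sum the area above the
# completeness curve between consecutive frames. Sample values are invented.
def speed_index(samples: list[tuple[float, float]]) -> float:
    """samples: (time_ms, visual_completeness in [0, 1]), sorted by time."""
    si = 0.0
    for (t0, vc), (t1, _) in zip(samples, samples[1:]):
        si += (1.0 - vc) * (t1 - t0)   # area above the completeness curve
    return si

frames = [(0, 0.0), (500, 0.4), (1200, 0.8), (2000, 1.0)]
print(speed_index(frames))  # 1080.0 with these illustrative frames
```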


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592092800
Author(s):  
Erin M. Buchanan ◽  
Sarah E. Crain ◽  
Ari L. Cunningham ◽  
Hannah R. Johnson ◽  
Hannah Stash ◽  
...  

As researchers embrace open and transparent data sharing, they will need to provide information about their data that effectively helps others understand their data sets’ contents. Without proper documentation, data stored in online repositories such as OSF will often be rendered unfindable and unreadable by other researchers and indexing search engines. Data dictionaries and codebooks provide a wealth of information about variables, data collection, and other important facets of a data set. This information, called metadata, provides key insights into how the data might be further used in research and facilitates search-engine indexing to reach a broader audience of interested parties. This Tutorial first explains terminology and standards relevant to data dictionaries and codebooks. Accompanying information on OSF presents a guided workflow of the entire process from source data (e.g., survey answers on Qualtrics) to an openly shared data set accompanied by a data dictionary or codebook that follows an agreed-upon standard. Finally, we discuss freely available Web applications to assist this process of ensuring that psychology data are findable, accessible, interoperable, and reusable.
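As a hedged illustration of the step from a raw data set to shareable metadata, the sketch below derives a minimal data dictionary from a tabular file with pandas; the columns it produces are illustrative and do not follow any particular metadata standard discussed in the Tutorial.

```python
# Build a minimal data dictionary: one row of descriptive metadata per
# variable, ready for a researcher to fill in variable descriptions.
import pandas as pd

def build_data_dictionary(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for col in df.columns:
        series = df[col]
        rows.append({
            "variable": col,
            "type": str(series.dtype),
            "n_missing": int(series.isna().sum()),
            "n_unique": int(series.nunique()),
            "example": series.dropna().iloc[0] if series.notna().any() else None,
            "description": "",  # filled in by the researcher
        })
    return pd.DataFrame(rows)

# Toy survey data standing in for, e.g., exported Qualtrics answers.
survey = pd.DataFrame({"age": [25, 31, None], "condition": ["A", "B", "A"]})
build_data_dictionary(survey).to_csv("data_dictionary.csv", index=False)
```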


Author(s):  
Theodorus Kristian Widianto ◽  
Wiwin Sulistyo

Security on computer networks is currently a matter that must be considered, especially by internet users, because many risks must be borne if it is neglected. Data theft, system destruction, and similar threats face users, especially on the server side. DDoS is a popular method of attack that is often used to bring down servers. It works by consuming resources on the server computer so that it can no longer serve requests from users. Given this problem, security measures are needed to prevent DDoS attacks, one of which is iptables, provided with Linux. Implementing iptables can prevent or stop external DDoS attacks aimed at the server.
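A minimal sketch of the kind of iptables rules the paper refers to, applied here from Python via subprocess; the port, rate, and connection limits are illustrative assumptions (running as root is required), not the paper's configuration.

```python
# Apply illustrative DDoS-mitigation rules: rate-limit new TCP connections
# to the web port, drop the excess, and cap per-source concurrent connections.
import subprocess

RULES = [
    # Accept at most 25 new connections/second to port 80, bursting to 50.
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "80",
     "-m", "conntrack", "--ctstate", "NEW",
     "-m", "limit", "--limit", "25/second", "--limit-burst", "50",
     "-j", "ACCEPT"],
    # Anything beyond that rate limit is dropped.
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "80",
     "-m", "conntrack", "--ctstate", "NEW", "-j", "DROP"],
    # Drop sources holding more than 20 simultaneous connections to port 80.
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "80",
     "-m", "connlimit", "--connlimit-above", "20", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(rule, check=True)
```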


2019 ◽  
Vol 15 (2) ◽  
pp. 201-214 ◽  
Author(s):  
Mahmoud Elish

Purpose – Effective and efficient software security inspection is crucial, as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to empirically evaluate the potential application of Stochastic Gradient Boosting Trees (SGBT) as a novel model for enhanced prediction of vulnerable Web components, compared to common, popular and recent machine learning models.

Design/methodology/approach – An empirical study was conducted in which SGBT and 16 other prediction models were trained, optimized and cross-validated using vulnerability data sets from multiple versions of two open-source Web applications written in PHP. The prediction performance of these models was evaluated and compared based on accuracy, precision, recall and F-measure.

Findings – The results indicate that the SGBT models offer improved prediction over the other 16 models and thus are more effective and reliable in predicting vulnerable Web components.

Originality/value – This paper proposed a novel application of SGBT for enhanced prediction of vulnerable Web components and showed its effectiveness.
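A hedged sketch of this kind of evaluation setup: scikit-learn's GradientBoostingClassifier becomes a stochastic gradient boosting model when subsample < 1.0, cross-validated here on the four reported measures. The feature matrix and labels below are synthetic stand-ins, not the paper's PHP component metrics.

```python
# Cross-validate a stochastic gradient boosting classifier on synthetic data
# and report the four measures used in the study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # stand-in component metrics
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in vulnerable/not labels

# subsample < 1.0 makes each tree fit on a random fraction of the training
# data, i.e. *stochastic* gradient boosting.
sgbt = GradientBoostingClassifier(subsample=0.5, n_estimators=200, max_depth=3)
scores = cross_validate(sgbt, X, y, cv=10,
                        scoring=["accuracy", "precision", "recall", "f1"])
for m in ("accuracy", "precision", "recall", "f1"):
    print(m, round(float(scores["test_" + m].mean()), 3))
```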

