A Large-scale Measurement Study of Mobile Web Security Through Traffic Monitoring

Author(s): Shaoran Xiao, Fang Wang, Jingtao Ding, Yong Li
2007, Vol 9 (8), pp. 1672-1687
Author(s): Xiaojun Hei, Chao Liang, Jian Liang, Yong Liu, K.W. Ross

Queue, 2016, Vol 14 (4), pp. 80-95
Author(s): Jean Yang, Vijay Janapa Reddi, Yuhao Zhu
2019
Author(s): Daoyuan Yang, Shaojun Zhang, Tianlin Niu, Yunjie Wang, Honglei Xu, ...

Abstract. On-road vehicle emissions are a major contributor to elevated air pollution levels in populous metropolitan areas. We developed a link-level emissions inventory of vehicular pollutants, called EMBEV-Link, based on multiple datasets extracted from the extensive road traffic monitoring network that covers the entire municipality of Beijing, China (16 400 km2). We employed the EMBEV-Link model under various traffic scenarios to capture the significant temporal and spatial variability in vehicle emissions caused by real-world traffic dynamics and by the traffic restrictions implemented by the local government. The results revealed high carbon monoxide (CO) and total hydrocarbon (THC) emissions in the urban area (i.e., within the Fifth Ring Road) and during rush hours, both associated with passenger vehicle traffic. By contrast, considerable fractions of nitrogen oxides (NOX), fine particulate matter (PM2.5) and black carbon (BC) emissions were present beyond the urban area, as heavy-duty trucks (HDTs) were not allowed to drive through the urban area during daytime. The EMBEV-Link model indicates that non-local HDTs could account for 29 % and 38 % of the estimated total on-road emissions of NOX and PM2.5, respectively, which were ignored in previous conventional emission inventories. We further combined the EMBEV-Link emission inventory with a computationally efficient dispersion model, RapidAir®, to simulate vehicular NOX concentrations at fine resolutions (10 m × 10 m in the entire municipality and 1 m × 1 m in the hotspots). The simulated results showed close agreement with ground observations and captured sharp concentration gradients from line sources to ambient areas. During the nighttime, when the HDT traffic restrictions are lifted, HDTs could be responsible for approximately 10 μg m−3 of NOX in the urban area. The uncertainties of conventional top-down allocation methods, which have been widely used to enhance the spatial resolution of vehicle emissions, are also discussed through comparison with the EMBEV-Link emission inventory.


2010, Vol 2010, pp. 1-16
Author(s): Jia Liu, Peng Gao, Jian Yuan, Xuetao Du

Mechanisms that extract the characteristics of network traffic play a significant role in traffic monitoring, offering helpful information for network management and control. In this paper, a method based on Random Matrix Theory (RMT) and Principal Component Analysis (PCA) is proposed for monitoring and analyzing large-scale traffic patterns in the Internet. Beyond the analysis of the largest eigenvalue under RMT, useful information is also extracted from the small eigenvalues by a PCA-based method. An approach is then put forward to select observation points on the basis of this eigen-analysis. Finally, experiments on peer-to-peer traffic pattern recognition and backbone aggregate flow estimation are conducted. The simulation results show that, using about 10% of nodes as observation points, the method can monitor and extract key information about Internet traffic patterns.
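The eigen-analysis this abstract describes can be illustrated with a small sketch: build a node-by-time traffic matrix, test its correlation-matrix eigenvalues against the Marchenko-Pastur bound from RMT, and rank candidate observation points by their loading on the leading PCA component. The data, sizes, and 10% selection rule below are illustrative assumptions, not the paper's actual datasets or algorithm details.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical traffic matrix: rows = monitoring nodes, columns = time samples.
n_nodes, n_samples = 50, 500
X = rng.normal(size=(n_nodes, n_samples))

# Normalize each node's time series to zero mean and unit variance.
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Empirical correlation matrix and its eigen-decomposition.
C = X @ X.T / n_samples
eigvals, eigvecs = np.linalg.eigh(C)

# Marchenko-Pastur upper bound: eigenvalues above it carry structure;
# eigenvalues inside the bulk are consistent with noise (the RMT test).
q = n_samples / n_nodes
lambda_max = (1 + 1 / np.sqrt(q)) ** 2
signal_modes = eigvals > lambda_max

# PCA-style selection: rank nodes by their loading on the top component
# and keep the strongest ~10% as observation points.
top_component = eigvecs[:, -1]
k = max(1, n_nodes // 10)
observation_points = np.argsort(-np.abs(top_component))[:k]
```

On real traffic, eigenvalues above the bound flag correlated structure (e.g. shared bottlenecks or coordinated flows), while the bulk below it behaves like noise; the selected nodes are those contributing most to that structure.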


2017, Vol 2017 (3), pp. 130-146
Author(s): Muhammad Haris Mughees, Zhiyun Qian, Zubair Shafiq

Abstract The rise of ad-blockers is viewed as an economic threat by online publishers who rely primarily on online advertising to monetize their services. To address this threat, publishers have started to retaliate by employing anti ad-blockers, which scout for ad-block users and react by pushing users to whitelist the website or disable their ad-blockers altogether. The clash between ad-blockers and anti ad-blockers has resulted in a new arms race on the Web. In this paper, we present an automated machine-learning-based approach to identify anti ad-blockers that detect and react to ad-block users. The approach is promising, with a precision of 94.8% and a recall of 93.1%. It allows us to conduct a large-scale measurement study of anti ad-blockers on the Alexa top-100K websites. We identify 686 websites that make visible changes to their page content in response to ad-block detection. We characterize the spectrum of different strategies used by anti ad-blockers. We find that a majority of publishers use fairly simple first-party anti ad-block scripts. However, we also note the use of third-party anti ad-block services that employ more sophisticated tactics to detect and respond to ad-blockers.
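The core measurement idea, detecting sites whose visible content changes in response to ad-blocking, can be sketched loosely: crawl a page with and without an ad-blocker and flag large divergences in the rendered text. The `difflib`-based diff and the 0.2 threshold are assumptions for illustration only, not the authors' actual features or classifier.

```python
import difflib


def visible_change_ratio(page_without_blocker: str, page_with_blocker: str) -> float:
    """Fraction of visible text that differs between the two crawls."""
    matcher = difflib.SequenceMatcher(None, page_without_blocker, page_with_blocker)
    return 1.0 - matcher.ratio()


def reacts_to_adblock(page_without: str, page_with: str, threshold: float = 0.2) -> bool:
    # A large visible diff between the two crawls suggests the site reacts
    # to ad-block detection (e.g. an interstitial asking users to whitelist).
    return visible_change_ratio(page_without, page_with) > threshold
```

In the paper's setting this signal would feed a trained classifier alongside other features; the threshold rule here only conveys the intuition behind labeling the 686 reacting sites.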


Author(s): Abhishek V. Rao, Manikandan A., Azarudeen A., Devendar Rao

A number of commercial peer-to-peer (P2P) systems for live streaming have been introduced in recent years. The behaviour of the popular systems has been extensively studied in several measurement papers. However, these studies have had to rely on a "black-box" approach, in which packet traces are collected from a single or a limited number of measurement points, to infer various properties of the traffic on the control and data planes. Although such studies are useful for comparing different systems from the end user's perspective, it is difficult to intuitively understand the observed properties without fully reverse-engineering the underlying systems. In this paper, we describe the network architecture of Zattoo, one of the largest production live-streaming providers in Europe at the time of writing, and present a large-scale measurement study of Zattoo using data collected by the provider. Notably, we found that even when the Zattoo system was heavily loaded, with as many as 20,000 concurrent users on a single overlay, the median channel join delay remained less than 2-5 s, and that, for a majority of users, the streamed signal lags the over-the-air broadcast signal by no more than 3 s.


2018, Vol 7 (3.33), pp. 183
Author(s): Sung-Ho Cho, Sung-Uk Choi

This paper proposes a method to optimize the performance of web application firewalls according to their positions in large-scale networks. Since the ports for web services are always open, and therefore vulnerable, the introduction of web application firewalls is essential. Methods to configure web application firewalls in existing networks largely fall into two types. In the in-line type, a web application firewall sits between the network and the web server to be protected; this is mostly used in small-scale single networks and is vulnerable to physical failure of the web application firewall. The port-redirection type, configured with the help of peripheral network equipment such as routers or L4 switches, can maintain web services even when the web application firewall physically fails, and is suitable for large-scale networks where several web services are mixed. In this study, port-redirection web application firewalls were configured in large-scale networks, and router performance was found to degrade due to the IP-based VLAN when a policy was set for the ports on the routers for web security. To solve this problem, only those agencies and enterprises in the networks that provide web services were separated, and in-line web application firewalls were configured for them. Internet service providers (ISPs) or central line-concentration agencies can apply this approach to configure web-security systems for small enterprises or small-scale agencies at low cost.


2019, Vol 36 (1), pp. 1-9
Author(s): Vahid Jalili, Enis Afgan, James Taylor, Jeremy Goecks

Abstract Motivation Large biomedical datasets, such as those from genomics and imaging, are increasingly being stored on commercial and institutional cloud computing platforms. This is because cloud-scale computing resources, from robust backup to high-speed data transfer to scalable compute and storage, are needed to make these large datasets usable. However, one challenge for large-scale biomedical data on the cloud is providing secure access, especially when datasets are distributed across platforms. While there are open Web protocols for secure authentication and authorization, these protocols are not in wide use in bioinformatics and are difficult to use for even technologically sophisticated users. Results We have developed a generic and extensible approach for securely accessing biomedical datasets distributed across cloud computing platforms. Our approach combines OpenID Connect and OAuth2, best-practice Web protocols for authentication and authorization, together with Galaxy (https://galaxyproject.org), a web-based computational workbench used by thousands of scientists across the world. With our enhanced version of Galaxy, users can access and analyze data distributed across multiple cloud computing providers without any special knowledge of access/authorization protocols. Our approach does not require users to share permanent credentials (e.g. username, password, API key), instead relying on automatically generated temporary tokens that refresh as needed. Our approach is generalizable to most identity providers and cloud computing platforms. To the best of our knowledge, Galaxy is the only computational workbench where users can access biomedical datasets across multiple cloud computing platforms using best-practice Web security approaches and thereby minimize risks of unauthorized data access and credential use. 
Availability and implementation Freely available for academic and commercial use under the open-source Academic Free License (https://opensource.org/licenses/AFL-3.0) from the following GitHub repositories: https://github.com/galaxyproject/galaxy and https://github.com/galaxyproject/cloudauthz.
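The temporary-token pattern the abstract describes (no permanent credentials stored; short-lived tokens refreshed as needed) can be sketched as a small cache around a token-fetching callable. `fetch_token` here is a hypothetical stand-in for a real OAuth2/OIDC token endpoint; this is an illustration of the pattern, not Galaxy's or CloudAuthz's actual implementation.

```python
import time


class TemporaryToken:
    """Caches a short-lived access token and refreshes it automatically
    once it nears expiry, so no permanent credential is ever stored."""

    def __init__(self, fetch_token, skew: float = 30.0):
        self._fetch = fetch_token   # callable returning (token, lifetime_seconds)
        self._skew = skew           # refresh slightly before true expiry
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh if we have no token yet or the cached one is (nearly) expired.
        if self._token is None or time.time() >= self._expires_at - self._skew:
            self._token, lifetime = self._fetch()
            self._expires_at = time.time() + lifetime
        return self._token
```

A caller simply invokes `get()` before each cloud request; the refresh happens transparently, which is the property that lets users analyze data across providers "without any special knowledge of access/authorization protocols."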


2014, Vol 644-650, pp. 1351-1354
Author(s): Jun Ye Wang

This paper studies the design of large-scale intelligent traffic monitoring systems. Traffic monitoring methods have become a core problem in the intelligent transportation research field. To this end, the paper proposes an intelligent traffic monitoring method based on a clustering RBF neural network algorithm. A Fourier-coefficient normalization method is used to extract traffic-state features, which serve as the basis for intelligent traffic monitoring. The clustering RBF neural network algorithm then identifies the traffic state effectively, completing the state recognition of intelligent traffic monitoring. Experimental results show that the proposed algorithm can greatly improve the accuracy of intelligent traffic monitoring.
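As a rough illustration of the pipeline this abstract outlines (cluster centres feeding Gaussian RBF units, with a trained output layer recognizing the traffic state), the following sketch uses synthetic two-class data and per-class means as stand-in cluster centres; the features, sizes, and least-squares output layer are assumptions, not the paper's method.

```python
import numpy as np


def rbf_features(X, centers, gamma=1.0):
    # Gaussian RBF activations: one column per cluster centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)


rng = np.random.default_rng(1)
# Hypothetical traffic-state features (e.g. normalized Fourier coefficients),
# two states: free-flowing (0) vs congested (1).
X0 = rng.normal(loc=0.0, scale=0.3, size=(40, 2))
X1 = rng.normal(loc=2.0, scale=0.3, size=(40, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

# "Clustering" step: per-class means stand in for the centres
# that a k-means pass over the training data would produce.
centers = np.vstack([X0.mean(axis=0), X1.mean(axis=0)])

# Train the linear output layer by least squares on the RBF activations.
Phi = np.c_[rbf_features(X, centers), np.ones(len(X))]
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Classify: threshold the network output at 0.5.
pred = (Phi @ w > 0.5).astype(int)
accuracy = (pred == y).mean()
```

The clustering step fixes the hidden-layer centres, so only the output weights need fitting; that split is what makes RBF networks cheap to train for this kind of state-recognition task.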

