PF: Website Fingerprinting Attack Using Probabilistic Topic Model

2021, Vol 2021, pp. 1-17
Author(s): Hongcheng Zou, Ziling Wei, Jinshu Su, Baokang Zhao, Yusheng Xia, ...

A website fingerprinting (WFP) attack enables an adversary to identify the websites a user is browsing even under the protection of privacy-enhancing technologies (PETs). Previous studies demonstrate that most machine-learning attacks need multiple types of features as input, which induces a tremendous amount of feature engineering work. In this work, we show an alternative: we present Probabilistic Fingerprinting (PF), a new website fingerprinting attack that leverages only one type of feature. These features are produced by PWFP, a mathematical model that combines a probabilistic topic model with WFP for the first time, motivated by the observation that the sequence file generated from a traffic instance can be treated essentially like a plain-text document. Experimental results show that the proposed features are more discriminative than existing features. In a closed-world setting, PF attains higher accuracy (up to 99.79%) than prior attacks on various datasets gathered in Shadowsocks, SSH, and TLS scenarios, respectively. Moreover, even when the number of training instances drops to as few as 4, PF still reaches an accuracy above 90%. In the more realistic open-world setting, PF attains a high true positive rate (TPR) and Bayes detection rate (BDR) and a low false positive rate (FPR) in all evaluations, outperforming the other attacks. These results highlight that it is meaningful and possible to explore new features to improve the accuracy of WFP attacks.
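To make the core idea concrete, the sketch below treats each traffic trace as a "document" of packet-size tokens and fits an off-the-shelf LDA topic model whose per-trace topic proportions become classifier features. This is only an illustration of the general topic-model-on-traffic idea; the tokenization, the scikit-learn LDA, and all parameters here are assumptions, not the paper's PWFP model.

# Minimal sketch: topic-model features for traffic traces (illustrative only,
# not the PWFP model from the paper). Each trace becomes a "document" whose
# "words" are signed packet sizes; LDA topic proportions serve as features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical traces: lists of signed packet sizes (+ = outgoing, - = incoming).
traces = [
    [+512, -1448, -1448, +64, -1448],
    [+300, -900, +64, -1448, -1448, -1448],
]

# Turn each trace into a space-separated "document" of tokens.
docs = [" ".join(f"p{s}" if s > 0 else f"n{-s}" for s in t) for t in traces]

vectorizer = CountVectorizer(token_pattern=r"\S+")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
features = lda.fit_transform(counts)   # per-trace topic proportions -> classifier input
print(features.shape)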

2019, Vol 2019 (4), pp. 292-310
Author(s): Sanjit Bhat, David Lu, Albert Kwon, Srinivas Devadas

In recent years, several works have used website fingerprinting techniques to enable a local adversary to determine which website a Tor user visits. While the current state-of-the-art attack, which uses deep learning, outperforms prior art with medium to large amounts of data, it attains marginal to no accuracy improvement when both use small amounts of training data. In this work, we propose Var-CNN, a website fingerprinting attack that leverages deep learning techniques along with novel insights specific to packet sequence classification. In open-world settings with large amounts of data, Var-CNN attains an over 1% higher true positive rate (TPR) than state-of-the-art attacks while achieving a 4× lower false positive rate (FPR). Var-CNN’s improvements are especially notable in low-data scenarios, where it reduces the FPR of prior art by 3.12% while increasing the TPR by 13%. Overall, the insights used to develop Var-CNN can be applied to future deep-learning-based attacks and substantially reduce the amount of training data needed to perform a successful website fingerprinting attack. This shortens the time needed for data collection and lowers the likelihood of data staleness issues.
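For readers unfamiliar with the setup, the following is a minimal sketch of a 1-D convolutional classifier over packet-direction sequences, the general kind of model such attacks build on; the layer sizes, sequence length, and PyTorch wiring are assumptions for illustration, not the Var-CNN architecture.

# Minimal sketch of a 1-D CNN over packet-direction sequences (a generic
# baseline for illustration, not the Var-CNN architecture itself).
import torch
import torch.nn as nn

class DirectionCNN(nn.Module):
    def __init__(self, num_sites: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # global average pooling
        )
        self.classifier = nn.Linear(64, num_sites)

    def forward(self, x):                      # x: (batch, 1, seq_len) of +1/-1 directions
        return self.classifier(self.features(x).squeeze(-1))

model = DirectionCNN(num_sites=100)
dummy = torch.randint(0, 2, (8, 1, 5000)).float() * 2 - 1   # fake +1/-1 traces
print(model(dummy).shape)                      # torch.Size([8, 100])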


Sensors, 2019, Vol 19 (18), pp. 4045
Author(s): Xiang Cheng, Jiale Zhang, Bing Chen

With the emergence of Advanced Persistent Threat (APT) attacks, many Internet of Things (IoT) systems face large numbers of potential threats characterized by concealment, permeability, and pertinence. However, existing methods and technologies cannot provide comprehensive and prompt recognition of latent APT attack activities in IoT systems. To address this problem, we propose an APT Alerts and Logs Correlation Method, named APTALCM, and a framework for deploying APTALCM on IoT systems, in which an edge computing architecture is used to achieve cyber situation comprehension without excessive data transmission cost. Specifically, we first present a cyber situation ontology for modeling the concepts and properties needed to formalize APT attack activities in IoT systems. Then, we introduce a cyber situation instance similarity measurement method based on the SimRank mechanism for APT alert and log correlation. Building on instance similarity, we further propose an APT alert instance correlation method to reconstruct APT attack scenarios and an APT log instance correlation method to detect log instance communities. Together, these methods allow APTALCM to accomplish cyber situation comprehension effectively by recognizing APT attack intentions in IoT systems. Extensive experimental results demonstrate that the two kernel modules, i.e., the Alert Instance Correlation Module (AICM) and the Log Instance Correlation Module (LICM) in APTALCM, achieve both a high true-positive rate and a low false-positive rate.
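The correlation step rests on graph-based instance similarity; the sketch below shows SimRank on a toy graph linking alert instances to shared attributes, using networkx. The graph, attribute names, and importance factor are invented for illustration and are not the paper's cyber situation ontology.

# Minimal sketch of SimRank-based similarity between instances, the general
# mechanism the correlation step builds on (illustrative graph, not the
# paper's cyber-situation ontology).
import networkx as nx

# Hypothetical graph: alert instances linked to the attributes they share
# (source IPs, attack phases, hosts, ...).
G = nx.Graph()
G.add_edges_from([
    ("alert1", "ip_10.0.0.5"), ("alert1", "phase_recon"),
    ("alert2", "ip_10.0.0.5"), ("alert2", "phase_recon"),
    ("alert3", "ip_10.0.0.9"), ("alert3", "phase_exfil"),
])

sim = nx.simrank_similarity(G, importance_factor=0.8)
print(sim["alert1"]["alert2"])   # high: same IP and phase
print(sim["alert1"]["alert3"])   # low: nothing in common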


2020, Vol 11 (1), pp. 36
Author(s): Joanna Kazzandra Dumagpi, Yong-Jin Jeong

The inherent imbalance in the data distribution of X-ray security images is one of the most challenging aspects of applying computer vision algorithms in this domain. Most prior studies in this field have ignored this aspect, limiting their application in practical settings. This paper investigates the effect of employing Generative Adversarial Network (GAN)-based image augmentation, or image synthesis, in improving the performance of computer vision algorithms on an imbalanced X-ray dataset. We used a Deep Convolutional GAN (DCGAN) to generate new X-ray images of threat objects and Cycle-GAN to translate camera images of threat objects into X-ray images. We synthesized new X-ray security images by combining threat objects with background X-ray images, which were used to augment the dataset. Then, we trained various Faster R-CNN (Region-based Convolutional Neural Network) models using different augmentation approaches and evaluated their performance on a large-scale practical X-ray image dataset. Experimental results show that image synthesis is an effective approach to combating the imbalance problem, significantly reducing the false-positive rate (FPR) by up to 15.3%. The FPR is further improved by up to 19.9% by combining image synthesis with conventional image augmentation. Meanwhile, a relatively high true positive rate (TPR) of about 94% was maintained regardless of the augmentation method used.
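As a rough illustration of the synthesis step, the sketch below composites a threat-object patch onto a background X-ray by multiplying normalized intensities, a simple heuristic for overlapping attenuation in transmission images. The compositing rule and the toy arrays are assumptions; the paper's actual pipeline (DCGAN/Cycle-GAN generation plus its own combination method) may differ.

# Minimal sketch of compositing a (GAN-generated) threat-object patch onto a
# background X-ray. Multiplying normalized intensities is a simple heuristic
# for overlapping attenuation; the paper's exact compositing may differ.
import numpy as np

def composite(background: np.ndarray, threat: np.ndarray, top: int, left: int) -> np.ndarray:
    """background, threat: grayscale images in [0, 255]; returns augmented image."""
    bg = background.astype(np.float32) / 255.0
    th = threat.astype(np.float32) / 255.0
    h, w = th.shape
    out = bg.copy()
    out[top:top + h, left:left + w] = bg[top:top + h, left:left + w] * th
    return (out * 255).astype(np.uint8)

# Hypothetical data: a bright background and a darker synthetic threat object.
bg = np.full((300, 400), 230, dtype=np.uint8)
threat = np.full((60, 80), 120, dtype=np.uint8)
augmented = composite(bg, threat, top=100, left=150)
print(augmented.shape, augmented.min(), augmented.max())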


2021, pp. 1-21
Author(s): Asma Naseer, Maria Tamoor, Arifah Azhar

Background: Coronavirus Disease 2019 (COVID-19) is a contagious respiratory tract infection caused by a newly discovered coronavirus. Its death toll is high, and early diagnosis remains a major challenge. Infected people show a variety of symptoms such as fatigue, fever, loss of taste, and dry cough. Other signs can also be identified visually on radiographs. Therefore, chest X-rays (CXR) play a key role in the diagnosis of COVID-19. Methods: In this study, we use chest X-ray images to develop a computer-aided diagnosis (CAD) system for the disease. These images are used to train two deep networks: a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network, which is a type of Recurrent Neural Network (RNN). The proposed study involves three phases. First, the CNN model is trained on raw CXR images. Next, it is trained on pre-processed CXR images, and finally enhanced CXR images are used for CNN training. Geometric transformations, color transformations, image enhancement, and noise injection techniques are used for augmentation, yielding 3,220 augmented CXRs as the training dataset. In the final phase, the CNN is used to extract features from the CXR images, which are fed to the LSTM model. The performance of the four trained models is evaluated using accuracy, specificity, sensitivity, false-positive rate, and the receiver operating characteristic (ROC) curve. Results: We compare our results with other benchmark CNN models. Our proposed CNN-LSTM model achieves higher accuracy (99.02%) than the other state-of-the-art models. Our input-enhancement method helped the CNN model produce a very high true positive rate (TPR of 1) with no false negatives, whereas false negatives were a major problem when using raw CXR images. Conclusions: We conclude from our experiments that image pre-processing and augmentation remarkably improve the results of CNN-based models. This will help enable earlier detection of the disease and should ultimately reduce the mortality rate of COVID-19.
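The final phase follows the common pattern of feeding CNN feature maps into an LSTM; the sketch below shows one minimal way to wire that up in PyTorch. The layer sizes, input resolution, and sequence construction are assumptions for illustration, not the paper's exact architecture.

# Minimal sketch of the CNN-features-into-LSTM pattern (architecture details
# are assumptions for illustration, not the paper's exact model).
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):                        # x: (batch, 1, 224, 224) CXR images
        f = self.cnn(x)                          # (batch, 32, 56, 56) feature maps
        seq = f.flatten(2).transpose(1, 2)       # treat spatial positions as a sequence
        _, (h, _) = self.lstm(seq)               # final hidden state summarizes the sequence
        return self.fc(h[-1])

model = CNNLSTMClassifier()
print(model(torch.randn(4, 1, 224, 224)).shape)  # torch.Size([4, 2])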


Electronics, 2020, Vol 9 (11), pp. 1894
Author(s): Chun Guo, Zihua Song, Yuan Ping, Guowei Shen, Yuhei Cui, ...

Remote Access Trojans (RATs) are among the most serious security threats that organizations face today. At present, the two major approaches to RAT detection are host-based and network-based methods. To combine their complementary strengths, this article proposes PRATD, a phased RAT detection method that uses double-side features. In PRATD, both host-side and network-side features are combined to build detection models, which helps distinguish RATs from benign programs, because RATs not only generate network traffic but also leave traces on the host at run time. In addition, PRATD trains two different detection models for the two runtime states of RATs to improve the True Positive Rate (TPR). Experiments on network and host records collected from five kinds of benign programs and 20 well-known RATs show that PRATD can detect RATs effectively: it achieves a TPR as high as 93.609% with a False Positive Rate (FPR) as low as 0.407% for known RATs, and a TPR of 81.928% with an FPR of 0.185% for unknown RATs, which suggests it is a competitive candidate for RAT detection.
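At its simplest, "double-side" detection means concatenating host-side and network-side feature vectors before training a classifier; the sketch below illustrates that fusion on synthetic data. The feature names, the random forest, and the single-model setup are assumptions and do not reproduce PRATD's phased, two-model design.

# Minimal sketch of fusing host-side and network-side features into one
# detection model (feature names and the random-forest choice are
# illustrative assumptions, not the PRATD pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-process samples: 3 host features (e.g., registry writes,
# file operations, thread count) and 3 network features (e.g., flow count,
# mean packet size, upstream/downstream ratio).
host_feats = rng.random((200, 3))
net_feats = rng.random((200, 3))
labels = rng.integers(0, 2, 200)          # 1 = RAT, 0 = benign

X = np.hstack([host_feats, net_feats])    # "double-side" feature vector
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))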


2021, pp. 103985622110286
Author(s): Tracey Wade, Jamie-Lee Pennesi, Yuan Zhou

Objective: Currently, eligibility for expanded Medicare items for eating disorders (excluding anorexia nervosa) requires a score ⩾ 3 on the 22-item Eating Disorder Examination-Questionnaire (EDE-Q). We compared these EDE-Q “cases” with continuous scores on a validated 7-item version of the EDE-Q (EDE-Q7) to identify an EDE-Q7 cut-off commensurate with a score of 3 on the EDE-Q. Methods: We utilised EDE-Q scores of female university students (N = 337) at risk of developing an eating disorder. We used a receiver operating characteristic (ROC) curve to assess the relationship between the true-positive rate (sensitivity) and the false-positive rate (1 − specificity) of cases ⩾ 3. Results: The area under the curve showed outstanding discrimination of 0.94 (95% CI: .92–.97). We examined two specific cut-off points on the EDE-Q7, which included 100% and 87% of true cases, respectively. Conclusion: Given that the EDE-Q cut-off for Medicare is used in conjunction with other criteria, we suggest using the more permissive EDE-Q7 cut-off (⩾2.5) to replace the EDE-Q cut-off (⩾3) in eligibility assessments.
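The ROC-based cut-off selection can be reproduced in a few lines; the sketch below uses synthetic scores and scikit-learn's roc_curve, picking a threshold by Youden's J as one common rule. The data and the threshold rule are assumptions, not the study's analysis.

# Minimal sketch of choosing a questionnaire cut-off from a ROC curve
# (synthetic scores for illustration; not the study's data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
is_case = rng.integers(0, 2, 300)                       # EDE-Q "case" (score >= 3) or not
ede_q7 = np.where(is_case, rng.normal(3.2, 0.8, 300),   # hypothetical EDE-Q7 scores
                  rng.normal(1.5, 0.8, 300))

fpr, tpr, thresholds = roc_curve(is_case, ede_q7)
print("AUC:", roc_auc_score(is_case, ede_q7))

# One common rule: the threshold maximizing Youden's J = sensitivity - (1 - specificity).
best = np.argmax(tpr - fpr)
print("candidate cut-off:", thresholds[best], "TPR:", tpr[best], "FPR:", fpr[best])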


2016, Vol 24 (2), pp. 263-272
Author(s): Kosuke Imai, Kabir Khanna

In both political behavior research and voting rights litigation, turnout and vote choice for different racial groups are often inferred using aggregate election results and racial composition. Over the past several decades, many statistical methods have been proposed to address this ecological inference problem. We propose an alternative method to reduce aggregation bias by predicting individual-level ethnicity from voter registration records. Building on the existing methodological literature, we use Bayes's rule to combine the Census Bureau's Surname List with various information from geocoded voter registration records. We evaluate the performance of the proposed methodology using approximately nine million voter registration records from Florida, where self-reported ethnicity is available. We find that it is possible to reduce the false positive rate among Black and Latino voters to 6% and 3%, respectively, while maintaining a true positive rate above 80%. Moreover, we use our predictions to estimate turnout by race and find that our estimates yield substantially less bias and lower root mean squared error than standard ecological inference estimates. We provide open-source software to implement the proposed methodology.
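The core combination is a single application of Bayes's rule: the surname gives a prior over race, and the geocoded location supplies the likelihood. The sketch below walks through that update with made-up probability tables; the actual method uses the Census Bureau's Surname List and Census geography.

# Minimal sketch of the Bayes-rule combination behind surname/geography race
# prediction. The tiny probability tables are invented for illustration.
races = ["white", "black", "hispanic"]

p_race_given_surname = {"washington": [0.05, 0.90, 0.05]}   # P(race | surname)
p_geo_given_race = {"tract_A": [0.002, 0.010, 0.004]}       # P(residence | race)

def posterior(surname: str, tract: str):
    prior = p_race_given_surname[surname]
    lik = p_geo_given_race[tract]
    unnorm = [p * l for p, l in zip(prior, lik)]             # Bayes: prior x likelihood
    total = sum(unnorm)
    return {r: u / total for r, u in zip(races, unnorm)}

print(posterior("washington", "tract_A"))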


Author(s): Yosef S. Razin, Jack Gale, Jiaojiao Fan, Jaznae’ Smith, Karen M. Feigh

This paper evaluates Banks et al.’s Human-AI Shared Mental Model theory by examining how a self-driving vehicle’s hazard assessment facilitates shared mental models. Participants were asked to affirm the vehicle’s assessment of road objects as either hazards or mistakes in real time while behavioral and subjective measures were collected. The baseline performance of the AI was purposefully low (<50%) to examine how the human’s shared mental model might lead to inappropriate compliance. Results indicated that while the participants’ true positive rate was high, overall performance was reduced by a large false positive rate, indicating that participants were indeed being influenced by the AI’s faulty assessments despite full transparency as to the ground truth. Both performance and compliance were directly affected by frustration, mental demand, and even physical demand. Dispositional factors such as faith in other people’s cooperativeness and in technology companies were also significant. Thus, our findings strongly support the theory that shared mental models play a measurable role in performance and compliance, in a complex interplay with trust.


2014
Author(s): Andreas Tuerk, Gregor Wiktorin, Serhat Güler

Quantification of RNA transcripts with RNA-Seq is inaccurate due to positional fragment bias, which is not represented appropriately by current statistical models of RNA-Seq data. This article introduces the Mix2 (read "mix square") model, which uses a mixture of probability distributions to model transcript-specific positional fragment bias. The parameters of the Mix2 model can be trained efficiently with the Expectation-Maximization (EM) algorithm, resulting in simultaneous estimates of the transcript abundances and transcript-specific positional biases. Experiments are conducted on synthetic data and on the Universal Human Reference (UHR) and Human Brain Reference (HBR) samples from the MicroArray Quality Control (MAQC) data set. Comparing the correlation between qPCR and FPKM values with the state-of-the-art methods Cufflinks and PennSeq, we obtain an increase in the R² value from 0.44 to 0.6 and from 0.34 to 0.54, respectively. In the detection of differential expression between UHR and HBR, the true positive rate increases from 0.44 to 0.71 at a false positive rate of 0.1. Finally, the Mix2 model is used to investigate biases present in the MAQC data. This reveals 5 dominant biases which deviate from the common assumption of a uniform fragment distribution. The Mix2 software is available at http://www.lexogen.com/fileadmin/uploads/bioinfo/mix2model.tgz.
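To illustrate the fitting machinery, the sketch below runs EM on a toy mixture over normalized fragment positions using plain Gaussian components. The component family, initialization, and data are assumptions; the Mix2 model's specific component distributions and its joint estimation of abundances are not reproduced here.

# Minimal sketch of EM for a mixture over (normalized) fragment positions,
# the general machinery behind fitting positional bias; this is a plain
# Gaussian mixture, not the Mix2 model's specific distributions.
import numpy as np

rng = np.random.default_rng(0)
pos = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.1, 500)])  # fake 5'/3' bias

K = 2
w = np.full(K, 1 / K)                 # mixture weights
mu = np.array([0.3, 0.7])             # component means (initial guesses)
sd = np.array([0.1, 0.1])

for _ in range(50):
    # E-step: responsibilities of each component for each fragment position
    dens = np.stack([w[k] / (sd[k] * np.sqrt(2 * np.pi))
                     * np.exp(-0.5 * ((pos - mu[k]) / sd[k]) ** 2) for k in range(K)])
    resp = dens / dens.sum(axis=0)
    # M-step: re-estimate weights, means, and spreads
    Nk = resp.sum(axis=1)
    w = Nk / len(pos)
    mu = (resp * pos).sum(axis=1) / Nk
    sd = np.sqrt((resp * (pos - mu[:, None]) ** 2).sum(axis=1) / Nk)

print(np.round(w, 2), np.round(mu, 2), np.round(sd, 2))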


2021
Author(s): Shloak Rathod

The proliferation of online media allows for the rapid dissemination of unmoderated news, unfortunately including fake news. The extensive spread of fake news poses a potent threat to both individuals and society. This paper focuses on designing author profiles to detect authors who are primarily engaged in publishing fake news articles. We build on the hypothesis that authors who write fake news repeatedly write only fake news articles, at least over short periods. Fake news authors have a distinct writing style compared to real news authors, who naturally want to maintain trustworthiness. We explore the potential to detect fake news authors by designing author profiles based on writing style, sentiment, and co-authorship patterns. We evaluate our approach using a publicly available dataset with over 5,000 authors and 20,000 articles. For our evaluation, we build and compare different classes of supervised machine learning models. We find that the K-NN model performed best, detecting authors who are prone to writing fake news with an 83% true positive rate at only a 5% false positive rate.
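A minimal version of the profile-plus-classifier pipeline is sketched below with a K-NN model on synthetic author features; the feature set and data are assumptions, standing in for the paper's writing-style, sentiment, and co-authorship profiles.

# Minimal sketch of classifying author profiles with K-NN (feature choices
# are illustrative; the paper's profiles combine writing style, sentiment,
# and co-authorship features).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical per-author features: avg sentence length, exclamation rate,
# mean sentiment score, fraction of co-authors flagged as fake-news writers.
X = rng.random((500, 4))
y = rng.integers(0, 2, 500)            # 1 = fake-news author, 0 = real-news author

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("accuracy:", knn.score(X_te, y_te))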

