From “Onion Not Found” to Guard Discovery

2021 ◽  
Vol 2022 (1) ◽  
pp. 522-543
Author(s):  
Lennart Oldenburg ◽  
Gunes Acar ◽  
Claudia Diaz

We present a novel web-based attack that identifies a Tor user’s guard in a matter of seconds. Our attack is low-cost, fast, and stealthy. It requires only a moderate amount of resources and can be deployed by website owners, third-party script providers, and malicious exits (if the website traffic is unencrypted). The attack works by injecting resources from non-existing onion service addresses into a webpage. Upon visiting the attack webpage with Tor Browser, the victim’s Tor client creates many circuits to look up the non-existing addresses. This allows middle relays controlled by the adversary to detect the distinctive traffic pattern of the “404 Not Found” lookups and identify the victim’s guard. We evaluate our attack with extensive simulations and live Tor network measurements, taking a range of victim machine, network, and geolocation configurations into account. We find that an adversary running a small number of HSDirs and providing 5% of Tor’s relay bandwidth needs 12.06 seconds to identify the guards of 50% of the victims, while it takes 22.01 seconds to discover 90% of the victims’ guards. Finally, we evaluate a set of countermeasures against our attack, including a defense that we develop based on a token bucket, as well as the recently proposed Vanguards-lite defense in Tor.
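The token-bucket countermeasure mentioned in the abstract can be pictured as a rate limiter on onion-service lookup circuits. The sketch below is illustrative only; the class name, parameters, and thresholds are hypothetical, not the paper's actual defense.

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter in the spirit of the
    defense evaluated in the paper: it caps bursts of onion-service
    lookup circuits. Names and parameters are hypothetical."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity          # maximum burst of lookups
        self.refill_rate = refill_rate    # tokens replenished per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow_lookup(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                   # circuit may be built
        return False                      # lookup deferred, throttling the burst

# A page injecting many non-existent onion addresses triggers a burst of
# lookups; under the bucket, only the first few proceed immediately.
bucket = TokenBucket(capacity=5, refill_rate=0.5)
burst = [bucket.allow_lookup() for _ in range(20)]
```

Throttling the burst blunts the distinctive “404 Not Found” circuit pattern that the adversary’s middle relays rely on.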

2019 ◽  
Vol 16 (6) ◽  
pp. 589-598
Author(s):  
Karen Bracken ◽  
Anthony Keech ◽  
Wendy Hague ◽  
Carolyn Allan ◽  
Ann Conway ◽  
...  

Background/aims: Participant recruitment to diabetes prevention randomised controlled trials is challenging and expensive. The T4DM study, a multicentre, Australia-based, Phase IIIb randomised controlled trial of testosterone to prevent Type 2 diabetes in men aged 50–74 years, faced the challenge of screening a large number of prospective participants at a small number of sites, with few staff and a limited budget for screening activities. This article evaluates a high-volume, low-cost, semi-automated approach to screening and enrolling T4DM study participants. Methods: We developed a sequential, multi-step screening process: (1) web-based pre-screening, (2) laboratory screening through a network of third-party pathology centres, and (3) final on-site screening, using online data collection, computer-driven eligibility checking, and automated, email-based communication with prospective participants. Phone- and mail-based data collection and communication options were available to participants on request. The screening process was administered by the central coordinating centre through a central data management system. Results: Screening activities required staffing of approximately 1.6 full-time equivalents over 4 years. Of 19,022 participants pre-screened, 13,108 attended a third-party pathology collection centre for laboratory screening, 1217 received final on-site screening, and 1007 were randomised. In total, 95% of participants opted for online pre-screening over phone-based pre-screening. Screening costs, including both direct and staffing costs, totalled AUD1,420,909 (AUD75 per subject screened and AUD1411 per randomised participant). Conclusion: A multi-step, semi-automated screening process with web-based pre-screening facilitated low-cost, high-volume participant enrolment to this large, multicentre randomised controlled trial. Centralisation and automation of screening activities resulted in substantial savings compared to previous, similar studies. Our screening approach could be adapted to other randomised controlled trial settings to minimise the cost of screening large numbers of participants.
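The sequential funnel described in the Methods can be sketched as a pipeline in which each step filters the pool and only passers advance to the next, more expensive stage. The eligibility criteria below are illustrative stand-ins, not the T4DM protocol.

```python
# Hypothetical sketch of a sequential, multi-step screening funnel:
# web pre-screen -> laboratory screen -> on-site screen.
def run_funnel(participants, steps):
    """Apply eligibility steps in order, recording pool size after each."""
    counts = []
    pool = list(participants)
    for name, predicate in steps:
        pool = [p for p in pool if predicate(p)]
        counts.append((name, len(pool)))
    return counts

steps = [
    ("web pre-screen", lambda p: 50 <= p["age"] <= 74),
    # illustrative laboratory cutoff, not the trial's actual threshold
    ("laboratory screen", lambda p: p["testosterone_nmol_l"] <= 14.0),
    ("on-site screen", lambda p: p["consents"]),
]
```

Ordering the cheap, automated checks first is what lets a 1.6 full-time-equivalent team winnow roughly nineteen thousand pre-screens down to the randomised cohort.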


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3515
Author(s):  
Sung-Ho Sim ◽  
Yoon-Su Jeong

As IoT technologies have developed rapidly in recent years, most IoT deployments have focused on monitoring and control, while the cost of collecting and linking heterogeneous IoT data keeps rising; cloud servers (data centers) therefore need the ability to proactively integrate and analyze the collected data. In this paper, we propose a blockchain-based IoT big data integrity verification technique to ensure the safety of the Third Party Auditor (TPA), which audits the integrity of AIoT data. The proposed technique minimizes IoT information loss by grouping information and signature keys from IoT devices into multiple blockchains. It guarantees the integrity of AIoT data by linking hash values, assigned to arbitrary, constant-size blocks, with previous blocks in hierarchical chains. To keep the cost of verifying IoT information low, the technique synchronizes the central server and IoT devices using location information. Finally, to easily control the large number of IoT device locations, cross-distribution and blockchain linkage are processed under constant rules, improving the load and throughput generated by IoT devices.
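The core integrity mechanism, linking constant-size blocks to their predecessors via hash values, can be sketched as follows. This is a minimal illustration of hash chaining, not the paper's full multi-chain grouping scheme; field names are hypothetical.

```python
import hashlib
import json

def make_block(payload, prev_hash):
    """Bind a constant-size record of IoT data to the previous block by
    hashing its contents together with the previous block's hash."""
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return {"payload": payload, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash and check each link to its predecessor."""
    for i, blk in enumerate(chain):
        body = json.dumps({"payload": blk["payload"], "prev": blk["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != blk["hash"]:
            return False            # block contents were altered
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False            # chain linkage was broken
    return True

chain = [make_block({"sensor": "t1", "reading": 21.5}, prev_hash="0" * 64)]
chain.append(make_block({"sensor": "t1", "reading": 21.7},
                        prev_hash=chain[-1]["hash"]))
```

Because each block commits to its predecessor's hash, an auditor can detect tampering with any earlier record by re-walking the chain.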


2017 ◽  
Vol 34 (8) ◽  
pp. 8-19
Author(s):  
Stacy Brody

Purpose: The purpose of this paper is to profile various types of Web-based tools that facilitate research collaboration within and across institutions. Design/methodology/approach: Various Web-based tools were tested by the author. Additionally, tutorial videos and guides were reviewed. Findings: There are various free and low-cost tools available to assist in the collaborative research process, and librarians are well positioned to facilitate their usage. Practical implications: Librarians and researchers will learn about various types of tools available free or at low cost to fulfill the needs of the collaborative research process. Social implications: As the tools highlighted are free or low cost, they are also valuable to start-ups and can be recommended to entrepreneurs. Originality/value: As the realm of Web-based collaborative tools continues to evolve, the options must be continually revisited and reviewed for currency.


Author(s):  
Shrutika Khobragade ◽  
Rohini Bhosale ◽  
Rahul Jiwahe

Cloud computing makes immense use of the Internet to store huge amounts of data, providing high-quality service at low cost and with scalability, while requiring less hardware and software management. Security plays a vital role in the cloud: because data is handled by a third party, it is the biggest concern. The proposed mechanism focuses on these security issues. If a complete file is stored at a single location, an attack on that location can destroy the data. In the proposed work, therefore, instead of storing a complete file at one location, the file is divided into fragments and each fragment is stored at a different location. Fragments are further secured by assigning a hash key to each one, so that even a successful attack does not reveal all the information about a particular file. Replicas of the fragments are also generated, protected by a strong authentication process using key generation. Files and fragments can also be updated automatically online: instead of downloading the whole file, only the fragment to be updated is downloaded, which saves considerable time.
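The fragment-and-hash idea can be sketched in a few lines: split the file into fixed-size pieces, attach a hash key to each, and check the key whenever a fragment is fetched from its storage location. This is a minimal illustration under assumed names, not the authors' implementation (which also covers replication and authentication).

```python
import hashlib

FRAGMENT_SIZE = 4  # bytes; tiny on purpose, for illustration

def fragment_file(data):
    """Split a file into fixed-size fragments, each paired with its hash key."""
    frags = [data[i:i + FRAGMENT_SIZE] for i in range(0, len(data), FRAGMENT_SIZE)]
    return [{"index": i, "bytes": f, "hash_key": hashlib.sha256(f).hexdigest()}
            for i, f in enumerate(frags)]

def verify_fragment(frag):
    """Check a fragment fetched from a storage location against its hash key."""
    return hashlib.sha256(frag["bytes"]).hexdigest() == frag["hash_key"]

def reassemble(frags):
    """Rejoin verified fragments in index order."""
    return b"".join(f["bytes"] for f in sorted(frags, key=lambda f: f["index"]))
```

Because each location holds only one fragment, compromising a single store yields neither the whole file nor a fragment that can be silently altered.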


10.28945/3557 ◽  
2016 ◽  
Vol 1 ◽  
pp. 001-016
Author(s):  
Grandon Gill ◽  
Joni Jones

Jeffrey Stiles pondered these seemingly straightforward questions. As IT Director of Jagged Peak, Inc., a developer of e-commerce solutions located in the Tampa Bay region of Florida, it would be his responsibility to oversee the implementation of security measures that went beyond the user name and password currently required for each user. Recent events suggested that a move towards increased security might be inevitable. In just the past year, highly publicized security failures at the U.S. Department of Defense, major healthcare providers, and large companies such as Sony and JP Morgan Chase had made executives acutely aware of the adverse consequences of IT system vulnerabilities. In fact, a study of business risk managers conducted in 2014 found that 69% of all businesses had experienced some level of hacking in the previous year. The nature of Jagged Peak’s business made the security of its systems a particular concern. The company, which had grown rapidly over the years, reporting over $61 million in revenue in 2014, provided its customers with software that supported web-based ordering, fulfillment, and logistics activities, built around a philosophy of “buy anywhere, fulfill anywhere, return anywhere”. To support these activities, the company’s Edge platform needed to handle a variety of payment types, including gift cards (a recent target of hackers), as well as sensitive personal identifying information (PII). Compounding the security challenge, each customer ran its own instance of the Edge platform and managed its own users. When only a single customer was being considered, adding further layers of security to authenticate users was an eminently solvable problem. A variety of alternative approaches existed, including various biometrics, key fobs that provided codes the user could enter, personalized security questions, and many others. The problem was that where multiple customers were involved, it was much more difficult to form a consensus. One customer might object to biometrics because its users lacked the necessary hardware. Another might object to security keys as too costly or too easily stolen or lost. Personalized questions might be considered too failure-prone by some customers. Furthermore, it was not clear that adding layers of authentication would necessarily be the most cost-effective way to reduce vulnerability. Other approaches, such as user training, might provide greater value. Even if Stiles decided to proceed with additional authentication, questions remained. Mandatory, or a free/added-cost option? Developed in house or by a third party? Used for internal systems only, customer platforms only, or both? Implementation could not begin until these broad questions were answered.


Transport ◽  
2015 ◽  
Vol 30 (3) ◽  
pp. 320-329 ◽  
Author(s):  
Erik Wilhelm ◽  
Joshua Siegel ◽  
Simon Mayer ◽  
Leyna Sadamori ◽  
Sohan Dsouza ◽  
...  

We present a novel approach to developing a vehicle communication platform consisting of low-cost, open-source hardware for moving vehicle data to a secure server, a Web Application Programming Interface (API) for the provision of third-party services, and an intuitive user dashboard for access control and service distribution. The CloudThink infrastructure promotes the commoditization of vehicle telematics data by facilitating easier, more flexible, and more secure access. It enables drivers to confidently share their vehicle information across multiple applications to improve the transportation experience for all stakeholders, as well as to potentially monetize their data. The foundations for an application ecosystem have been developed which, taken together with the fair value for driving data and low barriers to entry, will drive adoption of CloudThink as the standard method for projecting physical vehicles into the cloud. The application space initially consists of a few fundamental and important applications (vehicle tethering and remote diagnostics, road-safety monitoring, and fuel economy analysis), but as CloudThink gains widespread adoption, multiplexing applications on the same data structure and set will accelerate uptake further.


2010 ◽  
Vol 79 (6) ◽  
pp. 459-467 ◽  
Author(s):  
Pablo Moreno-Ger ◽  
Javier Torrente ◽  
Julián Bustamante ◽  
Carmen Fernández-Galaz ◽  
Baltasar Fernández-Manjón ◽  
...  

2021 ◽  
Author(s):  
Benjamin Kellenberger ◽  
Devis Tuia ◽  
Dan Morris

<p>Ecological research, such as wildlife censuses, increasingly relies on data on the scale of terabytes. For example, modern camera trap datasets contain millions of images that would require prohibitive amounts of manual labour to annotate with species, bounding boxes, and the like. Machine learning, especially deep learning [3], can greatly accelerate this task through automated predictions, but typically requires extensive coding and expert knowledge.</p><p>In this abstract we present AIDE, the Annotation Interface for Data-driven Ecology [2]. First, AIDE is a web-based annotation suite for image labelling with support for concurrent access and scalability, up to the cloud. Second, it tightly integrates deep learning models into the annotation process through active learning [7], where models learn from user-provided labels and in turn select the most relevant images for review from the large pool of unlabelled ones (Fig. 1). The result is a system where users only need to label what is required, which saves time and decreases errors due to fatigue.</p><p><img src="https://contentmanager.copernicus.org/fileStorageProxy.php?f=gnp.0402be60f60062057601161/sdaolpUECMynit/12UGE&app=m&a=0&c=131251398e575ac9974634bd0861fadc&ct=x&pn=gnp.elif&d=1" alt=""></p><p><em>Fig. 1: AIDE offers concurrent web image labelling support and uses annotations and deep learning models in an active learning loop.</em></p><p>AIDE includes a comprehensive set of built-in models, such as ResNet [1] for image classification, Faster R-CNN [5] and RetinaNet [4] for object detection, and U-Net [6] for semantic segmentation. All models can be customised and used without having to write a single line of code. Furthermore, AIDE accepts any third-party model with minimal implementation requirements. 
To complete the package, AIDE offers both user annotation and model prediction evaluation, access control, customisable model training, and more, all through the web browser.</p><p>AIDE is fully open source and available under https://github.com/microsoft/aerial_wildlife_detection.</p><p> </p><p><strong>References</strong></p>
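The selection step of the active-learning loop described above can be sketched with a standard least-confidence heuristic: the current model scores the unlabelled pool and the images it is least sure about are queued for human review. The scoring function and file names below are stand-ins, not AIDE's API.

```python
# Minimal sketch of pool-based active learning's selection step:
# rank unlabelled images by the model's top-class probability and
# send the least confident ones to the annotators first.
def least_confident(unlabelled, predict_proba, batch_size):
    """Return the images whose top-class probability is lowest."""
    return sorted(unlabelled, key=lambda img: max(predict_proba(img)))[:batch_size]

# Stand-in model output: per-image class-probability vectors.
fake_scores = {
    "img_a.jpg": [0.98, 0.02],   # confident -> low review priority
    "img_b.jpg": [0.55, 0.45],   # uncertain -> review first
    "img_c.jpg": [0.70, 0.30],
}
queue = least_confident(fake_scores, lambda img: fake_scores[img], batch_size=2)
```

Labels gathered on the queued images then feed the next training round, closing the loop so annotators only label what the model cannot yet handle.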


2021 ◽  
Author(s):  
Michal Moskal ◽  
Thomas Ball ◽  
Abhijith Chatra ◽  
James Devine ◽  
Peli de Halleux ◽  
...  
Keyword(s):  
Low Cost ◽  

Author(s):  
Luan Ibraimi ◽  
Qiang Tang ◽  
Pieter Hartel ◽  
Willem Jonker

Commercial Web-based Personal Health Record (PHR) systems can help patients share their personal health records (PHRs) anytime, from anywhere. PHRs are very sensitive data, and inappropriate disclosure may cause serious problems for an individual. Commercial Web-based PHR systems therefore have to ensure that patient health data is secured using state-of-the-art mechanisms. In current commercial PHR systems, even though patients have the power to define the access control policy on who can access their data, they have to trust the access-control manager of the commercial PHR system entirely to properly enforce these policies. Patients therefore hesitate to upload their health data to these systems, as the data is processed unencrypted on untrusted platforms. Recent proposals exploit encryption techniques to enforce access control policies: information is stored in encrypted form by the third party, and no access control manager is needed. This implies that data remains confidential even if the database maintained by the third party is compromised. In this paper we propose a new encryption technique, a type-and-identity-based proxy re-encryption scheme, which is suitable for the healthcare setting. The proposed scheme allows users (patients) to securely store their PHRs on commercial Web-based PHR systems, and to securely share them with other users (doctors).
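The data flow of proxy re-encryption can be illustrated with a deliberately toy one-time-pad construction: the patient derives a re-encryption key that lets the PHR server transform the stored ciphertext into one the doctor can decrypt. This is NOT the paper's type-and-identity-based scheme and is not secure for real use (pad reuse breaks it); it only shows who computes what.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy one-time-pad illustration of the proxy re-encryption workflow only.
def encrypt(plaintext, key):
    return xor(plaintext, key)

def reencryption_key(key_patient, key_doctor):
    return xor(key_patient, key_doctor)   # computed by the patient, given to the proxy

def reencrypt(ciphertext, rk):
    return xor(ciphertext, rk)            # done by the proxy; plaintext never appears

def decrypt(ciphertext, key):
    return xor(ciphertext, key)

record = b"blood pressure 120/80"
k_patient = secrets.token_bytes(len(record))
k_doctor = secrets.token_bytes(len(record))
stored = encrypt(record, k_patient)       # uploaded to the PHR system
shared = reencrypt(stored, reencryption_key(k_patient, k_doctor))
```

The point of the construction is that the server transforms ciphertexts between recipients without ever holding a key that decrypts them, which is why no trusted access-control manager is needed.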

