An Explainable Password Strength Meter Addon via Textual Pattern Recognition

2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Ming Xu ◽  
Weili Han

Textual passwords still dominate authentication for remote file sharing and website logins, although researchers have recently demonstrated several vulnerabilities in this authentication mechanism. When a user creates or changes a password, a website usually leverages a password strength meter (PSM for short) to show the strength of the password. When the password is evaluated as weak, the user may replace it with a stronger, more secure one. However, the user is often confused when a password, especially a frequently used one, is reported as weak. We argue that an explainable password strength meter addon, which shows the reasons a password is weak, can help users create secure passwords more effectively. Unfortunately, we find few sites in the Alexa global top 100 showing such details. Motivated to help users with an explainable PSM, this paper proposes an addon to PSMs that provides feedback in the form of textual patterns explaining why a password is weak. This PSM addon can detect twelve types of patterns, which cover a very large proportion of 70 million real passwords leaked from high-profile websites. According to our evaluation and user study, our PSM addon, which leverages textual password patterns, can effectively detect these popular patterns and help users create more secure passwords.
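The abstract does not enumerate the twelve patterns, so the following is a minimal, hypothetical sketch of how an explainable meter addon might report a few common textual patterns (repeated characters, embedded years, and keyboard walks). The pattern set and the function name explain_weaknesses are illustrative assumptions, not the authors' implementation.

```python
import re

# Hypothetical pattern checks; the actual addon detects twelve pattern types
# that are not listed in the abstract.
def explain_weaknesses(password: str) -> list:
    reasons = []
    if re.search(r"(.)\1{2,}", password):
        reasons.append("contains a run of three or more repeated characters")
    if re.search(r"(19|20)\d{2}", password):
        reasons.append("contains a four-digit year, a very common pattern")
    keyboard_rows = ["qwertyuiop", "asdfghjkl", "zxcvbnm", "1234567890"]
    lowered = password.lower()
    if any(row[i:i + 4] in lowered
           for row in keyboard_rows
           for i in range(len(row) - 3)):
        reasons.append("contains a keyboard walk such as 'qwer' or '1234'")
    return reasons

if __name__ == "__main__":
    # A meter addon would display these reasons next to the strength bar.
    print(explain_weaknesses("john1999aaa"))
```

One natural extension, consistent with the abstract, would be to weight each detected pattern by how frequently it appears in leaked-password corpora.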

2011 ◽  
Vol 3 (2) ◽  
pp. 65-74 ◽  
Author(s):  
Christian Leichsenring ◽  
René Tünnermann ◽  
Thomas Hermann

Touch can create a feeling of intimacy and connectedness. This work proposes feelabuzz, a system that transmits the movements of one mobile phone to the vibration actuator of another. This is done in a direct, non-abstract way, without pattern recognition techniques, so as not to destroy the feel for the other person. The tactile channel enables direct communication, i.e. what another person explicitly signals, as well as implicit context communication: the complex movements that any activity consists of, or even those produced by the environment. This paper explores the potential of this approach, presents the mapping used, and discusses possible development beyond the existing prototype to enable a large-scale user study.
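The abstract describes a direct, non-abstract mapping from movement to vibration without pattern recognition. Purely as an illustration (the scaling factor, the treatment of gravity, and the function name are assumptions, not the feelabuzz mapping itself), the sketch below converts raw accelerometer readings into a vibration amplitude by scaling and clipping the deviation of the acceleration magnitude from rest.

```python
import math

def movement_to_vibration(ax: float, ay: float, az: float,
                          gain: float = 0.5) -> float:
    """Map accelerometer readings (m/s^2) to a vibration amplitude in [0, 1].

    Direct mapping: no pattern recognition, just the deviation of the
    acceleration magnitude from gravity, scaled and clipped.
    """
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    deviation = abs(magnitude - 9.81)  # distance from resting on a table
    return max(0.0, min(1.0, gain * deviation))

# A gentle shake adding about 1 m/s^2 above rest maps to amplitude ~0.5,
# which would then drive the remote phone's vibration actuator.
print(movement_to_vibration(0.0, 0.0, 10.81))
```

Keeping the mapping this simple is what preserves the "feel" for the other person: the remote vibration follows the sender's movement directly rather than a recognized gesture class.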


2016 ◽  
Author(s):  
Annemarie Bridy

When is the developer or distributor of a copying technology legally responsible for copyright infringements committed by users of that technology? Over the past twenty years or so, the development and deployment of digital copying technologies (personal computers, CD and DVD burners, iPods and other portable music devices, the Internet itself, etc.), and of tools for Internet file sharing and file distribution, have thrust that question into the center of a high-profile public debate. That debate gave rise to the most closely watched copyright case of recent years, MGM Studios Inc. v. Grokster, Ltd. The Ninth Circuit Court of Appeals had held that defendants Grokster and StreamCast, the developers and distributors of peer-to-peer file-sharing software, were shielded from copyright liability by the so-called Sony doctrine (also called the Betamax case). It interpreted that doctrine to mean that distributors of copying technology capable of commercially significant noninfringing use are shielded from liability for infringement committed by users of the technology, unless the distributors had specific knowledge of infringement, obtained at a time at which they contributed to the infringement, and failed to act upon that information. The Supreme Court unanimously reversed, holding that because Grokster and StreamCast had distributed their software with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, Sony did not protect them from liability, whether or not their software was capable of commercially significant noninfringing use. The unanimous decision in the copyright holders' favor is, obviously, a big loss for Grokster and StreamCast; its broader implications for Internet file-sharing practices and file-sharing technology, however, are much less clear. To try to understand what they might be, we rewind the tape, back to Sony in 1984.


Author(s):  
Mir Tafseer Nayeem ◽  
Mamunur Rashid Akand ◽  
Nazmus Sakib ◽  
Wasi Ul Kabir

Nowadays, many services on the internet, including email, search engines, and social networking, are provided free of charge owing to the enormous growth of web users. With the expansion of web services, denial-of-service (DoS) attacks by malicious automated programs (e.g., web bots) are becoming a serious problem for web service accounts. A HIP, or Human Interactive Proof, is a human authentication mechanism that generates and grades tests to determine whether the user is a human or a malicious computer program. Unfortunately, existing HIPs have tried to maximize the difficulty for automated programs to pass tests by increasing distortion or noise. Consequently, the tests have become difficult for legitimate users as well. There is thus a trade-off between usability and robustness in designing HIP tests. In their proposed technique, the authors try to balance readability and security by adding contextual information in the form of natural conversation without reducing the distortion and noise. In the results section, a user study involving 110 users was conducted to compare actual user views with existing state-of-the-art CAPTCHA systems, such as Google's reCAPTCHA and Microsoft's CAPTCHA, in terms of usability and security; the study found the authors' system suitable for large-scale deployment over the internet.


Author(s):  
G.Y. Fan ◽  
J.M. Cowley

In recent developments, the ASU HB5 has been modified so that the timing, positioning, and scanning of the finely focused electron probe can be entirely controlled by a host computer. This makes an asynchronous handshake possible between the HB5 STEM and the image processing system, which consists of the host computer (PDP 11/34), a DeAnza image processor (IP 5000) interfaced with a low-light-level TV camera, an array processor (AP 400), and various peripheral devices. This greatly facilitates the pattern recognition technique initiated by Monosmith and Cowley. Software called NANHB5 is under development which, instead of employing a set of photodiodes to detect strong spots on a TV screen, uses various software techniques, including on-line fast Fourier transform (FFT), to recognize patterns of greater complexity, taking advantage of the sophistication of our image processing system and the flexibility of computer software.
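As a rough illustration of the on-line FFT idea described above, the sketch below finds strong periodic spots in a digitized frame by thresholding the Fourier magnitude. The threshold, the synthetic test image, and the function name are assumptions for illustration and are not part of NANHB5.

```python
import numpy as np

def find_strong_spots(frame, rel_threshold=0.2):
    """Locate strong periodic spots in a digitized image via its FFT.

    Returns (row, col) indices in the centered Fourier magnitude that exceed
    rel_threshold times the maximum non-DC magnitude.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    magnitude = np.abs(spectrum)
    center = tuple(s // 2 for s in magnitude.shape)
    magnitude[center] = 0.0  # suppress the DC term
    return np.argwhere(magnitude > rel_threshold * magnitude.max())

# Synthetic periodic "lattice" image standing in for a digitized TV frame.
y, x = np.mgrid[0:128, 0:128]
frame = np.cos(2 * np.pi * x / 8) + np.cos(2 * np.pi * y / 16)
print(find_strong_spots(frame))
```

Logic of this kind replaces the fixed photodiode detection described above with software that can recognize spot patterns of greater complexity.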


Author(s):  
J. A. Eades

For well over two decades computers have played an important role in electron microscopy; they now pervade the whole field, as indeed they do in so many other aspects of our lives. The initial use of computers was mainly for large (as it seemed then) off-line calculations for image simulations, for example of dislocation images. Image simulation has continued to be one of the most notable uses of computers, particularly since it is essential to the correct interpretation of high-resolution images. In microanalysis, too, the computer has had a rather high profile, in this case because it has been a necessary part of the equipment delivered by manufacturers. By contrast, the use of computers for electron diffraction analysis has been slow to rise to prominence. This is not to say that there has been no activity, quite the contrary; however, it has not had such a great impact on the field.


Author(s):  
L. Fei ◽  
P. Fraundorf

Interface structure is of major interest in microscopy. With high-resolution transmission electron microscopes (TEMs) and scanning probe microscopes, it is possible to reveal the structure of interfaces at the unit-cell level, in some cases with atomic resolution. A. Ourmazd et al. proposed quantifying such observations by using vector pattern recognition to map chemical composition changes across the interface in TEM images with unit-cell resolution. The sensitivity of the mapping process, however, is limited by the repeatability of unit-cell images of perfect crystal, and hence by the amount of delocalized noise, e.g. due to ion milling or beam radiation damage. Bayesian removal of noise, based on statistical inference, can be used to reduce the amount of non-periodic noise in images after acquisition. The basic principle of Bayesian phase-model background subtraction, according to our previous study, is that the optimum (rms-error-minimizing) Fourier phases of the noise can be obtained provided the amplitude of the noise is given, while the noise amplitude can often be estimated from the image itself.
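As a rough illustration of the stated principle (an assumption-laden reading, not the authors' algorithm), the sketch below estimates a single noise amplitude from the image's own Fourier spectrum and subtracts it from each Fourier component while keeping that component's phase, treating the observed phase as the rms-error-minimizing choice when only the noise amplitude is known.

```python
import numpy as np

def subtract_phase_model_background(image):
    """Illustrative noise-amplitude subtraction in Fourier space.

    The noise amplitude is crudely estimated as the median Fourier magnitude
    (dominated by off-peak, non-periodic content); each component is then
    shrunk by that amount while its own phase is retained.
    """
    spectrum = np.fft.fft2(image)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    noise_amplitude = np.median(magnitude)  # estimated from the image itself
    cleaned = np.maximum(magnitude - noise_amplitude, 0.0) * np.exp(1j * phase)
    return np.real(np.fft.ifft2(cleaned))
```

A fuller implementation would presumably use a frequency-dependent noise amplitude rather than a single scalar, but the core step of pairing an estimated noise amplitude with the observed phase is the same.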


1989 ◽  
Vol 34 (11) ◽  
pp. 988-989
Author(s):  
Erwin M. Segal
Keyword(s):  
