IMPLEMENTASI PERMAINAN FLOW PADA PEMBANGUNAN SISTEM CAPTCHA (Implementation of the Flow Game in Building a CAPTCHA System)

2016
Vol 11 (2)
Author(s):  
Indra Setiawan ◽  
Willy Sudiarto Raharjo ◽  
Budi Susanto

The basic challenge in designing obfuscating CAPTCHAs is to make them easy enough that users are not dissuaded from attempting a solution, yet still too difficult to solve with available image-based computer vision algorithms. CAPTCHAs are widely used in web applications and have been the subject of much research. Current technology enables computers to solve image-based CAPTCHAs with high probability, so we propose another type of CAPTCHA-based authentication that cannot be solved with Optical Character Recognition yet remains easy for new users to use. We implemented the new CAPTCHA model using the FLOW game. We found that the success rate of this new system is 92.025%, the completion time is 6.3614 s, and 81.67% of users are able to solve it in less than 10 s.
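
The abstract does not include the implementation, so the following is purely an illustrative sketch of how a Flow-style CAPTCHA response could be checked server-side. It assumes a data model (square grid, fixed endpoint pairs per colour, one drawn cell path per colour submitted by the client) that is not taken from the paper.

```python
# Illustrative sketch only: the paper's source code is not published, so the data
# model (grid size, endpoint pairs, submitted paths) is an assumption.

def adjacent(a, b):
    """True if two (row, col) cells share an edge."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def validate_flow_solution(grid_size, endpoints, paths):
    """Check a submitted Flow-style CAPTCHA solution.

    grid_size -- side length of the square grid
    endpoints -- {colour: ((r1, c1), (r2, c2))} fixed endpoint cells
    paths     -- {colour: [(r, c), ...]} path the user drew for each colour
    """
    covered = set()
    for colour, (start, end) in endpoints.items():
        path = paths.get(colour, [])
        if len(path) < 2:
            return False
        # Every cell must lie on the board.
        if any(not (0 <= r < grid_size and 0 <= c < grid_size) for r, c in path):
            return False
        # The path must join the two endpoints (in either direction).
        if {path[0], path[-1]} != {start, end}:
            return False
        # Consecutive cells must be orthogonally adjacent.
        for prev, cur in zip(path, path[1:]):
            if not adjacent(prev, cur):
                return False
        # Paths may not revisit cells or cross paths of other colours.
        if len(set(path)) != len(path) or covered & set(path):
            return False
        covered |= set(path)
    # Classic Flow rule: the whole board must be filled (a variant may relax this).
    return len(covered) == grid_size * grid_size

# Example: a trivial 2x2 puzzle with two colours.
endpoints = {"red": ((0, 0), (0, 1)), "blue": ((1, 0), (1, 1))}
paths = {"red": [(0, 0), (0, 1)], "blue": [(1, 0), (1, 1)]}
print(validate_flow_solution(2, endpoints, paths))  # True
```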

2021
Vol 4
Author(s):  
Logan Froese ◽  
Joshua Dian ◽  
Carleen Batson ◽  
Alwyn Gomez ◽  
Amanjyot Singh Sainbhi ◽  
...  

Introduction: As real-time data processing is integrated with medical care for traumatic brain injury (TBI) patients, there is a requirement for devices to have digital output. However, many devices still lack the hardware needed to export real-time data in an acceptable digital format or in a continuously updating manner. This is particularly the case for many intravenous pumps and older technological systems. Accurate, digital, real-time data integration within TBI care and other fields is critical as we move towards digitizing healthcare information and integrating clinical data streams to improve bedside care. We propose to address this gap in technology by building a system that employs Optical Character Recognition through computer vision, using real-time images from a pump monitor to extract the desired real-time information.
Methods: Using freely available software and readily available technology, we built a script that extracts real-time images from a medication pump and then processes them using Optical Character Recognition to create digital text from the image. This text was then transferred to the ICM+ real-time monitoring software in parallel with other retrieved physiological data.
Results: The prototype works effectively for our device, with source code openly available to interested end-users. However, future work is required for a more universal application of such a system.
Conclusion: Advances here can improve medical information collection in the clinical environment, eliminating human error in bedside charting, and aid in data integration for biomedical research, where many complex data sets can be seamlessly integrated digitally. Our design demonstrates a simple adaptation of current technology to help with this integration.
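
The authors' released script is not reproduced here; the sketch below is a minimal example in the same spirit, assuming OpenCV (cv2) and the pytesseract wrapper for Tesseract, a camera on index 0 pointed at the pump display, and an optional caller-supplied region of interest. None of these specifics come from the paper.

```python
# Minimal sketch, not the authors' released code. Assumes OpenCV, pytesseract
# (a Tesseract OCR wrapper), and a camera on index 0 aimed at the pump display.
import cv2
import pytesseract

def read_pump_display(camera_index=0, roi=None):
    """Grab one frame, preprocess it, and return the OCR'd display text."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the camera")
    if roi is not None:                    # (x, y, w, h) of the display, if known
        x, y, w, h = roi
        frame = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding helps with the uneven backlight of small LCD displays.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 10)
    # psm 6: treat the crop as a uniform block of text, typical of a pump readout.
    return pytesseract.image_to_string(binary, config="--psm 6").strip()

if __name__ == "__main__":
    print(read_pump_display())
```

In a deployment like the one described, the returned text would then be parsed and forwarded to the monitoring software alongside the other physiological data streams.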


Compiler
2018
Vol 7 (1)
Author(s):  
Indra Hading Kurniawan ◽  
Nurcahyani Dewi Retnowati

Template matching is a simple and widely used method for recognizing patterns. The weakness of this algorithm is that the models that can be used as templates for comparison in the database are limited in shape, size, and orientation. The feature extraction algorithm addresses this limitation of template matching by mapping the characteristics of the image object to be recognized. Optical character recognition is used to translate characters in digital images into text format. Its simple implementation makes the template matching method widely used. This final project discusses color recognition in an image: the application captures a picture, identifies the colors present, and reports the result as text with percentages. The color recognition is not fully successful because of the influence of lightness, with a success rate of 15% and a failure rate of 85% when detecting a color.
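
The implementation itself is not given in the abstract; the sketch below is one common way to approach the color-identification step with OpenCV, classifying pixels into a few named colors by assumed HSV ranges and reporting each color's share of the image. The ranges, function names, and the sample file name are illustrative assumptions, not the author's code.

```python
# Illustrative sketch (the project's code is not published): classify pixels into a
# few named colors using assumed HSV ranges and report each color's percentage.
import cv2
import numpy as np

# Hue ranges are rough assumptions; real thresholds depend strongly on lighting,
# which the abstract identifies as the main source of failure.
COLOR_RANGES = {
    "red":    [((0, 70, 50), (10, 255, 255)), ((170, 70, 50), (180, 255, 255))],
    "green":  [((36, 70, 50), (85, 255, 255))],
    "blue":   [((90, 70, 50), (130, 255, 255))],
    "yellow": [((20, 70, 50), (35, 255, 255))],
}

def color_percentages(image_path):
    """Return {color name: percentage of pixels} for one image."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    total = hsv.shape[0] * hsv.shape[1]
    result = {}
    for name, ranges in COLOR_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        result[name] = 100.0 * cv2.countNonZero(mask) / total
    return result

# Example (hypothetical file name):
# print(color_percentages("sample.jpg"))
```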


Author(s):  
Rashmi Gupta ◽  
Dipti Gupta ◽  
Megha Dua ◽  
Manju Khari

Recognition is an important part of computer vision. Optical character recognition is nowadays gaining importance for the recognition of digital and handwritten documents. Devanagari is a widely used script, with more than 300 million people relying on it for their day-to-day activities, so recognition of Devanagari characters has been gaining importance in recent times. Tasks in handwritten recognition must handle the differences and alterations of Hindi characters written in offline mode. Furthermore, Hindi characters are written in different sizes, shapes, and orientations, in contrast to handwriting that is usually written along a particular baseline in a horizontal direction. Handwritten and machine-printed documents need to be recognized for applications such as bank cheque processing, library automation, publishing houses, manuscripts, Granths, and other forms and documents. In this paper, an attempt has been made to shortlist the methods and processing techniques studied so far in the field of Devanagari character recognition. The performance analysis and the results for the various techniques are given in the chapter.
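
As one concrete example of the classical feature-extraction-plus-classifier pipelines surveyed in such work, the hedged sketch below extracts HOG features from (assumed) 32x32 grayscale character crops and trains a linear SVM. The image size and the dataset loading are assumptions; no specific corpus or chapter method is implied.

```python
# Illustrative sketch of one classical pipeline (HOG features + linear SVM).
# The 32x32 grayscale Devanagari character crops and labels are assumed to be
# supplied by the caller; this is not a method taken from the surveyed chapter.
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def hog_features(images):
    """images: array of shape (n, 32, 32), grayscale character crops."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

def train_character_classifier(images, labels):
    """Train a linear SVM on HOG features and return (model, held-out accuracy)."""
    X = hog_features(images)
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, random_state=0)
    clf = LinearSVC().fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)
```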


This paper discusses license plate recognition using digital image processing, where the image of a vehicle is captured and the number plate is recognized through several stages of digital image processing. The number plate then undergoes optical character recognition (OCR), which extracts the plate text and compares it with a database containing the details of the vehicle. This allows the user to identify the type of vehicle and the identity of the person driving it, and to determine whether the vehicle is registered by comparing it against the database of registered vehicles in the area. The device consists of a camera that records real-time footage of the vehicles; a snapshot from the video is used to recognize the number plate. The processor processes the images and displays the vehicle number and the owner of the vehicle, which is achieved by comparing the recognized number with the previously stored database records. This device provides an efficient way to automate a parking system, removing the need for a human to check vehicles and issue passes.
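
A hedged sketch of the pipeline described above (plate localization, OCR, database lookup) is given below. It assumes OpenCV 4.x for the image processing, pytesseract for OCR, and a simple in-memory dictionary standing in for the vehicle registration database; the plate number and helper names are hypothetical.

```python
# Sketch of the described pipeline, not the authors' implementation.
# Assumes OpenCV 4.x, pytesseract, and a dict standing in for the registration DB.
import cv2
import pytesseract

REGISTERED = {  # hypothetical database: plate number -> owner details
    "KA01AB1234": {"owner": "Example Owner", "vehicle": "Hatchback"},
}

def find_plate(frame):
    """Return the most plate-like quadrilateral region as a grayscale crop, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.bilateralFilter(gray, 11, 17, 17), 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:10]:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:                       # quadrilateral -> candidate plate
            x, y, w, h = cv2.boundingRect(approx)
            return gray[y:y + h, x:x + w]
    return None

def recognise_vehicle(frame):
    """Return (plate number, owner record or None), or None if no plate is found."""
    plate_img = find_plate(frame)
    if plate_img is None:
        return None
    text = pytesseract.image_to_string(plate_img, config="--psm 7")
    plate = "".join(ch for ch in text if ch.isalnum()).upper()
    return plate, REGISTERED.get(plate)

# Usage: grab one frame from the parking-gate camera and look the plate up.
# cap = cv2.VideoCapture(0); ok, frame = cap.read(); print(recognise_vehicle(frame))
```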


Author(s):  
Anne Dozias ◽  
Cristian Camilo Otalora-Leguizamón ◽  
Marco Bianchetti ◽  
Maria Susana Avila-Garcia

Reproducibility is one of the big challenges in research. Lab notebooks have been used to record data, observations, and relevant remarks about the research process. Smart pens are devices that record audio and handwritten notes thanks to micro-patterned paper, and generate PDF files and audio-enriched notes (pencasts). The handwritten notes can then be processed with optical character recognition (OCR) software to generate digital documents, allowing the user to archive and access these notes more easily. However, OCR for handwriting is still a challenge in the computer vision research area. In this paper, we report the evaluation results of different OCR tools when processing handwritten notes written by 7 participants, focusing on the main elements and technical vocabulary identified in fibre optic sensors research.
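
The abstract does not state how the OCR tools were scored; as an illustration, the sketch below computes the character error rate (Levenshtein distance between an OCR output and a ground-truth transcription, normalized by the transcription length), which is one common metric for such comparisons. The example strings are hypothetical.

```python
# Sketch of a standard evaluation metric (character error rate); the paper's own
# scoring procedure may differ. Pure Python, no external dependencies.
def levenshtein(a, b):
    """Edit distance between strings a and b (dynamic programming, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def character_error_rate(ocr_output, ground_truth):
    """Lower is better; 0.0 means a perfect transcription."""
    return levenshtein(ocr_output, ground_truth) / max(len(ground_truth), 1)

# Example: score one OCR tool's output against a participant's transcription.
print(character_error_rate("fibre optc sensor", "fibre optic sensor"))  # ~0.056
```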


1997
Vol 9 (1-3)
pp. 58-77
Author(s):  
Vitaly Kliatskine ◽  
Eugene Shchepin ◽  
Gunnar Thorvaldsen ◽  
Konstantin Zingerman ◽  
Valery Lazarev

In principle, printed source material should be made machine-readable with systems for Optical Character Recognition, rather than being typed once more. Off-the-shelf commercial OCR programs tend, however, to be inadequate for lists with a complex layout. The tax assessment lists that assess most nineteenth-century farms in Norway constitute one example among a series of valuable sources which can only be interpreted successfully with specially designed OCR software. This paper considers the problems involved in the recognition of material with a complex table structure, outlining a new algorithmic model based on ‘linked hierarchies’. Within the scope of this model, a variety of tables and layouts can be described and recognized. The ‘linked hierarchies’ model has been implemented in the ‘CRIPT’ OCR software system, which successfully reads tables with a complex structure from several different historical sources.
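
The ‘linked hierarchies’ model itself is specific to CRIPT and is not specified in the abstract; purely as an illustration of the general idea, the sketch below describes a table layout as a hierarchy of nested regions whose leaf cells can additionally be linked across branches (for example, a farm row linked to the column cell holding its assessed value). This is an assumed simplification, not the authors' data model.

```python
# Illustrative data structure only: a loose sketch of describing a complex table as a
# hierarchy of regions with cross-links between leaf cells. Not the CRIPT model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Region:
    name: str
    children: List["Region"] = field(default_factory=list)
    text: Optional[str] = None                            # OCR result for leaf cells
    links: List["Region"] = field(default_factory=list)   # cross-hierarchy links

    def add(self, child: "Region") -> "Region":
        self.children.append(child)
        return child

# A hypothetical tax-assessment page: a farm row linked to its assessed-value cell.
page = Region("page")
farm_rows = page.add(Region("farm rows"))
tax_cols = page.add(Region("tax columns"))
farm = farm_rows.add(Region("row 1", text="Nordgaard"))
value = tax_cols.add(Region("row 1 / assessed value", text="120"))
farm.links.append(value)        # the link ties the farm to its assessment entry
print(farm.text, "->", farm.links[0].text)
```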


2020
Vol 2020 (1)
pp. 78-81
Author(s):  
Simone Zini ◽  
Simone Bianco ◽  
Raimondo Schettini

Rain removal from pictures taken under bad weather conditions is a challenging task that aims to improve the overall quality and visibility of a scene. The enhanced images usually constitute the input for subsequent Computer Vision tasks such as detection and classification. In this paper, we present a Convolutional Neural Network, based on the Pix2Pix model, for removing rain streaks from images, with specific interest in evaluating the results of the processing with respect to the Optical Character Recognition (OCR) task. In particular, we present a way to generate a rainy version of the Street View Text Dataset (R-SVTD) for evaluating "text detection and recognition" in bad weather conditions. Experimental results on this dataset show that our model outperforms the state of the art in terms of two commonly used image quality metrics, and that it is capable of improving the performance of an OCR model in detecting and recognising text in the wild.
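
The two image quality metrics are not named in the abstract; assuming the commonly used PSNR and SSIM, the minimal sketch below shows how a derained output could be scored against its rain-free ground truth with scikit-image. The file names are hypothetical.

```python
# Sketch of a typical evaluation step; the abstract does not name its two metrics,
# so PSNR and SSIM are assumed here. Requires OpenCV and scikit-image (>= 0.19).
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_derained(clean_path, derained_path):
    """Compare a derained image against its rain-free ground truth."""
    clean = cv2.imread(clean_path)
    derained = cv2.imread(derained_path)
    psnr = peak_signal_noise_ratio(clean, derained, data_range=255)
    ssim = structural_similarity(clean, derained, channel_axis=2, data_range=255)
    return psnr, ssim

# Example (hypothetical file names from an R-SVTD-style clean/rainy pairing):
# print(evaluate_derained("clean/img_001.png", "derained/img_001.png"))
```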

