A Counterfeit Paper Currency Recognition System Using LVQ based on UV Light

Author(s):  
Dewanto Harjunowibowo ◽  
Sri Hartati ◽  
Aris Budianto

This research aims to test a paper currency counterfeit detection system based on a Learning Vector Quantization (LVQ) neural network. The input to the system is the image of the dancer object on an Rp. 50.000,- banknote fluoresced under ultraviolet light. The banknote image data were obtained from conventional banks. The LVQ method is used to recognize whether the banknote being tested is counterfeit or genuine. The system was implemented in a visual programming language. The tested dancer feature measures 114x90 px, and its RGB and HSI color values (RGBHSI) were extracted as the input for the LVQ. The experimental results show that the system achieved 100% accuracy in detecting 20 real test cases and 96% accuracy in detecting 22 simulated test cases. The simulated cases were generated by varying the brightness of the image data. The real test cases consist of 10 counterfeit and 10 genuine banknotes; the simulated cases consist of 11 genuine and 11 counterfeit banknotes. The best settings for the system are Learning Rate = 0.01 and MaxEpoh = 10.
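The abstract does not show the classifier itself; as a rough illustration only, the following numpy sketch implements LVQ1 with one prototype per class, using the reported Learning Rate = 0.01 and 10 epochs as defaults. The function names and toy features are hypothetical, not the authors' code; in the paper, each input vector would be the RGB/HSI features extracted from the 114x90 px dancer region.

```python
import numpy as np

def train_lvq(X, y, n_epochs=10, lr=0.01, seed=0):
    """Minimal LVQ1: one prototype per class, initialized at the class mean.
    The winning prototype moves toward same-class samples and away from
    different-class samples."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = int(np.argmin(d))  # index of the winning prototype
            sign = 1.0 if classes[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])
    return protos, classes

def predict_lvq(protos, classes, X):
    """Assign each sample the class of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

For the counterfeit detection task, `X` would hold one feature vector per banknote image and `y` the genuine/counterfeit label.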

Processes ◽  
2019 ◽  
Vol 7 (7) ◽  
pp. 457 ◽  
Author(s):  
William Raveane ◽  
Pedro Luis Galdámez ◽  
María Angélica González Arrieta

Precisely detecting and locating an ear within an image is the first challenge to tackle in an ear-based biometric recognition system, and one which becomes harder under variable photographic conditions. This is in part due to the irregular shapes of human ears, but also because of variable lighting conditions and the ever-changing shape of an ear's profile projection when photographed. An ear detection system involving multiple convolutional neural networks and a detection grouping algorithm is proposed to identify the presence and location of an ear in a given input image. The proposed method matches the performance of other methods when analyzed against clean, purpose-shot photographs, reaching an accuracy upwards of 98%, but clearly outperforms them, at a rate of over 86%, when the system is subjected to non-cooperative natural images in which the subject appears in challenging orientations and photographic conditions.
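The detection grouping algorithm itself is not specified in this abstract; a common baseline for merging the overlapping boxes produced by multiple CNN detectors into single locations is greedy non-maximum suppression. The sketch below is an illustration of that general idea, not the authors' method; function names are hypothetical and boxes are assumed to be (x1, y1, x2, y2) tuples.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def group_detections(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box that overlaps it above iou_thr, repeat.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thr]
    return keep
```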


Pharmaceutics ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1860
Author(s):  
Jiří Zeman ◽  
Sylvie Pavloková ◽  
David Vetchý ◽  
Adam Staňo ◽  
Zdeněk Moravec ◽  
...  

Pharmaceutical technology offers various dosage forms that can be applied across disciplines. One of them is spherical pellets, which could be utilized as carriers in emerging second-generation detection tubes. This detection system requires carriers with a high specific surface area (SSA), which should allow better adsorption of toxic substances and detection reagents. In this study, a magnesium aluminometasilicate with high SSA was used along with various concentrations of volatile substances (menthol, camphor and ammonium bicarbonate) to further increase the carrier SSA after their sublimation. The samples were evaluated in terms of physicochemical parameters, their morphology was assessed by scanning electron microscopy, and the Brunauer–Emmett–Teller (BET) method was used to measure SSA. The samples were then impregnated with the detection reagent o-phenylenediamine-pyronine and tested with diphosgene. Only samples prepared using menthol or camphor showed red fluorescence under UV light in addition to the eye-visible red-violet color. This allowed the detection of diphosgene/phosgene at a concentration of only 0.1 mg/m3 in air for samples M20.0 and C20.0, whose SSA was higher than 115 m2/g, thus exceeding the sensitivity of the first-generation DT-12 detection tube.


The mortality rate is increasing among the growing population, and one of the leading causes is lung cancer. Early diagnosis is required to decrease the number of deaths and increase the survival rate of lung cancer patients. With advances in the medical field and its technologies, computer-aided detection (CAD) systems have played a significant role in detecting early symptoms in patients, a task that cannot be carried out manually without error. A CAD system is a detection system that combines machine learning algorithms with image processing using computer vision. In this research, a novel approach to a CAD system is presented that detects lung cancer using image processing techniques and classifies the detected nodules with a CNN. The proposed method takes a CT scan image as input, and different image processing techniques such as histogram equalization, segmentation, morphological operations and feature extraction are performed on it. A CNN-based classifier is trained to classify the nodules as cancerous or non-cancerous. The performance of the system is evaluated in terms of sensitivity, specificity and accuracy.
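Of the preprocessing steps listed, histogram equalization is the most self-contained; the following numpy sketch shows the standard CDF-remapping form for an 8-bit grayscale slice. The function name is hypothetical and this is not the authors' implementation, only an illustration of the technique.

```python
import numpy as np

def equalize_histogram(img):
    """Standard histogram equalization for an 8-bit grayscale image.
    Builds the cumulative histogram and remaps intensities so the output
    histogram is approximately flat. Assumes a non-constant image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero cumulative count
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0),
                  0, 255).astype(np.uint8)
    return lut[img]
```

On a low-contrast CT slice this stretches the occupied intensity range to the full 0-255 scale, which makes subsequent segmentation thresholds easier to pick.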


Author(s):  
Adamantios Koumpis

Classes and taxonomies of services: how can they be categorized with respect to different parameters, factors and dimensions? Why do some of them matter while others do not? How can they be organized to serve specific purposes? The major part of the chapter is devoted to the presentation of the Service Analysis Model (SAM). With its four constituent building blocks, SAM provides insight into the analysis of services; this is followed by a section devoted to the synthesis of services and the composition of new ones. The chapter closes with the presentation of a real test case implemented for a manufacturing company to improve their service supply chain.


Energies ◽  
2019 ◽  
Vol 13 (1) ◽  
pp. 116
Author(s):  
Ya-Wen Hsu ◽  
Yi-Horng Lai ◽  
Kai-Quan Zhong ◽  
Tang-Kai Yin ◽  
Jau-Woei Perng

In this study, a millimeter-wave (MMW) radar and an onboard camera are used to develop a sensor fusion algorithm for a forward collision warning system. The study proposes integrating an MMW radar and a camera to compensate for the deficiencies of relying on a single sensor and to improve frontal object detection rates. Density-based spatial clustering of applications with noise (DBSCAN) and particle filter algorithms are used in the radar-based object detection system to remove non-object noise and track the target object. Meanwhile, a two-stage vision recognition system detects and recognizes the objects in front of the vehicle. The detected objects include pedestrians, motorcycles, and cars. The spatial alignment uses a radial basis function neural network to learn the conversion relationship between the distance information of the MMW radar and the coordinate information in the image. A neural network is then utilized for object matching, and the sensor with the higher confidence index is selected as the system output. Finally, three kinds of scenario conditions (daytime, nighttime, and rainy-day) were designed to test the performance of the proposed method. The detection rate and false alarm rate of the proposed system were approximately 90.5% and 0.6%, respectively.
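The abstract names DBSCAN for removing non-object radar noise; as a rough illustration of how density-based clustering separates grouped object returns from scattered noise points, here is a minimal numpy DBSCAN sketch. The function name and parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: label each point with a cluster id; -1 marks noise.
    A point is a core point if at least min_pts points (itself included)
    lie within radius eps; clusters grow by expanding core points."""
    n = len(points)
    labels = np.full(n, -1)
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for p in range(n):
        if visited[p]:
            continue
        visited[p] = True
        neighbors = list(np.where(dists[p] <= eps)[0])
        if len(neighbors) < min_pts:
            continue  # noise unless a later cluster claims it
        labels[p] = cluster
        queue = neighbors
        while queue:
            q = queue.pop()
            if not visited[q]:
                visited[q] = True
                q_neighbors = np.where(dists[q] <= eps)[0]
                if len(q_neighbors) >= min_pts:
                    queue.extend(q_neighbors)  # q is a core point: expand
            if labels[q] == -1:
                labels[q] = cluster
        cluster += 1
    return labels
```

In a radar pipeline, `points` would be the 2D positions of radar returns in one scan; returns left with label -1 would be discarded as non-object noise.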


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3558 ◽  
Author(s):  
Miroslav Schneider ◽  
Zdenek Machacek ◽  
Radek Martinek ◽  
Jiri Koziorek ◽  
Rene Jaros

This article deals with the design and implementation of a prototype of an efficient Low-Cost, Low-Power, Low-Complexity (hereinafter L-CPC) image recognition system for person detection. The developed methods for processing, analysis and recognition are designed specifically for embedded devices (e.g., motion sensors, identification of property and other specific applications) that must comply with the requirements of intelligent building technologies. The paper describes detection methods using a static background, where the background image being compared does not change during the search for people, and a dynamic background, where the background image is continually adjusted or complemented by objects merging into the background. The results are compared with the output of the Horn-Schunck algorithm, applied using the principle of optical flow. Possible detected objects are subsequently stored and evaluated in the algorithm described. The detection results obtained with the change detection methods are then evaluated using the Saaty method in order to determine the most successful configuration of the entire detection system. Each configuration was also tested on a video sequence divided into a total of 12 story sections, in which the normal activities of people inside an intelligent building were simulated.
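The static/dynamic background idea described above can be sketched as frame differencing against a running-average background model: with alpha = 0 the background is static, while a small positive alpha lets stationary objects gradually merge into the background. Names and values here are illustrative, not the paper's code.

```python
import numpy as np

def detect_motion(frame, background, threshold=25.0, alpha=0.05):
    """Compare a grayscale frame against a background model.
    Returns a boolean foreground mask and the updated background:
    new_bg = (1 - alpha) * background + alpha * frame, an exponential
    running average, so scene changes slowly merge into the background
    when alpha > 0 and the model stays fixed when alpha == 0."""
    frame = frame.astype(float)
    mask = np.abs(frame - background) > threshold
    new_bg = (1.0 - alpha) * background + alpha * frame
    return mask, new_bg
```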


2010 ◽  
Vol 5 (6) ◽  
Author(s):  
Kalyan Kumar Debnath ◽  
Sultan Uddin Ahmed ◽  
Md. Shahjahan ◽  
Kazuyuki Murase

2018 ◽  
Vol 30 (4) ◽  
pp. 513-522 ◽  
Author(s):  
Yuichi Konishi ◽  
Kosuke Shigematsu ◽  
Takashi Tsubouchi ◽  
Akihisa Ohya

The Tsukuba Challenge is an open experimental competition held annually since 2007, in which autonomous navigation robots developed by the participants must navigate through an urban setting in which pedestrians and cyclists are present. One of the required tasks in the Tsukuba Challenge from 2013 to 2017 was to search for persons wearing designated clothing within the search area. This is a very difficult task, since it is necessary to seek out these persons in an environment that includes regular pedestrians and where the lighting changes easily with weather conditions. Moreover, the recognition system must have a light computational cost because of the limited performance of the computer mounted on the robot. In this study, we focused on a deep learning method for detecting the target persons in captured images. The developed detection system was expected to achieve high detection performance even when small-sized input images were used for deep learning. Experiments demonstrated that the proposed system achieved better performance than an existing object detection network. However, because a vast amount of training data is necessary for deep learning, a method of generating training data for the detection of target persons is also discussed in this paper.

