Anti-Raider ATM System Using MobileNetV2

2022 ◽  
Vol 9 (1) ◽  
pp. 0-0

Cash vending machines are ubiquitous, and although their technology vouches for transaction security, they are sporadically stormed by raiders. Despite escalating crime counts, raiders often flee from justice for lack of evidence. This research work proposes a computer-vision-based anti-raider ATM system. The proposed approach models images acquired from CCTV cameras against raider images and classifies them using the MobileNetV2 architecture. Once the model identifies a raider, the image is uploaded to Google Drive, where it serves as evidence for the judicial department. The proposed model is trained with several optimizers, and the results show that, among them, the Adam optimizer excels in both computation time and accuracy.
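The abstract describes a classify-then-upload pipeline. As a minimal sketch of the decision stage only, assuming a MobileNetV2-style classifier has already produced per-class probabilities (the threshold and class names below are illustrative, not from the paper):

```python
RAIDER_THRESHOLD = 0.8  # assumed operating point, not stated in the paper

def should_upload(probabilities, threshold=RAIDER_THRESHOLD):
    """Flag a frame as evidence when the 'raider' class probability
    returned by the classifier clears the threshold."""
    return probabilities.get("raider", 0.0) >= threshold

def process_frame(frame_id, probabilities, upload_queue):
    """Queue a flagged frame's identifier for upload (the Google Drive
    upload itself would happen in a separate worker)."""
    if should_upload(probabilities):
        upload_queue.append(frame_id)
    return upload_queue
```

In a real deployment the queue would feed an uploader using the Google Drive API; that part is omitted here.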

2019 ◽  
Vol 15 (2) ◽  
pp. 133-140
Author(s):  
Ramesh Bhandari ◽  
Sharad Kumar Ghimire

Automatically extracting the most conspicuous object from an image is useful and important for many computer vision tasks. This technique can improve the performance of several applications, such as object segmentation, image classification based on salient objects, and content-based image editing. In this research work, the performance of structured matrix decomposition with a contour-based spatial prior is analyzed for extracting salient objects from complex scenes. To separate the background and the salient object, a structured matrix decomposition model based on low-rank matrix recovery theory is used along with two structural regularizations. A tree-structured sparsity-inducing regularization captures image structure and enforces similar saliency values within the same object, while a Laplacian regularization enlarges the gap between the background part and the salient object part. In addition to the structured matrix decomposition model, general high-level priors and a biologically inspired contour-based spatial prior are integrated to improve the performance of saliency-related tasks. The performance of the proposed method is evaluated on two demanding datasets of complex scene images, ICOSEG and PASCAL-S. For the PASCAL-S dataset, the precision-recall curve of the proposed method starts from 0.81 and follows the top and right-hand border more closely than structured matrix decomposition, which starts from 0.79. Similarly, the structural similarity index scores, which are 0.596654 and 0.394864 without the contour-based spatial prior and 0.720875 and 0.568001 with it for the ICOSEG and PASCAL-S datasets respectively, show improved results.
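Low-rank matrix recovery solvers of the kind this decomposition builds on typically rely on an elementwise shrinkage (soft-thresholding) operator for the sparse part. The paper's exact solver is not given in the abstract, so the following is only a sketch of that standard building block:

```python
def soft_threshold(x, tau):
    """Elementwise shrinkage operator: sign(x) * max(|x| - tau, 0).
    Entries with magnitude below tau are driven to zero, which is how
    sparse-part updates in low-rank recovery solvers encourage sparsity."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def shrink_matrix(M, tau):
    """Apply the shrinkage operator to every entry of a matrix
    (represented as a list of row lists)."""
    return [[soft_threshold(v, tau) for v in row] for row in M]
```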


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1174
Author(s):  
Ashish Kumar Gupta ◽  
Ayan Seal ◽  
Mukesh Prasad ◽  
Pritee Khanna

Detection and localization of image regions that attract immediate human visual attention is currently an intensive area of research in computer vision. The capability to automatically identify and segment such salient image regions has immediate consequences for applications in computer vision, computer graphics, and multimedia. A large number of salient object detection (SOD) methods have been devised to effectively mimic the capability of the human visual system to detect salient regions in images. These methods can be broadly divided into two categories based on their feature engineering mechanism: conventional or deep learning-based. In this survey, most of the influential advances in image-based SOD from both the conventional and deep learning-based categories are reviewed in detail. Relevant saliency modeling trends, with key issues, core techniques, and the scope for future research, are discussed in the context of difficulties often faced in salient object detection. Results are presented for various challenging cases on some large-scale public datasets. Different metrics used to assess the performance of state-of-the-art salient object detection models are also covered. Some future directions for SOD are presented towards the end.


2013 ◽  
Vol 380-384 ◽  
pp. 3534-3537
Author(s):  
Li Ya Liu

Traditional methods of recognizing library identifiers based on two-dimensional code characteristics are time consuming and require a great deal of prior experience. A method for recognizing library identifiers based on computer vision technology is proposed. In this method, preprocessing such as image equalization, binarization and wavelet transform is first performed on the acquired library label images. Then, based on the structural features of the characters, the features of the library identifiers are obtained by principal component analysis (PCA). A quantum neural network model is designed to perform optimization analysis and computation on the extracted features, avoiding the traditional methods' need for extensive prior knowledge. At the same time, the neural network model is optimized, saving a large amount of computation time. The experimental results show that a recognition rate of up to 98.13% is obtained with this method. With its high recognition speed, the method can meet the needs of practical systems.
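The PCA step reduces the character features to their principal directions. As a minimal, dependency-free illustration of that idea (not the paper's implementation), here is the first principal component of 2-D feature points estimated by power iteration on the covariance matrix:

```python
import math

def principal_component(points, iters=200):
    """Estimate the first principal component of 2-D points via power
    iteration on the 2x2 covariance matrix of the centered data."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Entries of the 2x2 covariance matrix.
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    vx, vy = 1.0, 0.0  # initial direction
    for _ in range(iters):
        nx = cxx * vx + cxy * vy   # multiply by the covariance matrix
        ny = cxy * vx + cyy * vy
        norm = math.hypot(nx, ny)
        vx, vy = nx / norm, ny / norm  # renormalize each iteration
    return vx, vy
```

For points spread along the line y = x, the estimate converges to roughly (0.707, 0.707), the expected dominant direction.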


Author(s):  
Péter Troll ◽  
Károly Szipka ◽  
Andreas Archenti

The research work in this paper was carried out to reach advanced positioning capabilities of unmanned aerial vehicles (UAVs) for indoor applications. The paper includes the design of a quadcopter and the implementation of a control system capable of positioning the quadcopter indoors using an onboard visual pose estimation system, without the help of GPS. The project also covered the design and implementation of the quadcopter hardware and the control software. The developed hardware enables the quadcopter to lift at least 0.5 kg of additional payload. The system was developed on a Raspberry Pi single-board computer in combination with a PixHawk flight controller. The OpenCV library was used to implement the necessary computer vision. The open-source software solution was developed in the Robot Operating System (ROS) environment; it performs sensor reading and communication with the flight controller while recording data about its operation and transmitting those data to the user interface. For vision-based position estimation, pre-positioned printed markers were used. The markers were generated with ArUco coding, which allows the current position and orientation of the quadcopter to be determined exactly with the help of computer vision. The resulting data were processed in the ROS environment. A LiDAR with the Hector SLAM algorithm was used to map the objects around the quadcopter. The project also deals with the necessary camera calibration. The fusion of signals from the camera and from the IMU (Inertial Measurement Unit) was achieved using an Extended Kalman Filter (EKF). The evaluation of the completed positioning system was performed with an OptiTrack optical multi-camera external measurement system. The introduced evaluation method has enough precision to be used to investigate the enhancement of the positioning performance of quadcopters, as well as to fine-tune the parameters of the controller and filtering approach.
The payload capacity allows autonomous material handling indoors. Based on the experiments, the positioning system is accurate enough to be suitable for industrial applications.
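The paper fuses camera (marker) and IMU signals with an EKF. To give a flavor of the mechanics, here is a deliberately simplified scalar (one-axis, linear) Kalman predict/update step, with assumed noise variances; the actual system estimates full 6-DOF pose and is nonlinear:

```python
def kalman_update(x, p, u, z, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x, p : previous position estimate and its variance
    u    : IMU-integrated displacement since the last step (prediction)
    z    : position measured from the ArUco marker by the camera
    q, r : assumed process and measurement noise variances
    """
    # Predict: dead-reckon with the IMU displacement.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the camera measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Repeatedly feeding a stationary marker measurement pulls the estimate toward the measured position while the variance shrinks, which is the behavior the fusion relies on.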


The expanded use of digital screens in the workplace and home has brought about a variety of health concerns. Many people who use devices with screens, such as computers, tablets and mobile phones, report a high level of occupation-related complaints and symptoms, including visual fatigue and stress. The complex of eye and vision problems related to such near-distance screen use is called "computer vision syndrome". In this research work, we study the flow level of a user while using a smartphone. The study of the flow level depends chiefly on the eye activity of the user. The data presented below were carefully recorded by examining eye activity, including pupil size, blink rate, and blink duration. The purpose of this study is to understand the connection between flow level and eye activity. A clear understanding of this connection could prove very useful in the computer vision field, and it can also help in understanding the visual fatigue caused by digital media.
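Two of the measures named above, blink rate and blink duration, are simple to derive once blink intervals have been extracted from the eye-tracking stream. A small illustrative helper, assuming blinks are available as (start, end) timestamps in seconds (the paper's actual recording pipeline is not specified):

```python
def blink_metrics(blinks, session_seconds):
    """Compute blink rate (blinks per minute) and mean blink duration
    (seconds) from a list of (start, end) timestamps in seconds."""
    if not blinks:
        return 0.0, 0.0
    rate = len(blinks) / (session_seconds / 60.0)
    mean_duration = sum(end - start for start, end in blinks) / len(blinks)
    return rate, mean_duration
```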


2021 ◽  
Vol 13 (22) ◽  
pp. 12513
Author(s):  
Ikram Ullah ◽  
Munam Ali Shah ◽  
Abid Khan ◽  
Carsten Maple ◽  
Abdul Waheed ◽  
...  

Preserving location privacy is an increasingly essential concern in Vehicular Ad hoc Networks (VANETs). Vehicles broadcast beacon messages in an open form that contain information including vehicle identity, speed, location, and other headings. An adversary may track the various locations visited by a vehicle using sensitive information transmitted in beacons, such as vehicle identity and location. By matching the vehicle identity used in beacon messages at various locations, an adversary can learn the location history of a vehicle, compromising the privacy of the driver. In existing research work, pseudonyms are used in place of the actual vehicle identity in the beacons. Pseudonyms should be changed regularly to safeguard the location privacy of vehicles; however, simply changing pseudonyms does not always provide location privacy. Existing schemes based on mix zones operate efficiently in higher traffic environments but fail to provide privacy at lower vehicle traffic densities. In this paper, we address the problem of location privacy in diverse vehicle traffic densities. We propose a new Crowd-based Mix Context (CMC) privacy scheme that provides location privacy as well as identity protection across vehicle traffic densities. The pseudonym changing process utilizes road context information, such as speed, direction and the number of neighbors in transmission range, for the anonymisation of vehicles, adaptively updating pseudonyms based on the number of a vehicle's neighbors in the vicinity. We conduct formal modeling and specification of the proposed scheme using High-Level Petri Nets (HLPN). Simulation results validate the effectiveness of CMC in terms of location anonymisation, the probability of vehicle traceability, computation time (cost) and the effect on vehicular applications.
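The core idea, changing pseudonyms only when enough context-similar neighbors are nearby to make vehicles indistinguishable, can be sketched as a decision function. The neighbor count and similarity tolerances below are illustrative assumptions, not the paper's parameters:

```python
import secrets

def should_change_pseudonym(neighbors, speed, direction, k=3,
                            speed_tol=5.0, dir_tol=15.0):
    """Change the pseudonym only when at least k neighbors have a speed
    and heading close enough to this vehicle's that the vehicles are
    indistinguishable after the change."""
    similar = [n for n in neighbors
               if abs(n["speed"] - speed) <= speed_tol
               and abs(n["direction"] - direction) <= dir_tol]
    return len(similar) >= k

def new_pseudonym():
    """Draw a fresh random pseudonym (16-character hex token)."""
    return secrets.token_hex(8)
```

In sparse traffic the function returns False, which reflects the abstract's point that a change in low density would not actually confuse a tracking adversary.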


Author(s):  
Priyank Jain ◽  
Meenu Chawla ◽  
Sanskar Sahu

Identifying a person from an image is a topic of considerable interest in the modern world. There are many ways to achieve this. This research work describes various technologies available in the open-computer-vision (OpenCV) library and a methodology for implementing them using Python. Haar cascades are used to detect the face, and eigenfaces, fisherfaces, and local binary pattern histograms are used for face recognition. The results shown are followed by a discussion of the challenges encountered and their solutions.
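One of the recognizers mentioned, the local binary pattern (LBP) histogram method, is built from a very simple per-pixel descriptor. As a self-contained sketch of that descriptor (the histogram aggregation and matching steps are omitted):

```python
def lbp_code(patch):
    """Compute the basic 3x3 local binary pattern code for the center
    pixel: each of the 8 neighbors contributes a bit that is set when the
    neighbor is >= the center, read clockwise from the top-left. LBP
    histograms collect these codes over image cells."""
    center = patch[1][1]
    # Clockwise neighbor order starting at the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code
```

A flat patch yields code 255 (every neighbor ties the center), while a bright isolated center yields 0; textured regions fall in between, which is what makes the histogram discriminative.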


1999 ◽  
Vol 11 (2) ◽  
pp. 87-87
Author(s):  
Shunichiro Oe

The widely used term Computer Vision applies to cases in which computers are substituted for human visual information processing. As real-world objects, except for characters, symbols, figures and photographs created by people, are 3-dimensional (3-D), their two-dimensional (2-D) images obtained by camera are produced by compressing 3-D information into 2-D. Many methods of 2-D image processing and pattern recognition have been developed and widely applied to industrial and medical processing, etc. Research enabling computers to recognize 3-D objects using 3-D information extracted from 2-D images has been carried out in intelligent robotics. Many techniques have been developed and some applied practically in scene analysis and 3-D measurement. These practical applications are based on image sensing, image processing, pattern recognition, image measurement, extraction of 3-D information, and image understanding. New techniques are constantly appearing. The title of this special issue is Vision, and it features 8 papers ranging from basic computer vision theory to industrial applications. These papers include the following: Kohji Kamejima proposes a method to detect self-similarity in random image fields, the basis of human visual processing. Akio Nagasaka et al. developed a way to identify a real scene in real time using run-length encoding of video feature sequences; this technique will become a basis for active video recording and new robotic machine vision. Toshifumi Honda presents a method for the visual inspection of solder joints by 3-D image analysis, a very important issue in the inspection of printed circuit boards. Saburo Okada et al. contribute a new technique for the simultaneous measurement of shape and normal vector for specular objects. These methods are all useful for obtaining 3-D information. Masato Nakajima presents a human face identification method for security monitoring using 3-D gray-level information. Kenji Terada et al. propose a method for automatically counting passing people using image sensing. These two technologies are very useful in access control. Yoji Ogawa presents a new image processing method for automatic welding in turbid water under a non-preparatory environment. Liu Wei et al. develop a method for the detection and management of cutting-tool wear using visual sensors. We are certain that all of these papers will contribute greatly to the development of vision systems in robotics and mechatronics.


Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 52
Author(s):  
Luiz F. P. Oliveira ◽  
António P. Moreira ◽  
Manuel F. Silva

The constant advances in agricultural robotics aim to overcome the challenges imposed by population growth, accelerated urbanization, the high competitiveness of high-quality products, environmental preservation and a lack of qualified labor. In this sense, this review paper surveys the main existing applications of agricultural robotic systems for the execution of land preparation before planting, sowing, planting, plant treatment, harvesting, yield estimation and phenotyping. In general, all robots were evaluated according to the following criteria: their locomotion system, their final application, whether they have sensors, a robotic arm and/or computer vision algorithms, their development stage, and the country and continent to which they belong. After evaluating these shared characteristics, in order to expose research trends, common pitfalls and the characteristics that hinder commercial development, and to discover which countries are investing in Research and Development (R&D) in these technologies, four major areas needing future research to advance the state of the art in smart agriculture were highlighted: locomotion systems, sensors, computer vision algorithms and communication technologies. The results of this research suggest that investment in agricultural robotic systems makes it possible to achieve both short-term objectives, such as harvest monitoring, and long-term objectives, such as yield estimation.


Translating the ongoing progression of abdominal aortic aneurysm (AAA) growth and remodeling information into predictive treatment requires a meaningful computational modeling system. Moreover, an AAA is fatal if it ruptures, so effective treatment is needed. The aim of this research work is to develop an algorithm that focuses on the accurate detection of AAA images. In the proposed work, the input AAA images are preprocessed to transform the RGB format into a grayscale image using an adaptive filter, and the pixels corrupted by noise are also identified. Watershed segmentation is then applied before extracting the highlighted features from the AAA images. The features of the AAA are extracted by a genetic algorithm. After extraction, the best features are selected using particle swarm optimization, and finally a deep neural network classifier is applied for classification and recognition. The proposed system is appropriate for accomplishing our aim of predicting AAA progress and estimating rupture vulnerability. The performance of the system is measured using accuracy, precision, F-score and computation time. Comparative analysis of the outcomes shows the significant performance of the proposed approach over the existing SVM and CNN classifiers.
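The first preprocessing step named above, converting RGB to grayscale, is commonly done with the standard ITU-R BT.601 luminance weights; the paper's adaptive-filter denoising is a separate step and is not reproduced here. A minimal sketch:

```python
def rgb_to_gray(pixel):
    """Convert one (R, G, B) pixel to a grayscale intensity using the
    standard BT.601 luminance weights (0.299, 0.587, 0.114)."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def grayscale(image):
    """Convert an image, given as a list of rows of (R, G, B) tuples,
    to a grayscale intensity map."""
    return [[rgb_to_gray(p) for p in row] for row in image]
```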

