Computer Vision Based Virtual Sketch Using Detection

Author(s):  
Santosh Dhaigude

Abstract: In today's world, during the pandemic, online learning is the only way to study. Online learning makes students more curious and lets them decide their own learning path, but to pass a course or exam they still need to take time to study and be disciplined in their dedication. Online learning also has many barriers: students' grasp of material has weakened because each student was used to relying on teachers and offline classes. Virtual writing and control systems have been a challenging research area in image processing and pattern recognition in recent years. They contribute greatly to the advancement of automation and can improve the interface between man and machine in numerous applications. Several research works have focused on new techniques and methods that reduce processing time while providing higher recognition accuracy. Given real-time webcam data, this Jamboard-like Python application uses the OpenCV library to track an object of interest (a human palm/finger in this case) and allows the user to draw by moving the finger, which makes it both engaging and interesting to draw simple things. Keywords: Detection, Hand landmark, Keypoints, Computer vision, OpenCV
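The core drawing loop this abstract describes can be sketched without a webcam or OpenCV: assume a hand tracker already yields one fingertip coordinate per frame, so the application only has to connect successive points on a canvas. A minimal stand-in (the `draw_stroke` helper, the character canvas, and the sample points are hypothetical, not from the paper):

```python
def draw_stroke(canvas, points, ink="#"):
    """Connect successive tracked fingertip positions with line segments."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # Interpolate enough steps to leave no gaps between the two points.
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(steps + 1):
            x = round(x0 + (x1 - x0) * i / steps)
            y = round(y0 + (y1 - y0) * i / steps)
            canvas[y][x] = ink
    return canvas

# Toy 10x5 "frame" standing in for an OpenCV image buffer.
canvas = [[" "] * 10 for _ in range(5)]
draw_stroke(canvas, [(0, 0), (9, 4)])
```

In the real application the point list would come from per-frame hand-landmark detection, and `canvas` would be an image drawn with OpenCV primitives.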

Data ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 12
Author(s):  
Helder F. Castro ◽  
Jaime S. Cardoso ◽  
Maria T. Andrade

The ever-growing capabilities of computers have enabled pursuing Computer Vision through Machine Learning (i.e., MLCV). ML tools require large amounts of information to learn from (ML datasets). These are costly to produce but have received reduced attention regarding standardization. This prevents the cooperative production and exploitation of these resources, impedes countless synergies, and hinders ML research. No global view exists of the MLCV dataset tissue. Acquiring it is fundamental to enable standardization. We provide an extensive survey of the evolution and current state of MLCV datasets (1994 to 2019) for a set of specific CV areas as well as a quantitative and qualitative analysis of the results. Data were gathered from online scientific databases (e.g., Google Scholar, CiteSeerX). We reveal the heterogeneous plethora that comprises the MLCV dataset tissue; their continuous growth in volume and complexity; the specificities of the evolution of their media and metadata components regarding a range of aspects; and that MLCV progress requires the construction of a global standardized (structuring, manipulating, and sharing) MLCV “library”. Accordingly, we formulate a novel interpretation of this dataset collective as a global tissue of synthetic cognitive visual memories and define the immediately necessary steps to advance its standardization and integration.


2015 ◽  
Vol 4 (8) ◽  
pp. 46 ◽  
Author(s):  
Pedro Ortiz Coder

<p>New techniques in graphical heritage documentation have been improving recently. Modern photogrammetry and laser scanning are techniques of good quality for those purposes. In this document, we explain an easy photogrammetric method which permits obtaining accurate results. It is important to separate it from other methods based on computer vision with less accuracy. The photogrammetry solution is applied in this test through pictures taken from a UAV (Unmanned Aerial Vehicle) and used on an archaeological site in Extremadura.</p>


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 690 ◽  
Author(s):  
Jinsong Zhu ◽  
Wei Li ◽  
Da Lin ◽  
Ge Zhao

A novel method of near-field computer vision (NFCV) was developed to monitor the jet trajectory during the jetting process and to precisely predict the falling-point position of the jet trajectory. By means of a high-resolution webcam, the NFCV sensor device collected near-field images of the jet trajectory. Preprocessing of the collected images was carried out, including squint image correction, noise elimination, and jet trajectory extraction. The features of the jet trajectory in the processed image were then extracted: start-point slope (SPS), end-point slope (EPS), and overall trajectory slope (OTS), based on the proposed mean position method. A multiple regression jet trajectory range prediction model was established from these trajectory characteristics and the reliability of the model was verified. The results show that the accuracy of the prediction model is not less than 94% and the processing time is less than 0.88 s, which satisfies the requirements of real-time online jet trajectory monitoring.
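The three slope features can be illustrated with a short sketch. The definitions below are assumptions (SPS/EPS from the first/last few points, OTS from the mean positions of the two halves of the trajectory, in the spirit of the "mean position method"); the paper's multiple regression step is omitted:

```python
def slope(p, q):
    """Slope (dy/dx) between two trajectory points."""
    (x0, y0), (x1, y1) = p, q
    return (y1 - y0) / (x1 - x0)

def trajectory_features(points, k=3):
    """Extract SPS, EPS, and OTS from an extracted jet trajectory.

    Assumed definitions: SPS/EPS use the first/last k points; OTS is the
    slope between the mean positions of the two halves of the trajectory.
    """
    def mean_pos(pts):
        return (sum(x for x, _ in pts) / len(pts),
                sum(y for _, y in pts) / len(pts))

    sps = slope(points[0], points[k - 1])
    eps = slope(points[-k], points[-1])
    half = len(points) // 2
    ots = slope(mean_pos(points[:half]), mean_pos(points[half:]))
    return sps, eps, ots

# A perfectly straight toy trajectory: all three slopes should agree.
features = trajectory_features([(i, 2 * i) for i in range(6)])
```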


Author(s):  
Sasa Zhu

In this study, behavior sequence analysis was applied to analyze the behavior transformation patterns of students' online study after the introduction of animation design into the course International Trade Practice. As an effective method for studying online learning behavior, behavior sequence analysis can track learners' online learning paths and describe their behavior sequences so as to explore their learning habits. Based on the analysis of 240 students' online learning log data, it was found that multiple behavior sequences are significantly related to learning effect. In this paper, behavior sequence analysis was adopted to analyze the learning effect of International Trade Practice after the introduction of animation design. The result shows that the introduction of animation design can enhance students' learning initiative.
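The first step of such an analysis, counting lag-1 transitions between coded behaviors in a learning log, can be sketched as follows (the behavior codes and the sample log are hypothetical; the significance testing of sequences is omitted):

```python
from collections import Counter

def transition_counts(sequence):
    """Count lag-1 behavior transitions: the starting point of
    behavior sequence analysis on coded log data."""
    return Counter(zip(sequence, sequence[1:]))

# Hypothetical coded log: V = watch video, Q = quiz, D = discussion.
log = ["V", "V", "Q", "V", "Q", "D"]
counts = transition_counts(log)
```

From these counts, a full analysis would compute adjusted residuals (z-scores) per transition to decide which sequences are significant.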


2020 ◽  
Author(s):  
Rafael Costa Fernandes ◽  
Paulo Sergio Silva ◽  
Felipe Ieda Fazanaro ◽  
Diego Paolo Ferruzzo Correa

This work discusses the development of a hybrid estimation algorithm based on computer vision and microelectromechanical system (MEMS) sensors. A mathematical environment was developed to simulate the dynamics of the quadrotor and its sensors; 3D simulation software was also developed, simulating an on-board camera. The results obtained were compared to a TRIAD/MEMS attitude and position estimation technique. A forty-fold increase in precision was shown, at the cost of five times additional computational processing time.
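The TRIAD baseline compared against here is a classical deterministic attitude estimator; a plain-Python sketch is below (vector math written out by hand; a real implementation would use NumPy). Given two reference-frame vectors and their body-frame measurements, it returns the rotation matrix relating the frames:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    """Unit vector in the direction of a."""
    m = sum(x * x for x in a) ** 0.5
    return tuple(x / m for x in a)

def triad(v1, v2):
    """Orthonormal triad built from two non-parallel vectors."""
    t1 = norm(v1)
    t2 = norm(cross(v1, v2))
    t3 = cross(t1, t2)
    return (t1, t2, t3)

def triad_attitude(r1, r2, b1, b2):
    """TRIAD: rotation matrix A (b = A r) from reference vectors r1, r2
    and their body-frame measurements b1, b2; A = [b-triad][r-triad]^T."""
    bt, rt = triad(b1, b2), triad(r1, r2)
    return [[sum(bt[k][i] * rt[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

# Sanity check: identical frames must give the identity rotation.
I = triad_attitude((1, 0, 0), (0, 1, 0), (1, 0, 0), (0, 1, 0))
```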


Author(s):  
Yung-Sheng Chen ◽  
Kun-Li Lin

Eye–hand coordination (EHC) is of great importance in the research areas of human visual perception, computer vision and robotic vision. A computer-using robot (CUBot) is designed for investigating the EHC mechanism, and its implementation is presented in this paper. The CUBot possesses the ability to operate a computer with a mouse like a human being. Based on the three phases of a person using a computer with a mouse, i.e. watching the screen, recognizing the graphical objects on the screen, and controlling the mouse so that the cursor approaches the target, our CUBot can perceive information merely through its vision and control the mouse with its robotic hand, without any physical data connection to the operated computer. The CUBot is mainly composed of a “Mouse-Hand” for operating the mouse and a “mind” for realizing object perception, cursor tracking, and EHC. Two experiments, used to test the ability of our EHC algorithm and the perception of the CUBot, confirm the feasibility of the proposed approach.
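The third phase, steering the cursor toward the target, can be sketched as a simple visual feedback loop: look at the cursor, command a hand movement proportional to the remaining error, repeat. The gain and the coordinates below are illustrative, not from the paper:

```python
def step_toward(cursor, target, gain=0.5):
    """One eye-hand coordination cycle: observe the cursor position
    (vision), then command a proportional move toward the target (hand)."""
    cx, cy = cursor
    tx, ty = target
    return (cx + gain * (tx - cx), cy + gain * (ty - cy))

cursor, target = (0.0, 0.0), (100.0, 60.0)
for _ in range(20):          # each iteration = one perceive-act cycle
    cursor = step_toward(cursor, target)
```

With gain 0.5 the residual error halves each cycle, so the cursor converges to the target geometrically.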


2019 ◽  
Author(s):  
David Herzig ◽  
Christos T Nakas ◽  
Janine Stalder ◽  
Christophe Kosinski ◽  
Céline Laesser ◽  
...  

BACKGROUND Quantification of dietary intake is key to the prevention and management of numerous metabolic disorders. Conventional approaches are challenging, laborious, and suffer from lack of accuracy. The recent advent of depth-sensing smartphones in conjunction with computer vision has the potential to facilitate reliable quantification of food intake. OBJECTIVE To evaluate the accuracy of a novel smartphone application combining depth-sensing hardware with computer vision to quantify meal macronutrient content. METHODS The application ran on a smartphone with a built-in depth sensor applying structured light (iPhone X) and estimated the weight, macronutrient (carbohydrate, protein, fat), and energy content of 48 randomly chosen meals (breakfast, cooked meals, snacks) encompassing 128 food items. Reference weight was generated by weighing individual food items using a precision scale. The study endpoints were fourfold: i) error of estimated meal weight; ii) error of estimated meal macronutrient content and energy content; iii) segmentation performance; and iv) processing time. RESULTS Mean±SD absolute error of the application's estimate was 35.1±42.8 g (14.0±12.2%) for weight, 5.5±5.1 g (14.8±10.9%) for carbohydrate content, 2.4±5.6 g (13.0±13.8%) for protein content, 1.3±1.7 g (12.3±12.8%) for fat content, and 41.2±42.5 kcal (12.7±10.8%) for energy content. While estimation accuracy was not affected by the viewing angle, the type of meal mattered, with slightly worse performance for cooked meals compared to breakfast and snacks. Segmentation required adjustment for 7 out of 128 items. Mean±SD processing time across all meals was 22.9±8.6 s. CONCLUSIONS The present study evaluated the accuracy of a novel smartphone application with an integrated depth-sensing camera and found high accuracy in food estimation across all macronutrients. This was paralleled by high segmentation performance and low processing time, corroborating the high usability of this system.
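The Mean±SD absolute errors reported above correspond to a standard computation, sketched here (the function name and the sample numbers are illustrative, not study data):

```python
def error_stats(estimates, references):
    """Mean and sample SD of the absolute error between paired
    estimated and reference values."""
    errs = [abs(e - r) for e, r in zip(estimates, references)]
    n = len(errs)
    mean = sum(errs) / n
    sd = (sum((x - mean) ** 2 for x in errs) / (n - 1)) ** 0.5
    return mean, sd

# Toy example: two estimated vs. reference weights in grams.
mean_g, sd_g = error_stats([10.0, 12.0], [11.0, 15.0])
```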


Author(s):  
Sangamesh Hosgurmath ◽  
Viswanatha Vanjre Mallappa ◽  
Nagaraj B. Patil ◽  
Vishwanath Petli

Face recognition is one of the important biometric authentication research areas for security purposes in fields such as pattern recognition and image processing. However, human face recognition remains a major problem for machine learning and deep learning techniques, since input images vary with people's poses, lighting and illumination conditions, expressions, and ages, which makes the face recognition process poor in accuracy. In the present research, the resolution of the image patches is reduced by the max pooling layer in a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to the optimization using the CNN in LCDRC, the distance ratio between classes is maximized and the distance of features within a class is reduced. The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, whereas traditional LCDRC achieved 83.35% and 77.70%, on the ORL and YALE databases respectively for training number 8 (i.e. 80% training and 20% testing data).
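The max pooling step the abstract relies on can be illustrated directly: a 2×2 max pool with stride 2 halves the patch resolution while keeping the strongest response in each window (the sample patch is arbitrary):

```python
def max_pool2d(image, k=2):
    """k x k max pooling with stride k over a 2D list of values:
    reduces resolution while keeping the maximum of each patch."""
    h, w = len(image), len(image[0])
    return [[max(image[i + di][j + dj]
                 for di in range(k) for dj in range(k))
             for j in range(0, w - k + 1, k)]
            for i in range(0, h - k + 1, k)]

patch = [[1, 3, 2, 0],
         [4, 2, 1, 5],
         [0, 1, 9, 2],
         [3, 2, 4, 8]]
pooled = max_pool2d(patch)   # 4x4 input -> 2x2 output
```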


2011 ◽  
Vol 403-408 ◽  
pp. 13-19 ◽  
Author(s):  
Sonali Bhadoria ◽  
Meenakshi Madugunki ◽  
C.G. Dethe ◽  
Preeti Aggarwal

Image retrieval has been one of the most interesting and vivid research areas in the field of computer vision over the last decades. Content-Based Image Retrieval (CBIR) systems are used to automatically index, search, retrieve, and browse image databases. Various features can be extracted from an image, each giving different performance in retrieving images. In this paper we compare the effect of using different features on the same database to implement a CBIR system, and we analyse the retrieval performance for each feature. We compare individual features as well as combinations of them to improve performance, and we also compare the effect of different matching techniques on the retrieval process.
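One of the simplest feature/matching pairs such a comparison would include is a normalized global intensity histogram matched by L1 distance. A sketch (the bin count, function names, and toy images are illustrative, not from the paper):

```python
def gray_histogram(image, bins=4, max_val=256):
    """Normalized global intensity histogram: a simple CBIR feature."""
    pixels = [p for row in image for p in row]
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    return [h / len(pixels) for h in hist]

def l1_distance(f, g):
    """Histogram matching by L1 distance; smaller means more similar."""
    return sum(abs(a - b) for a, b in zip(f, g))

query = [[0, 64], [128, 192]]   # toy 2x2 grayscale "image"
flat  = [[0, 0], [0, 0]]
d = l1_distance(gray_histogram(query), gray_histogram(flat))
```

In a full CBIR system the database images would be indexed by their feature vectors and ranked by this distance against the query's.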

