Development of GIS Switch State Judgment System Based on Image Recognition

2021 ◽  
Vol 2065 (1) ◽  
pp. 012009
Author(s):  
WenHan Zhao ◽  
Feng Wen ◽  
Chen Han ◽  
Zhoujian Chu ◽  
Qingyue Yao ◽  
...  

Abstract Because the GIS isolation/grounding switch opens and closes quickly, manual observation is difficult, making it hard to judge the current switch state. This paper proposes an OpenCV-based image identification algorithm that locates the position of the switch's movable contact during the opening and closing process of the isolating switch and thereby judges its state. The system uses the Raspberry Pi 4B as its main hardware core: a server drives a CMOS camera, collects image information through the GIS optical observation window, performs simple processing, and transmits the images over UDP to the Raspberry Pi 4B, which acts as the upper computer and applies an OpenCV-based target detection algorithm to track the current position of the isolation/grounding switch contact and determine the current opening or closing state.
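
The abstract does not spell out the OpenCV routine used, so the following is only a minimal sketch of one plausible implementation of the tracking step: template matching against an assumed reference image of the movable contact, with the open/closed decision made from the matched vertical position. The template file, score threshold, and end-state coordinates are all assumptions.

```python
# Minimal sketch (not the authors' exact code): locate the movable contact of the
# isolating switch in a frame from the optical observation window via OpenCV
# template matching, then classify open/closed from its vertical position.
import cv2

CONTACT_TEMPLATE = cv2.imread("contact_template.png", cv2.IMREAD_GRAYSCALE)  # assumed reference image
OPEN_Y, CLOSED_Y = 40, 200   # assumed pixel rows of the two end positions

def contact_state(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, CONTACT_TEMPLATE, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    if score < 0.6:                       # contact not found reliably
        return "unknown", score
    y = top_left[1]                       # vertical position of the best match
    state = "closed" if abs(y - CLOSED_Y) < abs(y - OPEN_Y) else "open"
    return state, score
```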

Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1665
Author(s):  
Jakub Suder ◽  
Kacper Podbucki ◽  
Tomasz Marciniak ◽  
Adam Dąbrowski

The aim of the paper was to analyze effective solutions for accurate lane detection on roads. We focused on effective detection of airport runways and taxiways in order to drive a light-measurement trailer correctly. Three techniques for video-based line extraction were used, suited to specific environmental conditions: (i) line detection using edge detection with a Scharr mask and the Hough transform, (ii) finding the optimal path using a hyperbola-fitting line detection algorithm based on edge detection, and (iii) detection of horizontal markings using image segmentation in the HSV color space. The developed solutions were tuned and tested on embedded devices such as the Raspberry Pi 4B and NVIDIA Jetson Nano.
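
As an illustration of technique (i), the sketch below combines a Scharr gradient mask with a probabilistic Hough transform in OpenCV; the edge threshold and Hough parameters are assumptions rather than the values tuned in the paper.

```python
# Sketch of technique (i): Scharr edge mask followed by a probabilistic Hough
# transform (parameter values are illustrative assumptions).
import cv2
import numpy as np

def detect_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Scharr(gray, cv2.CV_32F, 1, 0)          # horizontal gradient
    gy = cv2.Scharr(gray, cv2.CV_32F, 0, 1)          # vertical gradient
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    _, edges = cv2.threshold(edges, 80, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else lines.reshape(-1, 4)
```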


2021 ◽  
Vol 4 (3) ◽  
pp. 11-18
Author(s):  
Khakimjon Zaynidinov ◽  
◽  
Odilbek Askaraliyev

The article discusses the selection of parameters for an algorithm, developed by the authors using independent substitution methods, for identifying binary data arrays entering a control system. Based on an analysis of non-cryptographic hash function algorithms, a hash function based on the linear matching method was selected as the basis for the independent substitution methods. Simplified schemes of the algorithms developed for creating and comparing identifiers using a set of basic hash functions are given. An array of binary data was selected, along with appropriate values for the size of the divided blocks and the number of basic hash functions used for the independent substitutions. Binary data arrays in information systems integrated into the management system were selected for the purpose of intelligent processing of incoming data, and the properties of the data arrays entering the integrated systems are studied. The authors conducted experimental tests in the selected direction and present similarity-assessment measurements for various parameters of the identification algorithm. In addition, experiments were conducted on the object of study using the selected mathematical model, based on the analytical conclusions. Initiator elements are studied and analyzed using a set of hash functions. An algorithm for comparing the selected identifiers has been developed, and a generation algorithm has been developed to demonstrate and test the proposed solution. The algorithms and the methods for selecting binary data arrays using the hash function were experimentally tested, and the resulting indicators are reported. Based on the results obtained, the analytical conclusions and proposed solutions of the research work were validated.
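
The article does not give the exact hash family or block parameters, so the sketch below only illustrates the general scheme it describes: split a binary array into fixed-size blocks, map each block through a small set of basic hash functions acting as independent substitutions, and compare two identifiers by the fraction of matching values. The block size, number of hash functions, and linear constants are illustrative assumptions.

```python
# Illustrative sketch only: block-wise identifiers built from a small family of
# linear hash functions, compared by the fraction of matching block hashes.
import zlib

BLOCK_SIZE = 64          # bytes per block (assumed)
NUM_HASHES = 4           # number of basic hash functions (assumed)
PRIME = 2_147_483_647    # modulus for the linear hash family (assumed)

def _linear_hash(block: bytes, a: int, b: int) -> int:
    # Linear-congruence style hash over a CRC32 digest of the block.
    return (a * zlib.crc32(block) + b) % PRIME

def identifier(data: bytes):
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [tuple(_linear_hash(blk, a, a + 1) for a in range(3, 3 + NUM_HASHES))
            for blk in blocks]

def similarity(id_a, id_b) -> float:
    matches = sum(1 for x, y in zip(id_a, id_b) if x == y)
    return matches / max(len(id_a), len(id_b), 1)
```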


Author(s):  
Gautham G ◽  
Deepika Venkatesh ◽  
A. Kalaiselvi

In recent years, with traffic density increasing every year, it has become a hassle for drivers in metropolitan cities to maintain their lane and speed on the road. Drivers waste time and effort idling their cars in traffic conditions and become easily frustrated when trying to hold their path amid the resulting havoc. A Transportation Institute study found that the odds of a crash (or near-crash) more than doubled when the driver took his or her eyes off the road for more than two seconds, and not following lane paths tends to cause about 23% of accidents. In the worst case, fuel economy drops and pollution per vehicle tends to increase by about 28% to 36% annually, corresponding to wasted fuel. Owing to this problem, we propose an ingenious method by which lane detection can be made affordable and applicable to existing automobiles. The proposed lane detection prototype is carried out with a temporary autonomous bot interfaced with a Raspberry Pi processor loaded with the lane detection algorithm. The prototype bot captures live video, which is then processed by the algorithm. The preliminary setup is arranged so that it is easily implemented and accessible at low cost with better efficiency, providing a better impact on future automobiles.
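
A minimal sketch of the kind of per-frame loop the prototype bot could run on the Raspberry Pi is shown below: capture live video, restrict processing to a lower region of interest, and mark detected lane edges. The camera index, Canny thresholds, and Hough parameters are placeholders, not the prototype's actual settings.

```python
# Illustrative live-video lane-marking loop for a Raspberry Pi camera or USB webcam.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # assumed camera index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[h // 2:, :]                    # lanes appear in the lower half of the frame
    edges = cv2.Canny(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50, minLineLength=30, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(roi, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("lanes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```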


2019 ◽  
Vol 109 (6) ◽  
pp. 416-425 ◽  
Author(s):  
Daniel E. Lidstone ◽  
Louise M. Porcher ◽  
Jessica DeBerardinis ◽  
Janet S. Dufek ◽  
Mohamed B. Trabia

Background: Monitoring footprints during walking can lead to better identification of foot structure and abnormalities. Current techniques for footprint measurements are either static or dynamic, with low resolution. This work presents an approach to monitor the plantar contact area when walking using high-speed videography. Methods: Footprint images were collected by asking the participants to walk across a custom-built acrylic walkway with a high-resolution digital camera placed directly underneath the walkway. This study proposes an automated footprint identification algorithm (Automatic Identification Algorithm) to measure the footprint throughout the stance phase of walking. This algorithm used coloration of the plantar tissue that was in contact with the acrylic walkway to distinguish the plantar contact area from other regions of the foot that were not in contact. Results: The intraclass correlation coefficient (ICC) demonstrated strong agreement between the proposed automated approach and the gold standard manual method (ICC = 0.939). Strong agreement between the two methods also was found for each phase of stance (ICC > 0.78). Conclusions: The proposed automated footprint detection technique identified the plantar contact area during walking with strong agreement with a manual gold standard method. This is the first study to demonstrate the concurrent validity of an automated identification algorithm to measure the plantar contact area during walking.
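
As a hedged illustration of the coloration idea behind the Automatic Identification Algorithm, the sketch below assumes that plantar tissue pressed against the acrylic walkway appears blanched (low saturation, high brightness) and segments the contact area with a simple HSV threshold; the threshold values are illustrative only and not those validated in the study.

```python
# Hedged sketch: segment the blanched (contacting) plantar tissue with a colour
# threshold and report the contact area in pixels.
import cv2
import numpy as np

def contact_area_pixels(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Low saturation + high value ~ blanched tissue in contact with the walkway.
    mask = cv2.inRange(hsv, (0, 0, 160), (180, 60, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return int(cv2.countNonZero(mask)), mask
```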


2017 ◽  
Author(s):  
Rohit Takhar ◽  
Tushar Sharma ◽  
Udit Arora ◽  
Sohit Verma

In recent years, with the improvement in imaging technology, the quality of small cameras has significantly improved. Coupled with the introduction of credit-card-sized single-board computers such as the Raspberry Pi, it is now possible to integrate a small camera with a wearable computer. This paper aims to develop a low-cost product, using a webcam and a Raspberry Pi, for visually impaired people, which can assist them in detecting and recognising pedestrian crosswalks and staircases. There are two steps involved in detection and recognition of the obstacles, i.e., pedestrian crosswalks and staircases. In the detection algorithm, we extract Haar features from the video frames and push these features to our Haar classifier. In the recognition algorithm, we first convert the RGB image to HSV and apply histogram equalization to make the pixel intensity uniform. This is followed by image segmentation and contour detection. The detected contours are passed through a pre-processor which extracts the regions of interest (ROIs). We applied different statistical methods to these ROIs to differentiate between staircases and pedestrian crosswalks. The detection and recognition results on our datasets demonstrate the effectiveness of our system.
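
The sketch below mirrors the two stages named in the abstract: detection with a trained Haar cascade, then the recognition preprocessing chain of RGB-to-HSV conversion, histogram equalization, segmentation, and contour extraction. The cascade file name and thresholds are assumptions, and the statistical ROI classification step is omitted.

```python
# Sketch of the detection and recognition-preprocessing stages (file names and
# thresholds are assumptions).
import cv2

cascade = cv2.CascadeClassifier("crosswalk_cascade.xml")   # assumed trained cascade

def detect_candidates(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

def recognition_contours(roi_bgr):
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = cv2.equalizeHist(v)                                 # make pixel intensity uniform
    _, seg = cv2.threshold(v, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```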


Author(s):  
Mr. M. Senthil Murugan ◽  
Renuka E. ◽  
Vinodhini M.

One of the most critical subjects of embedded vision is color tracking in real time. Many computer vision applications begin by detecting and tracking moving objects in video scenes, and customers arriving at hypermarkets may benefit from this concept. A color detection algorithm locates pixels in an image that fit a predetermined color scheme; the color of the detected pixels can be modified to differentiate them from the rest of the image. The robot is programmed to track objects by turning left and right to keep the target in view and driving forward and backward to keep the distance between the robot and the object steady. By keeping a safe distance between the user and the robot, other objects with the same color pattern are ignored. The camera on an ARM11 Raspberry Pi computer attached to the robot is used to capture images, and the acquired image is processed with built-in Python scripts to locate the color using an RGB pattern methodology. To make the product work smarter, the system also includes automatic billing via an RFID reader and tag. This device demonstrates a new concept in the image processing domain.
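
A minimal sketch of such a tracking decision is given below: threshold the target colour, locate its centroid and apparent size, and choose a steering or driving action that keeps the target centred and at a roughly constant distance. The colour range, size targets, and action names are placeholders rather than the paper's actual values.

```python
# Illustrative colour-tracking decision step: keep the target centred (turn) and
# at a constant apparent size (drive forward/backward).
import cv2
import numpy as np

LOWER = np.array([100, 120, 70])     # assumed HSV range for the target colour
UPPER = np.array([130, 255, 255])
TARGET_AREA = 15000                  # assumed blob size at the desired following distance

def decide(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    area = cv2.countNonZero(mask)
    if area < 200:                   # nothing matching the colour pattern in view
        return "stop"
    m = cv2.moments(mask)
    cx = m["m10"] / m["m00"]         # horizontal centroid of the detected colour
    width = frame_bgr.shape[1]
    if cx < width * 0.4:
        return "turn_left"
    if cx > width * 0.6:
        return "turn_right"
    return "forward" if area < TARGET_AREA else "backward"
```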


One of the issues that the human body faces is arrhythmia, a condition where the heartbeat is either irregular, too slow, or too fast. One of the ways to diagnose arrhythmia is by using ECG signals, the best diagnostic tool for its detection. This paper describes a deep learning approach to check whether signs of arrhythmia are present in a given input signal. A batch-normalized CNN is used to classify the ECG signals according to the different types of arrhythmia; the model achieved 96.39% training accuracy and 97% testing accuracy. The ECG signals are classified into five classes: Normal beats, Premature Ventricular Contraction (PVC) beats, Right Bundle Branch Block (RBBB) beats, Left Bundle Branch Block (LBBB) beats, and Paced beats. A peak detection algorithm with six simple steps is designed to detect R-peaks in the ECG signals. A hardware device built around a Raspberry Pi acquires ECG signals, which are then sent to the trained CNN for classification. The training dataset is obtained from the MIT-BIH repository. The Keras and TensorFlow libraries are used to design and develop the CNN, and an application is built using the 'MEAN' stack and 'Flask'-based servers.
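
The exact architecture is not given in the abstract, so the sketch below only shows a batch-normalized 1-D CNN with five softmax outputs in the spirit of the model described; the layer widths, kernel sizes, and the 187-sample segment length are assumptions.

```python
# Sketch of a batch-normalised 1-D CNN for five-class ECG beat classification
# (layer sizes and segment length are assumptions).
from tensorflow import keras
from tensorflow.keras import layers

def build_model(segment_len=187, n_classes=5):
    model = keras.Sequential([
        layers.Input(shape=(segment_len, 1)),
        layers.Conv1D(32, 5, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```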


2018 ◽  
Vol 7 (2.24) ◽  
pp. 42
Author(s):  
Amber Goel ◽  
Apaar Khurana ◽  
Pranav Sehgal ◽  
K Suganthi

The paper focuses on two areas: automation and security. A Raspberry Pi is the heart of the project, fuelled by machine learning algorithms using OpenCV and the Internet of Things. Face recognition uses Local Binary Patterns, and if an unknown person uses a workstation, a message with a photo of that person is sent to the workstation's owner. Face recognition is also used for uploading attendance and for switching appliances ON and OFF automatically. During non-office hours, a human detection algorithm is used to detect human presence; if an unknown person enters the office, a photo of the person is taken and sent to the authorities. This combination of computer vision, machine learning, and the Internet of Things serves as an efficient tool for both automation and security.
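
A sketch of the face-recognition step using OpenCV's LBPH recogniser (available in opencv-contrib-python) is shown below; the trained model file, detection parameters, and confidence cut-off are assumptions, and training-data collection is omitted.

```python
# Sketch of LBPH-based face recognition with OpenCV (requires opencv-contrib-python).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trained_faces.yml")          # assumed pre-trained model file

def identify(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 5):
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        # For LBPH, a lower confidence value means a better match.
        results.append((label, confidence) if confidence < 70 else ("unknown", confidence))
    return results
```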


The primary issue faced by all types of visually challenged people around the globe is their lack of self-independence. They feel dependent for every task they want to perform in their daily lives, and this acts as an obstacle to the exciting things they would otherwise want to do. This paper proposes a solution in the form of wearable smart spectacles, built on the Raspberry Pi platform, to make visually challenged people self-sufficient and able to move freely in known as well as unknown surroundings. The smart spectacles use a USB (Universal Serial Bus) camera, detect real-time objects in the vicinity using the SSD-MobileNets object detection algorithm, and provide vision in the form of audio through headphones. The smart spectacles also combine an OCR algorithm for text detection and propose a module for quick and accurate detection of currency by the visually challenged.
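
The sketch below shows one plausible way to wire the detection-to-audio step with OpenCV's DNN module and a MobileNet-SSD model, speaking each detected class aloud; the model files, class list, and the pyttsx3 speech engine are assumptions about the setup, and the OCR and currency modules are not shown.

```python
# Hedged sketch of the detection-to-audio step: MobileNet-SSD via OpenCV DNN,
# with detections announced through a text-to-speech engine.
import cv2
import pyttsx3

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",      # assumed model files
                               "MobileNetSSD_deploy.caffemodel")
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus",
           "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike",
           "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
speech = pyttsx3.init()

def announce_objects(frame_bgr, conf_thresh=0.5):
    blob = cv2.dnn.blobFromImage(cv2.resize(frame_bgr, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence > conf_thresh:
            label = CLASSES[int(detections[0, 0, i, 1])]
            speech.say(f"{label} ahead")
    speech.runAndWait()
```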


Exact real-time pupil tracking is an essential step in live eye-gaze estimation. Since the pupil centre serves as the base reference point, eye-centre localization is essential for many applications. In this research, we extract pupil features accurately across different intensity levels of eye images, localizing the objects of interest and determining where the person is looking. In a digital world undergoing digital transformation, everything is becoming virtual, so this concept has wide scope in e-learning, classroom training, and analyzing human behaviour. This research uses eye-tracking technology to track and analyze learners' behaviour and emotion on an e-learning platform, such as their level of attention and tiredness. A Haar cascade classifier is used first to locate the eye region, and a support vector machine (SVM) trained on the collected datasets is then used for classification. We also include the state of emotions and the facial landmarks of salient patches on the face image, using an automated learning-free facial-landmark detection technique. Experimental results support a learner eye-gaze detection system developed in PyCharm, with hardware output on a Raspberry Pi. The Raspberry Pi is given an input image captured with an external webcam, and based on the learner's engagement level, content 1 or content 2 is displayed in the Raspbian OS environment.
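
A sketch of the classification stage described above is given below: a Haar cascade first locates the eye region, and a pre-trained SVM then labels the learner's attention state from a resized eye patch. The saved SVM model, the 32x32 flattened-patch features, and the label set are assumptions, not the study's actual training setup.

```python
# Sketch: Haar cascade eye localization followed by SVM classification of the
# eye patch (model file and feature representation are assumptions).
import cv2
import joblib

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
svm = joblib.load("attention_svm.pkl")        # assumed pre-trained scikit-learn SVM

def attention_label(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)
    if len(eyes) == 0:
        return "eyes_not_found"
    x, y, w, h = eyes[0]
    patch = cv2.resize(gray[y:y + h, x:x + w], (32, 32)).flatten().reshape(1, -1)
    return svm.predict(patch / 255.0)[0]
```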

