Robotic Arm: Automated Real-Time Object Detection

In today's robotics industry, interaction between humans and machines usually consists of a human operator programming and maintaining the machine. Using a robotic system for industrial work provides precision and a consistent level of accuracy. A robotic entity such as a robotic arm does not require breaks and can work efficiently day and night, which in turn increases workplace efficiency. In this paper, we describe a robotic arm built so that, while the arm is working, a camera identifies any object it sees; the identified objects are then handled by the worker supervising the arm. The major outcome is increased workplace efficiency, with precision and accuracy at low cost, and the system can also be used for household chores.

Author(s):  
Vikram Jain ◽  
Ninad Jadhav ◽  
Marian Verhelst
Keyword(s):  

Brain-Computer Interface (BCI) is a technology that enables a human to communicate with an external device to achieve a desired result. This paper presents Motor Imagery (MI) Electroencephalography (EEG) signal-based control of the lifting and dropping movements of an external robotic arm. The MI-EEG signals were extracted using a 3-channel electrode system with the AD8232 amplifier. The electrodes were placed at three locations, namely C3, C4, and the right mastoid. Signal-processing methods, namely a Butterworth filter and Sym-9 Wavelet Packet Decomposition (WPD), were applied to de-noise the raw EEG signals. Statistical features such as entropy, variance, standard deviation, covariance, and spectral centroid were extracted from the de-noised signals. These features were then used to train a Multi-Layer Perceptron (MLP) Deep Neural Network (DNN) to classify the hand movement into two classes: 'No Hand Movement' and 'Hand Movement'. The resulting k-fold cross-validated accuracy was 85.41%, and other classification metrics, such as precision, recall (sensitivity), specificity, and F1 score, were also calculated. The trained model was interfaced with an Arduino to move the robotic arm according to the class predicted by the DNN model in a real-time environment. The proposed end-to-end low-cost deep learning framework provides a substantial improvement in real-time BCI.
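As a rough illustration of the signal-processing stage described in this abstract, the Python sketch below band-passes a raw EEG trace with a Butterworth filter and computes several of the statistical features named above. The 8-30 Hz band and the spectral-entropy definition are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(sig, fs, lo=8.0, hi=30.0, order=4):
    # Butterworth band-pass over an assumed 8-30 Hz motor-imagery band
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def extract_features(sig, fs):
    # Statistical features named in the abstract; "entropy" is taken here
    # as the Shannon entropy of the normalised power spectrum (assumption)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    p = psd / psd.sum()
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    return {
        "variance": float(np.var(sig)),
        "std": float(np.std(sig)),
        "entropy": float(-np.sum(p * np.log2(p + 1e-12))),
        "spectral_centroid": float(np.sum(freqs * p)),
    }

np.random.seed(0)
fs = 250.0                                  # a typical EEG sampling rate
t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(len(t))
feat = extract_features(bandpass(raw, fs), fs)
```

In a full pipeline, such feature vectors would be stacked per trial and fed to the MLP classifier.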


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8381
Author(s):  
Duarte Fernandes ◽  
Tiago Afonso ◽  
Pedro Girão ◽  
Dibet Gonzalez ◽  
António Silva ◽  
...  

Recently released research on deep learning applications for autonomous driving perception focuses heavily on LiDAR point cloud data as input to neural networks, highlighting the importance of LiDAR technology in the field of Autonomous Driving (AD). A large share of the vehicle platforms used to create the datasets released for the development of these neural networks, as well as some AD commercial solutions on the market, rely on extensive sensor arrays, with both a large number of sensors and several sensor modalities. These costs create a barrier to entry for low-cost solutions performing critical perception tasks such as Object Detection and SLAM. This paper surveys current vehicle platforms and proposes a low-cost, LiDAR-based test vehicle platform capable of running critical perception tasks (Object Detection and SLAM) in real time. Additionally, we propose a deep learning-based inference model for Object Detection deployed on a resource-constrained device, as well as a graph-based SLAM implementation. We discuss important design considerations arising from the real-time processing requirement and present results demonstrating the usability of the developed work on the proposed low-cost platform.


2021 ◽  
Author(s):  
Zhujiang Wang ◽  
Zimo Wang ◽  
Woo-Hyun Ko ◽  
Ashif Sikandar Iquebal ◽  
Vu Nguyen ◽  
...  

Abstract We introduce an autonomous laser kirigami technique, a novel custom manufacturing machine system that functions somewhat like a photocopier. This technique is capable of creating functional freeform shell structures using cutting and folding (kirigami) operations on sheet precursors. Conventional laser kirigami techniques are operated manually and rely heavily on precise calibrations, making it unrealistic to design and plan the process open-loop to realize arbitrary geometric features from a wide variety of materials. In our work, we develop and demonstrate a completely autonomous system composed of a laser system, a 4-axis robotic arm, a real-time vision-based surface deformation monitoring system, and an associated control system. The laser system is based on the Lasersaur, an open-source 120 W CO2 laser cutter. The robotic arm precisely adjusts the distance between the workpiece and the laser lens so that a focused laser beam can be used to cut the workpiece and a defocused beam to fold it. The four-axis robotic arm provides flexibility for expanding the limits of possible shapes, compared with conventional laser machine setups where the workpiece is fixed on rigid holders. The real-time vision-based surface deformation monitoring system is composed of four low-cost cameras, an integrated AI-assisted algorithm, and sensors (detachable planar markers) mounted on the polymer-based sheet precursors; it allows real-time monitoring of the sheet forming process and geometric evolution with a geometric feature estimation error of less than 5% and a delay of around 100 ms. The developed control system manages the laser power, the laser scanning speed, and the motion of the robotic arm, based on the designed plan as well as the closed-loop feedback provided by the vision-based surface deformation monitoring system.
This cyber-physical kirigami platform can operate a sequence of cutting and folding processes to create kirigami objects. Hence, complicated kirigami products with various polygonal structures can be realized through a designed sequence of laser cuts and folds (at any folding angle within the designed geometric tolerance) using this autonomous kirigami platform.
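The closed-loop folding step described in this abstract, in which the vision system's angle estimate feeds back into the laser control, can be sketched as follows. All function names, the burst duration, and the per-burst fold increment in the toy stand-ins are hypothetical, not details from the paper.

```python
def fold_to_angle(target_deg, measure_angle, fire_laser,
                  tol_deg=1.0, max_iters=50):
    # Closed-loop folding: fire the (defocused) laser in short bursts and
    # re-measure the fold angle from the vision system until the target
    # is reached within tolerance.
    for _ in range(max_iters):
        current = measure_angle()
        if abs(target_deg - current) <= tol_deg:
            return current
        fire_laser(duration_s=0.1)
    raise RuntimeError("fold did not converge")

# toy stand-ins for the vision system and laser: each burst adds 5 degrees
state = {"angle": 0.0}
angle = fold_to_angle(
    45.0,
    measure_angle=lambda: state["angle"],
    fire_laser=lambda duration_s: state.__setitem__(
        "angle", state["angle"] + 5.0),
)
```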


2019 ◽  
Vol 10 (1) ◽  
pp. 160-166 ◽  
Author(s):  
Vu Trieu Minh ◽  
Nikita Katushin ◽  
John Pumwa

Abstract This project designs a smart glove that can be used for real-time motion tracking of a 3D virtual robotic arm on a PC. The glove is low cost, priced at less than 100 €, and uses only an inertial measurement unit, allowing students to develop their own projects on augmented and virtual reality applications. Movement data from the glove are transferred to the PC via UART with DMA. The data are set as the motion reference path for the 3D virtual robotic arm to follow. A PID feedback controller drives the 3D virtual robot to track the haptic glove movement with zero error in real time. The glove can also be used for remote control, tele-robotics, and tele-operation systems.
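The PID tracking loop described in this abstract can be sketched as below, with the virtual arm reduced to a single joint whose angle integrates the control output. The gains, plant model, and time step are illustrative assumptions, not values from the paper.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, ref, actual):
        # standard discrete PID: proportional + integral + derivative terms
        err = ref - actual
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy joint model: the virtual arm angle integrates the control output
pid = PID(kp=8.0, ki=2.0, kd=0.1, dt=0.01)
angle, reference = 0.0, 1.0   # track a 1-radian step from the glove
for _ in range(1000):         # 10 s of simulated time
    angle += pid.step(reference, angle) * pid.dt
```

With these gains the joint settles onto the reference; in the real system the reference would be the streamed glove pose rather than a constant.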


Author(s):  
Lauri O. Luostarinen ◽  
Rafael Åman ◽  
Heikki Handroos

Improving energy efficiency is an important topic for off-highway working vehicle developers and manufacturers. New energy-efficient technologies, e.g. hybrid power transmissions with an energy-recovery feature, have been introduced. However, most working vehicles currently use more conventional technologies. Human operators affect the overall efficiency of the vehicles, and researching this human effect is difficult and expensive with conventional methods. Real-time simulation and virtual reality (VR) technology have developed rapidly in recent years. A VR-based real-time simulator is a powerful low-cost tool that enables several novel research methods. The aim of this study is to assess the suitability of a VR-based simulator for determining the effect of a human operator on the energy consumption of the working hydraulics of off-highway working vehicles. Experimental tests are carried out using human-in-the-loop simulation in an immersive VR environment. The vehicle used for the case study is an underground mining loader. The results show that the proposed method is valid for determining the energy consumption and energy efficiency of the working hydraulics. A variation in the energy efficiency of the working hydraulics was found, and this variation correlates with the operator's driving style. With a larger group of operators, the effect of a human operator on the energy consumption can be quantified.
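The energy-consumption figures discussed in this abstract come from integrating hydraulic power, which is pressure times volumetric flow, over the work cycle. A minimal sketch, assuming logged pressure and flow samples at a fixed rate (the sample values below are illustrative):

```python
import numpy as np

def hydraulic_energy(pressure_pa, flow_m3s, dt_s):
    # Instantaneous hydraulic power is pressure x volumetric flow [W];
    # summing power over uniformly spaced samples approximates the
    # integral, giving the energy consumed in joules.
    p = np.asarray(pressure_pa, dtype=float)
    q = np.asarray(flow_m3s, dtype=float)
    return float(np.sum(p * q) * dt_s)

# 10 s of a constant 100 bar, 1 L/s work phase sampled at 1 Hz -> 100 kJ
energy_j = hydraulic_energy([1e7] * 10, [1e-3] * 10, 1.0)
```

Comparing this integral across operators for the same work cycle is what exposes the driving-style variation the study reports.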


2021 ◽  
pp. 1-26
Author(s):  
E. Çetin ◽  
C. Barrado ◽  
E. Pastor

Abstract The number of unmanned aerial vehicles (UAVs, also known as drones) in the airspace worldwide has increased dramatically for tasks such as surveillance, reconnaissance, shipping, and delivery. However, a small number of them, acting maliciously, can raise many security risks. Recent Artificial Intelligence (AI) capabilities for object detection can be very useful for the identification and classification of drones flying in the airspace and, in particular, are a good solution against malicious drones. A number of counter-drone solutions are being developed, but the cost of ground-based drone detection systems can be very high, depending on the number of sensors deployed and the fusion algorithms required. We propose a low-cost counter-drone solution composed solely of a guard drone that should be able to detect, locate, and eliminate any malicious drone. In this paper, a state-of-the-art object detection algorithm is used to train the system to detect drones. Three existing object detection models are improved by transfer learning and tested for real-time drone detection. Training is done with a new dataset of drone images, constructed automatically from a very realistic flight simulator. While flying, the guard drone captures random images of the area in which a malicious drone is flying at the same time. The drone images are auto-labelled using the location and attitude information available in the simulator for both drones. The world coordinates of the malicious drone's position must then be projected into image pixel coordinates. The training and test results show a minimum accuracy improvement of 22% with respect to state-of-the-art object detection models, representing promising results that enable a step towards the construction of a fully autonomous counter-drone system.
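The auto-labelling step described in this abstract, projecting the malicious drone's world position into the guard drone's image, can be sketched with a standard pinhole camera model. The intrinsics, the identity camera rotation, and the sample coordinates below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def world_to_pixel(p_world, cam_pos, R_wc, fx, fy, cx, cy):
    # Rigid transform of the target point into the camera frame,
    # followed by a pinhole projection onto the image plane.
    p_cam = R_wc @ (np.asarray(p_world, float) - np.asarray(cam_pos, float))
    if p_cam[2] <= 0:
        return None  # target is behind the camera, no pixel label
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return float(u), float(v)

# guard-drone camera with its optical axis along world z (identity rotation)
px = world_to_pixel([2.0, 1.0, 10.0], [0.0, 0.0, 0.0], np.eye(3),
                    fx=800.0, fy=800.0, cx=640.0, cy=360.0)
```

In the simulator, both drones' poses are known, so `cam_pos` and `R_wc` come directly from the guard drone's logged state.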

