GPU Based Real-time Floating Object Detection System

Author(s):  
Jie Yang ◽  
Jian-min Meng
Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5279
Author(s):  
Dong-Hoon Kwak ◽  
Guk-Jin Son ◽  
Mi-Kyung Park ◽  
Young-Duk Kim

The consumption of seaweed is increasing worldwide year by year, so foreign object inspection of seaweed is becoming increasingly important. Seaweed is mixed from various materials such as laver and Sargassum fusiforme, so even the same sheet of seaweed shows a variety of colors. In addition, the surface is uneven and greasy, which frequently causes diffuse reflection. For these reasons, foreign objects in seaweed are difficult to detect, and the accuracy of conventional foreign object detectors used at real manufacturing sites is below 80%. Real-time operation must also be considered: because seaweed is mass-produced, rapid inspection is essential, yet hyperspectral imaging techniques are generally unsuitable for high-speed inspection. In this study, we overcome this limitation through dimensionality reduction and simplified operations. To improve accuracy, the proposed algorithm runs in two stages. First, a subtraction method clearly separates the seaweed from the conveyor belt and also detects foreign objects that are relatively easy to find. Second, a standardization inspection is performed on the result of the subtraction stage. Throughout this process, the proposed scheme relies on simple, low-cost calculations such as subtraction, division, and one-by-one matching, which yields both high accuracy and low latency. In an evaluation with 60 normal seaweed samples and 60 samples containing foreign objects, the proposed algorithm reached an accuracy of 95%. Finally, by implementing the algorithm as a foreign object detection platform, we confirmed that it operates in real time during rapid inspection and can be deployed at real manufacturing sites.
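The two-stage pipeline described above can be illustrated with a short sketch. This is not the authors' code: the grayscale frames, thresholds, and function names below are assumptions chosen only to show how a subtraction stage and a standardization stage can be chained using little more than subtraction and division.

```python
"""Illustrative sketch (not the authors' code) of a two-stage check:
stage 1 subtracts a conveyor-belt reference frame, stage 2 standardizes
the remaining pixels and flags outliers. Thresholds are made-up values."""
import numpy as np

BELT_DIFF_THRESH = 30.0   # assumed intensity gap between belt and seaweed
OUTLIER_Z_THRESH = 3.0    # assumed z-score limit for "foreign object" pixels

def stage1_subtraction(frame: np.ndarray, belt_reference: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels that clearly differ from the empty belt."""
    diff = np.abs(frame.astype(np.float32) - belt_reference.astype(np.float32))
    return diff > BELT_DIFF_THRESH

def stage2_standardization(frame: np.ndarray, seaweed_mask: np.ndarray) -> np.ndarray:
    """Standardize intensities inside the seaweed region and flag outlier pixels."""
    region = frame[seaweed_mask].astype(np.float32)
    z = (region - region.mean()) / (region.std() + 1e-6)   # subtraction and division only
    outliers = np.zeros(frame.shape, dtype=bool)
    outliers[seaweed_mask] = np.abs(z) > OUTLIER_Z_THRESH
    return outliers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    belt = np.full((64, 64), 200.0) + rng.normal(0, 2, (64, 64))
    frame = belt.copy()
    frame[16:48, 16:48] = 90.0 + rng.normal(0, 5, (32, 32))   # darker seaweed patch
    frame[30, 30] = 250.0                                     # bright foreign object
    mask = stage1_subtraction(frame, belt)
    flags = stage2_standardization(frame, mask)
    print("foreign-object pixels flagged:", int(flags.sum()))
```

In the paper the input is hyperspectral data after dimensionality reduction; the sketch uses a single grayscale band purely for brevity.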


2019 ◽  
Author(s):  
Jimut Bahan Pal

Detecting objects in real time has long been a challenge for computers with low computing power and memory. Since the advent of Convolutional Neural Networks (CNNs), it has become much easier for computers to detect and recognize objects in images. Several technologies and models can detect objects in real time, but most of them require high-end hardware such as GPUs and TPUs. Recently, however, many new algorithms and models have been proposed that run on limited resources. In this paper we study MobileNets for detecting objects from a webcam feed and build a real-time object detection system. We use a model pre-trained on the well-known MS COCO dataset, with Google's open-source TensorFlow as the back end. This real-time object detection system may help solve various complex vision problems in the future.
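A minimal sketch of such a webcam loop is shown below. It assumes a TensorFlow SavedModel of an SSD-MobileNet detector pre-trained on MS COCO has been exported to the hypothetical local directory ssd_mobilenet_saved_model/; the path and score threshold are illustrative, not the author's setup.

```python
"""Illustrative webcam detection loop (not the paper's code), assuming an
SSD-MobileNet SavedModel exported from the TF Object Detection API exists
at the hypothetical path below."""
import cv2
import numpy as np
import tensorflow as tf

MODEL_DIR = "ssd_mobilenet_saved_model"   # assumed export location
SCORE_THRESH = 0.5

detect_fn = tf.saved_model.load(MODEL_DIR)   # SavedModel with a default serving signature

cap = cv2.VideoCapture(0)                    # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The Object Detection API signature expects a uint8 batch of shape [1, H, W, 3].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    inp = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    outputs = detect_fn(inp)
    boxes = outputs["detection_boxes"][0].numpy()     # normalized [ymin, xmin, ymax, xmax]
    scores = outputs["detection_scores"][0].numpy()
    h, w = frame.shape[:2]
    for box, score in zip(boxes, scores):
        if score < SCORE_THRESH:
            continue
        y1, x1, y2, x2 = (box * [h, w, h, w]).astype(int)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```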


2020 ◽  
Vol 17 (2) ◽  
pp. 803-813
Author(s):  
Emy Haryatmi ◽  
Aris Tito Sugihharto ◽  
Maulana Mujahidin

Autonomous robotic monitoring of humans and animals can be used to secure a house or flat. Such a robot can move freely from one side of the house to the other and stream its camera view to an Android smartphone in real time, so the owner can see inside the house while away. The objective of this research is to develop an autonomous monitoring robot that detects the presence of humans and animals in an area and sends its camera view to an Android smartphone in real time. This research used a four-legged robot as the autonomous monitoring platform. The robot is equipped with a camera connected wirelessly to a router, PIR and ultrasonic sensors, and a GSM module that sends a short message service (SMS) notification whenever the PIR sensor detects movement. The experiments used a cat as the animal object. They showed that the robot can follow an object, either a human or an animal, within a range of 10–45 cm in 12 seconds and stream its view to an Android smartphone in real time. The SMS is received within 5 seconds after the camera captures the object. The robot cannot identify who or what is on camera because it is not equipped with an object detection system.
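A rough sketch of the sensing loop implied by this description follows. All hardware helpers (read_pir, read_ultrasonic_cm, step_forward, step_backward, send_sms) are hypothetical stand-ins, not the authors' robot API; they only illustrate how the 10–45 cm follow range and the PIR-triggered SMS could be wired together.

```python
"""Illustrative control loop (not the authors' code). The hardware helpers
below are hypothetical stand-ins for the PIR sensor, ultrasonic sensor,
leg actuators, and GSM module described in the abstract."""
import random
import time

FOLLOW_MIN_CM, FOLLOW_MAX_CM = 10, 45     # follow range reported in the abstract

def read_pir() -> bool:                   # hypothetical: True when motion is detected
    return random.random() < 0.3

def read_ultrasonic_cm() -> float:        # hypothetical: distance to nearest object
    return random.uniform(5, 80)

def step_forward() -> None:               # hypothetical leg-actuator command
    print("robot: stepping forward")

def step_backward() -> None:              # hypothetical leg-actuator command
    print("robot: stepping backward")

def send_sms(message: str) -> None:       # hypothetical GSM-module wrapper
    print("SMS:", message)

def control_loop(iterations: int = 10) -> None:
    for _ in range(iterations):
        if read_pir():
            send_sms("Movement detected by monitoring robot")
        distance = read_ultrasonic_cm()
        if distance > FOLLOW_MAX_CM:
            step_forward()                # object too far: close the gap
        elif distance < FOLLOW_MIN_CM:
            step_backward()               # object too close: back off
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```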


Author(s):  
Chao-Yi Cho ◽  
Jen-Kuei Yang ◽  
Shau-Yin Tseng ◽  
I-Hsien Lee ◽  
Ming-Hwa Sheu

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1066
Author(s):  
Peng Jia ◽  
Fuxiang Liu

At present, one-stage detectors based on lightweight models can achieve real-time speed, but their detection performance remains challenging. To enhance the discriminability and robustness of the features the model extracts, and to improve the detector's performance on small objects, we propose two modules in this work. First, we propose a receptive field enhancement method, referred to as adaptive receptive field fusion (ARFF), which improves the model's feature representation ability by adaptively learning the fusion weights of the different receptive field branches in the receptive field module. Second, we propose an enhanced up-sampling (EU) module that reduces the information loss caused by up-sampling the feature map. Finally, we assemble the ARFF and EU modules on top of YOLO v3 to build a real-time, high-precision, lightweight object detection system referred to as the ARFF-EU network. It achieves a state-of-the-art speed and accuracy trade-off on both the Pascal VOC and MS COCO data sets, reporting 83.6% AP at 37.5 FPS and 42.5% AP at 33.7 FPS, respectively. The experimental results show that the proposed ARFF and EU modules improve the detection performance of the ARFF-EU network, delivering accuracy comparable to advanced, very deep detectors while maintaining real-time speed.
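The adaptive fusion idea can be sketched in a few lines of PyTorch. This is an assumption-laden illustration, not the paper's exact ARFF module: the branch count, dilation rates, and scalar softmax weights are choices made here only to show parallel receptive-field branches being fused with learned weights.

```python
"""Illustrative sketch (not the paper's exact module) of adaptively fusing
parallel branches with different receptive fields via learned softmax weights."""
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveReceptiveFieldFusion(nn.Module):
    """Parallel dilated 3x3 convolutions fused with learnable scalar weights."""
    def __init__(self, channels: int, dilations=(1, 3, 5)):
        super().__init__()
        # Each branch sees a different receptive field via its dilation rate.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in dilations
        )
        # One learnable fusion weight per branch, normalized with softmax.
        self.fusion_logits = nn.Parameter(torch.zeros(len(dilations)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.fusion_logits, dim=0)
        out = torch.zeros_like(x)
        for w, branch in zip(weights, self.branches):
            out = out + w * branch(x)
        return out

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    arff = AdaptiveReceptiveFieldFusion(64)
    print(arff(feat).shape)    # torch.Size([1, 64, 32, 32])
```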


2020 ◽  
Vol 12 (1) ◽  
pp. 182 ◽  
Author(s):  
Lingxuan Meng ◽  
Zhixing Peng ◽  
Ji Zhou ◽  
Jirong Zhang ◽  
Zhenyu Lu ◽  
...  

Unmanned aerial vehicle (UAV) remote sensing and deep learning provide a practical approach to object detection. However, most current approaches for processing UAV remote-sensing data cannot carry out object detection in real time for emergencies such as firefighting. This study proposes a new approach that integrates UAV remote sensing and deep learning for the real-time detection of ground objects. Excavators, which frequently threaten pipeline safety, are selected as the target object. A widely used deep-learning algorithm, namely You Only Look Once V3, is first used to train the excavator detection model on a workstation; the model is then deployed on an embedded board carried by a UAV. The recall rate of the trained excavator detection model is 99.4%, demonstrating that the model is highly accurate. The UAV-based excavator detection system (UAV-ED) is then constructed for operational application. UAV-ED is composed of a UAV Control Module, a UAV Module, and a Warning Module. A UAV experiment with different scenarios was conducted to evaluate the performance of UAV-ED. The whole process, from the UAV observing an excavator to the Warning Module (350 km away from the testing area) receiving the detection results, took only about 1.15 s. Thus, the UAV-ED system performs well and would benefit the management of pipeline safety.
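A minimal sketch of the reporting step, sending one detection from the onboard computer to a remote warning endpoint over HTTP, is given below. The endpoint URL and payload fields are hypothetical; the abstract does not specify the UAV-ED transport protocol.

```python
"""Illustrative sketch (not the UAV-ED implementation) of reporting a
detection from the onboard board to a remote warning service over HTTP.
The endpoint URL and payload fields are assumed for illustration."""
import json
import time
import urllib.request

WARNING_ENDPOINT = "http://warning-module.example.com/api/detections"  # hypothetical

def report_detection(label: str, confidence: float, lat: float, lon: float) -> int:
    """POST one detection record and return the HTTP status code."""
    payload = {
        "label": label,
        "confidence": confidence,
        "latitude": lat,
        "longitude": lon,
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        WARNING_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    # Example call after the onboard YOLO model flags an excavator.
    print(report_detection("excavator", 0.97, 30.66, 104.06))
```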

