Geo Tracking of Waste

Author(s):  
Sahil Pulikal

Abstract: Littering harms the environment as well as the economy of a nation, and in the absence of a proper waste tracking and detection system, littered waste often remains uncollected. Traditional waste management relies on regular collection rounds by assigned groups such as municipal corporation trucks. To overcome this problem, we develop a system in which the admin captures images using a mobile camera; artificial intelligence then processes those images and identifies the type of waste present in each image. The system also plots the waste locations, along with their images, on a map. Finally, the admin notifies the nearest garbage collector to clean up the waste. The system builds on the concepts of image processing, deep learning, and object detection.

Keywords: Object Detection, Image Processing, Coordinates, Deep Learning, Mapping
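The last step of the pipeline above, routing a geo-tagged detection to the nearest garbage collector, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `WasteReport` and `nearest_collector` are hypothetical names, the waste type is assumed to come from the deep-learning classifier, and distance uses a flat-earth approximation that is adequate at city scale.

```python
from dataclasses import dataclass

@dataclass
class WasteReport:
    """One geo-tagged detection: what was found and where."""
    waste_type: str
    latitude: float
    longitude: float

def nearest_collector(report, collectors):
    """Pick the collector closest to the reported waste location
    (squared Euclidean distance on lat/lon; fine over short ranges)."""
    def dist2(c):
        return (c["lat"] - report.latitude) ** 2 + (c["lon"] - report.longitude) ** 2
    return min(collectors, key=dist2)

report = WasteReport("plastic", 19.0760, 72.8777)  # e.g. a detection in Mumbai
collectors = [
    {"name": "A", "lat": 19.07, "lon": 72.88},
    {"name": "B", "lat": 19.20, "lon": 72.97},
]
print(nearest_collector(report, collectors)["name"])  # → A
```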

Author(s):  
Andreas Brandsæter ◽  
Ottar L Osen

The advent of artificial intelligence and deep learning has provided sophisticated functionality for sensor fusion and object detection and classification which have accelerated the development of highly automated and autonomous ships as well as decision support systems for maritime navigation. It is, however, challenging to assess how the implementation of these systems affects the safety of ship operation. We propose to utilize marine training simulators to conduct controlled, repeated experiments allowing us to compare and assess how functionality for autonomous navigation and decision support affects navigation performance and safety. However, although marine training simulators are realistic to human navigators, it cannot be assumed that the simulators are sufficiently realistic for testing the object detection and classification functionality, and hence this functionality cannot be directly implemented in the simulators. We propose to overcome this challenge by utilizing Cycle-Consistent Adversarial Networks (Cycle-GANs) to transform the simulator data before object detection and classification is performed. Once object detection and classification are completed, the result is transferred back to the simulator environment. Based on this result, decision support functionality with realistic accuracy and robustness can be presented and autonomous ships can make decisions and navigate in the simulator environment.
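The round trip described above can be sketched as a simple pipeline. The functions below are stand-ins, not the paper's networks: `sim_to_real` plays the role of the trained Cycle-GAN generator (here it merely adds sensor-like noise), and `detect_objects` stands in for the object detection and classification stage, whose result is returned to the simulator environment.

```python
import numpy as np

def sim_to_real(frame):
    """Stand-in for the Cycle-GAN generator G: simulator -> 'real' domain.
    A trained generator would restyle the frame; here we only add noise."""
    rng = np.random.default_rng(0)
    return np.clip(frame + rng.normal(0, 0.01, frame.shape), 0.0, 1.0)

def detect_objects(frame, threshold=0.5):
    """Stand-in detector: returns pixel coordinates of bright blobs."""
    ys, xs = np.nonzero(frame > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

def simulator_loop(sim_frame):
    """The round trip: transform the simulator frame into the 'real'
    domain, detect there, and hand the result back to the simulator."""
    real_like = sim_to_real(sim_frame)
    return detect_objects(real_like)

sim_frame = np.zeros((8, 8))
sim_frame[3, 4] = 1.0  # one simulated target
print(simulator_loop(sim_frame))  # → [(3, 4)]
```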


Author(s):  
Jesus Benito-Picazo ◽  
Enrique Dominguez ◽  
Esteban J. Palomo ◽  
Ezequiel Lopez-Rubio ◽  
Juan Miguel Ortiz-de-Lazcano-Lobato

2020 ◽  
Vol 10 (14) ◽  
pp. 4744
Author(s):  
Hyukzae Lee ◽  
Jonghee Kim ◽  
Chanho Jung ◽  
Yongchan Park ◽  
Woong Park ◽  
...  

The arena fragmentation test (AFT) is one of the tests used to design an effective warhead. Conventionally, complex and expensive measuring equipment is used to test a warhead and measure important factors such as the size, velocity, and spatial distribution of the fragments where they penetrate steel target plates. In this paper, instead of using specific sensors and equipment, we proposed a deep learning-based object detection algorithm to detect fragments in the AFT. To this end, we acquired many high-speed videos and built an AFT image dataset with bounding boxes of warhead fragments. Our method fine-tuned an existing object detection network, Faster R-CNN (Faster Region-based Convolutional Neural Network), on this dataset with modified anchor boxes. We also employed a novel temporal filtering method, demonstrated as an effective non-fragment filtering scheme in our previous image processing-based fragment detection approach, to capture only the first penetrating fragments among all detected fragments. We showed that the performance of the proposed method was comparable to that of a sensor-based system under the same experimental conditions. A quantitative comparison with our previous image processing-based method further demonstrated that the use of deep learning significantly enhanced performance: the proposed method outperformed the image processing-based method and produced outstanding results in finding the exact fragment positions.
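The temporal filtering idea above, keeping only the *first* penetrating fragment at each location and discarding later re-detections of the same hole, can be sketched as follows. This is an assumed reading of the scheme, not the paper's code; the function name, the `(x, y)` detection format, and the distance-based matching radius are all illustrative.

```python
def first_penetrations(frames, radius=5.0):
    """Temporal filter: accept a detection only the first time a fragment
    appears at (roughly) that location; detections in later frames near
    an accepted centre are treated as the same hole, not a new fragment.

    `frames` is a list of per-frame detection lists, each detection an
    (x, y) centre of a detected fragment impact on the target plate."""
    seen = []   # centres already accepted
    kept = []
    for dets in frames:
        for (x, y) in dets:
            if all((x - sx) ** 2 + (y - sy) ** 2 > radius ** 2 for sx, sy in seen):
                seen.append((x, y))
                kept.append((x, y))
    return kept

frames = [
    [(10.0, 10.0)],                # frame 0: first impact
    [(10.5, 10.2), (40.0, 8.0)],   # frame 1: same hole re-detected + a new impact
]
print(first_penetrations(frames))  # → [(10.0, 10.0), (40.0, 8.0)]
```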


Mekatronika ◽  
2020 ◽  
Vol 2 (2) ◽  
pp. 49-54
Author(s):  
Arzielah Ashiqin Alwi ◽  
Ahmad Najmuddin Ibrahim ◽  
Muhammad Nur Aiman Shapiee ◽  
Muhammad Ar Rahim Ibrahim ◽  
Mohd Azraai Mohd Razman ◽  
...  

Dynamic, fast-paced, and fast-changing gameplay, in which angle shooting (top and bottom corners) offers the best chance of a good goal, is the main aspect of handball. In the narrow-angle area, the goalkeeper has trouble blocking the goal. This research therefore applies image processing to analyse shooting precision performance by detecting the ball's accuracy at high speed. The participants had to complete 50 successful shots at each of four target locations in the handball goal. Computer vision is then applied through a camera to identify the ball, followed by determining the accuracy of the ball position (floating, net tangle, and farthest or smallest) using object detection as the accuracy marker. The models were trained using the Deep Learning (DL) models YOLOv2, YOLOv3, and Faster R-CNN, and the best-precision models for ball detection accuracy were compared. Faster R-CNN achieved the best performance, with 99% accuracy for all ball positions.
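Detector comparisons like the one above are usually scored by matching each predicted box against the annotated ground-truth box via intersection-over-union (IoU): a prediction counts as correct when the overlap exceeds a threshold. The abstract does not give its exact matching criterion, so the following is a generic sketch of the standard metric.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection top-left
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)   # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0

# a predicted ball box vs. the annotated one
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```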


2020 ◽  
Vol 28 (S2) ◽  
Author(s):  
Asmida Ismail ◽  
Siti Anom Ahmad ◽  
Azura Che Soh ◽  
Mohd Khair Hassan ◽  
Hazreen Haizi Harith

An object detection system is a computer technology, related to image processing and computer vision, that detects instances of semantic objects of a certain class in digital images and videos. The system consists of two main processes: classification and detection. Once an object instance has been classified and detected, further information can be obtained, including recognizing the specific instance, tracking the object over an image sequence, and extracting further information about the object and the scene. This paper presents a performance analysis of a deep learning object detector built by combining a deep learning Convolutional Neural Network (CNN) for object classification with classic object detection algorithms. MiniVGGNet is the network architecture used to train the object classifier, and the data used for this purpose were collected from a specific indoor environment building. For object detection, sliding windows and image pyramids were used to localize and detect objects at different locations and scales, and non-maxima suppression (NMS) was used to obtain the final bounding box for the object location. Based on the experimental results, the classification accuracy of the network is 80% to 90%, and the system detects objects in less than 15 s per frame. The experiments show that combining a classic object detection method with a deep learning classification approach is reasonable and efficient: the method works in specific use cases and effectively addresses the inaccurate classification and detection of typical features.
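The NMS step mentioned above merges the many overlapping boxes that sliding windows and image pyramids produce into one final box per object. A minimal greedy implementation of standard NMS (a generic sketch, not the paper's code):

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maxima suppression: keep the highest-scoring box,
    drop any remaining box that overlaps it too much, and repeat.
    Boxes are (x1, y1, x2, y2); returns indices of the kept boxes."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter) if inter else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    while order:
        best = order.pop(0)
        kept.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]  (box 1 overlaps box 0 and is suppressed)
```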


This paper proposes an object localization module, an artificial intelligence-driven system designed to help visually challenged individuals or persons suffering from dementia. The model assists them in their daily routine tasks by locating misplaced objects. To accomplish this, we implement artificial intelligence-based techniques such as speech recognition, speech generation, object detection, and image processing, which allow the system to understand the user's request and respond accordingly. The system consists of four major units: a) a speech unit, b) an image processing unit, c) an object detection unit, and d) a logic unit. The speech unit interacts with the user, listening to the user's verbal query and answering verbally. The image processing unit processes the image with the help of deep learning modules. The object detection unit detects all the target and non-target objects in a scene. The logic unit then sends the object location and description to the speech unit. Together, these units form a highly usable system for helping visually impaired people.
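The flow between the four units can be sketched as plain functions. Everything below is a stand-in: a real speech unit would use speech recognition and synthesis, and a real detection unit would run a deep-learning model on a camera frame; here the scene is a pre-labelled dict so only the logic unit's role, matching the query to a detected object and producing a spoken description, is shown concretely.

```python
def speech_unit_listen():
    """Stand-in speech unit: a real system would transcribe audio."""
    return "where are my keys"

def object_detection_unit(scene):
    """Stand-in detector: a real system would detect objects in a
    camera frame; here the scene is already labelled."""
    return scene

def logic_unit(query, detections):
    """Match the requested object against detected objects and build
    a description for the speech unit to read out."""
    for name, location in detections.items():
        if name in query:
            return f"Your {name} are {location}."
    return "I could not find that object."

scene = {"keys": "on the table, left of the lamp"}
query = speech_unit_listen()
print(logic_unit(query, object_detection_unit(scene)))
# → Your keys are on the table, left of the lamp.
```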


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems need object detection, and it is important that object detection be accurate overall, robust to weather and environmental conditions, and run in real time. These systems therefore require image processing algorithms to inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). For this comparative analysis, we used the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed in terms of parameters such as accuracy (with and without occlusion and truncation), computation time, and the precision-recall curve. The comparison given in this article is helpful for understanding the pros and cons of standard deep learning-based algorithms under real-time deployment restrictions. We conclude that YOLOv4 most accurately detects difficult road target objects under complex road scenarios and weather conditions in an identical testing environment.
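The precision-recall curve used in the comparison above is built by ranking all detections by confidence and recording precision and recall at each rank. A minimal sketch of that computation (generic, not the article's evaluation code; which detections count as true positives is assumed to be decided beforehand by IoU matching):

```python
def precision_recall(scored_preds, num_gt):
    """Precision/recall points for a precision-recall curve.
    `scored_preds` is a list of (confidence, is_true_positive) pairs,
    one per detection; `num_gt` is the number of ground-truth objects."""
    preds = sorted(scored_preds, key=lambda p: p[0], reverse=True)
    tp = 0
    points = []
    for rank, (_, is_tp) in enumerate(preds, start=1):
        tp += is_tp
        points.append((tp / rank, tp / num_gt))  # (precision, recall)
    return points

# three detections, two of them correct, three objects in the image
for p, r in precision_recall([(0.9, True), (0.8, False), (0.6, True)], num_gt=3):
    print(f"precision={p:.2f} recall={r:.2f}")
```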
