A Decisive Object Detection using Deep Learning Techniques

Object detection is one of the essential tasks in computer vision and image processing. Today, computers can replicate or even outperform many operations that humans perform; object detection is one such task, and the machine must be trained so that it recognizes objects as a human would, with maximum accuracy. Several techniques are used to train machines to detect objects; among the most common are R-CNN, Fast R-CNN, Faster R-CNN, the Single Shot MultiBox Detector (SSD), and You Only Look Once (YOLO). Each technique takes a different approach and achieves a different accuracy when detecting objects in real time, so the techniques are compared on two measures of performance: speed and accuracy. Some techniques detect objects very accurately but take a long time to do so, whereas others find objects very quickly but with lower accuracy. We trained an object detection model based on the YOLO technique, which gave the best overall performance of the techniques considered: although its accuracy is lower, its detection speed is extremely high. Based on our research, we identify both the best-performing and the most accurate object detection techniques. A well-trained object detection model should be strong in both speed and accuracy.
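As a minimal illustration of the speed-oriented YOLO family discussed above (a sketch, not the paper's implementation), the snippet below runs a pretrained YOLO model on one image with OpenCV's DNN module. The file names (yolov3.cfg, yolov3.weights, coco.names, input.jpg) and the confidence/NMS thresholds are placeholder assumptions.

```python
import cv2
import numpy as np

# Load a pretrained Darknet-format YOLO model (assumed files, not from the paper).
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().splitlines()

img = cv2.imread("input.jpg")
h, w = img.shape[:2]

# YOLO expects a square, normalized input blob; 416x416 is a common choice.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for det in output:                       # det = [cx, cy, bw, bh, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping duplicate boxes.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.putText(img, classes[class_ids[i]], (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("detections.jpg", img)
```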

Author(s):  
Mehmet Akif Cifci

Diabetic Retinopathy (DR) is a complication of diabetes that is very widespread among middle-aged and older people. As diabetes progresses, patients' vision may deteriorate into DR, and people can lose their sight because of this illness. Coping with DR requires early detection, yet regular screening by doctors costs considerable time and effort. DR is divided into two groups: non-proliferative (NPDR) and proliferative (PDR). In this study, machine learning (ML) techniques, namely PNN, SVM, Bayesian classification, and K-Means clustering, are used to diagnose DR early. These techniques are evaluated and compared with one another to choose the best methodology. A total of 300 fundus photographs are processed for training and testing, and features are extracted from these raw images using image processing techniques. The experiments show that PNN achieves an accuracy of about 89%, Bayesian classification 94%, SVM 97%, and K-Means clustering 87%. These preliminary results indicate that SVM is the best technique for early detection of DR.
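As a hedged sketch of the comparison described above (not the authors' code), the snippet below trains an SVM and a Bayesian classifier on a pre-extracted feature matrix with scikit-learn. The file dr_features.csv and its layout (one row per fundus image, last column the label) are assumptions for illustration; the feature extraction step itself is not shown.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Rows: fundus images; columns: extracted features; last column: label (0 = healthy, 1 = DR).
data = np.loadtxt("dr_features.csv", delimiter=",")
X, y = data[:, :-1], data[:, -1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Bayesian (Gaussian NB)": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```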


2018 ◽  
Vol 8 (2) ◽  
pp. 181
Author(s):  
Oky Dwi Nurhayati ◽  
Isti Pudji Hastuti

The demand for beef increases every year, and the price of beef, already expensive, tends to rise at certain times. Dishonest sellers exploit this by mixing beef with pork, which is relatively cheaper, to the detriment of consumers. Visually, many consumers cannot distinguish the two types of meat, so we conducted research to tell them apart. One way to address this problem is to use image processing techniques. The aim of this research was to build an application prototype that distinguishes beef from pork using image processing. The method consists of pre-processing, segmentation, feature extraction with geometric moment invariants, and K-NN classification. The geometric moment invariant method analyzes beef and pork by extracting unique values from each image and can serve as a shape descriptor based on moment theory. The results show that this image processing pipeline with K-NN classification (k = 3) can reliably distinguish the two types of meat. The difference is also visible in the moment invariant values, especially phi(1) and phi(2).
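The sketch below is one possible reading of this pipeline (pre-processing, segmentation, Hu's moment invariants as the phi values, K-NN with k = 3) using OpenCV and scikit-learn. The folder layout, Otsu thresholding, and log scaling are illustrative assumptions rather than the authors' exact choices.

```python
import glob

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hu_features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)                                      # pre-processing
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # segmentation
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()                               # phi(1)..phi(7)
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # log scale so magnitudes are comparable

X, y = [], []
for label, pattern in enumerate(["beef/*.jpg", "pork/*.jpg"]):   # 0 = beef, 1 = pork (assumed folders)
    for path in glob.glob(pattern):
        X.append(hu_features(path))
        y.append(label)

knn = KNeighborsClassifier(n_neighbors=3)   # k = 3 as reported in the study
knn.fit(np.array(X), np.array(y))
```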


2019 ◽  
Vol 3 (2) ◽  
pp. 140
Author(s):  
Yona Fransiska Dewi ◽  
Nurul Fadillah

A wide range of digital image processing knowledge and techniques is available today, and much research and development has been devoted to object detection and tracking. Color is one of the parameters used to detect and track objects: humans can distinguish colors easily, but a computer cannot necessarily recognize them. One digital image processing technique that can recognize colors is color filtering. In this study, color filtering, which processes a digital image based on specific colors, is used to detect and track red objects with a web camera (webcam). Object tracking is the process of following an object as it moves and changes position, so the colored object being tracked is drawn in real time for the selected color.
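The snippet below shows one common way to implement color filtering for red-object tracking from a webcam with OpenCV; it is a sketch under assumptions (OpenCV 4.x, typical HSV thresholds for red) rather than the study's exact implementation.

```python
import cv2
import numpy as np

# Typical HSV ranges for red; red wraps around hue 0/180, so two ranges are combined.
LOWER1, UPPER1 = np.array([0, 120, 70]), np.array([10, 255, 255])
LOWER2, UPPER2 = np.array([170, 120, 70]), np.array([180, 255, 255])

cap = cv2.VideoCapture(0)                        # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER1, UPPER1) | cv2.inRange(hsv, LOWER2, UPPER2)
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)   # track the largest red blob
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("color filtering", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()
```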


2017 ◽  
Vol 27 (01) ◽  
pp. 1750006 ◽  
Author(s):  
Heewon Kim ◽  
Jiwon Seo ◽  
Bora Jeong ◽  
Chohong Min

We introduce a simple and efficient experimental setup for the Malkus–Lorenz waterwheel. Through a series of image processing techniques, our work is one of the few experiments that measure not only the angular velocity but also the mass distribution. The experiment observes qualitative changes in the waterwheel as the leakage rate changes while the other physical parameters are held fixed. We perform a bifurcation analysis of these qualitative changes, and the experimental phase portraits are validated against the bifurcation analysis.
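For context, the waterwheel dynamics are usually modeled by the standard amplitude equations (the generic textbook form, e.g. Strogatz's; quoted here as a sketch, not necessarily the authors' exact model), in which the leakage rate K is the parameter varied in the experiment:

```latex
\begin{aligned}
\dot{a}_1 &= \omega\, b_1 - K a_1, \\
\dot{b}_1 &= -\omega\, a_1 + q_1 - K b_1, \\
I\,\dot{\omega} &= -\nu\,\omega + \pi g r\, a_1,
\end{aligned}
```

where a_1, b_1 are the first Fourier modes of the mass distribution, omega the angular velocity, q_1 the inflow amplitude, nu the rotational damping, I the moment of inertia, g gravity, and r the wheel radius.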


2019 ◽  
Vol 16 (4(Suppl.)) ◽  
pp. 1022
Author(s):  
Mosa et al.

Researchers have used image processing and machine learning techniques, in addition to medical instruments such as the Placido disc, keratoscopy, and the Pentacam, to help diagnose a variety of diseases that affect the eye. Our paper aims to detect one disease of the cornea, Keratoconus, by using image processing techniques and pattern classification methods. The Pentacam is the device used to assess corneal health; it provides four maps that can distinguish changes on the surface of the cornea and can be used for Keratoconus detection. In this study, sixteen features were extracted from the four refractive maps, along with five readings from the Pentacam software. The classifiers used are the Support Vector Machine (SVM) and Decision Trees, which achieved classification accuracies of 90% and 87.5%, respectively, in detecting Keratoconus corneas. The features were extracted using Matlab (R2011 and R2017) and Orange Canvas (Pythonw).
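As a rough, hypothetical sketch of the classification step only (the Pentacam feature extraction is not shown), the snippet below compares an SVM and a decision tree on a matrix of the 21 per-eye values with scikit-learn. The file keratoconus_features.csv and its column layout are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# 16 map features + 5 Pentacam readings per eye, plus a label column (1 = Keratoconus, 0 = normal).
data = np.loadtxt("keratoconus_features.csv", delimiter=",")
X, y = data[:, :-1], data[:, -1]

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
tree = DecisionTreeClassifier(max_depth=5, random_state=0)

for name, clf in [("SVM", svm), ("Decision tree", tree)]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```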

