FGSC: Fuzzy Guided Scale Choice SSD Model for Edge AI Design on Real-Time Vehicle Detection and Class Counting

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7399
Author(s):  
Ming-Hwa Sheu ◽  
S M Salahuddin Morsalin ◽  
Jia-Xiang Zheng ◽  
Shih-Chang Hsia ◽  
Cheng-Jian Lin ◽  
...  

The aim of this paper is to detect vehicles and count the number of objects in each class from the input video. We propose a Fuzzy Guided Scale Choice (FGSC)-based SSD deep neural network architecture for vehicle detection and class counting with parameter optimization. The FGSC blocks are integrated into the convolutional layers of the model, where they emphasize essential features while ignoring less important ones that are not significant for the operation. We created passing detection lines and class-counting windows and connected them with the proposed FGSC-SSD deep neural network model. At the training stage, the FGSC blocks use the scale choice method to emphasize essential features and identify unnecessary ones, and eliminating the latter yields a significant speedup of the model. In addition, the FGSC blocks avoid many unusable parameters in the saturation interval, improving efficiency, while the Fuzzy Sigmoid Function (FSF) widens the activation interval through fuzzy logic. During operation, the FGSC-SSD model reduces the computational complexity of the convolutional layers and their parameters. As a result, the model was tested on edge artificial intelligence (AI) hardware and reached a real-time processing speed of 38.4 frames per second (FPS) with an accuracy rate of more than 94%. This work can therefore be considered an improvement to the traffic monitoring approach using edge AI applications.
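The abstract credits the Fuzzy Sigmoid Function (FSF) with widening the activation interval, but does not give its formula. The sketch below shows one hypothetical way a scale factor can delay sigmoid saturation; the `scale` parameter and the division are assumptions for illustration, not the authors' definition.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuzzy_sigmoid(x, scale=2.0):
    # Hypothetical fuzzy scaling: stretching the input axis by `scale`
    # widens the interval where the gradient is non-negligible before
    # the activation saturates.
    return sigmoid(x / scale)

# At x = 4 the plain sigmoid has nearly saturated, while the scaled
# variant is still inside its active interval.
print(round(sigmoid(4.0), 4), round(fuzzy_sigmoid(4.0), 4))
```

A wider active interval keeps more units away from the near-zero-gradient saturation region, which is consistent with the abstract's claim that FGSC blocks avoid unusable parameters in the saturation interval.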

2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive the environments they are navigating in. This perception can be realized by training a computing machine to classify objects in the environment. One well-known machine training approach is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a substantial cost in time and computational resources. Collecting large input datasets, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images per class were used as input data, of which 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
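The sequential (online) gradient descent mentioned above updates the weights after every sample rather than once per full pass, which is why redundant data costs little extra. A minimal sketch on a toy 1-D least-squares problem, not the paper's image-classification task:

```python
def sequential_gradient_descent(data, lr=0.1, epochs=200):
    # Sequential (online) gradient descent for a 1-D least-squares fit
    # y ~ w * x: the weight is updated after every sample rather than
    # once per full pass over the dataset.
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            grad = 2.0 * (w * x - y) * x  # d/dw of (w * x - y) ** 2
            w -= lr * grad
    return w

# Synthetic data generated with w = 3 (illustrative values).
data = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
w = sequential_gradient_descent(data)  # converges to ~3.0
```

With redundant samples, each duplicate still contributes a useful update immediately, whereas full-batch methods pay the cost of the redundancy before making any progress.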


Author(s):  
Keyvan Kasiri ◽  
Mohammad Javad Shafiee ◽  
Francis Li ◽  
Alexander Wong ◽  
Justin Eichel

With the progress in intelligent transportation systems in smart cities, vision-based vehicle detection is becoming an important issue in vision-based surveillance systems. With the advent of the big data era, deep learning methods have been increasingly employed in detection, classification, and recognition applications due to their accuracy; however, there are still major concerns regarding the deployment of such methods in embedded applications. This paper offers an efficient process leveraging the idea of evolutionary deep intelligence on a state-of-the-art deep neural network. Using this approach, the deep neural network is evolved towards a highly sparse set of synaptic weights and clusters. Experimental results for the task of vehicle detection demonstrate that the evolved deep neural network can achieve a substantial improvement in architectural efficiency, adapting to GPU-accelerated applications without significant sacrifices in detection accuracy. Architectural efficiency gains of ~4X-fold and ~2X-fold decreases are obtained in synaptic weights and clusters, respectively, while an accuracy of 92.8% (a drop of less than 4% compared to the original network model) is achieved. Detection results and network efficiency for the vehicular application are promising and open the door to a wider range of applications in deep learning.
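The evolutionary deep intelligence procedure itself is considerably more involved, but the sparsity it drives toward (a ~4X-fold reduction in synaptic weights, i.e. roughly 75% sparsity) can be illustrated with plain magnitude pruning. This stand-in is an assumption for illustration, not the paper's method:

```python
def prune_by_magnitude(weights, sparsity=0.75):
    # Zero out the smallest-magnitude fraction of weights, keeping only
    # the strongest synapses. A 0.75 sparsity level mirrors the ~4X-fold
    # weight reduction reported in the abstract.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[: int(len(weights) * sparsity)])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02, 0.3, -0.08]
pruned = prune_by_magnitude(w, sparsity=0.75)
# -> [0.9, 0.0, 0.0, 0.0, -0.7, 0.0, 0.0, 0.0]: 6 of 8 weights removed
```

Sparse weight sets like this are what make embedded and GPU-accelerated deployment cheaper, since most multiply-accumulate operations can be skipped.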


2020 ◽  
pp. 1811-1822
Author(s):  
Mustafa Najm ◽  
Yossra Hussein Ali

Vehicle detection (VD) plays a very essential role in Intelligent Transportation Systems (ITS) and has been intensively studied in the past years. The need for intelligent facilities has expanded because the total number of vehicles is increasing rapidly in urban zones. Traffic monitoring is an important element of the intelligent transportation system, involving the detection, classification, tracking, and counting of vehicles. One of the key advantages of traffic video detection is that it provides traffic supervisors with the means to decrease congestion and improve highway planning. Vehicle detection in videos combines real-time image processing with computerized pattern recognition in flexible stages. Real-time processing is critical to maintaining the proper functionality of automated or continuously working systems. VD in road traffic has numerous applications in the transportation engineering field. In this review, different automated VD systems have been surveyed, with a focus on systems where a rectilinear stationary camera is positioned above road intersections rather than mounted on the vehicle. Generally, three steps are used to acquire traffic condition information: background subtraction (BS), vehicle detection, and vehicle counting. First, we illustrate the concept of vehicle detection and discuss background subtraction for extracting only moving objects. Then a variety of algorithms and techniques developed to detect vehicles are discussed, along with their advantages and limitations. Finally, some limitations shared between the systems are demonstrated, such as the definition of the ROI, focusing on only one aspect of detection, and the variation of accuracy with video quality. Once vehicles can be detected and classified, it becomes possible to further improve traffic flow and even provide rich information that can be valuable for many future applications.
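The background subtraction step described above compares each frame against a model of the static scene and keeps only the pixels that changed. A minimal sketch with a fixed background model on tiny grayscale "frames" (real systems such as Gaussian-mixture subtractors maintain an adaptive statistical background):

```python
def background_subtract(frame, background, threshold=30):
    # Mark a pixel as foreground (1) when it differs from the background
    # model by more than `threshold` grey levels; otherwise background (0).
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[100, 100, 100] for _ in range(3)]             # static scene
frame = [[100, 100, 100], [100, 200, 100], [100, 100, 100]]  # bright moving blob
mask = background_subtract(frame, background)
# -> [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

The resulting foreground mask is what the subsequent detection and counting stages operate on; the review's accuracy-vs-video-quality caveat shows up here as the sensitivity of `threshold` to noise and lighting.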


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Kirill Khazukov ◽  
Vladimir Shepelev ◽  
Tatiana Karpeta ◽  
Salavat Shabiev ◽  
Ivan Slobodin ◽  
...  

Abstract This study deals with the problem of obtaining quality real-time data on road traffic parameters from static street video surveillance cameras. Existing road traffic monitoring solutions rely on traffic cameras located directly above the carriageways, which yield only fragmentary data on the speed and movement patterns of vehicles. The purpose of the study is to develop a system for the high-quality and complete collection of real-time data, such as traffic flow intensity, driving directions, and average vehicle speed. The data are collected within the entire functional area of intersections and adjacent road sections that fall within the surveillance camera's field of view. Our solution is based on the YOLOv3 neural network architecture and the open-source SORT tracker. To train the neural network, we annotated 6000 images and performed augmentation, which allowed us to form a dataset of 4.3 million vehicles. The baseline performance of YOLO was improved using an additional mask branch and by optimizing the shape of the anchors. To determine vehicle speed, we used a perspective transformation of coordinates from the original image to geographical coordinates. Testing of the system at night and in the daytime at six intersections showed an absolute vehicle-counting accuracy of no less than 92%. The error in determining vehicle speed by the projection method, taking camera calibration into account, did not exceed 1.5 km/h.
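The speed-estimation step above (a perspective transformation from image pixels to ground-plane coordinates, then distance over time) can be sketched as follows. The toy homography `H` here is a placeholder for the matrix a calibrated camera would provide:

```python
def apply_homography(H, u, v):
    # Project an image pixel (u, v) to ground-plane coordinates using a
    # 3x3 homography H (in practice obtained from camera calibration).
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

def speed_kmh(H, p0, p1, dt):
    # Speed from two detections of the same tracked vehicle dt seconds
    # apart, converted from m/s to km/h.
    x0, y0 = apply_homography(H, *p0)
    x1, y1 = apply_homography(H, *p1)
    dist_m = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return dist_m / dt * 3.6

# Toy homography: 0.05 m per pixel with no perspective distortion (an
# assumption for illustration; a real camera gives a full projective H).
H = [[0.05, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 1.0]]
v = speed_kmh(H, (100, 200), (120, 200), dt=0.2)  # 20 px in 0.2 s -> 18 km/h
```

With a genuine calibrated homography the bottom row of `H` is non-trivial, which is exactly what corrects for the foreshortening that makes distant vehicles cover fewer pixels per metre.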


Healthcare ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 234 ◽  
Author(s):  
Hyun Yoo ◽  
Soyoung Han ◽  
Kyungyong Chung

Recently, massive amounts of biometric big data have been collected by sensor-based IoT devices. The collected data are classified into different types of health big data using various techniques. Personalized analysis is the basis for judging the risk factors of an individual's cardiovascular disorders in real time. The objective of this paper is to provide a model for personalized heart-condition classification that combines a fast and effective preprocessing technique with a deep neural network in order to process biosensor input data accumulated in real time. The model learns the input data, develops an approximation function, and can help users recognize risk situations. For the analysis of the pulse frequency, a fast Fourier transform is applied during preprocessing. Data reduction is then performed using the frequency-by-frequency ratio data of the extracted power spectrum. To analyze the meaning of the preprocessed data, a neural network algorithm is applied; in particular, a deep neural network is used to analyze and evaluate the linear data. A deep neural network can stack multiple layers and establish an operational model of nodes using gradient descent. The completed model was trained by classifying previously collected ECG signals into normal, control, and noise groups. Thereafter, ECG signals input in real time through the trained deep neural network system were classified into normal, control, and noise. To evaluate the performance of the proposed model, this study used the data-operation cost-reduction ratio and the F-measure. With the use of the fast Fourier transform and the cumulative frequency percentage, the ECG data were reduced to 1/32 of their original size, and according to the F-measure analysis, the deep neural network model achieved 83.83% accuracy.
Given these results, the modified deep neural network technique can reduce the size of big data in terms of computing work, and it is an effective system for reducing operation time.
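The preprocessing pipeline above (power spectrum, then reduction via per-frequency power ratios) can be sketched as follows. A naive DFT stands in for the FFT to keep the sketch dependency-free, and the cutoff fraction is illustrative; the paper's exact frequency-ratio scheme is not spelled out in the abstract, only the overall 1:32 reduction:

```python
import cmath
import math

def power_spectrum(signal):
    # Naive O(n^2) DFT power spectrum; an FFT (e.g. numpy.fft.rfft)
    # would be the practical choice for real ECG data.
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(n // 2)]

def reduce_by_ratio(spectrum, keep_ratio=0.25):
    # Keep only the leading fraction of frequency bins and express each
    # as a ratio of total power ("frequency-by-frequency ratio data").
    k = max(1, int(len(spectrum) * keep_ratio))
    total = sum(spectrum) or 1.0
    return [p / total for p in spectrum[:k]]

# 64-sample toy "pulse": a single dominant 4-cycle component.
sig = [math.cos(2 * math.pi * 4 * t / 64) for t in range(64)]
spec = power_spectrum(sig)       # 32 frequency bins
reduced = reduce_by_ratio(spec)  # 8 ratio values; bin 4 carries ~all the power
```

The reduced ratio vector, rather than the raw waveform, is what the deep neural network classifies, which is where the reported computing-cost savings come from.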

