Autonomous driving has become a prevalent research topic in recent years, attracting attention from many universities and commercial companies. Because human drivers rely on visual information to discern road conditions and make driving decisions, autonomous driving calls for vision systems such as vehicle detection models. These vision models require a large amount of labeled data, while collecting and annotating real traffic data is time-consuming and costly. To tackle this issue, we present a novel vehicle detection framework based on parallel vision, which uses specially designed virtual data to help train the vehicle detection model. We also propose a method to construct large-scale artificial scenes and generate virtual data for vision-based autonomous driving schemes. Experimental results verify the effectiveness of the proposed framework, demonstrating that a combination of virtual and real data trains a better vehicle detection model than real data alone.
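The core idea of the abstract above can be illustrated with a minimal sketch: merge synthetically generated labeled samples with real ones before training a detector. This is our own illustration, not the paper's pipeline; the dataset contents, field names, and the `virtual_ratio` knob are assumptions.

```python
# Hedged sketch: combine real and virtual labeled samples into one
# training set. Sample dicts and field names are illustrative only.

def mix_datasets(real_samples, virtual_samples, virtual_ratio=1.0):
    """Return a combined training set of real plus virtual samples.

    virtual_ratio caps how many virtual samples are added per real
    sample, a simple knob for studying how much synthetic data helps.
    """
    cap = int(len(real_samples) * virtual_ratio)
    return real_samples + virtual_samples[:cap]

real = [{"image": f"real_{i}.png", "boxes": [(10, 10, 50, 40)]}
        for i in range(3)]
virtual = [{"image": f"sim_{i}.png", "boxes": [(12, 8, 48, 44)]}
           for i in range(5)]

train_set = mix_datasets(real, virtual, virtual_ratio=1.0)
print(len(train_set))  # → 6 (3 real + 3 virtual)
```

Varying `virtual_ratio` would let one reproduce the kind of comparison the abstract describes: real data alone versus real data augmented with virtual data.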
Existing target detection and recognition technology suffers from blurred features of moving vehicles, which leads to poor detection performance. A moving-vehicle detection and recognition technique based on artificial intelligence is therefore designed. A point operation is adopted to enhance the high-frequency information of the image and increase its contrast, and the tracking target is delineated in the video image. Motion-vector similarity is used to predict the moving target's region in the next frame. Texture features of the moving vehicle are extracted by artificial intelligence, the central moment is calculated from the gray-level histogram distribution curve, and an edge feature extraction algorithm is used to set the detection and recognition mode. Experimental results show that, under complex conditions, the proposed technique detects three more moving vehicles than the two comparison techniques, suggesting broad application prospects for moving-vehicle detection and recognition integrated with artificial intelligence.
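Two of the steps above can be sketched concretely: a point operation maps each pixel independently through a transfer function (a linear gain/bias stretch is the simplest case), and the central moment is computed from the gray-level histogram. This is a minimal illustration under our own assumptions, not the paper's exact operations; the gain and bias values are arbitrary.

```python
# Hedged sketch of the "point operation" contrast-enhancement step and
# the gray-histogram central-moment feature. Function names are ours.

def point_operation(image, gain=1.5, bias=-40):
    """Per-pixel linear point operation g = gain * f + bias on a
    grayscale image (list of rows of 0-255 intensities), clipped to
    the valid 8-bit range. gain > 1 stretches contrast."""
    return [[min(255, max(0, round(gain * px + bias))) for px in row]
            for row in image]

def gray_histogram(image, bins=256):
    """Gray-level histogram of the image."""
    hist = [0] * bins
    for row in image:
        for px in row:
            hist[px] += 1
    return hist

def central_moment(hist, order=2):
    """order-th central moment of the gray-level distribution; the
    second central moment is the variance of the gray levels."""
    total = sum(hist)
    mean = sum(level * n for level, n in enumerate(hist)) / total
    return sum(n * (level - mean) ** order
               for level, n in enumerate(hist)) / total

frame = [[100, 120, 130], [110, 140, 150], [90, 160, 170]]
enhanced = point_operation(frame)
print(central_moment(gray_histogram(enhanced), order=2))  # → 1500.0
```

After the linear stretch, the gray-level variance of this toy frame grows from about 666.7 to 1500.0, which is exactly the contrast increase the abstract attributes to the point operation.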
To ensure detection accuracy, this paper proposes an improved adaptive weighted (IAW) method that fuses image and lidar sensor data for vehicle object detection. First, the IAW method is introduced and the first simulation is conducted; the two sensors must be unified in time and space beforehand. Because the traditional adaptive weighted average (AWA) method amplifies noise during fusion, the data are instead filtered with the Kalman Filter (KF) algorithm. The proposed IAW method is compared with the AWA method and the distributed weighted fusion KF algorithm in a data-fusion simulation to verify its superiority. Second, a further simulation is conducted to verify the robustness and accuracy of the IAW algorithm. In two experimental scenarios, with sparse and dense vehicles respectively, vehicle detection based on image and lidar is completed; the detections are associated and merged through the IAW method, and the results show that IAW correctly associates and fuses the data of the two sensors. Finally, real-vehicle tests of target vehicle detection in different environments are carried out using the IAW method, the KF algorithm, and the distributed weighted fusion KF algorithm, respectively. The proposed method gives full play to the advantages of the two sensors and reduces missed detections of target objects, showing great potential for object-detection applications.
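The adaptive-weighted-average idea underlying both AWA and IAW can be sketched as inverse-variance weighting: each sensor's estimate is weighted inversely to its noise variance, so the more reliable sensor dominates the fused result. The paper's IAW additionally pre-filters the data with a Kalman Filter, which this minimal sketch omits; the sensor readings and variances below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of adaptive weighted fusion (not the paper's exact IAW
# formulation): inverse-variance weighting of two sensor estimates.

def adaptive_weighted_fusion(estimates, variances):
    """Fuse scalar estimates with inverse-variance weights.

    w_i = (1/var_i) / sum_j (1/var_j),  fused = sum_i w_i * x_i.
    The fused variance 1 / sum_j (1/var_j) is never larger than the
    smallest input variance, which is the appeal of the method.
    """
    inv = [1.0 / v for v in variances]
    norm = sum(inv)
    weights = [w / norm for w in inv]
    fused = sum(w * x for w, x in zip(weights, estimates))
    fused_var = 1.0 / norm
    return fused, fused_var

# Camera and lidar both measure the range (m) to the same target vehicle;
# the lidar is assumed far more precise than the camera.
camera_range, camera_var = 20.6, 0.9
lidar_range, lidar_var = 20.1, 0.1

fused, var = adaptive_weighted_fusion([camera_range, lidar_range],
                                      [camera_var, lidar_var])
print(fused, var)  # ≈ 20.15, 0.09: pulled toward the precise lidar
```

With these numbers the lidar receives 90% of the weight, and the fused variance (0.09) drops below even the lidar's own variance, illustrating why fusing the two sensors can outperform either one alone.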