Survey Analysis of Robust and Real-Time Multi-Lane and Single Lane Detection in Indian Highway Scenarios

2021, Vol 309, pp. 01117
Author(s): A. Sai Hanuman, G. Prasanna Kumar

Studies on lane detection, covering lane identification methods, system integration, and evaluation strategies, are all examined. The system integration approaches for building more robust detection systems are then evaluated and analyzed, taking into account the inherent limits of camera-based lane detection systems. Current deep learning approaches to lane detection are typically built on CNN semantic segmentation networks, in which the results of road segmentation and lane-marker segmentation are combined by a fusion method. Exploiting the large number of frames available from a continuous driving environment, we examine lane detection and propose a hybrid deep architecture that combines a convolutional neural network (CNN) with a recurrent neural network (RNN). Because cameras provide rich information at comparatively low equipment cost, a substantial number of existing results concentrate on vision-based lane recognition systems. Extensive tests on two large-scale datasets show that the proposed technique outperforms competing lane detection strategies, particularly in challenging settings. In particular, a CNN block extracts features from each frame, and the CNN outputs of several consecutive frames, carrying time-series information, are then passed to an RNN block for feature learning and lane prediction.
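As a rough illustration of the hybrid architecture described above (not the authors' implementation), the following PyTorch sketch encodes each frame with a small CNN and feeds the per-frame features of a short clip into an LSTM that predicts lane parameters for the latest frame; all layer sizes and the 8-value lane parameterisation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNRNNLaneNet(nn.Module):
    """Toy CNN+RNN lane predictor: a small CNN encodes each frame,
    an LSTM aggregates the per-frame codes over time (illustrative sizes)."""
    def __init__(self, feat_dim=128, hidden_dim=64, num_lane_params=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_lane_params)

    def forward(self, frames):          # frames: (B, T, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)        # hidden state per time step
        return self.head(out[:, -1])    # lane parameters for the last frame

# Usage: five consecutive 128x256 frames from a driving clip.
clip = torch.randn(2, 5, 3, 128, 256)
print(CNNRNNLaneNet()(clip).shape)      # torch.Size([2, 8])
```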

2021, Vol 40 (3), pp. 1-13
Author(s): Lumin Yang, Jiajie Zhuang, Hongbo Fu, Xiangzhi Wei, Kun Zhou, ...

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the sampled points along the input strokes and edges encoding the stroke structure information. To predict the per-node labels, our SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels, i.e., point-level, stroke-level, and sketch-level. SketchGNN significantly improves the accuracy of the state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
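As a minimal illustration of the graph construction described above, the following Python sketch resamples each stroke into a fixed number of points (the nodes) and connects consecutive points along a stroke (the edges); the resampling scheme and point count are assumptions, and the dynamically built edges SketchGNN also uses are omitted.

```python
import numpy as np

def sketch_to_graph(strokes, points_per_stroke=32):
    """Build a simple point graph from a vector sketch: nodes are points
    resampled along each stroke, edges connect consecutive points of a stroke.
    (Illustrative preprocessing only.)"""
    nodes, edges = [], []
    for stroke in strokes:                        # stroke: (N, 2) x,y points
        stroke = np.asarray(stroke, dtype=float)
        # uniform resampling by arc length
        seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])
        t = t / max(t[-1], 1e-8)
        ts = np.linspace(0, 1, points_per_stroke)
        pts = np.stack([np.interp(ts, t, stroke[:, d]) for d in (0, 1)], axis=1)
        base = len(nodes)
        nodes.extend(pts.tolist())
        edges.extend([(base + i, base + i + 1)
                      for i in range(points_per_stroke - 1)])
    return np.array(nodes), np.array(edges)

# Usage: two toy strokes.
nodes, edges = sketch_to_graph([[(0, 0), (1, 1), (2, 0)], [(0, 2), (2, 2)]])
print(nodes.shape, edges.shape)   # (64, 2) (62, 2)
```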


2021, Vol 13 (16), pp. 3065
Author(s): Libo Wang, Rui Li, Dongzhi Wang, Chenxi Duan, Teng Wang, ...

Semantic segmentation from very fine resolution (VFR) urban scene images plays a significant role in several application scenarios, including autonomous driving, land cover classification, and urban planning. However, the tremendous detail contained in VFR images, especially the considerable variations in scale and appearance of objects, severely limits the potential of existing deep learning approaches. Addressing such issues represents a promising research field in the remote sensing community, which paves the way for scene-level landscape pattern analysis and decision making. In this paper, we propose a Bilateral Awareness Network (BANet) which contains a dependency path and a texture path to fully capture the long-range relationships and fine-grained details in VFR images. Specifically, the dependency path is built on ResT, a novel Transformer backbone with memory-efficient multi-head self-attention, while the texture path is built on stacked convolution operations. In addition, using a linear attention mechanism, a feature aggregation module is designed to effectively fuse the dependency features and texture features. Extensive experiments conducted on three large-scale urban scene image segmentation datasets, i.e., the ISPRS Vaihingen dataset, the ISPRS Potsdam dataset, and the UAVid dataset, demonstrate the effectiveness of our BANet. In particular, a 64.6% mIoU is achieved on the UAVid dataset.
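A toy sketch of the bilateral idea, assuming stand-in modules: a coarse self-attention branch in place of the ResT dependency path, a stacked-convolution texture branch, and a simple 1x1-convolution fusion instead of the paper's linear-attention aggregation module. All names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class TwoPathSegNet(nn.Module):
    """Toy bilateral design: a coarse attention path for long-range context
    and a convolutional path for fine texture, fused before the classifier."""
    def __init__(self, num_classes=6, dim=32):
        super().__init__()
        self.texture = nn.Sequential(               # full-resolution conv path
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        self.embed = nn.Conv2d(3, dim, 8, stride=8)  # coarse tokens (1/8 res)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)
        self.head = nn.Conv2d(dim, num_classes, 1)

    def forward(self, x):
        tex = self.texture(x)                        # (B, dim, H, W)
        tok = self.embed(x)                          # (B, dim, H/8, W/8)
        b, c, h, w = tok.shape
        seq = tok.flatten(2).transpose(1, 2)         # (B, HW/64, dim)
        ctx, _ = self.attn(seq, seq, seq)            # long-range dependencies
        ctx = ctx.transpose(1, 2).reshape(b, c, h, w)
        ctx = nn.functional.interpolate(ctx, size=tex.shape[-2:],
                                        mode="bilinear", align_corners=False)
        return self.head(self.fuse(torch.cat([tex, ctx], dim=1)))

# Usage: a 3-channel 256x256 tile, 6 land-cover classes.
print(TwoPathSegNet()(torch.randn(1, 3, 256, 256)).shape)  # (1, 6, 256, 256)
```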


2019
Author(s): Elizabeth Behrman, Nam Nguyen, James Steck

Noise and decoherence are two major obstacles to the implementation of large-scale quantum computing. Because of the no-cloning theorem, which says we cannot make an exact copy of an arbitrary quantum state, simple redundancy will not work in a quantum context, and unwanted interactions with the environment can destroy coherence and thus the quantum nature of the computation. Because of the parallel and distributed nature of classical neural networks, they have long been successfully used to deal with incomplete or damaged data. In this work, we show that our model of a quantum neural network (QNN) is similarly robust to noise, and that, in addition, it is robust to decoherence. Moreover, robustness to noise and decoherence is not only maintained but improved as the size of the system is increased. Noise and decoherence may even be of advantage in training, as it helps correct for overfitting. We demonstrate the robustness using entanglement as a means for pattern storage in a qubit array. Our results provide evidence that machine learning approaches can obviate otherwise recalcitrant problems in quantum computing.


2019, Vol 8 (2S8), pp. 1967-1974

Road conditions today have improved drastically compared with the past decade. Most express highways are built from cement concrete and have wider lanes, so vehicle speeds rise and with them the risk of accidents. To reduce accidents, driver assistance systems are designed to detect the various lanes; the detected lane information is used to control the vehicle and to alert the driver. In this paper an entropy-based fusion approach is presented for detecting multiple lanes. A Deep Convolutional Neural Network (DCNN) optimized with the Earth Worm-Crow Search Algorithm (EW-CSA) is used to consolidate the outcomes. First, the deep learning lane-localization approach is trained using the EW-CSA optimization algorithm, which focuses on classifying every pixel accurately and requires post-processing operations to infer lane information. Correspondingly, a region-based segmentation approach is used for multi-lane detection. An entropy-based fusion model is adopted because it preserves the information in the image and reduces noise effects. The performance of the proposed model is analysed in terms of accuracy, sensitivity, and specificity, giving superior results of 0.991, 0.992, and 0.887, respectively.
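The abstract does not spell out the fusion rule, so the sketch below shows one plausible reading of "entropy-based fusion": weighting each of the two lane probability maps by the inverse of its per-pixel binary entropy, so the more confident source dominates. The function name and weighting scheme are assumptions, not the paper's formulation.

```python
import numpy as np

def entropy_weighted_fusion(prob_a, prob_b, eps=1e-8):
    """Fuse two per-pixel lane probability maps by weighting each source with
    the inverse of its binary entropy (lower entropy = more confident).
    Illustrative interpretation only; the paper's exact rule may differ."""
    def binary_entropy(p):
        p = np.clip(p, eps, 1 - eps)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    conf_a = 1.0 - binary_entropy(prob_a)      # confidence in [0, 1]
    conf_b = 1.0 - binary_entropy(prob_b)
    w_a = conf_a / (conf_a + conf_b + eps)
    return w_a * prob_a + (1.0 - w_a) * prob_b

# Usage: fuse a DCNN lane map with a region-segmentation lane map.
dcnn_map = np.random.rand(120, 160)
region_map = np.random.rand(120, 160)
fused = entropy_weighted_fusion(dcnn_map, region_map)
print(fused.shape, float(fused.min()) >= 0.0, float(fused.max()) <= 1.0)
```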


2021, Vol 13 (2), pp. 275
Author(s): Michael Meadows, Matthew Wilson

Given the high financial and institutional cost of collecting and processing accurate topography data, many large-scale flood hazard assessments continue to rely instead on freely available global Digital Elevation Models, despite the significant vertical biases known to affect them. To predict (and thereby reduce) these biases, we apply a fully-convolutional neural network (FCN), a form of artificial neural network originally developed for image segmentation which is capable of learning from multi-variate spatial patterns at different scales. We assess its potential by training such a model on a wide variety of remote-sensed input data (primarily multi-spectral imagery), using high-resolution, LiDAR-derived Digital Terrain Models published by the New Zealand government as the reference topography data. In parallel, two more widely used machine learning models are also trained, in order to provide benchmarks against which the novel FCN may be assessed. We find that the FCN outperforms the other models (reducing root mean square error in the testing dataset by 71%), likely due to its ability to learn from spatial patterns at multiple scales rather than on a pixel-by-pixel basis only. Significantly for flood hazard modelling applications, corrections were found to be especially effective along rivers and their floodplains. However, our results also suggest that models are likely to be biased towards the land cover and relief conditions most prevalent in their training data, with further work required to assess the importance of limiting training data inputs to those most representative of the intended application area(s).
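As a hedged sketch of the general setup (not the authors' network), the following PyTorch model is a small encoder-decoder FCN that regresses a per-pixel elevation correction from a stack of input bands; the band count, depth, and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DEMErrorFCN(nn.Module):
    """Small encoder-decoder FCN that regresses a per-pixel elevation
    correction from stacked input bands (e.g. DEM + multi-spectral imagery)."""
    def __init__(self, in_bands=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # capture coarser context
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, padding=1),        # predicted vertical bias (m)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training target would be reference DTM minus the global DEM (the bias).
tile = torch.randn(4, 7, 128, 128)                 # batch of input tiles
print(DEMErrorFCN()(tile).shape)                   # torch.Size([4, 1, 128, 128])
```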


2019, Vol 8 (12), pp. 582
Author(s): Gang Zhang, Tao Lei, Yi Cui, Ping Jiang

Semantic segmentation of high-resolution aerial images plays a significant role in many remote sensing applications. Although the Deep Convolutional Neural Network (DCNN) has shown great performance on this task, it still faces two challenges: intra-class heterogeneity and inter-class homogeneity. To overcome these two problems, a novel dual-path DCNN, which contains a spatial path and an edge path, is proposed for high-resolution aerial image segmentation. The spatial path, which combines multi-level and global context features to encode local and global information, is used to address the intra-class heterogeneity challenge. For the inter-class homogeneity problem, a Holistically-nested Edge Detection (HED)-like edge path is employed to detect semantic boundaries that guide feature learning. Furthermore, we improve the computational efficiency of the network by employing a MobileNetV2 backbone, enhanced with two modifications: (1) replacing the standard convolution in the last four Bottleneck Residual Blocks (BRBs) with atrous convolution; and (2) removing the convolution stride of 2 in the first layer of BRBs 4 and 6. Experimental results on the ISPRS Vaihingen and Potsdam 2D labeling datasets show that the proposed DCNN achieves real-time inference speed on a single GPU card with better performance than the state-of-the-art baselines.
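The two backbone modifications can be illustrated on a generic MobileNetV2-style inverted-residual block, as in the sketch below; this is a simplified stand-in (the residual shortcut is omitted), with dilation>1 standing for the atrous convolution and stride=1 replacing the original stride of 2.

```python
import torch
import torch.nn as nn

def inverted_residual(in_ch, out_ch, stride=1, expand=6, dilation=1):
    """Generic MobileNetV2-style bottleneck block (expand -> depthwise ->
    project). The abstract's two changes map to dilation>1 in the depthwise
    conv (atrous convolution) and stride=1 where the original used stride=2."""
    hidden = in_ch * expand
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden, 1, bias=False),             # expand
        nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
        nn.Conv2d(hidden, hidden, 3, stride=stride,          # depthwise
                  padding=dilation, dilation=dilation,
                  groups=hidden, bias=False),
        nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
        nn.Conv2d(hidden, out_ch, 1, bias=False),            # project
        nn.BatchNorm2d(out_ch),
    )

# Original downsampling block vs. the modified block described above:
standard_block = inverted_residual(32, 64, stride=2, dilation=1)
modified_block = inverted_residual(32, 64, stride=1, dilation=2)  # atrous, no stride
x = torch.randn(1, 32, 64, 64)
print(standard_block(x).shape, modified_block(x).shape)
# torch.Size([1, 64, 32, 32]) torch.Size([1, 64, 64, 64])
```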


Every year in India, many car accidents occur and affect numerous lives. Most road accidents are caused by driver inattention and fatigue: drivers must attend to many circumstances at once, including vehicle speed and path, the distance between vehicles, passing vehicles, and potentially risky or unusual events ahead. Accidents also occur because drivers use cell phones while driving, or drink and drive. For this reason, most automobile companies try to provide customers with the best Advanced Driver Assistance Systems (ADAS) to avoid accidents. Lane detection is one of the functions provided in ADAS: if the vehicle keeps to its lane, there is less chance of an accident, and the information obtained from the lane is used to alert the driver. Consequently, many researchers are attracted to this field. However, because of varying road conditions it is very difficult to detect the lane. Computer vision and machine learning approaches are presented in most articles. In this article we present a deep learning scheme for lane identification. The work has two phases: in the first phase the image is transformed, and in the second phase the lane is detected. First, the proposed model takes numerous lane pictures and converts each picture into its corresponding bird's-eye-view image using an inverse perspective mapping transformation. A Deep Convolutional Neural Network (DCNN) classifier then identifies the lane from the bird's-eye-view image. The Earth Worm-Crow Search Algorithm (EW-CSA) is designed to supply the DCNN with optimal weights: the DCNN classifier is trained on the bird's-eye-view images and its weights are selected through the newly developed EW-CSA algorithm. All the algorithms are implemented in MATLAB. The simulation results show exact detection of the road lane, and the accuracy, sensitivity, and specificity are 0.99512, 0.9925, and 0.995, respectively.
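The bird's-eye-view step is a standard perspective warp, which the OpenCV sketch below illustrates; the four source points are placeholders that would normally come from camera calibration, and the function name and output size are assumptions rather than the paper's code.

```python
import cv2
import numpy as np

def birds_eye_view(frame, src_pts, out_size=(400, 600)):
    """Warp a road image to a bird's-eye view with a perspective transform.
    src_pts are four road-plane corners in the image (a trapezoid around the
    ego lane); here they are illustrative, in practice they come from
    camera calibration."""
    w, h = out_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, (w, h))

# Usage with a synthetic 720x1280 frame; the trapezoid covers the ego lane.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
src = [(250, 700), (1030, 700), (740, 450), (540, 450)]  # BL, BR, TR, TL
top_view = birds_eye_view(frame, src)
print(top_view.shape)   # (600, 400, 3)
```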


Nowadays, multi-lane recognition techniques that use ridge features and inverse perspective mapping (IPM) are generally used to distinguish lanes, since IPM can remove the perspective distortion on lines that are parallel in the real world. Lane detection is one approach used in designing ADAS: if the vehicle keeps to the lane, there is less chance of an accident. The detected lane information is used for controlling the vehicle and alerting the driver, so many researchers are attracted to this field; however, because of varying road conditions, detecting the lane is difficult. Computer vision and machine learning approaches are presented in most articles. This paper surveys different methods of road-image segmentation for multi-lane detection. A Lane Departure Warning (LDW) system can help reduce vehicle crashes caused by careless or drowsy driving, and there has been much research on vision-based lane detection for LDW systems. In these lane detection methods, colour or edge information is used as the lane feature. Feature-based methods are usually applied to localize the lanes in road images by extracting low-level features, whereas model-based methods use geometrical elements such as parabolic curves, hyperbolas, and straight lines to describe the lanes. Feature-based methods require a dataset containing several thousand images of roads with well-painted, prominent lane markings that are subsequently converted to features, and they may also suffer from noise.
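As a small worked example of the model-based family mentioned above, the sketch below fits a parabolic lane model x = a*y^2 + b*y + c to candidate lane pixels by least squares; the data and function name are illustrative.

```python
import numpy as np

def fit_parabolic_lane(lane_pixels):
    """Model-based lane fit: least-squares parabola x = a*y^2 + b*y + c through
    candidate lane-marking pixels (e.g. from a ridge/edge detector applied to
    the bird's-eye view). Illustrative of the model-based methods surveyed."""
    ys, xs = lane_pixels[:, 0].astype(float), lane_pixels[:, 1].astype(float)
    a, b, c = np.polyfit(ys, xs, deg=2)
    return a, b, c

# Usage: noisy points sampled from a gently curving right lane boundary.
ys = np.linspace(0, 599, 60)
xs = 0.0004 * ys**2 - 0.1 * ys + 320 + np.random.normal(0, 2, ys.size)
coeffs = fit_parabolic_lane(np.column_stack([ys, xs]))
print(np.round(coeffs, 4))   # roughly [0.0004, -0.1, 320]
```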


Author(s): S. Jiang, W. Yao, M. Heurich

Abstract. The assessment of forests' health conditions is an important task for biodiversity, forest management, global environmental monitoring, and carbon dynamics. Several research works have evaluated the condition of a forest based on remote sensing technology. Among existing technologies, employing traditional machine learning approaches to detect dead wood in aerial colour-infrared (CIR) imagery is one of the major trends because of its spectral capability to explicitly capture vegetation health conditions. However, complicated scenes with background noise limit the accuracy of existing approaches, as those detectors normally rely on hand-crafted features. Deep neural networks are now widely used in computer vision tasks and show that features learnt by the model itself perform much better than hand-crafted features. Semantic image segmentation is a pixel-level classification task and is best suited to dead-wood detection in very high resolution (VHR) imagery, because it enables the model to identify and classify very dense, detailed components of tree objects. In this paper, an optimized FCN-DenseNet is proposed to detect dead wood (i.e. standing dead trees and fallen trees) in a complicated temperate forest environment. Since dead trees appear at greatly differing scales and sizes, several pooling procedures are employed to extract multi-scale features, and dense connections are employed to strengthen the links among the scales. The proposed deep neural network is evaluated on VHR CIR imagery (GSD 10 cm) captured in a natural temperate forest in the Bavarian Forest National Park, Germany, which has undergone an on-site bark beetle attack. The results show that the boundaries of dead trees can be accurately segmented and the classification is performed with high accuracy, even though only one labelled image of moderate size is used to train the deep neural network.
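As a minimal sketch of the dense connectivity that FCN-DenseNet stacks at several pooling scales (not the authors' optimized network), the following PyTorch block concatenates every layer's output into the input of all later layers; the growth rate, layer count, and four-band input are illustrative.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: each 3x3 conv sees the concatenation of all
    previous feature maps, the connectivity pattern FCN-DenseNet repeats
    at multiple pooling scales (layer count and growth rate illustrative)."""
    def __init__(self, in_ch, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, padding=1, bias=False)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

# Usage: a 4-band CIR-style tile; output keeps the input plus 4*16 new channels.
block = DenseBlock(in_ch=4)
print(block(torch.randn(1, 4, 64, 64)).shape)   # torch.Size([1, 68, 64, 64])
```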

