Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization

Sensors ◽
2018 ◽
Vol 18 (7) ◽
pp. 2185 ◽
Author(s):  
Yu Zhang ◽  
Huiyan Chen ◽  
Steven L. Waslander ◽  
Tian Yang ◽  
Sheng Zhang ◽  
...  

In this paper, we present a complete, flexible, and safe convex-optimization-based method for solving speed planning problems over a fixed path for autonomous driving in both static and dynamic environments. Our contributions are fivefold. First, we summarize the most common constraints that arise in various autonomous driving scenarios, both as requirements for speed planner development and as rough metrics for assessing the capabilities of existing speed planners. Second, we introduce a speed planning mathematical model that is more general, flexible, and complete than state-of-the-art planners and that incorporates all of the summarized constraints; it addresses the limitations of existing methods and provides smooth, safety-guaranteed, dynamically feasible, and time-efficient speed profiles. Third, we emphasize comfort while guaranteeing fundamental motion safety and preserving vehicle mobility by treating the comfort box constraint as a semi-hard constraint, implemented via slack variables and penalty functions, which distinguishes our method from existing ones. Fourth, we demonstrate that the problem remains convex with the added constraints, so global optimality of solutions is guaranteed. Fifth, we showcase how our formulation can be used in various autonomous driving scenarios through several challenging case studies in both static and dynamic environments. A range of numerical experiments and challenging, realistic speed planning case studies show that the proposed method outperforms existing speed planners for autonomous driving in terms of constraint types covered, optimality, safety, mobility, and flexibility.
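The semi-hard comfort constraint is the part most easily illustrated in code. Below is a minimal sketch (not the paper's implementation) using cvxpy, assuming the common convex parametrization b(s) = v(s)^2 over a discretized path; the numeric bounds, penalty weight, and simplified speed-maximizing objective are all assumptions for illustration.

```python
# Sketch: comfort box as a semi-hard constraint via slack + penalty.
# Assumptions (not from the paper): N path samples with spacing ds,
# decision variable b[i] ~ v(s_i)^2, which keeps the problem convex.
import cvxpy as cp
import numpy as np

N, ds = 100, 0.5                         # path discretization (made-up values)
b = cp.Variable(N)                       # squared speed along the path
slack = cp.Variable(N - 1, nonneg=True)  # comfort-violation slack

a = cp.diff(b) / (2 * ds)                # longitudinal acceleration: a = b'(s)/2

a_comfort = 1.5                          # comfort box bound (m/s^2), assumed
a_max = 3.0                              # hard dynamic limit (m/s^2), assumed
rho = 50.0                               # penalty weight on comfort violations

objective = cp.Minimize(-cp.sum(b) + rho * cp.sum(slack))  # fast but comfortable
constraints = [
    b >= 0,
    b[0] == 0,                           # example boundary condition: start at rest
    cp.abs(a) <= a_comfort + slack,      # semi-hard comfort box via slack
    cp.abs(a) <= a_max,                  # hard safety/dynamics limit
]
cp.Problem(objective, constraints).solve()
print(np.sqrt(np.maximum(b.value, 0)))   # recovered speed profile v(s)
```

Because the slack enters the constraints affinely and the penalty is linear, the added semi-hard constraint preserves convexity: the solver violates comfort only when the penalty is worth paying, while the hard dynamics limit is never relaxed.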


Author(s):  
Erik Paul ◽  
Holger Herzog ◽  
Sören Jansen ◽  
Christian Hobert ◽  
Eckhard Langer

Abstract This paper presents an effective device-level failure analysis (FA) method that uses a high-resolution low-kV scanning electron microscope (SEM) in combination with an integrated state-of-the-art nanomanipulator to locate and characterize single defects in failing CMOS devices. The presented case studies utilize several FA techniques in combination with SEM-based nanoprobing for nanometer-node technologies and demonstrate how these methods are used to investigate the root cause of IC device failures. The methodology represents a highly efficient physical failure analysis flow for 28 nm and larger technology nodes.


2021 ◽  
Vol 11 (12) ◽  
pp. 5656
Author(s):  
Yufan Zeng ◽  
Jiashan Tang

Graph neural networks (GNNs) have been very successful at solving fraud detection tasks. GNN-based detection algorithms learn node embeddings by aggregating neighboring information. Recently, the CAmouflage-REsistant GNN (CARE-GNN) was proposed; it achieves state-of-the-art results on fraud detection tasks by dealing with relation camouflage and feature camouflage. However, stacking multiple layers in the traditional hop-defined way leads to a rapid performance drop, and because a single-layer CARE-GNN cannot extract enough information to correct potential mistakes, performance relies heavily on that one layer. To avoid single-layer learning, we consider a multi-layer architecture that forms a complementary relationship with the residual structure, and we propose an improved algorithm named Residual Layered CARE-GNN (RLC-GNN), which learns layer by layer progressively and corrects mistakes continuously. We choose three metrics (recall, AUC, and F1-score) to evaluate the proposed algorithm. In numerical experiments, we obtain improvements of up to 5.66%, 7.72%, and 9.09% in recall, AUC, and F1-score, respectively, on the Yelp dataset, and up to 3.66%, 4.27%, and 3.25% in the same metrics on the Amazon dataset.
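A minimal PyTorch sketch of the residual layered idea follows; it is not the authors' code, and the plain linear aggregation stands in for CARE-GNN's relation-aware, similarity-filtered aggregator. The point it illustrates is that each layer adds a residual correction to the previous embedding rather than replacing it, so later layers can fix earlier mistakes.

```python
# Residual layered refinement of node embeddings (illustrative stand-in
# for RLC-GNN's structure; the aggregator here is a placeholder).
import torch
import torch.nn as nn

class ResidualLayeredGNN(nn.Module):
    def __init__(self, dim: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            # neighbor aggregation followed by a residual correction step,
            # so each layer refines rather than overwrites the embedding
            h = h + torch.relu(layer(adj @ h))
        return h

h = torch.randn(8, 16)   # 8 nodes, 16-dim features (toy data)
adj = torch.eye(8)       # trivial adjacency, for illustration only
out = ResidualLayeredGNN(16, 4)(h, adj)
```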


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional; they are known as large-scale global optimization (LSGO) problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework that decomposes large-scale problems into smaller, easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the group size and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC'13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics; the experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
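The following toy sketch illustrates the cooperative coevolution loop with a dynamically changing subcomponent size. It is a conceptual illustration only: random perturbation stands in for the SHADE optimizer, the objective is a separable test function, and the size schedule is a hypothetical choice, not the one from the paper.

```python
# Cooperative coevolution with a dynamic subcomponent size (toy sketch).
import numpy as np

def sphere(x):                       # separable test objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
D = 100
x = rng.uniform(-5, 5, D)            # context vector: current best solution

for cycle in range(60):
    # hypothetical schedule: shrink groups as the run progresses
    group_size = max(2, D // (2 ** (cycle // 20)))
    for start in range(0, D, group_size):
        idx = slice(start, start + group_size)
        trial = x.copy()
        # stand-in for the SHADE subcomponent optimizer: perturb one group
        trial[idx] += rng.normal(0, 0.5, trial[idx].shape)
        if sphere(trial) < sphere(x):
            x = trial                # accept improvement into the context vector

print(sphere(x))
```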


Author(s):  
Bingqian Lu ◽  
Jianyi Yang ◽  
Weiwen Jiang ◽  
Yiyu Shi ◽  
Shaolei Ren

Convolutional neural networks (CNNs) are used in numerous real-world applications such as vision-based autonomous driving and video content analysis. To run CNN inference on various target devices, hardware-aware neural architecture search (NAS) is crucial. A key requirement of efficient hardware-aware NAS is the fast evaluation of inference latencies in order to rank different architectures. While building a latency predictor for each target device is the common practice in the state of the art, it is a very time-consuming process that lacks scalability in the presence of extremely diverse devices. In this work, we address the scalability challenge by exploiting latency monotonicity: the architecture latency rankings on different devices are often correlated. When strong latency monotonicity exists, we can reuse architectures searched for one proxy device on new target devices without losing optimality. In the absence of strong latency monotonicity, we propose an efficient proxy adaptation technique to significantly boost it. Finally, we validate our approach and conduct experiments with devices of different platforms on multiple mainstream search spaces, including MobileNet-V2, MobileNet-V3, NAS-Bench-201, ProxylessNAS, and FBNet. Our results highlight that, by using just one proxy device, we can find almost the same Pareto-optimal architectures as existing per-device NAS while avoiding the prohibitive cost of building a latency predictor for each device.
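Latency monotonicity is naturally quantified by rank correlation between per-architecture latencies on two devices. The sketch below shows that check with Spearman's rank correlation coefficient; the latency numbers and the 0.9 decision threshold are made up for illustration, not taken from the paper.

```python
# Checking latency monotonicity between a proxy and a target device:
# a high Spearman rank correlation means architecture rankings transfer,
# so architectures searched on the proxy can be reused on the target.
from scipy.stats import spearmanr

proxy_latency  = [12.1, 15.4, 9.8, 20.3, 11.0]   # ms per architecture (made up)
target_latency = [18.0, 22.5, 14.2, 30.1, 16.7]  # ms on the target (made up)

rho, _ = spearmanr(proxy_latency, target_latency)
if rho >= 0.9:  # threshold is an assumption, not from the paper
    print(f"strong monotonicity (SRCC={rho:.2f}): reuse proxy architectures")
else:
    print(f"weak monotonicity (SRCC={rho:.2f}): adapt the proxy first")
```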


2019 ◽  
Author(s):  
Mehrdad Shoeiby ◽  
Mohammad Ali Armin ◽  
Sadegh Aliakbarian ◽  
Saeed Anwar ◽  
Lars petersson

Advances in the design of multi-spectral cameras have led to great interest in a wide range of applications, from astronomy to autonomous driving. However, such cameras inherently suffer from a trade-off between spatial and spectral resolution. In this paper, we propose to address this limitation by introducing a novel method to carry out super-resolution on raw mosaic images, multi-spectral or RGB Bayer, captured by modern real-time single-shot mosaic sensors. To this end, we design a deep super-resolution architecture that benefits from a sequential feature pyramid along the depth of the network. This, in fact, is achieved by utilizing a convolutional LSTM (ConvLSTM) to learn the inter-dependencies between features at different receptive fields. Additionally, by investigating the effect of different attention mechanisms in our framework, we show that a ConvLSTM-inspired module is able to provide superior attention in our context. Our extensive experiments and analyses evidence that our approach yields significant super-resolution quality, outperforming current state-of-the-art mosaic super-resolution methods on both Bayer and multi-spectral images. Additionally, to the best of our knowledge, our method is the first specialized method to super-resolve mosaic images, whether multi-spectral or Bayer.
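For readers unfamiliar with the ConvLSTM building block used to link features across receptive fields, here is a minimal cell sketch following the standard ConvLSTM gating; it is an illustration of the mechanism, not the authors' network or its dimensions.

```python
# Minimal ConvLSTM cell: LSTM gating where the affine maps are convolutions,
# so the recurrent state preserves spatial structure across steps.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        # one convolution produces all four gates at once
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

x = torch.randn(1, 16, 32, 32)           # feature map at one pyramid level
h = c = torch.zeros(1, 32, 32, 32)       # initial hidden and cell states
h, c = ConvLSTMCell(16, 32)(x, (h, c))   # feed features at successive depths
```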


2018 ◽  
Vol 37 (13-14) ◽  
pp. 1632-1672 ◽  
Author(s):  
Sanjiban Choudhury ◽  
Mohak Bhardwaj ◽  
Sankalp Arora ◽  
Ashish Kapoor ◽  
Gireeja Ranade ◽  
...  

Robot planning is the process of selecting a sequence of actions that optimize for a task-specific objective. For instance, the objective for a navigation task would be to find collision-free paths, whereas the objective for an exploration task would be to map unknown areas. The optimal solutions to such tasks are heavily influenced by the implicit structure in the environment, i.e., the configuration of objects in the world. State-of-the-art planning approaches, however, do not exploit this structure, thereby expending valuable effort searching the action space instead of focusing on potentially good actions. In this paper, we address the problem of enabling planners to adapt their search strategies by inferring such good actions in an efficient manner using only the information uncovered by the search up until that time. We formulate this as a problem of sequential decision making under uncertainty where at a given iteration a planning policy must map the state of the search to a planning action. Unfortunately, the training process for such partial-information-based policies is slow to converge and susceptible to poor local minima. Our key insight is that if we could fully observe the underlying world map, we would easily be able to disambiguate between good and bad actions. We hence present a novel data-driven imitation learning framework to efficiently train planning policies by imitating a clairvoyant oracle: an oracle that at train time has full knowledge about the world map and can compute optimal decisions. We leverage the fact that for planning problems, such oracles can be efficiently computed and derive performance guarantees for the learnt policy. We examine two important domains that rely on partial-information-based policies: informative path planning and search-based motion planning. We validate the approach on a spectrum of environments for both problem domains, including experiments on a real UAV, and show that the learnt policy consistently outperforms state-of-the-art algorithms. Our framework is able to train policies that achieve up to [Formula: see text] more reward than state-of-the-art information-gathering heuristics and a [Formula: see text] speedup as compared with A* on search-based planning problems. Our approach paves the way forward for applying data-driven techniques to other such problem domains under the umbrella of robot planning.
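The training scheme can be pictured as a DAgger-style loop: roll out the current learner, but label every visited state with the action a fully informed oracle would take, then retrain on the aggregated data. The toy below sketches that loop under heavy simplification; the one-dimensional world, nearest-neighbor "policy," and oracle are hypothetical stand-ins for the paper's planning policies and clairvoyant oracle.

```python
# Toy imitation of a clairvoyant oracle (DAgger-style data aggregation).
import random

GOAL = 17                                 # "world map": known only to the oracle

def oracle(state):                        # full-information expert action
    return 1 if state < GOAL else -1

def learner(state, data):                 # nearest-neighbor policy over the data
    if not data:
        return random.choice([-1, 1])
    s, a = min(data, key=lambda sa: abs(sa[0] - state))
    return a

data = []                                 # aggregated (state, oracle_action) pairs
for episode in range(20):
    state = random.randint(0, 30)
    for step in range(15):
        data.append((state, oracle(state)))  # oracle labels the visited state
        state += learner(state, data)        # but the learner chooses the move
```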

