Intermittent Information-Driven Multi-Agent Area-Restricted Search

Entropy ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. 635
Author(s):  
Branko Ristic ◽  
Alex Skvortsov

The problem is a two-dimensional area-restricted search for a target using a coordinated team of autonomous mobile sensing platforms (agents). Sensing is characterised by a range-dependent probability of detection, with a non-zero probability of false alarms. The context is underwater surveillance using a swarm of amphibious drones equipped with active sonars. The paper develops an intermittent information-driven search strategy, which alternates between two phases: a fast, non-receptive displacement phase (called the ballistic phase) and a slow displacement and sensing phase (called the diffusive phase). The proposed multi-agent search strategy is carried out in a decentralised manner, meaning that all computations (estimation and motion control) are done locally. Coordination of agents is achieved by exchanging data with neighbours only, in a manner which does not require global knowledge of the communication network topology.
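A minimal sketch of the intermittent ballistic/diffusive scheduling idea, assuming hypothetical phase durations and placeholder sensing and fusion steps; the paper's actual information-driven switching criteria and decentralised estimator are not reproduced here.

```python
import random

# Hypothetical phase durations; the paper derives switching from
# information-driven criteria, which are not reproduced here.
BALLISTIC_STEPS = 20   # fast, non-receptive displacement
DIFFUSIVE_STEPS = 60   # slow displacement with sensing

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.belief = {}          # placeholder for a local target-occupancy belief
        self.phase = "ballistic"
        self.steps_left = BALLISTIC_STEPS

    def step(self, neighbours):
        if self.phase == "ballistic":
            # fast straight-line displacement, sensor switched off
            self.x += 1.0
        else:
            # slow displacement plus sensing and local belief update
            self.x += 0.1 * random.uniform(-1, 1)
            self.y += 0.1 * random.uniform(-1, 1)
            self.sense_and_update()
            # decentralised coordination: fuse beliefs with neighbours only
            for n in neighbours:
                self.fuse(n.belief)
        self.steps_left -= 1
        if self.steps_left == 0:
            self.switch_phase()

    def switch_phase(self):
        if self.phase == "ballistic":
            self.phase, self.steps_left = "diffusive", DIFFUSIVE_STEPS
        else:
            self.phase, self.steps_left = "ballistic", BALLISTIC_STEPS

    def sense_and_update(self):
        pass  # range-dependent detection with false alarms would go here

    def fuse(self, neighbour_belief):
        pass  # local belief fusion; no global network knowledge required
```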

Author(s):  
Evan S. Bentley ◽  
Richard L. Thompson ◽  
Barry R. Bowers ◽  
Justin G. Gibbs ◽  
Steven E. Nelson

Abstract Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings with comparisons made to the current Warning Decision Training Division (WDTD) guidance.

Combining low-level rotational velocity and the significant tornado parameter (STP), as used in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s⁻¹), in a near-storm environment which is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced through limiting warnings with weak (<30 kt), broad (>1 n mi) circulations in a poor (STP = 0) environment, careful elimination of velocity data artifacts like sidelobe contamination, and through greater scrutiny of human-based tornado reports in otherwise questionable scenarios.
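A minimal sketch of the threshold criterion described above and the resulting verification metrics, assuming hypothetical field names (vrot_kt, stp, tornado_observed); the study's actual verification procedure is more involved.

```python
def warn(vrot_kt: float, stp: float) -> bool:
    """Hypothetical warning criterion: low-level rotational velocity above
    30 kt in an environment that is not prohibitive for tornadoes (STP > 0)."""
    return vrot_kt > 30.0 and stp > 0.0

def pod_far(events):
    """events: iterable of (vrot_kt, stp, tornado_observed) tuples."""
    hits = misses = false_alarms = 0
    for vrot_kt, stp, observed in events:
        warned = warn(vrot_kt, stp)
        if warned and observed:
            hits += 1
        elif warned and not observed:
            false_alarms += 1
        elif observed:
            misses += 1
    pod = hits / (hits + misses) if hits + misses else 0.0
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else 0.0
    return pod, far
```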


2018 ◽  
Vol 33 (6) ◽  
pp. 1501-1511 ◽  
Author(s):  
Harold E. Brooks ◽  
James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
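A minimal sketch of the two lead-time metrics used above, assuming a hypothetical list of (warning_issue_time, tornado_start_time) pairs in minutes, with None marking tornadoes that were never warned; this is not the authors' verification code.

```python
def lead_time_metrics(events):
    """events: list of (warning_issue_time, tornado_start_time) in minutes;
    warning_issue_time is None if the tornado was never warned."""
    lead_times = []
    warned_in_advance = 0
    for warning_time, tornado_time in events:
        if warning_time is not None and warning_time <= tornado_time:
            warned_in_advance += 1
            lead_times.append(tornado_time - warning_time)
    fraction_warned = warned_in_advance / len(events) if events else 0.0
    # mean lead time is computed only over tornadoes warned in advance,
    # so it remains comparable across changes in warning-issuance policy
    mean_lead = sum(lead_times) / len(lead_times) if lead_times else 0.0
    return mean_lead, fraction_warned
```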


2017 ◽  
Vol 14 ◽  
pp. 187-194 ◽  
Author(s):  
Stefano Federico ◽  
Marco Petracca ◽  
Giulia Panegrossi ◽  
Claudio Transerici ◽  
Stefano Dietrich

Abstract. This study investigates the impact of the assimilation of total lightning data on the precipitation forecast of a numerical weather prediction (NWP) model. The impact of the lightning data assimilation, which uses water vapour substitution, is investigated at different forecast time ranges, namely 3, 6, 12, and 24 h, to determine how long and to what extent the assimilation affects the precipitation forecast of long-lasting rainfall events (> 24 h). The methodology developed in a previous study is slightly modified here and is applied to twenty case studies that occurred over Italy, using a mesoscale model run at convection-permitting horizontal resolution (4 km). The performance is quantified by dichotomous statistical scores computed using a dense rain-gauge network over Italy. Results show the important impact of the lightning assimilation on the precipitation forecast, especially for the 3 and 6 h forecasts. The probability of detection (POD), for example, increases by 10 % for the 3 h forecast when lightning data are assimilated, compared to the simulation without lightning assimilation, for all precipitation thresholds considered. The Equitable Threat Score (ETS) is also improved by the lightning assimilation, especially for thresholds below 40 mm day⁻¹. Results show that the forecast time range is very important because the performance decreases steadily and substantially with the forecast time. The POD, for example, is improved by only 1–2 % for the 24 h forecast using lightning data assimilation, compared to 10 % for the 3 h forecast. The study also highlights the impact of false alarms on the model performance.
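For reference, a minimal sketch of the dichotomous scores mentioned above (POD and ETS), computed from a 2×2 contingency table of forecast versus rain-gauge exceedances of a precipitation threshold; variable names are illustrative.

```python
def dichotomous_scores(hits, misses, false_alarms, correct_negatives):
    """Standard POD and Equitable Threat Score from a 2x2 contingency table."""
    total = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses) if hits + misses else 0.0
    # hits expected by chance, used by the Equitable Threat Score
    hits_random = (hits + misses) * (hits + false_alarms) / total if total else 0.0
    denom = hits + misses + false_alarms - hits_random
    ets = (hits - hits_random) / denom if denom else 0.0
    return pod, ets
```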


Author(s):  
V. M. Artemiev ◽  
S. M. Kostromitsky ◽  
A. O. Naumov

To increase the efficiency of detecting moving objects in radiolocation, additional features associated with the characteristics of trajectories are used. The authors assume that trajectories are correlated, which allows extrapolation of the coordinate values by taking their increments over the scanning period into account. The detection procedure consists of two stages. At the first stage, detection is carried out by the classical threshold method with a low threshold level, which provides a high probability of detection at the cost of a high probability of false alarms. This creates uncertainty in selecting the object trajectory from among the false trajectories. Because the coordinates of the false trajectories are statistically independent, whereas the coordinates of the object are correlated, the average duration of the former is shorter than that of the latter. This difference is exploited at the second stage, where the detection problem is solved by a time-selection method. The obtained results allow estimation of the gain in the probability of detection achieved by the proposed method.
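A minimal sketch of the two-stage idea described above, assuming a hypothetical list of candidate tracks (each a list of per-scan detection flags) produced by the low-threshold first stage; the second stage keeps only tracks that persist long enough. The duration threshold and track representation are illustrative, not the authors' formulation.

```python
MIN_DURATION_SCANS = 5  # illustrative time-selection threshold

def second_stage(candidate_tracks):
    """Keep candidate tracks whose longest run of consecutive detections
    reaches the time-selection threshold; short-lived tracks are treated
    as false trajectories."""
    confirmed = []
    for track in candidate_tracks:
        run = best = 0
        for detected in track:
            run = run + 1 if detected else 0
            best = max(best, run)
        if best >= MIN_DURATION_SCANS:
            confirmed.append(track)
    return confirmed
```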


Author(s):  
V. Rybakov

Our paper studies the logic UIALTL, a combination of the linear temporal logic LTL, a multi-agent logic with an operation for passing knowledge via agents’ interaction, and a proposed logic based on an operation of logical uncertainty. Together with the operations from LTL, the logical operations of UIALTL include strong and weak until, agents’ knowledge operations, an operation of knowledge via interaction, an operation of logical uncertainty, and operations for environmental and global knowledge. UIALTL is defined as the set of all formulas valid in all Kripke-Hintikka-like models NC. Any frame NC represents a possibly unbounded (in time) computation with multiple processors (parallel computational units) and agents’ channels connecting the computational units. The main aim of our paper is to determine possible ways of computing the logical laws of UIALTL; the principal problems we deal with are the decidability and satisfiability problems for UIALTL. We find an algorithm which recognizes the theorems of UIALTL (thereby showing that UIALTL is decidable) and solves the satisfiability problem for UIALTL. As instruments we use the reduction of formulas to rules in reduced normal form, a technique for contracting models NC to special non-UIALTL-models, and the verification of the validity of these rules in models of bounded size. The paper uses standard results from non-classical logics based on Kripke-Hintikka models.
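For reference, the standard LTL semantics of the strong and weak until operations mentioned above, on an infinite path; this is textbook LTL only, not the full UIALTL semantics with knowledge and uncertainty operations.

```latex
% Strong until (U) and weak until (W) on a path \sigma = s_0 s_1 s_2 \ldots
\[
\sigma, i \models \varphi \,\mathcal{U}\, \psi
  \iff \exists j \ge i:\ \sigma, j \models \psi
       \ \text{and}\ \forall k,\ i \le k < j:\ \sigma, k \models \varphi
\]
\[
\varphi \,\mathcal{W}\, \psi \;\equiv\; (\varphi \,\mathcal{U}\, \psi) \lor \mathbf{G}\,\varphi
\]
```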


Author(s):  
Alphonse Vial ◽  
Winnie Daamen ◽  
Aaron Yi Ding ◽  
Bart van Arem ◽  
Serge Hoogendoorn

2019 ◽  
Vol 11 (11) ◽  
pp. 4439-4451 ◽  
Author(s):  
Francisco Laport ◽  
Emilio Serrano ◽  
Javier Bajo

Author(s):  
Daniel A. Sierra ◽  
Paul McCullough ◽  
Nejat Olgac ◽  
Eldridge Adams

We consider hostile conflicts between two multi-agent swarms. First, we investigate the complex nature of a single pursuer attempting to intercept a single evader (1P-1E) and establish some rudimentary rules of engagement, elaborating on the stability repercussions of these rules. Second, we extend the modelling and stability analysis to multi-agent swarms of pursuers and evaders. For simplicity, the present document considers only swarms with equal membership strengths. This effort is based on a set of suggested momenta deployed on individual agents. Due to the strong nonlinearities, Lyapunov-based stability analysis is used. The control of a group pursuit is divided into two phases: the approach phase, during which the two swarms act like individuals in the 1P-1E interaction, and the assigned pursuit phase, in which each pursuer is assigned to an evader. A dissipative control momentum suggested in an earlier publication caused undesirable control chatter; this study introduces a distributed control logic which ameliorates the chatter problems considerably.
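A minimal sketch of the two-phase group-pursuit logic described above, with a hypothetical switching distance and a greedy nearest-evader assignment; the paper's momentum-based control laws and Lyapunov analysis are not reproduced here.

```python
import math

SWITCH_DISTANCE = 5.0  # illustrative distance at which pursuit is assigned

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def heading(frm, to):
    return math.atan2(to[1] - frm[1], to[0] - frm[0])

def pursuit_headings(pursuers, evaders):
    """Return a desired heading for each pursuer, given (x, y) positions."""
    evader_centre = centroid(evaders)
    pursuer_centre = centroid(pursuers)
    gap = math.dist(pursuer_centre, evader_centre)
    if gap > SWITCH_DISTANCE:
        # approach phase: the swarm behaves like a single 1P-1E pursuer
        return [heading(p, evader_centre) for p in pursuers]
    # assigned pursuit phase: each pursuer chases its nearest evader
    return [heading(p, min(evaders, key=lambda e: math.dist(p, e)))
            for p in pursuers]
```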


2020 ◽  
Vol 148 (11) ◽  
pp. 4453-4466
Author(s):  
Cong Wang ◽  
Ping Wang ◽  
Di Wang ◽  
Jinyi Hou ◽  
Bing Xue

Abstract Short-term intense precipitation (SIP; i.e., convective precipitation exceeding 20 mm h⁻¹) nowcasting is important for urban flood warning and natural hazards management. This paper presents an algorithm for coupling automatic weather station data and single-polarization S-band radar data with a graph model and a random forest for the nowcasting of SIP. Different from pixel-by-pixel precipitation nowcasting algorithms, this algorithm takes convective cells as the basic units, so as to consider their interactions, and focuses on multicell convective systems. In particular, the following question can be addressed: Will a multicell convective system cause SIP events in the next hour? First, a method based on spatiotemporal superposition between cells is proposed for multicell system identification. Then, the graph model is used to represent cell physical attributes and the spatial distribution of the entire system. For each graph model, a fusion operation is used to form a 42-dimensional graph feature vector. Finally, combined with machine learning approaches, a random forest classifier is trained with the graph feature vector to predict the precipitation. In the experiment, this algorithm achieves a probability of detection (POD) of 79.2% and a critical success index (CSI) of 68.3% with the data between 2015 and 2016 in North China. Compared with other precipitation nowcasting algorithms, the graph model and random forest could predict SIP events more accurately and produce fewer false alarms.
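A minimal sketch of the classification step described above, assuming each multicell system has already been reduced to a 42-dimensional graph feature vector; the cell identification, graph construction, and fusion steps are not reproduced, and X and y are placeholder training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data standing in for the fused graph feature vectors.
rng = np.random.default_rng(0)
X = rng.random((500, 42))          # 42-dimensional graph feature vectors
y = rng.integers(0, 2, size=500)   # 1 = SIP event in the next hour, 0 = none

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Predict whether a new multicell system will produce SIP in the next hour.
new_system = rng.random((1, 42))
print(clf.predict(new_system))
```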

