asynchronous network
Recently Published Documents


TOTAL DOCUMENTS: 80 (FIVE YEARS: 19)
H-INDEX: 10 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Benru Yu ◽  
Tiancheng Li ◽  
Hong Gu

This paper concentrates on tracking multiple targets using an asynchronous network of sensors with different sampling rates. First, a timely fusion approach is proposed for handling measurements from asynchronous sensors. In the proposed approach, the arithmetic average fusion of the estimates provided by local cardinalized probability hypothesis density filters is recursively carried out according to the network-wide sampling time sequence. The corresponding intersensor communication is conducted by a partial flooding protocol, in which either cardinality distributions or intensity functions pertinent to local posteriors are disseminated among sensors. Moreover, both feedback and non-feedback fusion-filtering modes are provided to meet the performance and real-time requirements, respectively. Second, an extension of the timely fusion approach referred to as robust bootstrap approach is presented, which can deal with unknown clutter and detection parameters by exploiting a local bootstrap filtering scheme. Finally, numerical simulations are performed to test the proposed approaches.
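
To make the fusion step concrete, the following is a minimal sketch, not the authors' implementation: it assumes each local CPHD filter reports its posterior intensity evaluated on a common state-space grid together with a cardinality distribution, and applies the weighted arithmetic-average (AA) fusion described above. The function names, grid, and weights are illustrative.

```python
import numpy as np

def aa_fuse_intensities(intensities, weights=None):
    """Arithmetic-average fusion of local PHD/CPHD intensity functions.

    intensities : list of 1-D arrays, each a local posterior intensity
                  evaluated on a common state-space grid (illustrative).
    weights     : fusion weights, one per sensor; uniform if omitted.
    """
    intensities = np.asarray(intensities, dtype=float)
    if weights is None:
        weights = np.full(len(intensities), 1.0 / len(intensities))
    weights = np.asarray(weights, dtype=float)
    # AA fusion is simply a weighted sum of the local intensities.
    return np.tensordot(weights, intensities, axes=1)

def aa_fuse_cardinality(card_dists, weights=None):
    """Arithmetic-average fusion of local cardinality distributions."""
    card_dists = np.asarray(card_dists, dtype=float)
    if weights is None:
        weights = np.full(len(card_dists), 1.0 / len(card_dists))
    fused = np.tensordot(np.asarray(weights, dtype=float), card_dists, axes=1)
    return fused / fused.sum()  # renormalise to a valid distribution

# Toy usage: two sensors, intensity on a 1-D grid, up to 3 targets.
grid = np.linspace(0.0, 10.0, 101)
v1 = np.exp(-0.5 * (grid - 3.0) ** 2)        # sensor 1 intensity
v2 = np.exp(-0.5 * (grid - 3.2) ** 2) * 1.1  # sensor 2 intensity
fused_v = aa_fuse_intensities([v1, v2])
fused_c = aa_fuse_cardinality([[0.1, 0.7, 0.2, 0.0], [0.05, 0.8, 0.1, 0.05]])
```

In the partial flooding scheme described above, each sensor would exchange only these arrays (or only the cardinality distributions) with its neighbours at every network-wide sampling instant before performing the fusion.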


2021 ◽  
Author(s):  
Erulappan Sakthivel ◽  
Rengaraj Madavan

A real-time multiprocessor chip model, also called a Network-on-Chip (NoC), offers a promising architecture for future systems-on-chip. Although many Double Tail Sense Amplifiers (DTSAs) are used in this architectural approach, the existing DTSA with transceiver has the drawback of consuming more energy than its gouged design under various traffic conditions. A Novel Low Power Pulse Triggered Flip Flop with DTSA (NLPTF-DTSA) is designed in this research to eliminate that difficulty. The traffic-aware sense-amplifier MAS consists of sense amplifiers (SAs), a traffic generator, and an estimator. Among the various SAs, a suitable one (DTSA or NLPTF-DTSA) is selected and the information is transferred to the receiver. The performance of the DTSA with transceiver and the NLPTF-DTSA with transceiver is compared under various traffic conditions. The proposed design (NLPTF-DTSA), evaluated in TSMC 90 nm technology, shows a 5.92 Gb/s data rate and 0.51 W total link power.


Author(s):  
Kemelbekova Zhanar Satibaldievna ◽  
Sembiyev O.Z ◽  
Umarova Zh.R

When designing computer networks based on the concept of virtual connections with bypass directions, it is often necessary to determine the statistical parameters that characterize the quality of service on the network. To a large extent, the attainable level of quality of the services provided is determined at the network design stage, when decisions are made regarding the subscriber capacity of stations, the capacity of trunk-line bundles, and the composition and volume of the telecommunication services offered. Despite constant progress in network technologies, the problem of determining the required amount of network resources and ensuring the quality of user service remains relevant. In this regard, this article discusses a broadband digital network with service integration, built on an asynchronous network, in which an iterative method is implemented. Here the flow distribution is determined by the route matrix, and the load between each pair of nodes is distributed over the path tree obtained from the route matrix when that pair is calculated. In addition, an algorithm has been constructed that allows the optimal allocation of channel resources between the circuit-switching and packet-switching subnets within an asynchronous network.
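
The article determines this allocation with an iterative method over the route matrix; since that method is not reproduced here, the sketch below only illustrates the flavour of the problem, splitting a pool of channels between a circuit-switched subnet (Erlang B blocking) and a packet-switched subnet (M/M/1 delay). The models, function names, and all numbers are illustrative assumptions, not taken from the article.

```python
import math

def erlang_b(offered_load, channels):
    """Erlang B blocking probability for `channels` circuits (iterative form)."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def mm1_delay(arrival_rate, service_rate):
    """Mean M/M/1 packet delay; infinite if the queue is unstable."""
    return math.inf if arrival_rate >= service_rate else 1.0 / (service_rate - arrival_rate)

def split_channels(total, cs_load, ps_rate, per_channel_rate, max_blocking=0.01):
    """Smallest circuit-switched share meeting the blocking target,
    leaving the remaining channels to the packet-switched subnet.
    A delay constraint on the packet side could be added the same way."""
    for cs in range(1, total):
        blocking = erlang_b(cs_load, cs)
        if blocking <= max_blocking:
            delay = mm1_delay(ps_rate, (total - cs) * per_channel_rate)
            return cs, total - cs, blocking, delay
    return None

# Toy usage: 30 channels, 12 Erlang of circuit traffic,
# 800 packets/s offered to the packet subnet, 100 packets/s per channel.
print(split_channels(total=30, cs_load=12.0, ps_rate=800.0, per_channel_rate=100.0))
```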


2020 ◽  
Vol 4 (4) ◽  
pp. 32
Author(s):  
Tamas Foldi ◽  
Chris von Csefalvay ◽  
Nicolas A. Perez

The new barrier mode in Apache Spark allows for embedding distributed deep learning training as a Spark stage to simplify the distributed training workflow. In Spark, a task in a stage does not depend on any other tasks in the same stage, and hence it can be scheduled independently. However, several algorithms require more sophisticated inter-task communications, similar to the MPI paradigm. By combining distributed message passing (using asynchronous network IO), OpenJDK’s new auto-vectorization and Spark’s barrier execution mode, we can add non-map/reduce-based algorithms, such as Cannon’s distributed matrix multiplication to Spark. We document an efficient distributed matrix multiplication using Cannon’s algorithm, which significantly improves on the performance of the existing MLlib implementation. Used within a barrier task, the algorithm described herein results in an up to 24% performance increase on a 10,000 × 10,000 square matrix with a significantly lower memory footprint. Applications of efficient matrix multiplication include, among others, accelerating the training and implementation of deep convolutional neural network-based workloads, and thus such efficient algorithms can play a ground-breaking role in the faster and more efficient execution of even the most complicated machine learning tasks.
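
The block-shift structure of Cannon's algorithm can be sketched on a single machine with NumPy. In the paper the per-round shifts are realised as asynchronous message passing between Spark barrier tasks, whereas the sketch below simply re-indexes the blocks; the function and variable names are illustrative.

```python
import numpy as np

def cannon_matmul(A, B, p):
    """Cannon's algorithm on a p x p grid of square blocks.

    A, B : (n, n) arrays with n divisible by p. Each block C[i][j]
    accumulates products of A-blocks shifted left along row i and
    B-blocks shifted up along column j, one step per round.
    """
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % p == 0
    bs = n // p
    # Partition A, B into p x p grids of bs x bs blocks.
    Ab = [[A[i*bs:(i+1)*bs, j*bs:(j+1)*bs].copy() for j in range(p)] for i in range(p)]
    Bb = [[B[i*bs:(i+1)*bs, j*bs:(j+1)*bs].copy() for j in range(p)] for i in range(p)]
    Cb = [[np.zeros((bs, bs)) for _ in range(p)] for _ in range(p)]
    # Initial skew: shift row i of A left by i, column j of B up by j.
    Ab = [[Ab[i][(j + i) % p] for j in range(p)] for i in range(p)]
    Bb = [[Bb[(i + j) % p][j] for j in range(p)] for i in range(p)]
    for _ in range(p):
        for i in range(p):
            for j in range(p):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        # Shift A-blocks one step left within each row and B-blocks one
        # step up within each column (the per-round exchange between tasks).
        Ab = [[Ab[i][(j + 1) % p] for j in range(p)] for i in range(p)]
        Bb = [[Bb[(i + 1) % p][j] for j in range(p)] for i in range(p)]
    return np.block(Cb)

# Check against NumPy on a small example.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
assert np.allclose(cannon_matmul(A, B, p=4), A @ B)
```

In a Spark deployment, each (i, j) block would live in its own barrier task, the two roll operations would become point-to-point messages to the left and upward neighbours, and a BarrierTaskContext.barrier() call would separate the rounds.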


2020 ◽  
Vol 224 (1) ◽  
pp. 401-415
Author(s):  
Valérie Maupin

Regional body-wave tomography is a very popular tomographic method that consists of inverting relative traveltime residuals of teleseismic body waves measured at regional networks. It is well known that the resulting inverse seismic model is relative to an unknown, vertically varying reference model. When jointly inverting data obtained with networks in the vicinity of each other but operating at different times, the relative velocity anomalies in different areas of the model may have different reference levels, possibly introducing large-scale biases in the model that may compromise the interpretation. This is very unfortunate, as we have numerous examples of asynchronous network deployments which would benefit from a joint analysis. We show here how a simple improvement in the formulation of the sensitivity kernels allows us to mitigate this problem. Using sensitivity kernels that take into account that data processing implies a zero-mean residual for each event, the large-scale biases that otherwise arise in the inverse model using data from asynchronous station deployments are largely removed. We illustrate this first with a very simple 3-station example, and then compare the results obtained using the usual and the relative kernels in synthetic tests with more realistic station coverage, simulating data acquisition at two neighbouring asynchronous networks.
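
A small NumPy sketch of the idea, with illustrative names and shapes: because the processing forces each event's residuals to have zero mean over the recording stations, the same per-event mean is removed from the corresponding rows of the sensitivity matrix, so the relative data and the relative kernels describe the same quantity.

```python
import numpy as np

def demean_per_event(G, d, event_ids):
    """Turn absolute kernels/residuals into relative ones, event by event.

    G         : (n_obs, n_model) sensitivity matrix (rows = picks).
    d         : (n_obs,) traveltime residuals.
    event_ids : (n_obs,) integer event label per observation.

    For every event, the mean over its recording stations is removed from
    both the residuals and the corresponding kernel rows, so the inverse
    problem is posed consistently in relative terms.
    """
    Gr, dr = G.copy(), d.copy()
    for ev in np.unique(event_ids):
        rows = event_ids == ev
        dr[rows] -= dr[rows].mean()
        Gr[rows] -= Gr[rows].mean(axis=0, keepdims=True)
    return Gr, dr

# Toy usage: two events recorded by two asynchronous 3-station deployments.
rng = np.random.default_rng(1)
G = rng.standard_normal((6, 4))
m_true = np.array([0.5, -1.0, 0.2, 0.0])
event_ids = np.array([0, 0, 0, 1, 1, 1])
d = G @ m_true + np.repeat([2.0, -3.0], 3)   # unknown per-event constant shifts
Gr, dr = demean_per_event(G, d, event_ids)
# The demeaned system no longer depends on the per-event shifts:
assert np.allclose(Gr @ m_true, dr)
```

Because any per-event constant cancels on both sides, the demeaned system is insensitive to the unknown reference level of each deployment, which is the source of the large-scale bias discussed above.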

