stochastic selection
Recently Published Documents


TOTAL DOCUMENTS: 39 (FIVE YEARS: 12)
H-INDEX: 8 (FIVE YEARS: 2)

2022 ◽  
Author(s):  
Maninderpal Singh ◽  
Gagangeet Singh Aujla ◽  
Rasmeet Singh Bali

Abstract: The Internet of Drones (IoD) facilitates the autonomous operation of drones in applications (warfare, surveillance, photography, etc.) across the world. The data related to these applications is transmitted between the drones and other infrastructure over wireless channels and must abide by stringent latency restrictions. However, relaying this data to the core cloud infrastructure may lead to a higher round-trip delay. Thus, we utilize the cloud close to the ground, i.e., edge computing, to realize an edge-envisioned IoD ecosystem. However, as this data is relayed over an open communication channel, it is often prone to different types of attacks due to its wider attack surface. Thus, we need a robust solution that can maintain the confidentiality, integrity, and authenticity of the data while providing the desired services. Blockchain technology is capable of handling these challenges owing to its distributed ledger, which stores the data immutably. However, the conventional block architecture poses several challenges because of the limited computational capabilities of drones. As the size of the blockchain increases, the data flow also increases and so do the associated challenges. Hence, to overcome these challenges, in this work we propose a derived blockchain architecture that decouples the data part (or block ledger) from the block header and shifts it to off-chain storage. In our approach, a new drone is registered to enable legitimate access control, thus ensuring identity management and traceability. Further, interactions take place in the form of blockchain transactions. We propose a lightweight consensus mechanism based on stochastic selection followed by a transaction signing process to ensure that each drone is in control of its block. The proposed scheme also handles the expanding storage requirements with the help of data compression using a shrinking block mechanism. Lastly, the additional delay anticipated due to drone mobility is handled using a multi-level caching mechanism. The proposed work has been validated in a simulated Gazebo environment and the results are promising across different metrics. We also provide numerical validations in the context of complexity, communication overheads, and computation costs.
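The abstract does not include an implementation, so the following is only a minimal Python sketch, under our own assumptions, of what one round of a stochastic-selection consensus with an off-chain data part could look like. All names here (Drone, select_proposer, build_block_header) are illustrative, and HMAC stands in for a real digital-signature scheme.

```python
import hashlib
import hmac
import os
import random

# Illustrative sketch (not the authors' implementation): one consensus round
# in which a registered drone is stochastically selected to sign the next
# block header, while the data part (block ledger) stays off-chain and only
# its digest enters the header.

class Drone:
    def __init__(self, drone_id: str):
        self.drone_id = drone_id
        self.secret_key = os.urandom(32)   # stands in for the drone's signing key

    def sign(self, message: bytes) -> bytes:
        # HMAC stands in for a proper signature scheme.
        return hmac.new(self.secret_key, message, hashlib.sha256).digest()

def select_proposer(drones, round_seed: bytes) -> Drone:
    """Stochastically select one drone, seeded by the previous block hash so
    every node derives the same choice."""
    rng = random.Random(round_seed)
    return rng.choice(sorted(drones, key=lambda d: d.drone_id))

def build_block_header(prev_hash: bytes, offchain_data: bytes, proposer: Drone):
    # Only a digest of the off-chain data is placed in the header.
    data_digest = hashlib.sha256(offchain_data).digest()
    header = prev_hash + data_digest + proposer.drone_id.encode()
    return header, proposer.sign(header)

if __name__ == "__main__":
    fleet = [Drone(f"drone-{i}") for i in range(5)]
    prev_hash = hashlib.sha256(b"genesis").digest()
    proposer = select_proposer(fleet, prev_hash)
    header, sig = build_block_header(prev_hash, b"telemetry batch", proposer)
    print(proposer.drone_id, header.hex()[:16], sig.hex()[:16])
```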


2021 ◽  
Author(s):  
P. Jean-Jacques Herings ◽  
Harold Houba

Abstract: We study bargaining models in discrete time with a finite number of players, stochastic selection of the proposing player, endogenously determined sets and orders of responders, and a finite set of feasible alternatives. The standard optimality conditions and system of recursive equations may not be sufficient for the existence of a subgame perfect equilibrium in stationary strategies (SSPE) in the case of costless delay. We present a characterization of SSPE that is valid for both costly and costless delay. We address the relationship between an SSPE under costless delay and the limit of SSPEs under vanishing costly delay. An SSPE always exists when delay is costly, but not necessarily so under costless delay, even when mixed strategies are allowed for. This is surprising, as a quasi-SSPE, a solution to the optimality conditions and the system of recursive equations, always exists. The problem is caused by the potential singularity of the system of recursive equations, which is intimately related to the possibility of perpetual disagreement in the bargaining process.
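For orientation, a generic recursive value system of the kind the abstract refers to can be written as follows; this is only an illustrative form, not the authors' exact formulation.

```latex
% Illustrative only: a generic stationary-bargaining value recursion.
% rho_j is the probability that player j is selected to propose,
% x^j is j's equilibrium proposal, alpha_j its acceptance probability,
% and delta the discount factor (delta = 1 under costless delay).
\begin{equation*}
  v_i \;=\; \sum_{j \in N} \rho_j \Big[ \alpha_j\, u_i(x^{j})
        \;+\; (1-\alpha_j)\, \delta\, v_i \Big], \qquad i \in N .
\end{equation*}
```

In this illustrative form, costless delay ($\delta = 1$) combined with perpetual disagreement ($\alpha_j = 0$ for all $j$) collapses the equation to $v_i = v_i$, leaving the values undetermined; this is the kind of singularity the abstract points to.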


Author(s):  
Anuradha et al.

Network reliability is an important measure of how well a network meets its design aim; mathematically, it is the probability that the network will perform communication efficiently for at least a given period of time. Reliability analysis is used to determine the probability that a system functions correctly under given conditions. Estimating all-terminal network reliability is an NP-hard problem, and the feed-forward genetic algorithm is an alternative way to solve it. This paper presents a reliability evaluation method that uses a feed-forward genetic algorithm with a random selection of nodes and links, i.e., it estimates the network reliability such that the flow is not less than a given threshold. To verify the efficiency of the algorithm, various sets of runs are applied to identify the most reliable network, and the algorithm is tested on network sizes of 2, 4, 6, 8, ..., 256 nodes with N(N-1)/2 links. The results of the algorithm are obtained for the all-terminal reliability problem based on a close analysis of the complexity of the network. Reliability is evaluated from node 1 to node N, with a link reliability of 50%. A further result of the proposed GA approach is a significant reduction in computational time.
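The paper's genetic algorithm is not reproduced here. As a point of reference only, the sketch below is a plain Monte Carlo estimator of all-terminal reliability for a complete graph on n nodes with N(N-1)/2 links, each up with probability 0.5 as assumed in the abstract; it is our own illustrative baseline, not the proposed GA.

```python
import itertools
import random

# Illustrative baseline (not the paper's GA): Monte Carlo estimate of
# all-terminal reliability on a complete graph with n nodes and N(N-1)/2
# links, each operating independently with probability p.

def find(parent, x):
    # Union-find with path halving, used to test connectivity.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def all_terminal_reliability(n: int, p: float = 0.5, trials: int = 20_000) -> float:
    links = list(itertools.combinations(range(n), 2))   # N(N-1)/2 links
    connected_count = 0
    for _ in range(trials):
        parent = list(range(n))
        components = n
        for u, v in links:
            if random.random() < p:                      # link is up in this sample
                ru, rv = find(parent, u), find(parent, v)
                if ru != rv:
                    parent[ru] = rv
                    components -= 1
        if components == 1:                              # all terminals connected
            connected_count += 1
    return connected_count / trials

if __name__ == "__main__":
    for n in (2, 4, 6, 8):
        print(n, round(all_terminal_reliability(n), 3))
```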


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1626 ◽  
Author(s):  
Loris Nanni ◽  
Alessandra Lumini ◽  
Stefano Ghidoni ◽  
Gianluca Maguolo

In recent years, the field of deep learning has achieved considerable success in pattern recognition, image segmentation, and many other classification fields. There are many studies and practical applications of deep learning on image, video, or text classification. Activation functions play a crucial role in the discriminative capabilities of deep neural networks, and the design of new “static” or “dynamic” activation functions is an active area of research. The main difference between “static” and “dynamic” functions is that the first class treats all neurons and layers identically, while the second class learns the parameters of the activation function independently for each layer or even each neuron. Although “dynamic” activation functions perform better in some applications, the increased number of trainable parameters requires more computation time and can lead to overfitting. In this work, we propose a mixture of “static” and “dynamic” activation functions, which are stochastically selected at each layer. Our model design is based on changing some layers of the best-performing CNN models, following their different functional blocks, with the aim of designing new models to be used as stand-alone networks or as components of an ensemble. We propose to replace each activation layer of a CNN (usually a ReLU layer) with a different activation function stochastically drawn from a set of activation functions: in this way, the resulting CNN has a different set of activation-function layers.
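A minimal sketch of this stochastic replacement step, assuming PyTorch is available and using an illustrative activation pool (not the exact set, models, or replacement rule of the paper):

```python
import random
import torch
import torch.nn as nn

# Sketch only: swap every ReLU in a CNN for an activation drawn at random
# from a pool mixing "static" (parameter-free) and "dynamic" (learnable)
# functions. Pool and toy CNN below are illustrative.

ACTIVATION_POOL = [
    lambda: nn.ReLU(),          # static
    lambda: nn.ELU(),           # static
    lambda: nn.LeakyReLU(0.1),  # static
    lambda: nn.PReLU(),         # dynamic: learnable slope
]

def randomize_activations(module: nn.Module, rng: random.Random) -> None:
    """Recursively replace each nn.ReLU with a randomly drawn activation."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, rng.choice(ACTIVATION_POOL)())
        else:
            randomize_activations(child, rng)

if __name__ == "__main__":
    rng = random.Random(0)
    cnn = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
    )
    randomize_activations(cnn, rng)
    print(cnn)                                        # ReLUs now differ per layer
    print(cnn(torch.randn(1, 3, 32, 32)).shape)       # torch.Size([1, 10])
```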


Author(s):  
Deb Sankar Banerjee ◽  
Shiladitya Banerjee

How cells regulate the size of intracellular structures and organelles, despite continuous turnover in their component parts, is a longstanding question. Recent experiments suggest that size control of many intracellular assemblies is achieved through the depletion of a limiting subunit pool in the cytoplasm. While the limiting pool model ensures organelle size scaling with cell size, it does not provide a mechanism for robust size control of multiple co-existing structures. Here we propose a kinetic theory for size regulation of multiple structures that are assembled from a shared pool of subunits. We demonstrate that a negative feedback between the growth rate and the size of individual structures underlies size regulation of a wide variety of intracellular assemblies, from cytoskeletal filaments to three-dimensional organelles such as centrosomes and the nucleus. We identify the feedback motifs for size control in these structures, based on known molecular interactions, and quantitatively compare our theory with available experimental data. Furthermore, we show that a positive feedback between structure size and growth rate can lead to bistable size distributions arising from autocatalytic growth. In the limit of high subunit concentration, autocatalytic growth of multiple structures leads to stochastic selection of a single structure, elucidating a mechanism for polarity establishment.
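A toy numerical illustration of the two regimes discussed above, with arbitrary rate constants of our own choosing rather than the authors' fitted kinetic model:

```python
import random

# Toy sketch (not the authors' kinetic model): two structures assemble from
# a shared pool of `total` subunits. With size-dependent disassembly
# (effective negative feedback of size on net growth), both structures
# settle at similar sizes. With autocatalytic assembly (positive feedback),
# one structure typically takes over the pool. All rates are illustrative.

def propensities(size, free, autocatalytic):
    if autocatalytic:
        grow = 0.001 * free * size   # positive feedback: bigger grows faster
        shrink = 5.0                 # constant disassembly
    else:
        grow = 1.0 * free            # growth limited only by the free pool
        shrink = 0.1 * size          # size-dependent disassembly
    return grow, shrink

def simulate(total=1000, autocatalytic=False, steps=500_000, seed=1):
    rng = random.Random(seed)
    sizes = [10, 10]                 # initial sizes of the two structures
    for _ in range(steps):
        free = total - sum(sizes)
        i = rng.randrange(2)         # pick one structure at random
        grow, shrink = propensities(sizes[i], free, autocatalytic)
        if free > 0 and rng.random() < grow / (grow + shrink):
            sizes[i] += 1
        elif sizes[i] > 1:
            sizes[i] -= 1
    return sizes

if __name__ == "__main__":
    print("negative feedback:", simulate(autocatalytic=False))  # similar sizes
    print("autocatalytic    :", simulate(autocatalytic=True))   # one dominates
```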


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Eita Nakamura ◽  
Kunihiko Kaneko

Abstract: If a cultural feature is transmitted over generations and exposed to stochastic selection when spreading in a population, its evolution may be governed by statistical laws and be partly predictable, as in the case of genetic evolution. Music exhibits steady changes of style over time, with new characteristics developing from traditions. Recent studies have found trends in the evolution of music styles, but little is known about their relation to evolutionary theory. Here we analyze Western classical music data and find statistical evolutionary laws. For example, the distributions of the frequencies of some rare musical events (e.g. dissonant intervals) exhibit a steady increase in mean and standard deviation as well as constancy of their ratio. We then study an evolutionary model in which creators learn their data-generation models from past data and generate new data that will be socially selected by evaluators according to content dissimilarity (novelty) and style conformity (typicality) with respect to the past data. The model reproduces the observed statistical laws and can make non-trivial predictions for the evolution of independent musical features. In addition, the same model with different parameterization can predict the evolution of Japanese enka music, which developed in a different society and shows a qualitatively different evolutionary tendency. Our results suggest that the evolution of musical styles can partly be explained and predicted by an evolutionary model incorporating statistical learning, which may be important for other cultures and for future music technologies.
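A toy Python rendering of the creator/evaluator loop described above, reduced to a single one-dimensional feature and with hand-picked novelty/typicality weights; it is our own illustration, not the authors' model or parameters.

```python
import random
import statistics

# Toy illustration of the creator/evaluator dynamics: each "piece" is
# reduced to one feature, the frequency f of a rare musical event.
# Creators learn the current style (mean, spread) and propose new pieces;
# evaluators score proposals by a crude novelty term minus a typicality
# penalty, and the selected pieces form the next generation.

def one_generation(population, rng, creativity=0.02, novelty_w=1.0, typicality_w=4.0):
    mean = statistics.mean(population)
    std = statistics.pstdev(population) or 1e-3
    # Creators: sample around the learned style with extra creative noise.
    proposals = [max(0.0, rng.gauss(mean, std + creativity))
                 for _ in range(3 * len(population))]

    def score(f):
        # Reward using the rare event more than the current style (novelty
        # stand-in) but penalise straying far from it (typicality penalty).
        d = f - mean
        return novelty_w * d - typicality_w * d * d

    proposals.sort(key=score, reverse=True)
    return proposals[: len(population)]

if __name__ == "__main__":
    rng = random.Random(0)
    pop = [0.05 + 0.01 * rng.random() for _ in range(200)]  # initial frequencies
    for gen in range(101):
        if gen % 20 == 0:
            m, s = statistics.mean(pop), statistics.pstdev(pop)
            print(f"gen {gen:3d}  mean={m:.3f}  std={s:.3f}")
        pop = one_generation(pop, rng)
```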

