Entropy-Driven Stochastic Federated Learning in Non-IID 6G Edge-RAN

Author(s):  
Brahim Aamer
Hatim Chergui
Mustapha Benjillali
Christos Verikoukis

Scalable and sustainable AI-driven analytics are necessary to enable large-scale and heterogeneous service deployment in sixth-generation (6G) ultra-dense networks. This implies that the exchange of raw monitoring data should be minimized across the network by bringing the analysis functions closer to the data collection points. While federated learning (FL) is an efficient tool to implement such a decentralized strategy, real networks are generally characterized by time- and space-varying traffic patterns and channel conditions, thereby making the data collected at different points non-independent and identically distributed (non-IID), which is challenging for FL. To sidestep this issue, we first introduce a new a priori metric that we call dataset entropy, whose role is to capture the distribution, the quantity of information, the unbalanced structure, and the "non-IIDness" of a dataset independently of the models. This a priori entropy is calculated using a multi-dimensional spectral clustering scheme over both the feature and the supervised output spaces, and is suitable for classification as well as regression tasks. The FL aggregation server in the operations support system (OSS) then uses the reported dataset entropies to devise 1) an entropy-based federated averaging scheme, and 2) a stochastic participant selection policy, to significantly stabilize the training, minimize the convergence time, and reduce the corresponding computation cost. Numerical results are provided to show the superiority of these novel approaches.
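The two server-side ideas can be sketched as follows. This is a minimal illustration, not the paper's exact rule: the weighting coefficient (dataset entropy times sample count, normalized) and the entropy-proportional selection probabilities are assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_weighted_fedavg(client_weights, client_entropies, client_sizes):
    """Aggregate client model updates, scaling each contribution by its
    reported dataset entropy and sample count (illustrative weighting)."""
    coeffs = np.array(client_sizes, dtype=float) * np.array(client_entropies)
    coeffs /= coeffs.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

def stochastic_selection(client_entropies, k):
    """Sample k participants with probability proportional to dataset entropy."""
    p = np.array(client_entropies, dtype=float)
    p /= p.sum()
    return rng.choice(len(p), size=k, replace=False, p=p)

# Toy round: four clients with scalar "models".
weights = [np.array([1.0]), np.array([2.0]), np.array([3.0]), np.array([4.0])]
entropies = [0.9, 0.5, 0.7, 0.3]
sizes = [100, 100, 100, 100]
global_w = entropy_weighted_fedavg(weights, entropies, sizes)
chosen = stochastic_selection(entropies, k=2)
```

With equal dataset sizes, the aggregation reduces to an entropy-weighted mean, so high-entropy (more informative, more IID-like) clients pull the global model harder.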

Author(s):  
Ting-Hsuan Wang
Cheng-Ching Huang
Jui-Hung Hung

Abstract Motivation Cross-sample comparisons and large-scale meta-analyses based on next-generation sequencing (NGS) require replicable and universal data preprocessing, including the removal of adapter fragments from contaminated reads (i.e. adapter trimming). Modern adapter trimmers require users to provide candidate adapter sequences for each sample, which are sometimes unavailable or falsely documented in the repositories (such as GEO or SRA); large-scale meta-analyses are therefore jeopardized by suboptimal adapter trimming. Results Here we introduce a set of fast and accurate adapter detection and trimming algorithms that entail no a priori adapter sequences. These algorithms were implemented in modern C++ with SIMD and multithreading to accelerate processing. Our experiments and benchmarks show that the implementation (i.e. EARRINGS), without being given any hint of the adapter sequences, reaches accuracy comparable to, and throughput higher than, existing adapter trimmers. EARRINGS is particularly useful in meta-analyses of large batches of datasets and can be incorporated into sequence analysis pipelines at any scale. Availability and implementation EARRINGS is open-source software and is available at https://github.com/jhhung/EARRINGS. Supplementary information Supplementary data are available at Bioinformatics online.
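To make the idea of adapter detection without a priori sequences concrete, here is a generic k-mer frequency heuristic: a 3' adapter recurs at read tails, so the most over-represented tail k-mer is a good guess for its leading bases. This is only an illustrative sketch; EARRINGS' actual algorithms are different and far more sophisticated.

```python
from collections import Counter

def infer_adapter_prefix(reads, k=10, tail=12):
    """Guess the leading k-mer of a 3' adapter as the most frequent k-mer
    found in read tails (a generic heuristic, not EARRINGS' method)."""
    tails = Counter()
    for r in reads:
        seg = r[-tail:]
        for i in range(len(seg) - k + 1):
            tails[seg[i:i + k]] += 1
    kmer, _ = tails.most_common(1)[0]
    return kmer

def trim_adapter(read, adapter_prefix):
    """Trim from the first exact occurrence of the inferred adapter prefix."""
    idx = read.find(adapter_prefix)
    return read[:idx] if idx != -1 else read

# Toy data: three inserts contaminated with the same (here, known) adapter.
adapter = "AGATCGGAAG"
inserts = ["ACGTACGTACGTACG", "TTGGCCAATTGGCCA", "GATTACAGATTACAG"]
reads = [ins + adapter for ins in inserts]
inferred = infer_adapter_prefix(reads)
```

Real trimmers additionally tolerate mismatches, partial adapter occurrences, and quality-aware alignment, which this sketch omits.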


Author(s):  
Chengguang Zhu
Zhongpai Gao
Jiankang Zhao
Haihui Long
Chuanqi Liu

Abstract The relative pose estimation of a space noncooperative target is an attractive yet challenging task due to the complexity of the target background and illumination and the lack of a priori knowledge, negative factors that gravely impact the estimation accuracy and the robustness of filter algorithms. In response, this paper proposes a novel filter algorithm, based on a stereovision system, to estimate the relative pose with improved robustness. First, to obtain a coarse relative pose, the weighted total least squares (WTLS) algorithm is adopted to estimate the relative pose from several feature points; the resulting pose is fed into the subsequent filter scheme as the observation. Second, the classic Bayes filter is exploited to estimate the relative state, except for the moment-of-inertia ratios. Additionally, the one-step prediction results are fed back to initialize the WTLS. The proposed algorithm successfully eliminates the dependency on continuous tracking of several fixed points. Finally, comparison experiments demonstrate that the proposed algorithm achieves better robustness and a shorter convergence time.
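The coarse pose-from-feature-points step can be illustrated with a weighted rigid alignment (weighted Kabsch) between matched 3D points triangulated by the stereo rig. This is a simplified stand-in for the paper's WTLS estimator: it weights only the residuals, whereas true total least squares also models errors in the regressors.

```python
import numpy as np

def weighted_rigid_fit(P, Q, w):
    """Estimate R, t minimizing sum_i w_i ||R p_i + t - q_i||^2
    (weighted Kabsch; a simplified stand-in for WTLS pose estimation)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mu_p = w @ P                     # weighted centroids
    mu_q = w @ Q
    H = ((P - mu_p) * w[:, None]).T @ (Q - mu_q)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

# Toy check: recover a known rotation about z and a translation.
rng = np.random.default_rng(1)
P = rng.normal(size=(6, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R_est, t_est = weighted_rigid_fit(P, Q, np.ones(6))
```

In a filter loop, the recovered (R, t) would serve as the observation, with the weights updated from the one-step prediction as the abstract describes.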


2019
Vol 18 (12)
pp. 2842-2855
Author(s):  
Hanshang Li
Ting Li
Weichao Wang
Yu Wang

2020
Author(s):  
Long Zhang
Guobin Zhang
Xiaofang Zhao
Yali Li
Chuntian Huang
...  

Coupling wireless access via non-orthogonal multiple access with wireless backhaul via beamforming is a promising way for downlink user-centric ultra-dense networks (UDNs) to improve system performance. However, the ultra-dense deployment of radio access points within the macrocell and the user-centric view of network design in UDNs raise important concerns about resource allocation and user association, notably the balancing of energy efficiency (EE). To overcome this challenge, we develop a framework to investigate the resource allocation problem for energy-efficient user association in such a scenario. The joint optimization framework aiming at system EE maximization is formulated as a large-scale non-convex mixed-integer nonlinear programming problem, which is NP-hard and cannot be solved directly with low complexity. Alternatively, taking advantage of the sum-of-ratios decoupling and successive convex approximation methods, we transform the original problem into a series of convex optimization subproblems. We then solve each subproblem through Lagrangian dual decomposition, and design a distributed iterative algorithm that realizes the joint optimization of power allocation, sub-channel assignment, and user association. Simulation results demonstrate the effectiveness and practicality of our proposed framework, which achieves rapid convergence and ensures a beneficial improvement of system-wide EE.
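The fractional-programming machinery behind such EE maximization can be shown on a deliberately tiny example: a single link whose EE is the ratio of spectral efficiency to consumed power, solved with Dinkelbach's iteration. This toy (all parameter names and values are made up here) only illustrates the ratio-decoupling idea; the paper applies a sum-of-ratios transform to a much larger mixed-integer problem.

```python
import numpy as np

def dinkelbach_ee(g, pmax, pc, n0, tol=1e-9):
    """Maximize EE(p) = log2(1 + g*p/n0) / (p + pc) over 0 < p <= pmax
    with Dinkelbach's iteration (toy single-link fractional program)."""
    lam, p = 0.0, pmax
    for _ in range(200):
        # Inner concave problem: max_p log2(1 + g*p/n0) - lam*(p + pc).
        # Stationarity gives p = 1/(ln2 * lam) - n0/g, clipped to (0, pmax].
        if lam > 0:
            p = min(max(1.0 / (np.log(2) * lam) - n0 / g, 1e-12), pmax)
        else:
            p = pmax
        new_lam = np.log2(1.0 + g * p / n0) / (p + pc)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return p, lam

p_opt, ee_opt = dinkelbach_ee(g=1.0, pmax=1.0, pc=0.1, n0=0.1)
```

Each iteration solves a concave subproblem in closed form and updates the ratio parameter, which increases monotonically to the optimal EE; in the paper's setting the analogous subproblems are handled by Lagrangian dual decomposition.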


2021
Author(s):  
Florence Matutini
Jacques Baudry
Marie-Josée Fortin
Guillaume Pain
Joséphine Pithon

Abstract Context – Species distribution modelling (SDM) is a common tool in conservation biology, but two main criticisms remain: (1) the use of simplistic variables that do not account for species movements and/or connectivity, and (2) poor consideration of the multi-scale processes driving species distributions. Objectives – We aimed to determine whether including multi-scale and fine-scale movement processes in SDM predictors would improve the accuracy of SDMs for low-mobility amphibian species over species-level analysis. Methods – We tested and compared SDMs for nine amphibian species with four different sets of predictors: (1) simple distance-based predictors; (2) single-scale compositional predictors; (3) multi-scale compositional predictors with a priori selection of scale based on knowledge of species mobility and scale-of-effect; (4) multi-scale compositional predictors calculated using a friction-based functional grain to account for resource accessibility given landscape resistance to movement. Results – Using friction-based functional grain predictors produced slight to moderate improvements in SDM performance at large scale. The multi-scale approach with a priori scale selection led to ambiguous results depending on the species studied, in particular for generalist species. Conclusion – We underline the potential of using a friction-based functional grain to improve SDM predictions for species-level analysis.
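A friction-based functional grain replaces plain buffer distances with accumulated movement cost over a resistance surface. A minimal sketch of that computation, assuming a toy raster and 4-neighbour Dijkstra (the study's actual grain calculation and software are not specified here):

```python
import heapq
import numpy as np

def cost_distance(friction, source):
    """Least accumulated-cost distance from `source` over a friction raster,
    using 4-neighbour Dijkstra; edge cost = mean friction of the two cells."""
    n, m = friction.shape
    dist = np.full((n, m), np.inf)
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m:
                nd = d + 0.5 * (friction[i, j] + friction[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return dist

def accessible_habitat(habitat, friction, source, budget):
    """Count habitat cells reachable within a movement-cost budget."""
    d = cost_distance(friction, source)
    return int(np.sum((d <= budget) & (habitat > 0)))

habitat = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 1]])
friction = np.array([[1.0, 5.0, 1.0], [1.0, 5.0, 1.0], [1.0, 1.0, 1.0]])
n_reachable = accessible_habitat(habitat, friction, (0, 0), budget=4.0)
```

The resulting cost-weighted habitat amount, rather than Euclidean-buffer composition, is what enters the SDM as a predictor.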


Author(s):  
Rodion V. Savinov

The article is devoted to a representative of early Neo-Scholasticism, the Spanish thinker Jaume Balmes. The focus of attention is the interpretation of the Kantian doctrine of knowledge that Balmes proposed in the fourth book of his "Filosofía Fundamental" (1846). It is shown that, contrary to the generally negative attitude towards Kant and the philosophy of criticism that prevailed by the 1830s in Catholic intellectual culture, Balmes not only seriously studied and evaluated the results of Kantian criticism, but also found many points of contact between criticism and Scholasticism, for which he undertook a large-scale rewriting of the Kantian theory of knowledge in Scholastic terms. At the same time, he offered a criticism of Kantian philosophy based on the resources of the Scholastic tradition, which allows the transcendental analysis of cognition developed by Kant to be integrated into the methods of Scholastic philosophy. Balmes sought to restore the possibility of metaphysical knowledge, as a result of which he excluded a number of important points of the Kantian conception: he replaced the a priori, which sets the boundaries of sensibility and reason, with a moving and dynamic "agent intellect" (entendimiento agente), and he replaced the transcendental subject with a "universal reason" (razón universal). In conclusion, it is shown that Balmes' interpretation had a profound influence on the development of the understanding of Kantian philosophy in Neo-Scholasticism and Neo-Thomism.


Author(s):  
Zahid Raza
Deo P. Vidyarthi

The computational grid, characterized by distributed load sharing, has evolved as a platform for large-scale problem solving. A grid is a collection of heterogeneous resources offering services of varying natures, in which jobs may be submitted to any of the participating nodes. Scheduling these jobs in such a complex and dynamic environment poses many challenges. Reliability analysis of the grid gains paramount importance because the grid involves a large number of resources which may fail at any time, making it unreliable. These failures waste both computational power and money on the scarce grid resources. It is normally desired that a job be scheduled in an environment that ensures maximum reliability for its execution. This work presents a reliability-based scheduling model for jobs on the computational grid. The model considers the failure rates of both the software and the hardware grid constituents: the application demanding execution, the nodes executing the job, and the network links supporting data exchange between the nodes. Job allocation using the proposed scheme becomes trusted, as it schedules the job based on an a priori reliability computation.
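The a priori reliability computation can be sketched with the standard exponential-failure model: a job succeeds only if the application, the executing node, and its network link all survive the execution time, so the serial reliability is exp of minus the summed failure rates times time. The rate values and node names below are invented for illustration; the paper's exact model may differ.

```python
import math

def path_reliability(failure_rates, t):
    """Reliability of a serial system of exponentially failing components
    over time t: R(t) = exp(-(sum of failure rates) * t)."""
    return math.exp(-sum(failure_rates) * t)

def pick_node(nodes, app_rate, exec_time):
    """Choose the node maximizing the a priori job reliability, combining
    application, node, and network-link failure rates (illustrative model)."""
    best = max(nodes, key=lambda nd: path_reliability(
        [app_rate, nd["node_rate"], nd["link_rate"]], exec_time))
    return best["name"]

# Hypothetical candidate nodes with per-hour failure rates.
nodes = [
    {"name": "n1", "node_rate": 1e-4, "link_rate": 5e-4},
    {"name": "n2", "node_rate": 2e-4, "link_rate": 1e-4},
    {"name": "n3", "node_rate": 5e-4, "link_rate": 5e-4},
]
chosen = pick_node(nodes, app_rate=1e-5, exec_time=1.0)
```

Because the exponential is monotone, maximizing reliability here reduces to picking the node with the smallest total failure rate along its path.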


2019
Vol 28 (03)
pp. 1950058
Author(s):  
Salvatore Capozziello
Konstantinos F. Dialektopoulos
Orlando Luongo

The accelerating behavior of the cosmic fluid opposes gravitational attraction at the present epoch, whereas standard gravity is dominant at small scales. As a consequence, there exists a point where the two effects are counterbalanced, dubbed the turnaround radius, [Formula: see text]. By construction, it provides a bound on the maximum structure sizes of the observed universe. Once an upper bound on [Formula: see text] is provided, i.e. [Formula: see text], one can check whether cosmological models guarantee structure formation. Here, we focus on f(R) gravity, without imposing a priori the form of f(R). We thus provide an analytic expression for the turnaround radius in the framework of f(R) models. To figure this out, we compute the turnaround radius in two distinct cases: (1) under the hypothesis of static and spherically symmetric spacetime, and (2) by using cosmological perturbation theory. We thus find a criterion enabling large-scale structures to be stable in f(R) models, circumscribing the class of f(R) theories suitable as alternatives to dark energy. In particular, we get that for constant curvature the viability condition becomes [Formula: see text], with [Formula: see text] and [Formula: see text], respectively, the observed cosmological constant and the Ricci curvature. This prescription rules out models which do not pass the aforementioned [Formula: see text] limit.
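For reference, in general relativity with a cosmological constant (the ΛCDM limit that a viable f(R) model must approach), the balance between Newtonian attraction and the repulsive Λ term for a structure of mass M gives the well-known maximum turnaround radius:

```latex
% Radial acceleration in the Schwarzschild--de Sitter background:
%   \ddot{r} = -\frac{GM}{r^{2}} + \frac{\Lambda c^{2}}{3}\, r .
% Setting \ddot{r} = 0 yields the turnaround radius
\[
  R_{\mathrm{TA,max}} \;=\; \left( \frac{3\,G M}{\Lambda c^{2}} \right)^{1/3}.
\]
```

The abstract's f(R) result generalizes this balance condition; the specific f(R) expressions are given in the paper itself (the placeholders above stem from the source's formula rendering).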

