data vector
Recently Published Documents


TOTAL DOCUMENTS: 60 (FIVE YEARS: 27)
H-INDEX: 7 (FIVE YEARS: 1)

Author(s): Eduard E. Zijlstra

Precision medicine and precision global health in visceral leishmaniasis (VL) have not yet been described; they would take into account how all known determinants can improve diagnostics and treatment for the individual patient. Precision public health would lead to the right intervention in each VL endemic population for control, based on relevant population-based data, vector exposure, reservoirs, socio-economic factors and other determinants. In anthroponotic VL caused by L. donovani, precision may currently be targeted at the regional level, in nosogeographic entities defined by the interplay of the circulating parasite, the reservoir and the sand fly vector. From this, five major priorities arise: diagnosis, treatment, PKDL, asymptomatic infection and transmission. These five priorities share the immune response to infection with L. donovani as an important final common pathway, for which innovative genomic and non-genomic tools from various disciplines have become available, providing new insights into clinical management and control. From this, further precision may be defined for groups (e.g. children, women, pregnancy, HIV-VL co-infection) and eventually targeted to the individual level.


Author(s): Georgiy Teplov, Almira Galeeva, Aleksey Kuzovkov

This work explored the impact of input data structure on neural network training. The effect of two variants of the input data vector on the training accuracy of the network was studied. The first variant of the input vector included the intensity map of the exposure radiation; the second variant included both the intensity map of the exposure radiation and the IC topology.


Sensors, 2021, Vol. 21 (19), pp. 6471
Author(s): Ji-An Luo, Chang-Cheng Xue, Ying-Jiao Rong, Shen-Tu Han

This paper considers the problem of robust bearing-only source localization in impulsive noise with a symmetric α-stable distribution, based on the Lp-norm minimization criterion. The existing Iteratively Reweighted Pseudolinear Least-Squares (IRPLS) method can be used to solve the least Lp-norm optimization problem. However, the IRPLS algorithm cannot reduce the bias attributed to the correlation between system matrices and noise vectors. To reduce this kind of bias, a Total Lp-norm Optimization (TLPO) method is proposed that minimizes the errors in all elements of the system matrix and the data vector based on the minimum dispersion criterion. Subsequently, an equivalent form of TLPO is obtained, and two algorithms are developed to solve the TLPO problem using Iterative Generalized Eigenvalue Decomposition (IGED) and the Generalized Lagrange Multiplier (GLM), respectively. Numerical examples demonstrate the performance advantage of the IGED and GLM algorithms over the IRPLS algorithm.


2021, Vol. 508 (1), pp. 637-664
Author(s): S Samuroff, R Mandelbaum, J Blazek

We use galaxies from the IllustrisTNG, MassiveBlack-II, and Illustris-1 hydrodynamic simulations to investigate the behaviour of large-scale galaxy intrinsic alignments (IAs). Our analysis spans four redshift slices over the approximate range of contemporary lensing surveys, z = 0−1. We construct comparable weighted samples from the three simulations, which we then analyse using an alignment model that includes both linear and quadratic alignment contributions. Our data vector includes galaxy–galaxy, galaxy–shape, and shape–shape projected correlations, with the joint covariance matrix estimated analytically. In all of the simulations, we report non-zero IAs at the level of several σ. For a fixed lower mass threshold, we find a relatively strong redshift dependence in all three simulations, with the linear IA amplitude increasing by a factor of ∼2 between redshifts z = 0 and z = 1. We report no significant evidence for non-zero values of the tidal torquing amplitude, A2, in TNG above statistical uncertainties, although MBII favours a moderately negative A2 ∼ −2. Examining the properties of the TATT model as a function of colour, luminosity and galaxy type (satellite or central), our findings are consistent with the most recent measurements on real data. We also outline a novel method for constraining the TATT model parameters directly from the pixelized tidal field, alongside a proof-of-concept exercise using TNG. This technique is shown to be promising, although comparison with previous results obtained via other methods is non-trivial.


Author(s): E.S. Goryachkin, V.N. Matveev, G.M. Popov, O.V. Baturin, Yu.D. Novikova

The paper presents an algorithm for seeking an optimal blade configuration for multistage axial-flow compressors. The primary tool behind the algorithm is 3D CFD simulation, augmented by commercial optimisation software. The core of the algorithm involves feeding an initial data vector to the parametric simulation module to form a "new" blade geometry, which is then transferred to the 3D computational software. The results obtained are further processed in a program that implements the optimum-seeking algorithm and forms a new input data vector to achieve the set goal. We present a method of parametric simulation of the blade shape, implemented in a software package, which makes it possible to describe the shape of the compressor blade profiles using a minimum number of variables and to change the shape automatically within the optimisation cycle. The algorithm developed allows the main parameters of compressor operation (efficiency, pressure ratio, air flow rate, etc.) to be improved by correcting the profile shape and the relative position of the blades, and it takes into account various possible constraints. We used the method to solve practical problems of optimising multistage axial compressors of gas turbine engines for various purposes, with the number of compressor stages ranging from 3 to 15. As a result, the efficiency, pressure ratio and stability margin of the gas turbine engines were increased.


Author(s): M Tafaquh Fiddin Al Islami, Ali Ridho Barakbah, Tri Harsono

A company maintains and improves its service quality by paying attention to reviews and complaints from users. Complaints from users are commonly written in natural language, so their content is computationally difficult to extract and process. To overcome this difficulty, in this study we present a new system for extracting issue features from users' reviews and complaints in social media data. The system consists of four main functions: (1) Data Crawling and Preprocessing, (2) Categorization Knowledge Modelling, (3) Rule-based Sentiment Analysis, and (4) Application Environment. Data Crawling and Preprocessing provides data acquisition from users' tweets on social media, crawls the data and applies preprocessing. Categorization Knowledge Modelling provides text mining of textual data, vector space transformation to create knowledge metadata, context recognition of keyword queries against the knowledge metadata, and similarity measurement for categorization. In the Rule-based Sentiment Analysis, we developed our own computational-linguistics rules to measure the polarity of sentiment. The Application Environment consists of three layers: database management, back-end services and front-end services. To assess the applicability of the proposed system, we conducted two kinds of experimental study: (1) categorization performance and (2) sentiment analysis performance. For categorization performance, we used 8743 tweets and achieved 82% accuracy; for sentiment analysis performance, we experimented on 217 tweets and achieved 92% accuracy.


2021, Vol. 503 (2), pp. 2688-2705
Author(s): C Doux, E Baxter, P Lemos, C Chang, A Alarcon, ...

Beyond-ΛCDM physics or systematic errors may cause subsets of a cosmological data set to appear inconsistent when analysed assuming ΛCDM. We present an application of internal consistency tests to measurements from the Dark Energy Survey Year 1 (DES Y1) joint probes analysis. Our analysis relies on computing the posterior predictive distribution (PPD) for these data under the assumption of ΛCDM. We find that the DES Y1 data have an acceptable goodness of fit to ΛCDM, with a probability of finding a worse fit by random chance of p = 0.046. Using numerical PPD tests, supplemented by graphical checks, we show that most of the data vector appears completely consistent with expectations, although we observe a small tension between large- and small-scale measurements. A small part (roughly 1.5 per cent) of the data vector shows an unusually large departure from expectations; excluding this part of the data has negligible impact on cosmological constraints, but does significantly improve the p-value to 0.10. The methodology developed here will be applied to test the consistency of DES Year 3 joint probes data sets.


Sensors, 2021, Vol. 21 (4), pp. 1478
Author(s): Chong Song, Bingnan Wang, Maosheng Xiang, Wei Li

A generalized likelihood ratio test (GLRT) with the constant false alarm rate (CFAR) property was recently developed for adaptive detection of moving targets in focused synthetic aperture radar (SAR) images. However, in the multichannel SAR ground moving-target indication (SAR-GMTI) system, image defocus is inevitable, which remarkably degrades the performance of the GLRT detector, especially for moving targets with lower radar cross-section (RCS) and slower radial velocity. To address this issue, an extended GLRT detector based on the generalized steering vector (GSV) is proposed, and its performance is evaluated against the optimum likelihood ratio test (LRT) under the Neyman-Pearson (NP) criterion. The joint data vector formed by the current cell and its adjacent cells is used to obtain the GSV, and the extended GLRT is then derived; it coherently integrates the signal and accomplishes moving-target detection and parameter estimation. Theoretical analysis and simulated SAR data demonstrate the effectiveness and robustness of the proposed detector on defocused SAR images.


Author(s): D.V. Efanov, G.V. Osadchii, M.V. Zueva, ...

The article deals with previously unknown characteristics of error detection by classical Berger codes, based on error multiplicities and types (unidirectional, symmetrical and asymmetrical), which can be applied in the synthesis of concurrent error-detection (CED) systems, for example through the Boolean complement method. The article shows that Berger codes fail to detect a certain share of symmetrical, unidirectional and asymmetrical errors in full code words. This differs from the previously identified characteristics of error detection in the data vectors of Berger codes alone (in that case, no symmetrical errors are detected, while all unidirectional and asymmetrical errors are detected, which is used in the synthesis of systems with fault detection). The share of undetectable errors in their total number for Berger codes with data vector lengths r = 4,…,7 is less than 2%, and for Berger codes with data vector lengths r = 8,…,15 it is less than 0.5%. The use of classical sum codes is effective in CED systems synthesis, including the Boolean complement method, in which both data and check bits of code words are computed from the diagnostic object itself.


Complexity, 2020, Vol. 2020, pp. 1-13
Author(s): Ji-An Luo, Chang-Cheng Xue, Dong-Liang Peng

Robust techniques critically improve bearing-only target localization when the measurements are corrupted by impulsive noise. The conventional least absolute residual (LAR) method is resistant to isolated gross errors, and its estimate can be determined by linear programming once pseudolinear equations are set up. The LAR approach, however, cannot reduce the bias attributed to the correlation between system matrices and noise vectors. In the present study, perturbations are introduced into the elements of the system matrix and the data vector simultaneously, and the total optimization problem is formulated in terms of least absolute deviations. Subsequently, an equivalent form of the total least absolute residuals (TLAR) problem is obtained, and an algorithm is developed to compute the robust estimate via dual ascent. Moreover, the performance of the proposed method is verified through numerical simulations using two types of localization geometry, i.e., random and linear. As revealed by the results, the TLAR algorithm exhibits significantly higher localization accuracy than the LAR method.

