probabilistic algorithms
Recently Published Documents


TOTAL DOCUMENTS

142
(FIVE YEARS 19)

H-INDEX

21
(FIVE YEARS 2)

2022 ◽  
Vol 309 ◽  
pp. 13-31
Author(s):  
Stefano Barbero ◽  
Emanuele Bellini ◽  
Carlo Sanna ◽  
Javier Verbel

2021 ◽  
Author(s):  
Julie Magat ◽  
Maxime Yon ◽  
Yann Bihan-Poudec ◽  
Valéry Ozenne

Abstract Background: Knowledge of the normal myocardial myocyte orientation could theoretically allow the definition of relevant quantitative biomarkers in clinical routine to diagnose heart pathologies. A whole-heart diffusion tensor template representative of the global myofiber organization across species is therefore crucial for comparisons across populations. In this study, we developed a template-based tractography framework to resolve the global myofiber arrangement of large mammalian hearts. To demonstrate a potential application of the proposed method, a novel description of sub-regions in the interventricular septum is presented. Methods: Three explanted sheep (ovine) hearts (size ~12×8×6 cm3, heart weight ~150 g) were perfused with contrast agent and fixative and imaged in a 9.4 T magnet. A group-wise registration of high-resolution anatomical and diffusion-weighted images was performed to generate anatomical and diffusion tensor templates. Diffusion tensor metrics (eigenvalues, eigenvectors, fractional anisotropy, etc.) were computed to provide a quantitative and spatially resolved analysis of cardiac microstructure. Tractography was then performed using deterministic and probabilistic algorithms and used for different purposes: i) visualization of myofiber architecture, ii) segmentation of sub-areas depicting the same fiber organization, and iii) seeding and tract editing. Finally, dissection was performed to confirm the existence of macroscopic structures identified in the diffusion tensor template. Results: The template creation takes advantage of high-resolution anatomical and diffusion-weighted images obtained at isotropic resolutions of 150 μm and 600 μm respectively, covering ventricles and atria and providing information on the normal myocardial architecture. The diffusion metric distributions from the template were found to be close to those of the individual samples, validating the registration procedure. Small new sub-regions exhibiting spatially sharp variations in fiber orientation close to the junctions of the septum and ventricles were identified. Each substructure was defined and represented using streamlines. The existence of a bundle of fibers in the posterior junction was validated by anatomical dissection. A more complex structural organization of the anterior junction, compared with the posterior junction, was evidenced by the high-resolution acquisition. Conclusions: A new framework combining cardiac template generation and tractography was applied to the whole sheep heart. The framework can be used for anatomical investigation, characterization of microstructure, and visualization of myofiber orientation across samples. Finally, a novel description of the ventricular junction in large mammalian hearts was proposed.
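The diffusion tensor metrics mentioned in the abstract are simple functions of the tensor eigenvalues. As an illustration, fractional anisotropy (FA) can be computed from the three eigenvalues with the standard formula below; this is a textbook definition, not code from the study:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy from the three diffusion-tensor eigenvalues."""
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0:
        return 0.0
    return math.sqrt(1.5 * num / den)

# Isotropic diffusion gives FA = 0; a strongly fiber-like tensor
# (one dominant eigenvalue) approaches FA = 1.
print(fractional_anisotropy(1.0, 1.0, 1.0))   # 0.0
print(fractional_anisotropy(1.7, 0.2, 0.2))
```

Voxel-wise FA maps of this kind are what the template comparison across samples relies on.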


2021 ◽  
Author(s):  
Dúnia Marchiori ◽  
Ricardo Custódio ◽  
Daniel Panario ◽  
Lucia Moura

In code-based cryptography, deterministic algorithms are used in the root-finding step of the decryption process. However, probabilistic algorithms are more time-efficient than deterministic ones for large fields, which makes them attractive for long-term security where larger parameters are relevant. Still, current probabilistic root-finding algorithms suffer from running-time variations that make them susceptible to timing side-channel attacks. To prevent these attacks, we propose a countermeasure for a probabilistic root-finding algorithm so that its execution time does not depend on the degree of the input polynomial but only on the cryptosystem parameters. We compare the performance of our proposed algorithm to that of other root-finding algorithms already used in code-based cryptography. In general, our method is faster than the straightforward algorithm in Classic McEliece. The results also show the range of degrees, in larger finite fields, for which our proposed algorithm is faster than the Additive Fast Fourier Transform algorithm.
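As a rough illustration of the countermeasure's goal (not the paper's algorithm), the sketch below finds the roots of a polynomial over GF(p) by exhaustive Horner evaluation, padding the coefficient vector to a fixed maximum degree t so that the running time depends only on the parameters (p, t), never on the input polynomial's true degree:

```python
# Hypothetical sketch: root finding over GF(p) by exhaustive evaluation.
# Padding the coefficient vector to a fixed maximum degree t makes the
# running time a function of the system parameters (p, t), not of the
# actual degree of the input polynomial -- the property the countermeasure
# targets. All names and parameters here are illustrative.

def find_roots_fixed_time(coeffs, p, t):
    """coeffs[i] is the coefficient of x**i; t is the maximum degree."""
    padded = list(coeffs) + [0] * (t + 1 - len(coeffs))
    roots = []
    for x in range(p):
        acc = 0
        for c in reversed(padded):      # Horner evaluation, always t+1 steps
            acc = (acc * x + c) % p
        if acc == 0:
            roots.append(x)
    return roots

# f(x) = x^2 - 3x + 2 = (x - 1)(x - 2) over GF(7)
print(find_roots_fixed_time([2, -3, 1], 7, 5))  # [1, 2]
```

In a real code-based scheme the field is a large binary extension field and the constant-time requirements are much stricter; this toy version only conveys the degree-independence idea.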


2021 ◽  
pp. 47-60
Author(s):  
Yaroslav Shevchenko

The study is devoted to substantiating the tactics of choosing signs of the patient's condition for diagnostic decision-making on corrective medical intervention in mobile medicine. The aim of the research: to create a methodology for determining the integral informativeness of a patient's symptoms during remote monitoring of their condition. Materials and methods: this article is based on the results of searches in PubMed, Scopus, MEDLINE, EMBASE, PsycINFO, Global Health, Web of Science, the Cochrane Library, and UK NHS HTA for articles published between January 1991 and January 2021 containing the search terms “information technology”, “mobile medicine”, “digital pathology” and “deep learning”, as well as the results of the authors' own research. The authors independently extracted data on concealment of allocation, consistency of allocation, blinding, completeness of follow-up, and interventions. Results: it was concluded that, to determine the informativeness of symptoms in mobile monitoring of patients, risk indicators of predicted conditions can be used as a universal method. Given that the informativeness of the patient's condition changes constantly, for online diagnosis of conditions during remote monitoring it is recommended to treat the informativeness of symptoms as a function of time and to use a set of approaches to assess it. It is proposed to use a diagnosis-and-treatment strategy based on probabilistic algorithms driven by the risk of complications of the pathological process, as well as the formulas of Kullback and Shannon, to determine individual trends in the patient's pathological process. Conclusion: it is proposed to use risk indicators of predicted conditions as a universal method for determining the informational content of symptoms in mobile monitoring of patients.
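One common way to score a symptom's informativeness, in the spirit of the Kullback formula the authors mention, is the symmetrized Kullback divergence between the symptom's value distributions under two predicted conditions. The probabilities below are made-up illustrative numbers, not data from the study:

```python
import math

# Hedged sketch: Kullback's J-divergence as a symptom-informativeness
# score, comparing a symptom's value distribution under two predicted
# conditions D1 and D2. Distributions are illustrative assumptions.

def kullback_informativeness(p_d1, p_d2):
    """Symmetrized Kullback divergence between the symptom's value
    distributions under D1 vs. D2 (base-2 logarithms)."""
    return sum((a - b) * math.log2(a / b) for a, b in zip(p_d1, p_d2))

# A symptom with three gradations, e.g. heart-rate bands.
p_high_risk = [0.6, 0.3, 0.1]
p_low_risk  = [0.2, 0.3, 0.5]
print(round(kullback_informativeness(p_high_risk, p_low_risk), 3))  # 1.563
```

A symptom whose distribution is the same under both conditions scores zero, i.e. it carries no diagnostic signal; re-evaluating such scores over time matches the article's recommendation to treat informativeness as time-dependent.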


2021 ◽  
Author(s):  
James Aimone ◽  
Alexander Safonov

Author(s):  
Andrei Khrennikov

Abstract The recent claim of Google to have brought forth a breakthrough in quantum computing represents a major impetus to further analyze the foundations for any claims of superiority regarding quantum algorithms. This note attempts to present a conceptual step in this direction. I start with a critical analysis of what is commonly referred to as entanglement and quantum nonlocality and whether or not these concepts may be the basis of quantum superiority. Bell-type experiments are then interpreted as statistical tests of Bohr’s principle of complementarity (PCOM), which is, thus, given a foothold within the area of quantum informatics and computation. PCOM implies (by its connection to probability) that probabilistic algorithms may proceed without the knowledge of joint probability distributions (jpds). The computation of jpds is exponentially time consuming. Consequently, classical probabilistic algorithms, involving the computation of jpds for n random variables, can be outperformed by quantum algorithms (for large values of n). Quantum probability theory (QPT) modifies the classical formula for the total probability (FTP). Inference based on the quantum version of FTP leads to a constructive interference that increases the probability of some events and reduces that of others. The physical realization of this probabilistic advantage is based on the discreteness of quantum phenomena (as opposed to the continuity of classical phenomena).
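The quantum modification of FTP is easy to sketch numerically. For a dichotomous variable A, the classical total probability gains an interference term 2·cos(θ)·√(P(A=a₁)P(B|a₁)P(A=a₂)P(B|a₂)) whose sign shifts P(B) up (constructive) or down (destructive). The numbers below are illustrative:

```python
import math

# Sketch of the quantum-modified formula of total probability (FTP)
# for a dichotomous variable A. Probabilities are illustrative.

def classical_ftp(p_a, p_b_given_a):
    """Classical FTP: P(B) = sum_i P(A = a_i) * P(B | A = a_i)."""
    return sum(pa * pb for pa, pb in zip(p_a, p_b_given_a))

def quantum_ftp(p_a, p_b_given_a, theta):
    """Quantum FTP: classical FTP plus an interference term."""
    base = classical_ftp(p_a, p_b_given_a)
    interference = 2 * math.cos(theta) * math.sqrt(
        p_a[0] * p_b_given_a[0] * p_a[1] * p_b_given_a[1])
    return base + interference

p_a = [0.5, 0.5]
p_b = [0.4, 0.6]                          # P(B | A = a_i)
print(classical_ftp(p_a, p_b))            # 0.5
print(quantum_ftp(p_a, p_b, 0.0))         # constructive: > 0.5
print(quantum_ftp(p_a, p_b, math.pi))     # destructive:  < 0.5
```

At θ = π/2 the interference vanishes and the quantum and classical values coincide, which is the sense in which QPT contains classical probability as a special case.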


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 70
Author(s):  
Sayed A. Mohsin ◽  
Ahmed Younes ◽  
Saad M. Darwish

A distributed database model can be effectively optimized through query optimization. In such a model, the optimizer attempts to identify the most efficient join order, which minimizes the overall cost of the query plan. Successful query processing largely relies on the methodology implemented by the query optimizer. Many studies note that query processing is an NP-hard problem, especially as queries grow larger. For large queries, it has been found that heuristic methods cannot cover the whole search space and may fall into a local minimum. This paper examines how a quantum-inspired ant colony algorithm, a hybrid of probabilistic algorithms, can be devised to improve the cost of query joins in distributed databases. Quantum computing has the ability to diversify and expand the search, and can thus cover large query search spaces. This enables the selection of the best trails, which speeds up convergence and helps avoid falling into a local optimum. With such a strategy, the algorithm aims to identify an optimal join order that reduces the total execution time. Experimental results show that the proposed quantum-inspired ant colony offers faster convergence with a better outcome when compared with the classic model.
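For readers unfamiliar with the underlying metaheuristic, a minimal classical ant-colony sketch for join-order search is shown below. It is not the paper's quantum-inspired variant, and the left-deep cost model with a single uniform selectivity is a toy assumption:

```python
import random

# Minimal classical ant-colony sketch for join-order search: ants build
# join orders guided by pheromone, and the best order found so far
# reinforces its edges. The cost model is a deliberately crude toy.

def order_cost(order, card, sel):
    """Toy left-deep plan cost: sum of intermediate result sizes."""
    size, cost = card[order[0]], 0
    for b in order[1:]:
        size = size * card[b] * sel
        cost += size
    return cost

def aco_join_order(card, sel=0.01, n_ants=20, n_iter=50, rho=0.1, seed=0):
    rng = random.Random(seed)
    n = len(card)
    tau = [[1.0] * n for _ in range(n)]     # pheromone between relations
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            cur = rng.randrange(n)
            order, todo = [cur], set(range(n)) - {cur}
            while todo:                     # probabilistic construction step
                cand = list(todo)
                weights = [tau[cur][j] for j in cand]
                cur = rng.choices(cand, weights=weights)[0]
                order.append(cur)
                todo.remove(cur)
            c = order_cost(order, card, sel)
            if c < best_cost:
                best, best_cost = order, c
        # evaporate, then reinforce the edges of the best order so far
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for a, b in zip(best, best[1:]):
            tau[a][b] += 1.0 / best_cost

    return best, best_cost

cards = [1000, 50, 5000, 10]                # relation cardinalities
order, cost = aco_join_order(cards)
print(order, cost)
```

The quantum-inspired variant replaces the scalar pheromone with qubit-style state representations to keep the search diversified, which is what the paper credits for avoiding local optima.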


2021 ◽  
Vol 30 (1) ◽  
pp. 836-854
Author(s):  
Mustafa Kamal Pasha ◽  
Syed Fasih Ali Gardazi ◽  
Fariha Imtiaz ◽  
Asma Talib Qureshi ◽  
Rabia Afrasiab

Abstract Soon after the first COVID-19-positive case was detected in Wuhan, China, the virus spread around the globe, and in no time it was declared a global pandemic by the WHO. Testing, the first step in identifying and diagnosing COVID-19, became an immediate public need, and testing kits were manufactured to detect COVID-19 efficiently. However, due to limited resources in densely populated countries, testing capacity even after a year remains a limiting factor for COVID-19 diagnosis on a larger scale and contributes to lags in disease tracking and containment. For this reason, we undertook this study to provide a cost-effective solution for enhancing testing capacity so that the maximum number of people can be tested for COVID-19. For this purpose, we used an artificial neural network (ANN) approach on the relevant data for COVID-19 and its testing. The data were analyzed using machine learning, and probabilistic algorithms were applied to obtain a statistically supported solution for COVID-19 testing. The results obtained through the ANN indicated that sample pooling is not only effective but can be regarded as a “gold standard” for testing when the prevalence of the disease in the population is low and the chance of a positive result is small. We further demonstrated through the algorithms that pooling samples from 16 individuals is better than pooling samples from 8 individuals when there is a high likelihood of negative test results. These findings support the conclusion that if sample pooling is employed on a larger scale, testing capacity will be considerably increased within the limited available resources without compromising test specificity. This would provide healthcare units and enterprises with scientifically grounded solutions, saving a considerable amount of time and money, and would eventually help contain the spread of the pandemic in densely populated areas, including vulnerable confined groups such as nursing homes, hospitals, cruise ships, and military ships.
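The pooling trade-off described above follows from the standard Dorfman two-stage model: a pool of k samples is tested once and retested member-by-member only if positive, giving an expected 1/k + 1 − (1 − p)^k tests per person at prevalence p. The sketch below uses this textbook formula (not the authors' ANN) to show why larger pools win only at low prevalence:

```python
# Dorfman two-stage pooling: test a pool of k samples once; if positive,
# retest each member individually. Expected tests per person:
#   E(k) = 1/k + 1 - (1 - p)**k,  where p is prevalence.
# Prevalence values below are illustrative.

def expected_tests_per_person(p, k):
    return 1.0 / k + 1.0 - (1.0 - p) ** k

low_prevalence = 0.005
print(expected_tests_per_person(low_prevalence, 8))
print(expected_tests_per_person(low_prevalence, 16))   # cheaper at low p

high_prevalence = 0.15
print(expected_tests_per_person(high_prevalence, 8))
print(expected_tests_per_person(high_prevalence, 16))  # pooling gains shrink
```

At 0.5% prevalence the pool of 16 needs roughly 0.14 tests per person versus roughly 0.16 for a pool of 8, consistent with the abstract's finding; at 15% prevalence the ordering reverses and pooling barely saves anything.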


Author(s):  
Themba Mutemaringa ◽  
Alexa Heekes ◽  
Mariette Smith ◽  
Nicki Tiffin ◽  
Andrew Boulle

Introduction: Increasing use of digital medical records creates disparate data resources for the same health-care client population, and harnessing the benefits of real-time health data requires effective data linkage. A South African Health Information Exchange (HIE) collates and links routine health data from multiple sources, running daily updates through an automated ETL process. Many existing deterministic and probabilistic algorithms link person-level data using demographic identifiers, and these can be combined in an optimised methodological pipeline. The performance of such pipelines must be validated against known matched pairs. The HIE uses current algorithms for record linkage, but methods that rely on similar spelling, name frequency and phonetic matching have been optimised for non-African names and are less effective here. Objectives: We assessed common problems arising in the linkage process in the HIE, using this information to compile a curated, representative African validation database for optimising existing and new linkage pipelines. Results: Using current linkage algorithms, we identified the proportion of duplicates over the last five years, falling from 25% in 2015 and stabilising at 10% by 2019. Common causes of duplicates across the whole database include mismatches in first name (37%), surname (17%), date of birth (13%), sex (8%) and South African Identification Number (0.2%). Complications from newborn naming and records of twins affect >8% of all records, and temporary health identifiers assigned at birth, during emergency response, and during poor connectivity of facilities to the provincial patient master index affect 2% of records. Conclusions: Based on these data, we have constructed a South African-specific, representative validation dataset that contains linkage pairs representing placeholder phrases for newborns prior to naming (e.g. “baby of”); language variations; twins; character insertions, substitutions and omissions in names with similar spellings; frequencies of names in the general population; and similar-sounding names.
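One ingredient of such a linkage pipeline can be sketched as a name comparator combined with a filter for newborn placeholder names, which the validation dataset is designed to stress-test. The threshold, placeholder list, and records below are illustrative assumptions, not the HIE's actual rules:

```python
from difflib import SequenceMatcher

# Hedged sketch of one probabilistic-linkage ingredient: an edit-based
# name comparator plus a filter for placeholder names ("baby of ...").
# Threshold and placeholder list are illustrative assumptions.

PLACEHOLDERS = ("baby of", "unknown", "twin")

def is_placeholder(name):
    return name.lower().startswith(PLACEHOLDERS)

def name_similarity(a, b):
    """Edit-based similarity in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_match(rec_a, rec_b, threshold=0.85):
    # Placeholder names carry no identifying signal; fall back to
    # exact agreement on other fields instead.
    if is_placeholder(rec_a["first"]) or is_placeholder(rec_b["first"]):
        return (rec_a["dob"] == rec_b["dob"]
                and rec_a["surname"] == rec_b["surname"])
    score = (name_similarity(rec_a["first"], rec_b["first"])
             + name_similarity(rec_a["surname"], rec_b["surname"])) / 2
    return score >= threshold

a = {"first": "Thandiwe", "surname": "Mokoena", "dob": "2001-03-04"}
b = {"first": "Thandwe",  "surname": "Mokoena", "dob": "2001-03-04"}
print(candidate_match(a, b))   # a spelling variant still links
```

A validation dataset of the kind the authors describe would pair records like these (spelling variants, placeholders, twins) with known match/non-match labels so that thresholds and comparators can be tuned for African names specifically.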


2020 ◽  
pp. 325-346
Author(s):  
Kenric P. Nelson

This chapter introduces a simple, intuitive approach to the assessment of probabilistic inferences. The Shannon information metrics are translated to the probability domain. The translation shows that the negative logarithmic score and the geometric mean are equivalent measures of the accuracy of a probabilistic inference. The geometric mean of forecasted probabilities is thus a measure of forecast accuracy and represents the central tendency of the forecasts. The reciprocal of the geometric mean is referred to as the perplexity and defines the number of independent choices needed to resolve the uncertainty. The assessment method introduced in this chapter is intended to reduce the ‘qualitative’ perplexity relative to the potpourri of scoring rules currently used to evaluate machine learning and other probabilistic algorithms. Utilization of this assessment will provide insight into designing algorithms with reduced ‘quantitative’ perplexity and thus improved accuracy of probabilistic forecasts. The translation of information metrics to the probability domain incorporates the generalized entropy functions developed by Rényi and Tsallis. Both generalizations translate to the weighted generalized mean. The generalized mean of probabilistic forecasts forms a spectrum of performance metrics referred to as a Risk Profile. The arithmetic mean is used to measure decisiveness, while the −2/3 mean is used to measure robustness.
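These probability-domain metrics are easy to compute directly. The sketch below evaluates the geometric mean of the probabilities a forecaster assigned to the outcomes that actually occurred, its reciprocal (the perplexity), and the generalized mean at the two exponents the chapter names; the forecast values are illustrative:

```python
import math

# Probability-domain assessment of forecasts: the generalized mean of
# the probabilities assigned to observed outcomes, swept over the
# exponent r, traces out the chapter's Risk Profile. Forecast values
# below are illustrative.

def generalized_mean(probs, r):
    if r == 0:                          # limit r -> 0 is the geometric mean
        return math.exp(sum(math.log(p) for p in probs) / len(probs))
    return (sum(p ** r for p in probs) / len(probs)) ** (1.0 / r)

forecasts = [0.8, 0.5, 0.9, 0.6]   # probabilities given to observed outcomes

geo = generalized_mean(forecasts, 0)
print(geo)                                 # central tendency of accuracy
print(1.0 / geo)                           # perplexity: effective choices
print(generalized_mean(forecasts, 1))      # arithmetic mean: decisiveness
print(generalized_mean(forecasts, -2/3))   # -2/3 mean: robustness
```

Because the generalized mean is non-decreasing in r, the robustness metric is always at most the geometric mean, which is in turn at most the decisiveness metric; a forecaster with one very poor forecast is punished most by the low-r end of the profile.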

