approximate answers
Recently Published Documents


TOTAL DOCUMENTS: 36 (FIVE YEARS: 2)
H-INDEX: 9 (FIVE YEARS: 0)

2021 ◽  
pp. 125-142
Author(s):  
Trevor Davis Lipscombe

This chapter describes techniques for estimating the results of division and multiplication by certain numbers, with the aim of helping in test taking: knowing how to approximate answers rapidly lets some candidate answers be eliminated, improving the chance of picking the correct answer from the smaller set of remaining options. It presents easy ways to determine whether a number is divisible by 11, 17, and 19, and introduces sphenic numbers, cannonball (or square pyramidal) numbers in relation to the Kepler Conjecture, and the Kaprekar number. Examples are drawn from the Azerbaijan Grand Prix and two triple dead heats in dog racing.
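As a flavour of the shortcuts involved, here is a small sketch (not taken from the book; the rules shown are standard arithmetic facts) that tests divisibility by 11 via the alternating digit sum, and by 17 and 19 via the usual trim-the-last-digit rules.

```python
def divisible_by_11(n: int) -> bool:
    """A number is divisible by 11 iff the alternating sum of its digits is."""
    digits = [int(d) for d in str(abs(n))]
    alt_sum = sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))
    return alt_sum % 11 == 0

def divisible_by_17(n: int) -> bool:
    """Repeatedly replace n by (n without its last digit) - 5 * (last digit)."""
    n = abs(n)
    while n > 100:
        n = abs(n // 10 - 5 * (n % 10))
    return n % 17 == 0

def divisible_by_19(n: int) -> bool:
    """Repeatedly replace n by (n without its last digit) + 2 * (last digit)."""
    n = abs(n)
    while n > 100:
        n = n // 10 + 2 * (n % 10)
    return n % 19 == 0

assert divisible_by_11(918082)   # 918082 = 11 * 83462
assert divisible_by_17(2023)     # 2023 = 7 * 17 * 17
assert divisible_by_19(2014)     # 2014 = 2 * 19 * 53
```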


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0244026
Author(s):  
John Golden ◽  
Daniel O’Malley

It was recently shown that quantum annealing can be used as an effective, fast subroutine in certain types of matrix factorization algorithms. The quantum annealing algorithm performed best for quick, approximate answers, but performance rapidly plateaued. In this paper, we utilize reverse annealing instead of forward annealing in the quantum annealing subroutine for nonnegative/binary matrix factorization problems. After an initial global search with forward annealing, reverse annealing performs a series of local searches that refine existing solutions. The combination of forward and reverse annealing significantly improves performance compared to forward annealing alone for all but the shortest run times.
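For readers unfamiliar with the setting, the sketch below shows the alternating structure of nonnegative/binary matrix factorization that the annealing subroutine plugs into. It is not the authors' code: the binary subproblem for each column of H, which the paper hands to the annealer as a QUBO (forward anneal for a global guess, reverse anneal for local refinement), is brute-forced here, which is only practical for small rank k, and the W-update is a clipped least-squares simplification.

```python
import itertools
import numpy as np

def nbmf(V, k, iters=20, seed=0):
    """Nonnegative/binary matrix factorization: V ~ W @ H with W >= 0, H in {0,1}.

    The H-update is the binary quadratic problem that would be sent to the
    annealer; here it is brute-forced over {0,1}^k as a classical stand-in.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    H = rng.integers(0, 2, size=(k, n))
    for _ in range(iters):
        # W-update: unconstrained least squares, clipped to stay nonnegative
        # (a simplification of the nonnegative solver used in practice).
        Wt, *_ = np.linalg.lstsq(H.T, V.T, rcond=None)
        W = np.clip(Wt.T, 0.0, None)
        # H-update: each column j is an independent binary quadratic problem,
        # minimize ||V[:, j] - W @ h||^2 over h in {0,1}^k.
        for j in range(n):
            best = min(itertools.product([0, 1], repeat=k),
                       key=lambda h: np.sum((V[:, j] - W @ np.array(h)) ** 2))
            H[:, j] = best
    return W, H

V = np.random.default_rng(1).random((8, 6))
W, H = nbmf(V, k=3)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```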


2016 ◽  
Vol 33 (S1) ◽  
pp. S363-S363
Author(s):  
R. Faria ◽  
D. Brandão ◽  
T. Novo ◽  
L. Quintela ◽  
A. Fonte

Introduction: First described by Sigbert Ganser in 1898, Ganser syndrome is a rare condition characterized by the following four clinical features: approximate answers, dulling of consciousness, conversion symptoms and hallucinations.
Objectives: To present a case suggestive of Ganser syndrome and to review the literature, with particular regard to the aetiology of this condition.
Methods: Literature review using computerized databases (MEDLINE®, Medscape®). Articles were selected based on the content of their abstracts and their relevance.
Results: A 58-year-old woman was admitted to a psychiatric unit of a general hospital presenting behavioural abnormalities of acute onset. During hospitalization, the patient displayed indifference, incoherent speech with approximate answers, motor abnormalities and auditory pseudohallucinations. The patient was evaluated by a neurologist and various exams were performed (blood tests, CT, MRI, EEG) that showed no significant abnormalities. Pharmacological treatment consisted of antidepressant and antipsychotic medications. During follow-up there was a slow but gradual improvement of symptoms. Six months after hospitalization the patient decided to end the follow-up.
Conclusions: Little is still known about Ganser syndrome. The four aetiological perspectives consider a hysterical origin, malingering or factitious disorder, a psychotic origin and an organic origin. The scarcity of reports and information about Ganser syndrome makes this case worth reporting.
Disclosure of interest: The authors have not supplied their declaration of competing interest.


2014 ◽  
Vol 26 (10) ◽  
pp. 2561-2573 ◽  
Author(s):  
Bettina Fazzinga ◽  
Sergio Flesca ◽  
Andrea Pugliese
Keyword(s):  

Author(s):  
Mirjana Mazuran ◽  
Elisa Quintarelli ◽  
Angelo Rauseo ◽  
Letizia Tanca

In this work we describe the TreeRuler tool, which makes it possible for inexperienced users to access huge XML (or relational) datasets. TreeRuler encompasses two main features: (1) it mines all the frequent association rules from the input documents without any a priori specification of the desired results, and (2) it provides quick, summarized, and thus often approximate, answers to users' queries by using the previously mined knowledge. TreeRuler has been developed within the Odyssey EU project, which deals with information about crimes, for both the relational and XML data models. In this chapter we focus mainly on the objectives, strategies, and difficulties encountered in the XML context.
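The abstract does not expose TreeRuler's query language or rule format, so the following toy sketch (hypothetical names, flat tuples instead of XML) only illustrates the general idea of answering a count query from previously mined association rules instead of scanning the data.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    """A mined association rule: antecedent => consequent, with its statistics."""
    antecedent: dict     # e.g. {"crime": "burglary"}
    consequent: dict     # e.g. {"city": "Milan"}
    support: float       # fraction of records matching antecedent AND consequent
    confidence: float    # P(consequent | antecedent)

# Hypothetical rules mined offline (illustrative values, not real project data).
RULES = [
    Rule({"crime": "burglary"}, {"city": "Milan"}, support=0.06, confidence=0.30),
    Rule({"crime": "fraud"}, {"city": "Rome"}, support=0.02, confidence=0.25),
]

def approximate_count(query: dict, total_records: int) -> Optional[int]:
    """Estimate how many records match `query` using a rule whose
    antecedent and consequent together cover the query predicates."""
    for rule in RULES:
        if {**rule.antecedent, **rule.consequent} == query:
            return round(rule.support * total_records)
    return None  # no mined rule covers the query: fall back to an exact scan

print(approximate_count({"crime": "burglary", "city": "Milan"}, total_records=100_000))
# -> 6000, without touching the underlying documents
```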


2011 ◽  
pp. 2203-2217
Author(s):  
Qing Zhang

In this article we investigate how approximate query processing (AQP) can be used in medical multidatabase systems, and we identify two areas where this estimation technique is useful. First, approximate query processing can be used to preprocess medical record linking in the multidatabase: when multidatabase systems are used to link health and health-related data sources, preprocessing can be used to find records related to the same patient, which may be the first step in the linking strategy. Second, approximate answers can be given for aggregate queries: if the aim is to gather aggregate statistics, then approximate answers may be enough to provide the required results, and at a minimum they can provide initial answers that encourage further investigation. This estimation may also be used for general query planning and optimization, which is important in multidatabase systems. We propose two techniques for the estimation; they enable synopses of the component local databases to be precomputed, under restrictions on storage space, and then used to obtain approximate results both for linking records and for aggregate queries. We report on experiments showing that good approximate results can be obtained in a much shorter time than that needed to compute the exact query.
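The abstract does not specify which kind of synopsis the two techniques build, so the sketch below uses a generic, space-bounded uniform sample per component database, one standard way to precompute synopses and answer aggregate queries approximately by scaling up the sample counts.

```python
import random

def build_synopsis(records, budget, seed=0):
    """Precompute a fixed-size uniform random sample of one local database."""
    rng = random.Random(seed)
    sample = rng.sample(records, min(budget, len(records)))
    return {"sample": sample, "population": len(records)}

def approx_count(synopsis, predicate):
    """Estimate COUNT(*) WHERE predicate by scaling up the sample count."""
    sample = synopsis["sample"]
    hits = sum(1 for r in sample if predicate(r))
    return round(hits / len(sample) * synopsis["population"])

# Two component databases of a (toy) medical multidatabase.
rng_a, rng_b = random.Random(1), random.Random(2)
hospital_a = [{"age": rng_a.randint(0, 90)} for _ in range(50_000)]
hospital_b = [{"age": rng_b.randint(0, 90)} for _ in range(80_000)]

synopses = [build_synopsis(db, budget=1_000) for db in (hospital_a, hospital_b)]
estimate = sum(approx_count(s, lambda r: r["age"] >= 65) for s in synopses)
print("approximate number of patients aged 65+:", estimate)
```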


Author(s):  
Alfredo Cuzzocrea

Since the underlying data warehouse server (DWS) is usually very large, the response time needed to compute queries is the main issue in decision support systems (DSS). Business analysis is the main application field in the context of DSS, and OLAP queries are the most useful ones: they support different kinds of analysis based on a multi-resolution, multi-dimensional view of the data. By performing OLAP queries, business analysts can efficiently extract summarized knowledge, by means of SQL aggregation operators, from very large repositories of data such as those stored in massive DWSs. The extracted knowledge is then exploited to support decisions in strategic fields of the target business, thus taking advantage of the ability to explore and mine massive data via OLAP technologies. The negative aspect of this approach is the sheer size of the data: terabytes and petabytes are currently the typical orders of magnitude for enterprise DWSs, and, as a consequence, data processing costs are explosive.

Despite the complexity and resource-intensiveness of processing OLAP queries against massive DWSs, the client-side systems that perform OLAP and data mining, the most common application interfaces to DWSs, are often characterized by small amounts of memory, limited computational capability, and customized tools with interactive, graphical user interfaces supporting qualitative trend analysis. Consider, for instance, retail systems. Here, managers and analysts are much more interested in the product-sale trend over a fixed time window than in the sales of a particular product on a particular day of the year. In other words, managers and analysts care more about trend analysis than about punctual, quantitative analysis, which is indeed more appropriate for OLTP systems. This makes it more convenient and efficient to compute approximate answers rather than exact answers, since typical decision-support queries can be very resource-intensive in terms of space and time. Obviously, the other issue that must be faced is the accuracy of the answers, as providing fast but totally wrong answers is deleterious. All things considered, the key is providing fast, exploratory answers with some guarantees on their degree of approximation.

In the last few years, DSS have also become very popular, drawing on sources such as sales transaction databases, call detail repositories, customer service historical data, and so forth. As a consequence, providing fast, even if approximate, answers to aggregate queries has become a tight requirement for making DSS-based applications efficient, and it has been addressed in research in the form of the so-called approximate query answering (AQA) techniques. Furthermore, in such data warehousing environments, executing multi-step query-processing algorithms is particularly hard because the computational cost of accessing multi-dimensional data would be enormous. Therefore, the most important issues for enabling DSS-based applications are: (1) minimizing the time complexity of query-processing algorithms by decreasing the number of disk I/Os needed, and (2) ensuring the quality of the approximate answers with respect to the exact ones by providing some guarantees on the accuracy of the approximation. Nevertheless, existing proposals in the literature devote little attention to point (2), which is indeed critical for the investigated context.
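As a concrete illustration of the kind of AQA synopsis discussed above (a generic technique, not the specific proposal of this chapter), the following sketch answers an OLAP-style range-SUM query from an equi-width histogram, assuming values are spread uniformly within each bucket.

```python
import numpy as np

def build_histogram(keys, measures, n_buckets):
    """Precompute, per bucket, the SUM of the measure: a tiny synopsis of a huge fact table."""
    edges = np.linspace(keys.min(), keys.max(), n_buckets + 1)
    idx = np.clip(np.digitize(keys, edges) - 1, 0, n_buckets - 1)
    sums = np.bincount(idx, weights=measures, minlength=n_buckets)
    return edges, sums

def approx_range_sum(edges, sums, lo, hi):
    """Estimate SUM(measure) for lo <= key <= hi, assuming uniformity inside each bucket."""
    total = 0.0
    for b in range(len(sums)):
        left, right = edges[b], edges[b + 1]
        overlap = max(0.0, min(hi, right) - max(lo, left))
        width = right - left
        if width > 0:
            total += sums[b] * overlap / width
    return total

rng = np.random.default_rng(0)
day = rng.integers(0, 365, size=1_000_000)       # date dimension of a sales cube
amount = rng.gamma(2.0, 50.0, size=1_000_000)    # sales measure
edges, sums = build_histogram(day, amount, n_buckets=52)
print("approx Q1 revenue:", approx_range_sum(edges, sums, 0, 90))
print("exact  Q1 revenue:", amount[(day >= 0) & (day <= 90)].sum())
```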

