complete accuracy
Recently Published Documents

TOTAL DOCUMENTS: 22 (five years: 5)
H-INDEX: 6 (five years: 0)

2021 ◽  
Vol 22 (8) ◽  
pp. 968-968
Author(s):  
M. Chalusov

Witter (Surg., Gyn. a. Obst., 1925, 1), on the basis of a study of 30 cases, concludes that: 1) postoperative leukocytosis reaches its maximum at the 4th hour after the operation and returns to normal by the 5th day; 2) its height is directly proportional to the duration of the operation and the degree of surgical trauma, while other factors are not significant; 3) it is expressed mainly by polynucleosis; 4) its curve generally corresponds to the temperature curve, though not with complete accuracy.


2021 ◽  
Vol 9 (1) ◽  
pp. 28-38
Author(s):  
Pavle Dakić ◽  
Jelena Savić ◽  
Vladimir Todorović

Modern web business creates the need for continuous growth and progress of all involved and connected business entities. It requires compliance with certain web standards and support for many different browsers and devices. To be confident in the published version of the code, a team is needed that loves challenges and has the creativity and desire for constant learning. Code, as the basis of a successful business, must be written appropriately, with minimal deficiencies in logic and style. The correctness and validity of production code depend mostly on the programming team itself and its responsibility for the written code. Vital code containing errors can produce serious problems and unforeseen consequences. To achieve complete accuracy of all parts of the written code, it is necessary to apply software testing and the QA (quality assurance) technique. The focus of this work is on writing and applying the programming code needed for QA/QC and black-box testing of an existing webshop. After the analysis, the authors conclude that software quality control will remain a constant challenge, that many companies will have to survive a short-term adjustment process, and that the crisis is an opportunity for courageous companies to invest ambitiously and in good time in their internet business and become market leaders.
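Since the abstract describes black-box testing of an existing webshop, here is a minimal sketch of what such a test can look like. The base URL, the search endpoint, and the expected page content are hypothetical assumptions, not taken from the paper; the point is that a black-box test exercises the shop purely through its public HTTP interface.

```python
# Minimal black-box smoke tests for a webshop (illustrative sketch).
# BASE_URL and the /search endpoint are hypothetical, not from the paper.
import requests

BASE_URL = "https://example-shop.test"  # hypothetical shop under test

def test_homepage_loads():
    # Black-box check: we observe only inputs and outputs, never internals.
    resp = requests.get(BASE_URL, timeout=10)
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"

def test_search_returns_results():
    # Drive the search feature the way an end user's browser would.
    resp = requests.get(f"{BASE_URL}/search", params={"q": "shoes"}, timeout=10)
    assert resp.status_code == 200
    assert "shoes" in resp.text.lower()

if __name__ == "__main__":
    test_homepage_loads()
    test_search_returns_results()
    print("black-box smoke tests passed")
```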


Author(s):  
L.N. Shapovalova ◽
S.N. Medvedko

In the article, the authors determine the reliability of the results of the operational and technological evaluation obtained when testing agricultural machines. The analysis of the accuracy of the results is based on the methods of classical error theory. A complete accuracy analysis makes it possible to trace how measurement errors, and the methods used to obtain the various indicators, affect the accuracy of the calculated results at different stages of the mathematical processing of the experimental data.
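As an illustration of the classical error theory the authors rely on, the sketch below propagates independent measurement errors in quadrature through a simple derived indicator. The productivity formula W = 0.1·B·v (ha/h, with working width B in metres and speed v in km/h) is a standard agricultural-machinery relation, but all numeric values are assumed example values, not figures from the article.

```python
# Classical error propagation: for y = f(x1, ..., xn) with independent
# errors, s_y = sqrt( sum_i (df/dx_i)^2 * s_i^2 ).
import math

def propagate(partials, errors):
    """Combine independent measurement errors in quadrature."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, errors)))

# Hypothetical measurements (mean, standard error) -- illustration only.
B, s_B = 4.0, 0.02   # working width, m
v, s_v = 8.0, 0.10   # working speed, km/h

W = 0.1 * B * v      # field productivity, ha/h
# Partial derivatives: dW/dB = 0.1*v, dW/dv = 0.1*B
s_W = propagate([0.1 * v, 0.1 * B], [s_B, s_v])
print(f"W = {W:.2f} +/- {s_W:.2f} ha/h")   # -> W = 3.20 +/- 0.04 ha/h
```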


Philosophies ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 21
Author(s):  
Harry Collins

This paper draws on an earlier book (with Evans and Higgins) entitled Bad Call: Technology’s Attack on Referees and Umpires and How to Fix It (hereafter Bad Call) and its various precursor papers. These show why current match-officiating aids are unable to provide the kind of accuracy that is often claimed for them and that sports aficionados have been led to expect from them. Accuracy is improving all the time, but the notion of perfect accuracy is a myth because, for example, lines drawn on sports fields and the edges of balls are not perfectly defined. The devices meant to report the exact position of a ball (for instance, ‘in’ or ‘out’ at tennis) work with the mathematically perfect world of virtual reality, not the actuality of an imperfect physical world. Even if ball-trackers could overcome the sort of inaccuracies related to fast ball speeds and slow camera frame rates, the goal of complete accuracy would always be beyond reach. Here it is suggested that the purpose of technological aids to umpires and referees be looked at in a new way, one that takes the viewers into account.
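One of the irreducible inaccuracies mentioned above, slow camera frame rates against fast ball speeds, can be made concrete with a back-of-the-envelope calculation: between consecutive frames the ball is simply not observed, so its position at the decisive instant must be interpolated rather than measured. The speed and frame rate below are illustrative assumptions, not figures from the paper.

```python
# In the interval 1/fps the ball travels speed/fps metres unobserved,
# so its "exact" position when crossing a line is always interpolated.
def inter_frame_gap(speed_m_s: float, fps: float) -> float:
    return speed_m_s / fps

# Illustrative numbers (assumed, not from the paper):
serve = 50.0      # m/s, roughly a 180 km/h tennis serve
camera = 340.0    # frames per second, a typical high-speed tracking camera
gap = inter_frame_gap(serve, camera)
print(f"ball moves {gap * 100:.1f} cm between frames")  # ~14.7 cm
```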


2017 ◽  
Vol 16 (3) ◽  
pp. 279-283 ◽  
Author(s):  
Ken-Ichi Tanno ◽  
Ayaka Takeuchi ◽  
Eri Akahori ◽  
Keiko Kobayashi ◽  
Taihachi Kawahara ◽  
...  

We developed a multiplex PCR DNA marker for quick and easy identification of the AAGG-genome timopheevii lineage, including Triticum timopheevii, Triticum araraticum and hexaploid Triticum zhukovskyi (AAA^mA^mGG), and the AABB-genome emmer wheat lineage, including Triticum durum, Triticum dicoccum and Triticum dicoccoides. Distinguishing between tetraploid AAGG- and AABB-genome wheat species based on morphology is known to be difficult. This multiplex PCR system is based on the simultaneous PCR amplification of two chloroplast regions, matK and rbcL. The matK region molecularly distinguishes the two lineages, whereas the rbcL region is a positive-control amplicon. We also examined whether the simple sequence repeat is a fixed mutation within species, using genetic resources in the collection of KOMUGI, Kyoto University, which comprises accessioned species collected across diverse geographical areas. The multiplex PCR marker distinguished AAGG from AABB species with complete accuracy.
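To make the marker's decision logic concrete, here is a hypothetical sketch of how a two-amplicon multiplex PCR result might be scored. The abstract says only that matK distinguishes the lineages and that rbcL is the positive control; the presence/absence scoring below is an assumed illustration, since the real marker may rely on an amplicon size polymorphism instead.

```python
# Hypothetical interpretation of a two-amplicon multiplex PCR run.
# Assumption (not from the abstract): the matK amplicon is scored as
# present/absent per lineage; rbcL must always amplify as a positive
# control, otherwise the reaction itself failed.
def classify(matK_band: bool, rbcL_band: bool) -> str:
    if not rbcL_band:
        return "PCR failure (no rbcL control band): repeat the reaction"
    # Assumed scoring: matK band present -> AAGG (timopheevii lineage),
    # absent -> AABB (emmer lineage). Illustrative decision logic only.
    return "AAGG (timopheevii lineage)" if matK_band else "AABB (emmer lineage)"

for bands in [(True, True), (False, True), (True, False)]:
    print(bands, "->", classify(*bands))
```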


2017 ◽  
Author(s):  
Rami Eitan ◽  
Ron Shamir

Background: During cancer progression, genomes undergo point mutations as well as larger segmental changes. The latter include, among others, segmental deletions, duplications, translocations and inversions. The result is a highly complex, patient-specific cancer karyotype. Using high-throughput technologies of deep sequencing and microarrays, it is possible to interrogate a cancer genome and produce chromosomal copy number profiles and a list of breakpoints (“jumps”) relative to the normal genome. This information is very detailed but local, and does not give the overall picture of the cancer genome. One of the basic challenges in cancer genome research is to use such information to infer the cancer karyotype. We present here an algorithmic approach, based on graph theory and integer linear programming, that receives segmental copy number and breakpoint data as input and produces a cancer karyotype that is most concordant with them. We used simulations to evaluate the utility of our approach, and applied it to real data. Results: By using a simulation model, we were able to estimate the correctness and robustness of the algorithm in a spectrum of scenarios. Under our base scenario, designed according to observations in real data, the algorithm correctly inferred 69% of the karyotypes. However, when using less stringent correctness metrics that account for incomplete and noisy data, 87% of the reconstructed karyotypes were correct. Furthermore, in scenarios where the data were very clean and complete, accuracy rose to 90%-100%. Some examples of analysis of real data, and the karyotypes reconstructed by our algorithm, are also presented. Conclusion: While reconstruction of a complete, perfect karyotype based on short-read data is very hard, a large portion of the reconstruction will still be correct and can provide useful information.
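The graph-plus-ILP idea can be illustrated with a toy model: integer variables for segment copy numbers and adjacency multiplicities, balance constraints at segment ends, and an objective that keeps the solution close to the measured copy-number profile. The data and formulation below (using the pulp solver) are a simplified sketch of the general technique, not the paper's actual algorithm.

```python
# Toy karyotype inference as an ILP: choose integer copy numbers for
# segments and multiplicities for candidate adjacencies (reference
# neighbours plus one breakpoint "jump") so that adjacency use balances
# each segment's copy number while staying close to the measured profile.
import pulp

segments = ["A", "B", "C"]
measured = {"A": 2, "B": 4, "C": 2}   # observed copy numbers (made up)
ends = [f"{s}.{side}" for s in segments for side in ("L", "R")]
# Reference order A-B-C plus a jump B.R-B.L suggesting tandem duplication.
adjacencies = {"refAB": ("A.R", "B.L"),
               "refBC": ("B.R", "C.L"),
               "jumpBB": ("B.R", "B.L")}

prob = pulp.LpProblem("toy_karyotype", pulp.LpMinimize)
cn  = {s: pulp.LpVariable(f"cn_{s}", lowBound=0, cat="Integer") for s in segments}
adj = {a: pulp.LpVariable(f"adj_{a}", lowBound=0, cat="Integer") for a in adjacencies}
tel = {e: pulp.LpVariable(f"tel_{e.replace('.', '_')}", lowBound=0, cat="Integer")
       for e in ends}
dev = {s: pulp.LpVariable(f"dev_{s}", lowBound=0) for s in segments}

# Balance: at every segment end, adjacency multiplicities plus telomeric
# (unpaired) ends must equal the segment's copy number.
for e in ends:
    incident = [adj[a] for a, (u, v) in adjacencies.items() if e in (u, v)]
    prob += pulp.lpSum(incident) + tel[e] == cn[e.split(".")[0]]

# L1 deviation from the measured copy-number profile.
for s in segments:
    prob += dev[s] >= cn[s] - measured[s]
    prob += dev[s] >= measured[s] - cn[s]

# Prefer solutions that match the data; lightly penalise loose ends so
# the solver favours stitched-together chromosomes.
prob += pulp.lpSum(dev.values()) + 0.01 * pulp.lpSum(tel.values())
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for s in segments:
    print(s, "copy number:", int(cn[s].value()))
for a in adjacencies:
    print(a, "used", int(adj[a].value()), "times")
```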


Author(s):  
Nayem Rahman

Incremental load is an important factor for successful data warehousing. Lack of standardized incremental refresh methodologies can lead to poor analytical results, which can be unacceptable to an organization’s analytical community. Successful data warehouse implementation depends on consistent metadata as well as incremental data load techniques. If consistent load timestamps are maintained and efficient transformation algorithms are used, it is possible to refresh databases with complete accuracy and with little or no manual checking. This paper proposes an Extract-Transform-Load (ETL) metadata model that archives load observation timestamps and other useful load parameters. The author also recommends algorithms and techniques for incremental refreshes that enable table loading while ensuring data consistency and integrity and improving load performance. In addition to significantly improving the quality of incremental load techniques, these methods will save a substantial amount of data warehouse system resources.
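Here is a minimal sketch of the timestamp-driven incremental refresh the abstract describes, using SQLite so the example is self-contained. The metadata table etl_load_log and the source/target table names and columns are hypothetical illustrations, not the paper's metadata model.

```python
# Timestamp-driven incremental refresh (illustrative sketch).
# Table and column names are hypothetical, not from the paper.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE etl_load_log (target_table TEXT, load_end_ts TEXT);
CREATE TABLE sales_src  (id INTEGER, amount REAL, updated_ts TEXT);
CREATE TABLE sales_fact (id INTEGER, amount REAL, updated_ts TEXT);
""")

def incremental_refresh(conn, target: str, source: str) -> None:
    cur = conn.cursor()
    # 1. Look up the last successful load observation timestamp.
    cur.execute("SELECT COALESCE(MAX(load_end_ts), '1970-01-01T00:00:00+00:00') "
                "FROM etl_load_log WHERE target_table = ?", (target,))
    last_load = cur.fetchone()[0]
    # 2. Move only the rows changed since then: the incremental delta.
    cur.execute(f"INSERT INTO {target} "
                f"SELECT * FROM {source} WHERE updated_ts > ?", (last_load,))
    # 3. Archive the new load timestamp so the next run starts from here.
    cur.execute("INSERT INTO etl_load_log VALUES (?, ?)",
                (target, datetime.now(timezone.utc).isoformat()))
    conn.commit()

incremental_refresh(conn, "sales_fact", "sales_src")
```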


Author(s):  
Dilys Morgan ◽  
Ruth Lysons ◽  
Hilary Kirkbride

Surveillance (derived from the French word surveiller, meaning to watch over) is the ‘ongoing scrutiny, generally using methods distinguished by their practicability, uniformity, and frequently their rapidity, rather than by complete accuracy. Its main purpose is to detect changes in trend or distribution in order to initiate investigative, preventive or control measures’ (Last 1988). Understanding the burden and detecting changes in the incidence of human and animal infections utilises a number of surveillance mechanisms, which rely on voluntary and/or statutory reporting systems. These include international as well as national surveillance schemes for outbreaks of infectious disease and laboratory-confirmed infections, enhanced surveillance schemes for specific zoonoses, and notification of specified infectious diseases.

