Outbreaks Detection at the Beginning of Monitoring Process: The CUSUM Test Modification

Author(s):  
Julia Bondarenko
Author(s):  
Valian Yoga Pudya Ardhana ◽  
Ahmad Wilda Yulianto

Blogs, as one of the media applications on the Internet, are used all around Indonesia by users of every age, from children to the elderly. Many people do not realize that a blog can be optimized so that it reaches top positions in search engines. The meta tag is one of the optimization techniques in Search Engine Optimization (SEO); the main target is to increase blog traffic requests. After optimization, the next step is monitoring, which aims to determine the extent to which the SEO optimization has succeeded. The result was a blog site reaching top positions in the search engines, and the monitoring results indicate that the title and content were fully appropriate (100%), while the description and content were also appropriate (91%).
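The kind of title/description-versus-content agreement the abstract reports (100% and 91%) can be sketched with a small, illustrative checker. The parser class, the sample page, and the keyword list below are assumptions for demonstration, not the authors' tooling:

```python
from html.parser import HTMLParser

class MetaTagParser(HTMLParser):
    """Collect the <title> text and the <meta name="description"> content."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def keyword_match(text, content_words):
    """Fraction of content keywords found in the tag text -- a crude proxy
    for the title/content and description/content agreement percentages."""
    words = text.lower().split()
    hits = sum(1 for w in content_words if w.lower() in words)
    return hits / len(content_words) if content_words else 0.0

# Hypothetical page used only to exercise the checker.
page = ('<html><head><title>Blog SEO monitoring tips</title>'
        '<meta name="description" content="SEO monitoring tips for blogs">'
        '</head></html>')
parser = MetaTagParser()
parser.feed(page)
score = keyword_match(parser.title, ["SEO", "monitoring", "blog"])
```

A score of 1.0 would correspond to the "100% appropriate" case reported for title and content.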


2021 ◽  
Vol 23 (1) ◽  
Author(s):  
Lisa Lindner ◽  
Anja Weiß ◽  
Andreas Reich ◽  
Siegfried Kindler ◽  
Frank Behrens ◽  
...  

Abstract Background Clinical data collection requires correct and complete data sets in order to perform correct statistical analysis and draw valid conclusions. While in randomized clinical trials much effort concentrates on data monitoring, this is rarely the case in observational studies, due to high numbers of cases and often restricted resources. We have developed a valid and cost-effective monitoring tool, which can substantially contribute to increased data quality in observational research. Methods An automated digital monitoring system for cohort studies developed by the German Rheumatism Research Centre (DRFZ) was tested within the disease register RABBIT-SpA, a longitudinal observational study including patients with axial spondyloarthritis and psoriatic arthritis. Physicians and patients complete electronic case report forms (eCRF) twice a year for up to 10 years. Automatic plausibility checks were implemented to verify all data after entry into the eCRF. To identify conflicts that cannot be found by this approach, all possible conflicts were compiled into a catalog. This “conflict catalog” was used to create queries, which are displayed as part of the eCRF. The proportion of queried eCRFs and responses were analyzed by descriptive methods. For the analysis of responses, the type of conflict was assigned to either a single conflict only (affecting individual items) or a conflict that required the entire eCRF to be queried. Results Data from 1883 patients were analyzed. A total of n = 3145 eCRFs submitted between baseline (T0) and T3 (12 months) had conflicts (40–64%). Fifty-six to 100% of the queries regarding eCRFs that were completely missing were answered. A mean of 1.4 to 2.4 single conflicts occurred per eCRF, of which 59–69% were answered. The most common missing values were CRP, ESR, Schober’s test, data on systemic glucocorticoid therapy, and presence of enthesitis.
Conclusion Providing high data quality in large observational cohort studies is a major challenge, which requires careful monitoring. An automated monitoring process was successfully implemented and well accepted by the study centers. Two thirds of the queries were answered with new data. While conventional manual monitoring is resource-intensive and may itself create new sources of errors, automated processes are a convenient way to augment data quality.
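The "conflict catalog" idea above, rules that run after eCRF entry and turn violations into queries, can be sketched minimally. The rule names, field names, and the Schober's test bounds are assumptions for illustration, not the register's actual catalog:

```python
# Hypothetical sketch of automated plausibility checks: each rule inspects
# one eCRF (a dict of items) and returns a query text when it finds a conflict.

def check_missing(field):
    def rule(ecrf):
        if ecrf.get(field) in (None, ""):
            return f"Value for '{field}' is missing."
    return rule

def check_range(field, low, high):
    def rule(ecrf):
        value = ecrf.get(field)
        if value is not None and not (low <= value <= high):
            return f"'{field}' = {value} outside plausible range [{low}, {high}]."
    return rule

# A miniature "conflict catalog" of single-item rules, echoing the most
# commonly missing values named in the Results (bounds are assumed).
CATALOG = [
    check_missing("crp"),              # C-reactive protein
    check_missing("esr"),              # erythrocyte sedimentation rate
    check_range("schober_cm", 0, 15),  # Schober's test, cm
]

def generate_queries(ecrf):
    """Run every catalog rule; non-None results become queries on the eCRF."""
    return [msg for rule in CATALOG if (msg := rule(ecrf)) is not None]

queries = generate_queries({"crp": 5.0, "esr": None, "schober_cm": 22})
```

Displaying such query texts directly inside the eCRF, as the study did, lets centers resolve conflicts without a separate manual monitoring round.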


Agronomy ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 952
Author(s):  
Lia Duarte ◽  
Ana Cláudia Teodoro ◽  
Joaquim J. Sousa ◽  
Luís Pádua

In a precision agriculture context, the amount of geospatial data available can be difficult to interpret in order to understand the crop variability within a given terrain parcel, raising the need for specific tools for data processing and analysis. This is the case for data acquired from Unmanned Aerial Vehicles (UAV), in which the high spatial resolution along with data from several spectral wavelengths makes data interpretation a complex process regarding vegetation monitoring. Vegetation Indices (VIs) are usually computed, helping in the vegetation monitoring process. However, a crop plot is generally composed of several non-crop elements, which can bias the data analysis and interpretation. By discarding non-crop data, it is possible to compute the vigour distribution for a specific crop within the area under analysis. This article presents QVigourMaps, a new open source application developed to generate useful outputs for precision agriculture purposes. The application was developed in the form of a QGIS plugin, allowing the creation of vigour maps, vegetation distribution maps and prescription maps based on the combination of different VIs and height information. Multi-temporal data from a vineyard plot and a maize field were used as case studies in order to demonstrate the potential and effectiveness of the QVigourMaps tool. The presented application can contribute to making the right management decisions by providing indicators of crop variability, and the outcomes can be used in the field to apply site-specific treatments according to the levels of vigour.
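The core computation the abstract describes, a vegetation index combined with height information to discard non-crop pixels, can be illustrated with a minimal per-pixel sketch. This is not QVigourMaps code; the NDVI choice, the band values, and the 0.3 m height threshold are assumptions:

```python
# Illustrative vigour-map sketch: NDVI per pixel, masked by a crop-height
# threshold so non-crop elements do not bias the analysis.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def vigour_map(nir_band, red_band, height_band, min_height=0.3):
    """NDVI where the height model suggests crop canopy; None marks
    discarded (non-crop) pixels."""
    out = []
    for nir, red, h in zip(nir_band, red_band, height_band):
        out.append(ndvi(nir, red) if h >= min_height else None)
    return out

# Three example pixels: vine canopy, bare soil (low height), vine canopy.
vm = vigour_map([0.6, 0.5, 0.4], [0.1, 0.2, 0.3], [1.2, 0.1, 0.8])
```

Classifying the surviving NDVI values into ranges would then yield the vigour classes used for prescription maps.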


2021 ◽  
Vol 11 (14) ◽  
pp. 6370
Author(s):  
Elena Quatrini ◽  
Francesco Costantino ◽  
David Mba ◽  
Xiaochuan Li ◽  
Tat-Hean Gan

The water purification process is becoming increasingly important to ensure the continuity and quality of subsequent production processes, and it is particularly relevant in pharmaceutical contexts. However, in this context, the difficulties arising during the monitoring process are manifold. On the one hand, the monitoring process reveals various discontinuities due to different characteristics of the input water. On the other hand, the monitoring process is discontinuous and random itself, thus not guaranteeing continuity of the parameters and hindering a straightforward analysis. Consequently, further research on water purification processes is paramount to identify the most suitable techniques able to guarantee good performance. Against this background, this paper proposes an application of kernel principal component analysis for fault detection in a process with the above-mentioned characteristics. Based on the temporal variability of the process, the paper suggests the use of past and future matrices as input for fault detection as an alternative to the original dataset. In this manner, the temporal correlation between process parameters and machine health is accounted for. The proposed approach confirms the possibility of obtaining very good monitoring results in the analyzed context.
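The paper's use of past and future matrices in place of the original dataset amounts to a time-lag embedding before (kernel) PCA. A minimal sketch of that embedding follows; the function name and the lag `l` are illustrative, not the authors' notation:

```python
# Sketch of the time-lag embedding described above: instead of feeding raw
# samples to kernel PCA, build "past" and "future" matrices whose rows stack
# l consecutive observations, so temporal correlation enters the model.

def lag_matrices(series, l):
    """Return (past, future): for each admissible time t, the l values
    before t and the l values from t onward."""
    past, future = [], []
    for t in range(l, len(series) - l + 1):
        past.append(series[t - l:t])
        future.append(series[t:t + l])
    return past, future

past, future = lag_matrices([1, 2, 3, 4, 5], 2)
```

Each row pair now carries local temporal context, which is what lets the monitoring statistic relate parameter trajectories to machine health rather than to isolated samples.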


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 133
Author(s):  
Juan A. Rojas ◽  
Helbert E. Espitia ◽  
Lilian A. Bejarano

Currently, different problems exist in Colombian education; one of them is the difficulty of tracing and controlling the learning trajectories that determine the topics taught in the country’s educational institutions. This work aims to implement a logic-based system that allows teachers and educational institutions to carry out a continuous monitoring process of students’ academic performance, facilitating early correction of errors or failures in teaching methods and promoting educational support spaces within the institution.
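The continuous-monitoring idea above can be sketched as a simple rule-based pass over grade trajectories that flags students for early support. The thresholds, grading scale, and field names are assumptions for illustration, not the authors' system:

```python
# Minimal rule-based monitoring sketch: flag students whose latest grade
# fails or dropped sharply, so support spaces can be offered early.

def needs_support(grades, passing=3.0, drop=1.0):
    """Flag a student if the latest grade is below passing or fell by at
    least `drop` relative to the previous monitoring cycle."""
    if not grades:
        return False
    if grades[-1] < passing:
        return True
    return len(grades) > 1 and grades[-2] - grades[-1] >= drop

def monitor(cohort):
    """Map student id -> alert flag for one monitoring cycle."""
    return {sid: needs_support(g) for sid, g in cohort.items()}

alerts = monitor({"s1": [4.5, 4.2], "s2": [4.8, 3.2], "s3": [2.9]})
```

A fuzzy or logic-based formulation, as the work proposes, would replace these crisp thresholds with graded membership in categories such as "at risk".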


2014 ◽  
Vol 912-914 ◽  
pp. 1189-1192
Author(s):  
Hai Yu Wang

This article discusses the robustness to non-normality of EWMA charts for dispersion. A comparative analysis of the run lengths of four kinds of EWMA charts for monitoring process dispersion is provided to evaluate control-chart performance and robustness. Finally, robust EWMA dispersion charts for non-normal processes are proposed on this basis.
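The basic mechanism shared by the charts compared above can be sketched as an EWMA of a dispersion statistic with an upper control limit. The smoothing constant, in-control variance, target, and limit below are assumed example values, not the article's designs:

```python
# Illustrative EWMA chart for dispersion: smooth the squared deviation from
# target and signal when the EWMA exceeds its upper control limit.

def ewma_dispersion(samples, target, lam=0.2, ucl=4.0):
    """Return (ewma_path, out_of_control_indices)."""
    z = 1.0                       # start the EWMA at the in-control variance
    path, signals = [], []
    for i, x in enumerate(samples):
        stat = (x - target) ** 2  # dispersion statistic for one observation
        z = lam * stat + (1 - lam) * z
        path.append(z)
        if z > ucl:
            signals.append(i)     # run length ends at the first signal
    return path, signals

# Two in-control observations followed by a dispersion shift.
path, signals = ewma_dispersion([0.5, -0.2, 6.0, 5.5], target=0.0)
```

Run-length comparisons like the article's would count how many observations pass before the first index in `signals`, under normal and non-normal process distributions.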


2006 ◽  
Vol 18 (7) ◽  
pp. 1181-1197 ◽  
Author(s):  
Marieke van Herten ◽  
Dorothee J. Chwilla ◽  
Herman H. J. Kolk

Monitoring refers to a process of quality control designed to optimize behavioral outcome. Monitoring for action errors manifests itself in an error-related negativity in event-related potential (ERP) studies and in an increase in activity of the anterior cingulate in functional magnetic resonance imaging studies. Here we report evidence for a monitoring process in perception, in particular, language perception, manifesting itself in a late positivity in the ERP. This late positivity, the P600, appears to be triggered by a conflict between two interpretations, one delivered by the standard syntactic algorithm and one by a plausibility heuristic which combines individual word meanings in the most plausible way. To resolve this conflict, we propose that the brain reanalyzes the memory trace of the perceptual input to check for the possibility of a processing error. Thus, as in Experiment 1, when the reader is presented with semantically anomalous sentences such as, “The fox that shot the poacher…,” full syntactic analysis indicates a semantic anomaly, whereas the word-based heuristic leads to a plausible interpretation, that of a poacher shooting a fox. That readers actually pursue such a word-based analysis is indicated by the fact that the usual ERP index of semantic anomaly, the so-called N400 effect, was absent in this case. A P600 effect appeared instead. In Experiment 2, we found that even when the word-based heuristic indicated that only part of the sentence was plausible (e.g., “…that the elephants pruned the trees”), a P600 effect was observed and the N400 effect of semantic anomaly was absent. It thus seems that the plausibility of part of the sentence (e.g., that of pruning trees) was sufficient to create a conflict with the implausible meaning of the sentence as a whole, giving rise to a monitoring response.

