Statistical testing of finite sequences based on algorithmic complexity

Author(s):  
Ivan Kramosil

2018
Vol 1 (1)
pp. 9-28
Author(s):
Dessy Sumanty
Deden Sudirman
Diah Puspasari

This research attempts to relate the body image phenomenon to the subjects' level of religiosity. A correlational research design involving 332 respondents was used, and the hypothesis was tested with the Spearman rank correlation. At the 95% confidence level (α = 0.05), the calculation yields a correlation coefficient of 0.083 with a p-value of 0.129, so H0 is accepted and H1 is rejected. It can be concluded that there is no relationship between religiosity and body image.
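As an illustration of the reported analysis, the following Python sketch runs a Spearman rank correlation at α = 0.05. The file and column names are hypothetical placeholders, not the authors' data.

    # Hedged sketch of the reported test: Spearman rank correlation between
    # religiosity and body image scores. File and column names are hypothetical.
    import pandas as pd
    from scipy.stats import spearmanr

    df = pd.read_csv("survey.csv")  # 332 respondents in the study
    rho, p_value = spearmanr(df["religiosity"], df["body_image"])
    print(f"rho = {rho:.3f}, p = {p_value:.3f}")
    # With rho = 0.083 and p = 0.129 > 0.05, H0 (no correlation) is retained.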


Author(s):  
Amandeep Kaur
Sushma Jain
Shivani Goel
Gaurav Dhiman

Context: Code smells are symptoms that something may be wrong in a software system and can complicate the maintenance of software quality. Many code smells have been described in the literature, and their identification is far from trivial; consequently, several techniques have been proposed to automate code smell detection and thereby improve software quality. Objective: This paper presents an up-to-date review of simple and hybrid machine-learning-based code smell detection techniques and tools. Methods: We collected all the relevant research published in this field up to 2020, extracted the data from those articles, and classified them into two major categories. In addition, we compared the selected studies on several aspects: code smells, machine learning techniques, datasets, programming languages used by the datasets, dataset size, evaluation approach, and statistical testing. Results: The majority of empirical studies have proposed machine-learning-based code smell detection tools. Support vector machine and decision tree algorithms are the ones most frequently used by researchers. A major proportion of the research is conducted on open source software (OSS) such as Xerces, GanttProject, and ArgoUML. Furthermore, researchers have paid the most attention to the Feature Envy and Long Method code smells. Conclusion: We identified several areas of open research, such as the need for code smell detection techniques using hybrid approaches and for validation on industrial datasets.
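The classifier-based detection pipelines this review surveys typically follow the pattern sketched below: train one of the frequently used algorithms, here a decision tree, on per-method code metrics. The metric names and the labelled dataset are assumptions for illustration, not taken from any specific study.

    # Minimal sketch of machine-learning-based code smell detection:
    # a decision tree over per-method code metrics (hypothetical dataset).
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    data = pd.read_csv("method_metrics.csv")  # e.g., mined from an OSS project
    X = data[["loc", "num_params", "cyclomatic_complexity", "fan_in"]]
    y = data["is_long_method"]  # 1 if the method was labelled a Long Method smell

    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
    print(f"10-fold cross-validated F1: {scores.mean():.2f} ± {scores.std():.2f}")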


2011
Vol 412 (22)
pp. 2387-2392
Author(s):
Yancai Zhao
Liying Kang
Moo Young Sohn

2020
Vol 22 (Supplement_2)
pp. ii175-ii175
Author(s):  
Ramya Tadipatri
Amir Azadi
Madison Cowdrey
Samuel Fongue
Paul Smith
...  

Abstract BACKGROUND Early access to palliative care is a critical component of treating patients with advanced cancer, particularly glioblastoma patients, who have low rates of survival despite optimal therapies. Additionally, there are unique considerations for primary brain tumor patients given the need to manage headaches, seizures, and focal neurological deficits. METHODS We conducted a survey of 109 physicians in Sub-Saharan Africa to determine their level of understanding and skill in providing palliative care, the types of palliative care therapies provided, the role of cultural beliefs, the availability of resources, and the challenges faced. Demographic data including age, gender, and prior training were collected and analyzed using ANOVA statistical testing. RESULTS Among the participants, 48% felt comfortable providing palliative care consultations, 62% had no prior training, 52% believed that palliative care is only appropriate when there is irreversible deterioration, 62% reported having access to palliative care, 49% did not have access to liquid opioid agents, 50% stated that cultural beliefs held by the patient or family prevented them from receiving palliative care, and 23% stated that their own beliefs affected palliative care delivery. Older providers (age > 30) had a clearer understanding of palliative care (p = 0.004), were more comfortable providing consultation (p = 0.052), and were more likely to address mental health (p < 0.001). CONCLUSIONS Palliative care delivery to glioblastoma patients in Sub-Saharan Africa is often delayed until late in the disease course. Barriers to adequate palliative care identified in this survey study include lack of training, limited access to liquid opioid agents, and cultural beliefs. The challenges most often identified by participants were pain management and end-of-life communication skills, but they also included patient spirituality and psychological support, anxiety and depression, terminal dyspnea, ethics, and intravenous hydration and non-oral feeding.
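A hedged sketch of the kind of group comparison reported above: a one-way ANOVA on survey scores split by provider age. The column names and the scoring scale are illustrative assumptions, not the authors' instrument.

    # One-way ANOVA comparing comfort-with-palliative-care scores across
    # the two age groups; f_oneway generalizes to more groups if needed.
    import pandas as pd
    from scipy.stats import f_oneway

    df = pd.read_csv("survey_responses.csv")  # hypothetical survey export
    younger = df.loc[df["age"] <= 30, "comfort_score"]
    older = df.loc[df["age"] > 30, "comfort_score"]
    f_stat, p_value = f_oneway(younger, older)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05: group means differ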


2021
Vol 13 (13)
pp. 2494
Author(s):  
Gaël Kermarrec
Niklas Schild
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than conventional non-uniform rational B-spline (NURBS) or B-spline representations in computer-aided design, computer graphics, and reverse engineering. They are flexible in representing complex surface shapes and economical in parameters because they enable local refinement. This property is a great advantage when dense, scattered, and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated by least-squares fitting. Unfortunately, when the goodness of fit of the surface approximation is assessed on a real dataset, only the noisy point cloud is available as a reference: (i) a low root mean squared error (RMSE) may indicate overfitting, i.e., a fitting of the noise, and should accordingly be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once the object is scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and to compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door to further applications in a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
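The evaluation logic, comparing the RMSE of the fitted surface against the noisy scan and against the known reference, can be sketched in Python. Since T-splines are not available in standard libraries, SciPy's tensor-product B-splines stand in here, and the reference surface and noise level are invented for illustration.

    # Fit a smoothing spline surface to noisy samples of a known reference,
    # then contrast RMSE to the noisy data (rewards overfitting) with RMSE
    # to the true surface (the honest goodness-of-fit measure).
    import numpy as np
    from scipy.interpolate import bisplev, bisplrep

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1, (2, 2000))
    z_true = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)   # known reference
    z_noisy = z_true + rng.normal(0, 0.05, x.size)           # simulated scanner noise

    tck = bisplrep(x, y, z_noisy, s=x.size * 0.05 ** 2)      # smoothing least-squares fit
    z_fit = np.array([bisplev(xi, yi, tck) for xi, yi in zip(x, y)])

    rmse_data = np.sqrt(np.mean((z_fit - z_noisy) ** 2))
    rmse_ref = np.sqrt(np.mean((z_fit - z_true) ** 2))
    print(f"RMSE vs. noisy scan: {rmse_data:.4f}, vs. reference: {rmse_ref:.4f}")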


2021
Vol 5 (1)
pp. 59
Author(s):  
Gaël Kermarrec
Niklas Schild
Jan Hartmann

Terrestrial laser scanners (TLS) capture a large number of 3D points rapidly, with high precision and spatial resolution. These scanners are used for applications as diverse as modeling architectural or engineering structures and high-resolution mapping of terrain. The noise of the observations cannot be assumed to correspond strictly to white noise: besides being heteroscedastic, the observations are likely to be correlated due to the high scanning rate. Unfortunately, while the variance can sometimes be modeled from physical or empirical considerations, the correlations are more often neglected. Trustworthy knowledge of both is, however, mandatory to avoid overestimating the precision of the point cloud and, potentially, failing to detect deformation between scans recorded at different epochs using statistical testing strategies. TLS point clouds can be approximated with parametric surfaces, such as planes, using the Gauss–Helmert model, or with the newly introduced T-spline surfaces. In both cases, the goal is to minimize the squared distance between the observations and the approximating surface in order to estimate parameters such as the normal vector or the control points. In this contribution, we show how the residuals of the surface approximation can be used to derive the correlation structure of the observation noise. We estimate the correlation parameters using Whittle maximum likelihood and use simulations and real data to validate our methodology. Using the least-squares adjustment as a "filter of the geometry" paves the way for the determination of a correlation model for many sensors recording 3D point clouds.
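A minimal sketch of the Whittle step, assuming for illustration that the one-dimensional residual series follows an AR(1) correlation model (the paper's actual noise model may be richer): the negative Whittle log-likelihood of the periodogram is minimized over the correlation parameter, with the innovation variance profiled out.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def whittle_ar1(residuals):
        """Estimate the AR(1) parameter of a residual series by Whittle ML."""
        n = residuals.size
        freqs = 2.0 * np.pi * np.arange(1, n // 2) / n          # Fourier frequencies
        periodogram = np.abs(np.fft.fft(residuals)[1 : n // 2]) ** 2 / n

        def neg_loglik(phi):
            # AR(1) spectral shape; any constant scaling is absorbed into
            # the profiled innovation variance sigma2.
            g = 1.0 / (1.0 - 2.0 * phi * np.cos(freqs) + phi ** 2)
            sigma2 = np.mean(periodogram / g)
            return np.sum(np.log(sigma2 * g) + periodogram / (sigma2 * g))

        return minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x

    # Check on simulated AR(1) residuals with true phi = 0.6
    rng = np.random.default_rng(1)
    r = np.zeros(5000)
    for t in range(1, r.size):
        r[t] = 0.6 * r[t - 1] + rng.normal()
    print(f"estimated phi ≈ {whittle_ar1(r):.3f}")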


Materials
2021
Vol 14 (4)
pp. 778
Author(s):  
Yingli Niu
Xiangyu Bu
Xinghua Zhang

The application of single chain mean-field theory (SCMFT) to semiflexible chain brushes is reviewed. The worm-like chain (WLC) model is the best model of a semiflexible chain, as it continuously recovers the rigid rod model and the Gaussian chain (GC) model in the rigid and flexible limits, respectively. Compared with the commonly used GC model, SCMFT is more applicable to the WLC model: the algorithmic complexity of the WLC model is much higher than that of the GC model in self-consistent field theory (SCFT), whereas the algorithmic complexities of the two models in SCMFT are comparable. In SCMFT, the ensemble average of quantities is obtained by sampling the conformations of a single chain or of multiple chains in the external auxiliary field instead of solving the modified diffusion equation (MDE) of SCFT. The precision of this calculation is controlled by the number of bonds Nm used to discretize the chain contour length L and by the number of conformations M used in the ensemble average. The latter factor can be well controlled by Metropolis Monte Carlo simulation. This approach can easily be generalized to problems with complex boundary conditions or high-dimensional systems, which were once nightmares when solving MDEs in SCFT. Moreover, the calculations in SCMFT mainly involve ensemble averages over chain conformations, for which portions of the conformations can be processed in parallel on different computing cores using a message-passing interface (MPI).
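A schematic Python sketch of the SCMFT sampling step described above: Metropolis Monte Carlo moves on a discretized worm-like chain in a fixed external auxiliary field, with an observable averaged over M sampled conformations. The field w(z), the stiffness value, and the move size are illustrative assumptions, not the paper's parameters.

    # Metropolis sampling of a discretized WLC (Nm bonds, stiffness kappa)
    # in a fixed external field; the ensemble average replaces solving an MDE.
    import numpy as np

    rng = np.random.default_rng(2)
    Nm, kappa, M = 50, 5.0, 20000      # bonds, bending stiffness, samples

    def w(z):                          # hypothetical external auxiliary field
        return 0.5 * z ** 2

    def energy(theta):
        # Discretized WLC bending energy plus the field summed over beads.
        bend = kappa * np.sum(1.0 - np.cos(np.diff(theta)))
        z = np.cumsum(np.sin(theta))   # bead heights from bond angles
        return bend + np.sum(w(z))

    theta = np.zeros(Nm)               # start from a straight chain
    E = energy(theta)
    end_to_end = []
    for _ in range(M):
        i = rng.integers(Nm)
        trial = theta.copy()
        trial[i] += rng.normal(0.0, 0.3)   # local bond-angle move
        E_trial = energy(trial)
        if E_trial <= E or rng.random() < np.exp(E - E_trial):  # Metropolis rule
            theta, E = trial, E_trial
        end_to_end.append(np.hypot(np.sum(np.cos(theta)), np.sum(np.sin(theta))))

    # Ensemble average over the second half of samples (after equilibration)
    print(f"<R_end> ≈ {np.mean(end_to_end[M // 2:]):.2f} bond lengths")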

