adaptive kernel
Recently Published Documents


TOTAL DOCUMENTS: 340 (five years: 87)
H-INDEX: 26 (five years: 4)

2021, Vol 5 (2), pp. 208-220
Author(s): Ulfie Safitri, Luthfatul Amaliana

The Geographically Weighted Regression (GWR) model is an extension of the multiple linear regression model that produces local parameter estimates for every point or location at which the data are observed. The GWR model can be used when the data satisfy the assumption of spatial heterogeneity, caused by differences in data conditions from one location to another. This study aims to determine the best GWR model with adaptive kernel and fixed kernel weighting for maternal mortality cases in East Java (Jawa Timur) in 2018. The data used in this study are maternal deaths as the response variable, with households practising clean and healthy living behaviour, K4 antenatal visits by pregnant women, pregnant women receiving Fe3 tablets, deliveries assisted by health workers, and the number of health facilities as predictor variables. Based on the model selection criterion of the smallest AIC value, it can be concluded that the GWR model with the adaptive bi-square kernel weighting function is the best model for the maternal mortality data. Based on partial parameter testing, the factors that influence maternal mortality cases are K4 antenatal visits and the number of health facilities.
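A minimal sketch of the idea behind GWR with an adaptive bi-square kernel: each location gets a bandwidth equal to the distance of its k-th nearest neighbour, weights decay to zero at that bandwidth, and coefficients come from a weighted least-squares fit at each location. The neighbour count k, function names, and data layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def adaptive_bisquare_weights(coords, i, k):
    """Bi-square weights with an adaptive bandwidth set to the distance
    of the k-th nearest neighbour of location i (assumed scheme)."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    b = np.sort(d)[k]                      # adaptive bandwidth for location i
    return np.where(d < b, (1.0 - (d / b) ** 2) ** 2, 0.0)

def gwr_local_coefficients(X, y, coords, k=10):
    """Weighted least-squares fit at every observation location."""
    n, p = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])   # add an intercept column
    betas = np.empty((n, p + 1))
    for i in range(n):
        W = np.diag(adaptive_bisquare_weights(coords, i, k))
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas
```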


PLoS ONE, 2021, Vol 16 (11), pp. e0259266
Author(s): Anny K. G. Rodrigues, Raydonal Ospina, Marcelo R. P. Ferreira

Many machine learning procedures, including clustering analysis, are often affected by missing values. This work aims to propose and evaluate a Kernel Fuzzy C-means clustering algorithm based on kernelization of the metric with local adaptive distances (VKFCM-K-LP) under three strategies for dealing with missing data. The first strategy, called the Whole Data Strategy (WDS), performs clustering only on the complete part of the dataset, i.e. it discards all instances with missing data. The second approach uses the Partial Distance Strategy (PDS), in which partial distances are computed over all available features and then re-scaled by the reciprocal of the proportion of observed values. The third technique, called the Optimal Completion Strategy (OCS), computes missing values iteratively as auxiliary variables in the optimization of a suitable objective function. The clustering results were evaluated according to different metrics. The best performance of the clustering algorithm was achieved under the PDS and OCS strategies. Under the OCS approach, new datasets were derived and the missing values were estimated dynamically during the optimization process. Clustering under the OCS strategy also outperformed the clusters obtained by applying the VKFCM-K-LP algorithm to a version of the data in which missing values were first imputed by the mean or the median of the observed values.
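As a rough illustration of the Partial Distance Strategy described above, the sketch below computes a squared distance over observed features only and re-scales it by the reciprocal of the proportion of observed values; the function name and the use of squared Euclidean distance are assumptions for illustration, not the VKFCM-K-LP implementation.

```python
import numpy as np

def partial_distance(x, v):
    """Squared distance over observed features only (NaN marks missing),
    re-scaled by the reciprocal of the proportion of observed values."""
    observed = ~np.isnan(x)
    m = observed.sum()
    if m == 0:
        return np.inf                      # nothing observed to compare
    d_obs = np.sum((x[observed] - v[observed]) ** 2)
    return (x.size / m) * d_obs            # re-scale to the full dimension

# Example: one feature missing out of four
x = np.array([1.0, np.nan, 2.0, 0.5])
v = np.array([0.8, 1.1, 1.9, 0.4])
print(partial_distance(x, v))
```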


2021, Vol 8 (4), pp. 309-332
Author(s): Efosa Michael Ogbeide, Joseph Erunmwosa Osemwenkhae

Density estimation is an important aspect of statistics, since statistical inference often requires knowledge of the density of the observed data. A common method is kernel density estimation (KDE), a nonparametric approach that requires a kernel function and a window size (smoothing parameter H) and supports density estimation and pattern recognition. This work focuses on a modified intersection of confidence intervals (MICIH) approach to density estimation. Nigerian crime rate data reported to the police, as published by the National Bureau of Statistics, were used to demonstrate the new approach. The approach to multivariate kernel density estimation is data-driven. Since the main way to improve a density estimate is to reduce its mean squared error (MSE), the errors of the approach were evaluated and some improvements were observed. The aim is adaptive kernel density estimation, achieved through a sufficiently smooth, data-based selection of the bandwidths. When applied, the MICIH approach improved the quality of the estimates over existing methods, with a reduced mean squared error and a relatively faster rate of convergence compared to some other approaches. The MICIH approach also reduced the points of discontinuity in the plotted densities of the datasets, helping to correct such discontinuities and display an adaptive density.
Keywords: approach, bandwidth, estimate, error, kernel density
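For orientation, the sketch below shows a generic adaptive (variable-bandwidth) kernel density estimator with Abramson-style local bandwidths derived from a fixed-bandwidth pilot estimate. It illustrates the general adaptive KDE idea only, not the MICIH bandwidth selection proposed in the paper; the pilot bandwidth h0 and the exponent alpha are assumptions.

```python
import numpy as np

def gaussian_kde(x_eval, data, h):
    """Fixed-bandwidth Gaussian KDE evaluated at x_eval."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u ** 2) / (np.sqrt(2 * np.pi) * h), axis=1)

def adaptive_kde(x_eval, data, h0, alpha=0.5):
    """Adaptive KDE: per-point bandwidths shrink where the pilot density is high."""
    pilot = gaussian_kde(data, data, h0)               # pilot density at the data
    g = np.exp(np.mean(np.log(pilot)))                 # geometric mean of pilot
    h_local = h0 * (pilot / g) ** (-alpha)             # Abramson-style bandwidths
    u = (x_eval[:, None] - data[None, :]) / h_local[None, :]
    return np.mean(np.exp(-0.5 * u ** 2)
                   / (np.sqrt(2 * np.pi) * h_local[None, :]), axis=1)
```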


2021
Author(s): Christian Stock

For the development of earthquake occurrence models, historical earthquake catalogues and compilations of mapped, active faults are often used. The goal of this study is to develop new methodologies for the generation of an earthquake occurrence model for New Zealand that is consistent with both data sets. For the construction of a seismological earthquake occurrence model based on the historical earthquake record, 'adaptive kernel estimation' has been used in this study. Based on this method a technique has been introduced to filter temporal sequences (e.g. aftershocks). Finally, a test has been developed for comparing different earthquake occurrence models. It has been found that the adaptive kernel estimation with temporal sequence filtering gives the best joint fit between the earthquake catalogue and the earthquake occurrence model, and between two earthquake occurrence models obtained from data from two independent time intervals. For the development of a geological earthquake occurrence model based on fault information, earthquake source relationships (i.e. rupture length versus rupture width scaling) have been revised. It has been found that large dip-slip and strike-slip earthquakes scale differently. Using these source relationships a dynamic stochastic fault model has been introduced. Whereas earthquake hazard studies often do not allow individual fault segments to produce compound ruptures, this model allows the linking of fault segments by chance. The moment release of simulated fault ruptures has been compared with the theoretical deformation along the plate boundary. When comparing the seismological and the geological earthquake occurrence model, it has been found that a 'good' occurrence model for large dip-slip earthquakes is given by the seismological occurrence model using the Gutenberg-Richter magnitude frequency distribution. In contrast, regions dominated by long strike-slip faults produce large earthquakes but not many small earthquakes and the occurrence of earthquakes on such faults should be inferred from the dynamic fault model.
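For reference, the Gutenberg-Richter magnitude-frequency distribution mentioned above is log10 N(>=M) = a - bM. The sketch below computes the expected count from given (a, b) values and a standard maximum-likelihood b-value estimate (Aki, 1965); it is a generic illustration of that relation, not part of the thesis's occurrence model.

```python
import numpy as np

def b_value_mle(magnitudes, m_c):
    """Maximum-likelihood b-value for a catalogue complete above magnitude m_c."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

def expected_count(a, b, m):
    """Expected number of events with magnitude >= m: 10**(a - b*m)."""
    return 10.0 ** (a - b * m)
```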


2021
Author(s): Jared Adolf-Bryfogle, Jason W Labonte, John C Kraft, Maxim Shapavolov, Sebastian Raemisch, ...

Carbohydrates and glycoproteins modulate key biological functions. Computational approaches inform function to aid in carbohydrate structure prediction, structure determination, and design. However, experimental structure determination of sugar polymers is notoriously difficult as glycans can sample a wide range of low energy conformations, thus limiting the study of glycan-mediated molecular interactions. In this work, we expanded the RosettaCarbohydrate framework, developed and benchmarked effective tools for glycan modeling and design, and extended the Rosetta software suite to better aid in structural analysis and benchmarking tasks through the SimpleMetrics framework. We developed a glycan-modeling algorithm, GlycanTreeModeler, that computationally builds glycans layer-by-layer, using adaptive kernel density estimates (KDE) of common glycan conformations derived from data in the Protein Data Bank (PDB) and from quantum mechanics (QM) calculations. After a rigorous optimization of kinematic and energetic considerations to improve near-native sampling enrichment and decoy discrimination, GlycanTreeModeler was benchmarked on a test set of diverse glycan structures, or "trees". Structures predicted by GlycanTreeModeler agreed with native structures at high accuracy for both de novo modeling and experimental density-guided building. GlycanTreeModeler algorithms and associated tools were employed to design de novo glycan trees into a protein nanoparticle vaccine that are able to direct the immune response by shielding regions of the scaffold from antibody recognition. This work will inform glycoprotein model prediction, aid in both X-ray and electron microscopy density solutions and refinement, and help lead the way towards a new era of computational glycobiology.
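As a loose illustration of sampling conformations from a kernel density estimate, the sketch below draws glycosidic torsion pairs from a KDE built over observed (phi, psi) values by picking a kernel centre and perturbing it with its local bandwidth. The data layout, per-point bandwidths, and function name are hypothetical and are not the RosettaCarbohydrate or GlycanTreeModeler API.

```python
import numpy as np

def sample_torsions_from_kde(observed, bandwidths, n_samples, seed=None):
    """observed: (n, 2) array of (phi, psi) pairs in degrees;
    bandwidths: (n,) per-point (adaptive) Gaussian kernel widths."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(observed), size=n_samples)   # pick a kernel centre
    noise = rng.normal(size=(n_samples, 2)) * bandwidths[idx, None]
    samples = observed[idx] + noise
    return (samples + 180.0) % 360.0 - 180.0                # wrap into [-180, 180)
```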

