baseline method
Recently Published Documents


TOTAL DOCUMENTS: 112 (FIVE YEARS: 70)
H-INDEX: 8 (FIVE YEARS: 5)

2022 ◽  
Vol 16 (2) ◽  
pp. 1-18
Author(s):  
Xueyuan Wang ◽  
Hongpo Zhang ◽  
Zongmin Wang ◽  
Yaqiong Qiao ◽  
Jiangtao Ma ◽  
...  

Cross-network anchor link discovery is an important research problem with many applications in heterogeneous social networks. Existing schemes for cross-network anchor link discovery can provide reasonable link discovery results, but the quality of these results depends on platform-specific features, so there is no theoretical guarantee of stability. This article employs user embedding features to model the relationship between cross-platform accounts: the more similar the user embedding features, the more similar the two accounts. The similarity of user embedding features is determined by the distance between the user features in the latent space. Based on user embedding features, this article proposes an embedding-representation-based method, Con&Net (Content and Network), to solve the cross-network anchor link discovery problem. Con&Net combines the user's profile features, user-generated content (UGC) features, and social structure features to measure the similarity of two user accounts. Con&Net first trains the user's profile features to obtain a profile embedding, then trains on the network structure of the nodes to obtain a structure embedding. It joins the two features through vector concatenation and computes the cosine similarity of the resulting embedding vectors, which is used to measure the similarity of the user accounts. Finally, Con&Net predicts anchor links based on this similarity for account pairs across the two networks. Extensive experiments on Sina Weibo and Twitter networks show that the proposed Con&Net outperforms the state-of-the-art method: the area under the receiver operating characteristic (ROC) curve (AUC) for anchor link prediction is 11% higher than that of the baseline method, and Precision@30 is 25% higher.
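
The core similarity computation described above can be sketched in a few lines. The following is a minimal illustration only, assuming precomputed profile and structure embeddings for each account; all function and variable names are illustrative and not taken from the paper.

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def account_similarity(profile_u, struct_u, profile_v, struct_v):
        # Concatenate profile and structure embeddings for each account,
        # then compare the combined vectors with cosine similarity.
        u = np.concatenate([profile_u, struct_u])
        v = np.concatenate([profile_v, struct_v])
        return cosine_similarity(u, v)

    def predict_anchor_links(candidate_pairs, embeddings_a, embeddings_b, top_k=30):
        # Rank candidate account pairs across the two networks by similarity
        # and treat the top-scoring pairs as predicted anchor links.
        scored = []
        for u, v in candidate_pairs:
            score = account_similarity(*embeddings_a[u], *embeddings_b[v])
            scored.append(((u, v), score))
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return scored[:top_k]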


2021 ◽  
Vol 10 (1) ◽  
pp. 26
Author(s):  
Alejandro Gómez-Pazo ◽  
Andres Payo ◽  
María Victoria Paz-Delgado ◽  
Miguel A. Delgadillo-Calzadilla

In this study, we propose a new baseline and transect method, the open-source digital shoreline analysis system (ODSAS), which is specifically designed to deal with very irregular coastlines. We have compared the ODSAS results with those obtained using the digital shoreline analysis system (DSAS). Like DSAS, our proposed method uses a single baseline parallel to the shoreline and offers the user different smoothing and spacing options to generate the transects. Our method differs from DSAS in the way the transects' starting points and orientations are delineated, by combining raster and vector objects. ODSAS uses SAGA GIS and R, which are both free open-source software programs. In this paper, we delineate the ODSAS workflow, apply it to ten study sites along the very irregular Galician coastline (NW Iberian Peninsula), and compare the results with those obtained using DSAS. We show that ODSAS produces similar values of coastline change for the most common indicators at the aggregated level (i.e., using all transects), but that the values differ when compared transect by transect. We argue that explicitly requiring the user to define a minimum resolution is important to reduce the subjectivity of the transect and baseline method.
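
Since ODSAS itself is implemented with SAGA GIS and R, the Python sketch below only illustrates the general baseline-and-transect idea, placing transect origins at a fixed spacing along a baseline polyline and orienting each transect perpendicular to the local baseline direction; it is not the ODSAS algorithm, and all names are illustrative.

    import numpy as np

    def generate_transects(baseline_xy, spacing, transect_length):
        # Place a transect origin every `spacing` metres along the baseline
        # and orient the transect perpendicular to the local baseline segment.
        baseline_xy = np.asarray(baseline_xy, dtype=float)
        seg = np.diff(baseline_xy, axis=0)
        seg_len = np.hypot(seg[:, 0], seg[:, 1])
        cum = np.concatenate([[0.0], np.cumsum(seg_len)])
        transects = []
        for d in np.arange(0.0, cum[-1], spacing):
            i = min(np.searchsorted(cum, d, side="right") - 1, len(seg) - 1)
            t = (d - cum[i]) / seg_len[i]
            origin = baseline_xy[i] + t * seg[i]
            direction = seg[i] / seg_len[i]
            normal = np.array([-direction[1], direction[0]])  # 90-degree rotation
            transects.append((origin, origin + transect_length * normal))
        return transects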


2021 ◽  
Author(s):  
Jacek Gondzio ◽  
Matti Lassas ◽  
Salla-Maaria Latva-Äijö ◽  
Samuli Siltanen ◽  
Filippo Zanetti

Abstract Dual-energy X-ray tomography is considered in a context where the target under imaging consists of two distinct materials. The materials are assumed to be possibly intertwined in space, but at any given location there is only one material present. Further, two X-ray energies are chosen so that there is a clear difference in the spectral dependence of the attenuation coefficients of the two materials. A novel regularizer is presented for the inverse problem of reconstructing separate tomographic images for the two materials. The combination of (a) a non-negativity constraint and (b) a penalty term containing the inner product between the two material images promotes the presence of at most one material in any given pixel. A preconditioned interior point method is derived for the minimization of the regularization functional. Numerical tests with digital phantoms suggest that the new algorithm outperforms the baseline method, Joint Total Variation regularization, in terms of the number of correctly material-characterized pixels. While the method is tested only in a two-dimensional setting with two materials and two energies, the approach readily generalizes to three dimensions and more materials; the number of materials just needs to match the number of energies used in imaging.
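
As an illustration only (the exact data-fidelity terms, weights, and any additional regularization used in the paper may differ), a regularized objective of the kind described in the abstract can be written as

    \min_{u_1 \ge 0,\; u_2 \ge 0}\;
        \tfrac{1}{2}\lVert \mathcal{A}_1(u_1, u_2) - m_1 \rVert_2^2
      + \tfrac{1}{2}\lVert \mathcal{A}_2(u_1, u_2) - m_2 \rVert_2^2
      + \alpha\,\langle u_1, u_2 \rangle ,

where u_1 and u_2 are the two material images, \mathcal{A}_1 and \mathcal{A}_2 are the forward models at the two energies, m_1 and m_2 are the measured data, and \alpha > 0 weights the inner-product penalty. Because u_1 and u_2 are constrained to be non-negative, the term \langle u_1, u_2 \rangle vanishes exactly when no pixel contains both materials, which is how the combination of (a) and (b) promotes at most one material per pixel.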


2021 ◽  
Vol 72 ◽  
pp. 1281-1305
Author(s):  
Atefe Pakzad ◽  
Morteza Analoui

Distributional semantic models represent the meanings of words as vectors. We introduce a selection method that learns a vector space in which each dimension is a natural word. The selection method starts from the most frequent words and selects the best-performing subset. Producing a vector space whose dimensions are words is the main advantage of the method compared with fusion methods such as NMF and neural embedding models. We apply the method to the ukWaC corpus and train a vector space of N=1500 basis words. We report test results on the word similarity tasks for the MEN, RG-65, SimLex-999, and WordSim353 gold datasets. Results also show that reducing the number of basis vectors from 5000 to 1500 reduces accuracy by only about 1.5-2%, so good interpretability is achieved without a large penalty. Interpretability evaluation results indicate that the word vectors obtained by the proposed method with N=1500 are more interpretable than those of word embedding models and the baseline method. We also report the top 15 words of the 1500 selected basis words in this paper.
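
The following Python sketch shows one simple way to build word vectors whose dimensions are natural words, using the most frequent words as basis dimensions and raw co-occurrence counts; the paper's selection of the best-performing subset and any weighting scheme are not reproduced here, and all names are illustrative.

    from collections import Counter
    import numpy as np

    def build_interpretable_vectors(sentences, n_basis=1500, window=2):
        # Use the n_basis most frequent words as the dimensions of the space
        # and fill each word's vector with windowed co-occurrence counts.
        freq = Counter(w for s in sentences for w in s)
        basis = [w for w, _ in freq.most_common(n_basis)]
        basis_index = {w: i for i, w in enumerate(basis)}
        vectors = {w: np.zeros(len(basis)) for w in freq}
        for sent in sentences:
            for i, w in enumerate(sent):
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i and sent[j] in basis_index:
                        vectors[w][basis_index[sent[j]]] += 1
        return basis, vectors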


Author(s):  
Majid Seyfi ◽  
Richi Nayak ◽  
Yue Xu ◽  
Shlomo Geva

We tackle the problem of discriminative itemset mining. Given a set of datasets, we want to find the itemsets that are frequent in the target dataset and have much higher frequencies than the same itemsets in the other datasets. Such itemsets are very useful for dataset discrimination. We demonstrate that this problem has important applications and, at the same time, is very challenging. We present the DISSparse algorithm, a mining method that uses two determinative heuristics based on the sparsity characteristics of discriminative itemsets, which form a small subset of the frequent itemsets. We prove that the DISSparse algorithm is sound and complete. We experimentally investigate the performance of DISSparse on a range of datasets, evaluating its efficiency and stability and demonstrating that it is substantially faster than the baseline method.
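
To make the problem statement concrete, the brute-force Python sketch below enumerates short itemsets and keeps those that are frequent in the target dataset and have a much higher support there than in the other datasets. It is exponential in the itemset length and serves only to illustrate the definition; DISSparse avoids this enumeration with its sparsity-based heuristics, which are not reproduced here.

    from itertools import combinations

    def support(itemset, transactions):
        # Fraction of transactions that contain every item of the itemset.
        items = set(itemset)
        return sum(1 for t in transactions if items <= set(t)) / len(transactions)

    def naive_discriminative_itemsets(target, others, min_sup=0.1, min_ratio=5.0, max_len=3):
        # Keep itemsets frequent in the target dataset whose support ratio
        # against the other datasets exceeds min_ratio.
        universe = sorted({item for t in target for item in t})
        results = []
        for k in range(1, max_len + 1):
            for itemset in combinations(universe, k):
                sup_target = support(itemset, target)
                if sup_target < min_sup:
                    continue
                sup_other = max(support(itemset, d) for d in others)
                if sup_target / max(sup_other, 1e-9) >= min_ratio:
                    results.append((itemset, sup_target, sup_other))
        return results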


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Shi Meng ◽  
Hao Yang ◽  
Xijuan Liu ◽  
Zhenyue Chen ◽  
Jingwen Xuan ◽  
...  

Graphs have been widely used to model the complex relationships among entities. Community search is a fundamental problem in graph analysis that aims to identify cohesive subgraphs, or communities, that contain the given query vertices. In social networks, a user is usually associated with a weight denoting its influence. Recently, some research has been conducted to detect influential communities. However, there is a lack of research that supports personalized requirements. In this study, we propose a novel problem, named personalized influential k-ECC (PIKE) search, which leverages the k-ECC model to measure the cohesiveness of subgraphs and tries to find the influential community for a set of query vertices. To solve the problem, a baseline method is first proposed. To scale to large networks, a dichotomy-based algorithm is developed. To further speed up the computation and meet the online requirement, we develop an index-based algorithm. Finally, extensive experiments are conducted on 6 real-world social networks to evaluate the performance of the proposed techniques. Compared with the baseline method, the index-based approach achieves a speedup of up to 7 orders of magnitude.
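
The dichotomy idea can be sketched as a binary search over candidate influence thresholds: keep only vertices whose influence weight reaches the threshold, and check whether all query vertices still lie in one k-edge-connected component. The sketch below uses NetworkX for the k-ECC test; it illustrates the search strategy only and is not the paper's PIKE algorithm.

    import networkx as nx

    def feasible(G, weights, query, k, tau):
        # Keep only vertices with influence weight >= tau and check whether
        # all query vertices fall inside a single k-edge-connected component.
        H = G.subgraph([v for v in G if weights[v] >= tau])
        if not all(q in H for q in query):
            return False
        return any(set(query) <= component for component in nx.k_edge_components(H, k))

    def dichotomy_search(G, weights, query, k):
        # Binary search for the largest feasible influence threshold;
        # feasibility is monotone because raising tau only removes vertices.
        taus = sorted({weights[v] for v in G})
        lo, hi, best = 0, len(taus) - 1, None
        while lo <= hi:
            mid = (lo + hi) // 2
            if feasible(G, weights, query, k, taus[mid]):
                best = taus[mid]
                lo = mid + 1
            else:
                hi = mid - 1
        return best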


Machines ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 321
Author(s):  
Xinyu Zhang ◽  
Weijie Lv ◽  
Long Zeng

Most industrial parts are instantiated from different parametric templates. The 6DoF (6D) pose estimation task is challenging, since some part objects from a known template may be unseen before. This paper releases a new and well-annotated 6D pose estimation dataset for multiple parametric templates in stacked scenarios, denoted the Multi-Parametric Dataset, in which a training set (50K scenes) and a test set (2K scenes) are obtained by automatic labeling techniques. In particular, the test set is further divided into a TEST-L dataset for learning evaluation and a TEST-G dataset for generalization evaluation. Since the part objects from the same template are regarded as a class in the Multi-Parametric Dataset and the number of part objects is infinite, we propose a new 6D pose estimation network as our baseline method, the Multi-templates Parametric Pose Network (MPP-Net), aiming at sufficient generalization ability for parametric part objects in stacked scenarios. To the best of our knowledge, our dataset and method are the first to jointly achieve 6D pose estimation and parameter value prediction for multiple parametric templates. Extensive experiments are conducted on the Multi-Parametric Dataset. The mIoU and Overall Accuracy of foreground segmentation and template segmentation on the two test datasets exceed 99.0%. On the two test sets, respectively, MPP-Net achieves 92.9% and 90.8% mAP under the threshold of 0.5 cm for translation prediction, 41.9% and 36.8% under the threshold of 5° for rotation prediction, and 51.0% and 6.0% under the threshold of 5% for parameter value prediction. The results show that our dataset has exploratory value for 6D pose estimation and parameter value prediction tasks.
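
The thresholded pose metrics mentioned above can be computed as in the following sketch, which assumes 3x3 rotation matrices and translations in centimetres; the paper's exact mAP protocol may differ, and the function names are illustrative.

    import numpy as np

    def translation_error_cm(t_pred, t_gt):
        # Euclidean distance between predicted and ground-truth translations (cm).
        return float(np.linalg.norm(np.asarray(t_pred) - np.asarray(t_gt)))

    def rotation_error_deg(R_pred, R_gt):
        # Geodesic angle between two rotation matrices, in degrees.
        cos_theta = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
        return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

    def accuracy_under_thresholds(preds, gts, t_thr_cm=0.5, r_thr_deg=5.0):
        # Fraction of samples below the translation threshold and, separately,
        # fraction below the rotation threshold.
        t_hits = r_hits = 0
        for (R_p, t_p), (R_g, t_g) in zip(preds, gts):
            t_hits += translation_error_cm(t_p, t_g) < t_thr_cm
            r_hits += rotation_error_deg(R_p, R_g) < r_thr_deg
        return t_hits / len(gts), r_hits / len(gts)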


2021 ◽  
Vol 8 ◽  
Author(s):  
Ricardo A. Gonzales ◽  
Qiang Zhang ◽  
Bartłomiej W. Papież ◽  
Konrad Werys ◽  
Elena Lukaschuk ◽  
...  

Background: Quantitative cardiovascular magnetic resonance (CMR) T1 mapping has shown promise for advanced tissue characterisation in routine clinical practice. However, T1 mapping is prone to motion artefacts, which affect its robustness and clinical interpretation. Current methods for motion correction in T1 mapping are model-driven, with no guarantee of generalisability, limiting their widespread use. In contrast, emerging data-driven deep learning approaches have shown good performance in general image registration tasks. We propose MOCOnet, a convolutional neural network solution, for generalisable motion artefact correction in T1 maps. Methods: The network architecture employs U-Net to produce distance vector fields and utilises warping layers to apply the deformation to the feature maps in a coarse-to-fine manner. Using the UK Biobank imaging dataset scanned at 1.5T, MOCOnet was trained on 1,536 mid-ventricular T1 maps (acquired using the ShMOLLI method) with motion artefacts generated by a customised deformation procedure, and tested on a different set of 200 samples with a diverse range of motion. MOCOnet was compared to a well-validated baseline multi-modal image registration method. Motion reduction was visually assessed by 3 human experts, with motion scores ranging from 0% (strictly no motion) to 100% (very severe motion). Results: MOCOnet achieved fast image registration (<1 second per T1 map) and successfully suppressed a wide range of motion artefacts. MOCOnet significantly reduced motion scores from 37.1±21.5 to 13.3±10.5 (p < 0.001), whereas the baseline method reduced them to 15.8±15.6 (p < 0.001). MOCOnet suppressed motion artefacts significantly better and more consistently than the baseline method (p = 0.007). Conclusion: MOCOnet demonstrated significantly better motion correction performance compared to a traditional image registration approach. Salvaging data affected by motion robustly and in a time-efficient manner may enable better image quality and reliable images for immediate clinical interpretation.
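
A generic example of applying a dense displacement field to a 2D image, of the kind such warping layers operate on, is shown below; this is only a standard resampling sketch (here with SciPy), not the MOCOnet implementation, and the names are illustrative.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_image(image, dvf):
        # Warp a 2D image with a dense displacement vector field.
        # dvf has shape (2, H, W): per-pixel row and column displacements.
        h, w = image.shape
        rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        coords = np.stack([rows + dvf[0], cols + dvf[1]])
        return map_coordinates(image, coords, order=1, mode="nearest")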


Author(s):  
Lorena Romero-Medrano ◽  
Pablo Moreno-Muñoz ◽  
Antonio Artés-Rodríguez

Abstract Bayesian change-point detection with latent variable models allows segmentation of high-dimensional time series with heterogeneous statistical nature. We assume that change-points lie on a lower-dimensional manifold where we aim to infer a discrete representation via subsets of latent variables. For this particular model, full inference is computationally unfeasible, and pseudo-observations based on point estimates of the latent variables are used instead. However, if their estimation is not certain enough, change-point detection suffers. To circumvent this problem, we propose a multinomial sampling methodology that improves the detection rate and reduces the delay while keeping complexity stable and inference analytically tractable. Our experiments show results that outperform the baseline method, and we also provide an example oriented to a human behavioral study.
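
The contrast between point-estimate pseudo-observations and sampled ones can be illustrated as follows; this is a minimal sketch of the general idea, not the authors' exact scheme, and the names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def pseudo_observation(posterior_probs, method="sample"):
        # Turn a categorical posterior over latent states into a pseudo-observation
        # for the change-point detector. "argmax" mimics the point-estimate
        # baseline; "sample" draws from the posterior (multinomial sampling).
        p = np.asarray(posterior_probs, dtype=float)
        p = p / p.sum()
        if method == "argmax":
            return int(np.argmax(p))
        return int(rng.choice(len(p), p=p))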


2021 ◽  
Vol 51 (3) ◽  
pp. 225-243
Author(s):  
Abhishek YADAV ◽  
Suresh KANNAUJIYA ◽  
Prashant Kumar CHAMPATI RAY ◽  
Rajeev Kumar YADAV ◽  
Param Kirti GAUTAM

GPS measurements have proved extremely useful in quantifying the strain accumulation rate and assessing the seismic hazard of a region. Continuous GPS measurements provide estimates of secular motion that are used to understand earthquakes and other geodynamic processes. GNSS stations extending from the south of India to the Higher Himalayan region have been used to quantify the strain build-up rate in Central India and the Himalayan region and to assess the seismic hazard potential in this realm. The velocity solution was determined after applying the Markov noise estimated from the GPS time-series data. The recorded GPS data were processed along with data from the closest International GNSS stations to estimate precise daily positions. The baseline method was used to estimate the linear strain rate between two stations, whereas the principal strain axes, maximum shear strain, rotation rate, and crustal shortening rate were calculated from the site velocities using an independent approach: a least-squares inversion-based triangulation method. The strain rates estimated by the triangulation approach exhibit a mean extension rate of 26.08 nano-strain/yr towards N131°, a compression rate of –25.38 nano-strain/yr towards N41°, a maximum shear strain rate of 51.47 nano-strain/yr, a dilation of –37.57 nano-strain/yr, and an anti-clockwise rotation rate of 0.7°/Ma. The strain rates computed from the baseline method and the triangulation method show extensive compression that gradually increases from the Indo-Gangetic Plain in the south to the Higher Himalaya in the north. The slip deficit rate between the Indian and Eurasian plates in the Kumaun-Garhwal Himalaya has been computed as 18±1.5 mm/yr based on elastic dislocation theory. Thus, in this study, the present-day surface deformation rate and interseismic strain accumulation rate in the Himalayan region and Central India have been estimated for seismic hazard analysis using continuous GPS measurements.
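
The linear strain rate between two stations can be illustrated with the small sketch below, which projects the relative velocity onto the baseline direction and divides by the baseline length; this is a generic textbook formulation of a baseline strain rate, not the authors' processing chain, and the units and names are illustrative.

    import numpy as np

    def baseline_strain_rate(p1_m, p2_m, v1_mm_yr, v2_mm_yr):
        # p1_m, p2_m: station positions in a local Cartesian frame (metres).
        # v1_mm_yr, v2_mm_yr: horizontal station velocities (mm/yr).
        # Returns the linear strain rate in nano-strain/yr (positive = extension).
        baseline = np.asarray(p2_m, float) - np.asarray(p1_m, float)
        length_m = np.linalg.norm(baseline)
        unit = baseline / length_m
        relative_v_m_yr = (np.asarray(v2_mm_yr, float) - np.asarray(v1_mm_yr, float)) * 1e-3
        strain_per_yr = np.dot(relative_v_m_yr, unit) / length_m
        return strain_per_yr * 1e9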

