Predicting publication productivity for authors: Shallow or deep architecture?

2021 ◽  
Author(s):  
Wumei Du ◽  
Zheng Xie ◽  
Yiqin Lv

Author(s):  
Sonali Basu ◽  
Robin Horak ◽  
Murray M. Pollack

Our objective was to associate characteristics of pediatric critical care medicine (PCCM) fellowship training programs with career outcomes of PCCM physicians, including research publication productivity and employment characteristics. This is a descriptive study using publicly available data from 2557 PCCM physicians in the National Provider Index registry. We analyzed data on a systematic sample of 690 PCCM physicians representing 62 fellowship programs. The fellowship training programs were substantially diverse in size, intensive care unit (ICU) bed numbers, program age, location, research rank of the affiliated medical school, and academic metrics based on the publication productivity of their graduates standardized over time. The clinical and academic attributes of fellowship training programs were associated with publication success and with characteristics of their graduates' employment hospitals. Programs with a greater publication rate per graduate had more ICU beds and were affiliated with higher-ranked medical schools. At the physician level, training program attributes including larger size, older program age, and higher academic metrics were associated with graduates having greater publication productivity. Characteristics of current employment hospitals varied: graduates from larger, more academic fellowship training programs were more likely to work in larger pediatric intensive care units (24 [interquartile range, IQR: 16–35] vs. 19 [IQR: 12–24] beds; p < 0.001), freestanding children's hospitals (52.6 vs. 26.3%; p < 0.001), hospitals with fellowship programs (57.3 vs. 40.3%; p = 0.01), and hospitals whose affiliated medical schools had higher research ranks (35.5 [IQR: 14–72] vs. 62 [IQR: 32, unranked]; p < 0.001). Large programs with higher academic metrics train physicians with greater publication success (H-index 3 [IQR: 1–7] vs. 2 [IQR: 0–6]; p < 0.001) and a greater likelihood of working in large academic centers. These associations may guide prospective trainees in choosing training programs that foster their career values.


2020 ◽  
Vol 2020 (8) ◽  
pp. 114-1-114-7
Author(s):  
Bryan Blakeslee ◽  
Andreas Savakis

Change detection in image pairs has traditionally been a binary process, reporting either “Change” or “No Change.” In this paper, we present LambdaNet, a novel deep architecture for performing pixel-level directional change detection based on a four-class classification scheme. LambdaNet successfully incorporates the notion of “directional change” and identifies differences between two images as “Additive Change” when a new object appears, “Subtractive Change” when an object is removed, “Exchange” when different objects are present in the same location, and “No Change.” To obtain pixel-annotated change maps for training, we generated directional change class labels for the Change Detection 2014 dataset. Our tests illustrate that LambdaNet is suitable for situations where the type of change is unstructured, such as change detection scenarios in satellite imagery.
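As a rough illustration of the four-class scheme described above, the sketch below frames directional change detection as per-pixel classification over a stacked image pair. The layer sizes, class ordering, and module names are placeholders chosen for the example, not the published LambdaNet architecture.

```python
import torch
import torch.nn as nn

# Hypothetical four-class directional change head; illustrative only.
CLASSES = ["No Change", "Additive Change", "Subtractive Change", "Exchange"]

class DirectionalChangeNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # The two RGB images of the pair are stacked along the channel axis (3 + 3 = 6).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution maps features to per-pixel class logits.
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, img_before: torch.Tensor, img_after: torch.Tensor) -> torch.Tensor:
        x = torch.cat([img_before, img_after], dim=1)   # (B, 6, H, W)
        return self.classifier(self.encoder(x))         # (B, 4, H, W) logits

# Usage: per-pixel cross-entropy against directional change labels.
model = DirectionalChangeNet()
before = torch.randn(2, 3, 64, 64)
after = torch.randn(2, 3, 64, 64)
labels = torch.randint(0, 4, (2, 64, 64))               # directional class map
loss = nn.CrossEntropyLoss()(model(before, after), labels)
```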


2021 ◽  
Vol 11 (12) ◽  
pp. 5656
Author(s):  
Yufan Zeng ◽  
Jiashan Tang

Graph neural networks (GNNs) have been very successful at solving fraud detection tasks. GNN-based detection algorithms learn node embeddings by aggregating neighboring information. Recently, the CAmouflage-REsistant GNN (CARE-GNN) was proposed; it achieves state-of-the-art results on fraud detection tasks by dealing with relation camouflage and feature camouflage. However, stacking multiple layers in the traditional hop-defined way leads to a rapid performance drop, and a single-layer CARE-GNN cannot extract enough information to correct its potential mistakes, so performance relies heavily on that one layer. To avoid single-layer learning, in this paper we consider a multi-layer architecture that forms a complementary relationship with a residual structure, and we propose an improved algorithm named Residual Layered CARE-GNN (RLC-GNN). The new algorithm learns layer by layer and corrects mistakes progressively. We evaluate the proposed algorithm with three metrics: recall, AUC, and F1-score. Numerical experiments show improvements of up to 5.66%, 7.72%, and 9.09% in recall, AUC, and F1-score, respectively, on the Yelp dataset, and of up to 3.66%, 4.27%, and 3.25% in the same three metrics on the Amazon dataset.
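A minimal sketch of the residual, layer-by-layer idea follows, assuming a generic mean-aggregation graph layer; the actual RLC-GNN builds on CARE-GNN's similarity-aware neighbor filtering, which is omitted here.

```python
import torch
import torch.nn as nn

# Assumed structure for illustration, not the authors' implementation.
class SimpleGraphLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Mean aggregation over neighbors followed by a linear transform.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(adj @ h / deg))

class ResidualLayeredGNN(nn.Module):
    def __init__(self, dim: int, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList([SimpleGraphLayer(dim) for _ in range(num_layers)])
        self.classifier = nn.Linear(dim, 2)   # fraud vs. benign

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            # Residual connection: each layer refines the previous representation
            # instead of replacing it, which is the "correct mistakes progressively" idea.
            h = h + layer(h, adj)
        return self.classifier(h)

# Usage on a toy graph of 5 nodes with 16-dimensional features.
adj = torch.eye(5)                 # self-loops only, for the sake of the example
feats = torch.randn(5, 16)
logits = ResidualLayeredGNN(dim=16)(feats, adj)   # (5, 2)
```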


2019 ◽  
Vol 332 ◽  
pp. 224-235 ◽  
Author(s):  
Sheng-hua Zhong ◽  
Jiaxin Wu ◽  
Jianmin Jiang

Author(s):  
Chandan Biswas ◽  
Partha Sarathi Mukherjee ◽  
Koyel Ghosh ◽  
Ujjwal Bhattacharya ◽  
Swapan K. Parui

2018 ◽  
Vol 54 (2) ◽  
pp. 225-244 ◽  
Author(s):  
Min Yen Wu ◽  
Chih-Ya Shen ◽  
En Tzu Wang ◽  
Arbee L. P. Chen

Author(s):  
Xuanlu Xiang ◽  
Zhipeng Wang ◽  
Zhicheng Zhao ◽  
Fei Su

In this paper, aiming at two key problems of instance-level image retrieval, i.e., the distinctiveness of the image representation and the generalization ability of the model, we propose a novel deep architecture, the Multiple Saliency and Channel Sensitivity Network (MSCNet). Specifically, to obtain distinctive global descriptors, an attention-based multiple saliency learning module is first presented to highlight important details of the image, and then a simple but effective channel sensitivity module based on the Gram matrix is designed to boost channel discrimination and suppress redundant information. Additionally, in contrast to most existing feature aggregation methods, which employ pre-trained deep networks, MSCNet can be trained in two modes: the first is an unsupervised manner with an instance loss, and the second is a supervised manner that combines classification and ranking losses and relies on only very limited training data. Experimental results on several public benchmark datasets, i.e., Oxford buildings, Paris buildings and Holidays, indicate that the proposed MSCNet outperforms the state-of-the-art unsupervised and supervised methods.
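The sketch below is one possible reading of a Gram-matrix-based channel sensitivity module: channel-to-channel correlations are used to down-weight redundant channels before pooling into a global descriptor. The weighting rule and module name are assumptions for illustration, not the authors' exact MSCNet design.

```python
import torch
import torch.nn as nn

# Illustrative channel-sensitivity module built on a Gram matrix of channel features.
class ChannelSensitivity(nn.Module):
    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        flat = feat.view(b, c, h * w)                 # (B, C, HW)
        gram = torch.bmm(flat, flat.transpose(1, 2))  # (B, C, C) channel correlations
        # A channel that correlates strongly with many others is treated as redundant:
        # weight each channel by the (softmax of the) negative mean correlation.
        redundancy = gram.mean(dim=2)                 # (B, C)
        weights = torch.softmax(-redundancy, dim=1)   # emphasize distinctive channels
        return feat * weights.view(b, c, 1, 1)

# Usage: reweight a convolutional feature map, then pool it into a global descriptor.
feat = torch.randn(2, 512, 14, 14)
descriptor = ChannelSensitivity()(feat).mean(dim=(2, 3))   # (2, 512)
```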

