Construction of a platform for the transformation from general pharmacists to information pharmacists using the quality control circle

2020 ◽  
Vol 20 (5) ◽  
pp. 367-370
Author(s):  
Haiying PENG ◽  
◽  
Ruoxi SHI ◽  
Hongju LIN ◽  
Xinrong WU ◽  
...  

Objective: To explore and construct a platform for the transformation from general pharmacists to information pharmacists using quality control circle (QCC) activities. Methods: QCC activities were applied to the whole training process of information pharmacists. Training began with the pharmacists learning programming statements, and program writing was completed after continuous exploration. Results: Through QCC activities, 3 information pharmacists were initially trained, 2 programs were written, and a set of standard training profiles was compiled. Conclusion: QCC activities can be used for pharmacy management and personnel training, particularly when more difficult work targets are set. Compared with a general team, a team implementing QCC is more resolute, has clearer work goals, and is more likely to complete the project.

Author(s):  
Yuzhao Chen ◽  
Yatao Bian ◽  
Xi Xiao ◽  
Yu Rong ◽  
Tingyang Xu ◽  
...  

Recently, the teacher-student knowledge distillation framework has demonstrated its potential in training Graph Neural Networks (GNNs). However, due to the difficulty of training over-parameterized GNN models, one may not easily obtain a satisfactory teacher model for distillation. Furthermore, the inefficient training process of teacher-student knowledge distillation also impedes its application to GNN models. In this paper, we propose the first teacher-free knowledge distillation method for GNNs, termed GNN Self-Distillation (GNN-SD), which serves as a drop-in replacement for the standard training process. The method is built upon the proposed neighborhood discrepancy rate (NDR), which efficiently quantifies the non-smoothness of the embedded graph. Based on this metric, we propose the adaptive discrepancy retaining (ADR) regularizer, which empowers the transferability of knowledge by maintaining high neighborhood discrepancy across GNN layers. We also summarize a generic GNN-SD framework that can be exploited to induce other distillation strategies. Experiments further demonstrate the effectiveness and generalization of our approach: it brings 1) state-of-the-art GNN distillation performance at lower training cost, and 2) consistent and considerable performance gains for various popular backbones.
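The abstract does not give the exact NDR formula, but the idea of quantifying non-smoothness between a node and its neighborhood can be illustrated with a simple sketch. The function below is an assumption-laden reading, not the paper's definition: it measures, for each node, the cosine distance between its embedding and the mean embedding of its neighbors, so identical embeddings (a perfectly smooth graph) score 0.

```python
import numpy as np

def neighborhood_discrepancy(H, adj):
    """Illustrative non-smoothness proxy (NOT the paper's exact NDR).
    H:   (n, d) node embedding matrix.
    adj: (n, n) binary adjacency matrix (no self-loops assumed).
    Returns a length-n array: per-node cosine distance between the node's
    embedding and the mean of its neighbors' embeddings."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)   # avoid divide-by-zero
    neigh_mean = adj @ H / deg                         # mean neighbor embedding
    num = (H * neigh_mean).sum(axis=1)
    denom = np.linalg.norm(H, axis=1) * np.linalg.norm(neigh_mean, axis=1)
    cos = num / np.clip(denom, 1e-12, None)
    return 1.0 - cos                                   # 0 = smooth neighborhood
```

A layer-wise regularizer in the spirit of ADR would then compare such per-layer discrepancy statistics and penalize their collapse across depth, though the precise form used by GNN-SD is not stated in the abstract.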


2020 ◽  
Vol 34 (04) ◽  
pp. 4788-4795
Author(s):  
Zhenmao Li ◽  
Yichao Wu ◽  
Ken Chen ◽  
Yudong Wu ◽  
Shunfeng Zhou ◽  
...  

Example weighting is an effective solution to the training-bias problem; however, most previous methods are limited by human knowledge and require laborious hyperparameter tuning. In this paper, we propose a novel example weighting framework called Learning to Auto Weight (LAW). The proposed framework finds step-dependent weighting policies adaptively and can be jointly trained with target networks without any assumptions or prior knowledge about the dataset. It consists of three key components: a Stage-based Searching Strategy (3SM) shrinks the huge search space of a complete training process; a Duplicate Network Reward (DNR) gives more accurate supervision by removing randomness during the search; and Full Data Update (FDU) further improves updating efficiency. Experimental results demonstrate the superiority of the weighting policies explored by LAW over the standard training pipeline. Compared with baselines, LAW finds better weighting schedules that achieve superior accuracy on both biased CIFAR and ImageNet.
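The core mechanism the abstract describes, weighting per-example losses with a policy that depends on the training step, can be sketched as follows. This is an illustrative stand-in, not LAW itself: LAW searches for the policy automatically, whereas here `toy_policy` is hand-written, and both function names are hypothetical.

```python
import numpy as np

def weighted_batch_loss(losses, step, policy):
    """Apply a step-dependent per-example weighting policy to a batch.
    losses: (b,) array of per-example losses.
    policy: callable (losses, step) -> (b,) nonnegative weights.
    Weights are normalized so the batch loss keeps a stable scale."""
    w = policy(losses, step)
    w = w / w.sum()
    return float((w * losses).sum())

def toy_policy(losses, step):
    """Hand-written example policy (hypothetical, for illustration only):
    early in training, down-weight unusually large (possibly noisy) losses;
    later, fall back to uniform weighting."""
    if step < 1000:
        return np.exp(-losses / losses.mean())
    return np.ones_like(losses)
```

In LAW, the policy itself is the object of search: the stage-based strategy, duplicate-network reward, and full-data update described above all serve to make that search over step-dependent policies tractable.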


2003 ◽  
Vol 118 (3) ◽  
pp. 193-196 ◽  
Author(s):  
Jeffrey W McKenna ◽  
Terry F Pechacek ◽  
Donna F Stroup

1971 ◽  
Vol 127 (1) ◽  
pp. 101-105 ◽  
Author(s):  
L. L. Weed

2009 ◽  
Author(s):  
Morris Goldsmith ◽  
Larry L. Jacoby ◽  
Vered Halamish ◽  
Christopher N. Wahlheim

1956 ◽  
Vol 27 (106) ◽  
pp. 89
Author(s):  
N.R. Bedford