teacher networks
Recently Published Documents


TOTAL DOCUMENTS: 36 (FIVE YEARS: 19)
H-INDEX: 8 (FIVE YEARS: 1)

Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3153
Author(s):  
Shouying Wu ◽  
Wei Li ◽  
Binbin Liang ◽  
Guoxin Huang

The self-supervised monocular depth estimation paradigm has become an important branch of computer vision depth-estimation tasks. However, the depth errors caused by depth pulling at object edges or by occlusion remain unsolved. The grayscale discontinuity at object edges leads to relatively high depth uncertainty for pixels in these regions. We improve geometric edge predictions by taking uncertainty into account in the depth-estimation task. To this end, we explore how uncertainty affects this task and propose a new self-supervised monocular depth estimation technique based on multi-scale uncertainty. In addition, we introduce a teacher–student architecture into our models and investigate the impact of different teacher networks on the depth and uncertainty results. We evaluate the performance of our paradigm in detail on the standard KITTI dataset. The experimental results show that, compared with the Monodepth2 baseline, the accuracy of our method increases from 87.7% to 88.2%, the AbsRel error decreases from 0.115 to 0.110, the SqRel error decreases from 0.903 to 0.822, and the RMSE decreases from 4.863 to 4.686. Our approach mitigates texture replication and inaccurate object boundaries, producing sharper and smoother depth maps.
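The following is a minimal, hypothetical sketch (not the authors' code) of how an uncertainty-weighted, multi-scale distillation loss of the kind described above could look; it assumes the student predicts depth and a log-uncertainty map at each scale while a frozen teacher supplies pseudo-depth, and all function and argument names are illustrative.

```python
# Illustrative sketch only: uncertainty-weighted multi-scale depth loss.
import torch

def multiscale_uncertainty_loss(student_depths, student_log_vars, teacher_depths):
    """Each argument is a list of tensors, one per scale, shaped (B, 1, H_s, W_s)."""
    total = 0.0
    for d_s, log_var, d_t in zip(student_depths, student_log_vars, teacher_depths):
        abs_err = torch.abs(d_s - d_t.detach())
        # Heteroscedastic weighting: pixels with high predicted uncertainty
        # (e.g. object edges) down-weight the depth term, while the log-variance
        # term discourages predicting large uncertainty everywhere.
        total = total + (abs_err * torch.exp(-log_var) + log_var).mean()
    return total / len(student_depths)
```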


Author(s):  
Xiaobin Liu ◽  
Shiliang Zhang

Recent works show that mean teaching is an effective framework for unsupervised domain-adaptive person re-identification. However, existing methods perform contrastive learning only on selected samples between the teacher and student networks, which is sensitive to noise in pseudo labels and neglects the relationships among most samples. Moreover, these methods do not cooperate effectively across different teacher networks. To handle these issues, this paper proposes a Graph Consistency based Mean-Teaching (GCMT) method that constructs a Graph Consistency Constraint (GCC) between the teacher and student networks. Specifically, given unlabeled training images, we apply the teacher networks to extract features and construct a teacher graph for each teacher network that describes the similarity relationships among the training images. To boost representation learning, the different teacher graphs are fused to provide the supervision signal for optimizing the student networks. GCMT thus fuses the similarity relationships predicted by different teacher networks as supervision and effectively optimizes the student networks with more sample relationships involved. Experiments on three datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, show that the proposed GCMT outperforms state-of-the-art methods by a clear margin. Notably, GCMT even outperforms a previous method that uses a deeper backbone. Experimental results also show that GCMT can effectively boost performance with multiple teacher and student networks. Our code is available at https://github.com/liu-xb/GCMT .
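As an illustration of the graph-consistency idea (not the released GCMT implementation at https://github.com/liu-xb/GCMT), the sketch below builds a similarity graph over a batch for each teacher, fuses the teacher graphs, and pulls the student's graph toward the fused one; the function names and the KL-based consistency term are assumptions.

```python
# Illustrative sketch only: graph consistency between teacher and student features.
import torch
import torch.nn.functional as F

def similarity_graph(features):
    """features: (B, D) embeddings -> (B, B) row-stochastic similarity graph."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()
    return F.softmax(sim, dim=1)

def graph_consistency_loss(student_feats, teacher_feats_list):
    # Fuse the graphs predicted by the different teacher networks.
    with torch.no_grad():
        fused = torch.stack(
            [similarity_graph(f) for f in teacher_feats_list]).mean(dim=0)
    student_graph = similarity_graph(student_feats)
    # Pull the student's similarity graph toward the fused teacher graph.
    return F.kl_div(student_graph.log(), fused, reduction="batchmean")
```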


Author(s):  
Yi Xie ◽  
Fei Shen ◽  
Jianqing Zhu ◽  
Huanqiang Zeng

Vehicle re-identification is a challenging task that matches vehicle images captured by different cameras. Recent vehicle re-identification approaches exploit complex deep networks to learn viewpoint-robust features and obtain accurate re-identification results, which incurs heavy computation in the testing phase and restricts re-identification speed. In this paper, we propose a viewpoint-robust knowledge distillation (VRKD) method for accelerating vehicle re-identification. The VRKD method consists of a complex teacher network and a simple student network. Specifically, the teacher network uses quadruple directional deep networks to learn viewpoint-robust features, while the student network contains only a shallow backbone sub-network and a global average pooling layer. The student network distills viewpoint-robust knowledge from the teacher network by minimizing the Kullback-Leibler divergence between the posterior probability distributions produced by the student and teacher networks. As a result, vehicle re-identification is significantly accelerated, since only the lightweight student network is required at test time. Experiments on the VeRi776 and VehicleID datasets show that the proposed VRKD method outperforms many state-of-the-art vehicle re-identification approaches in both accuracy and speed.
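A minimal sketch of the distillation step described in the abstract, assuming standard temperature-scaled knowledge distillation over identity posteriors; the temperature value and function names are placeholders, not taken from the paper.

```python
# Illustrative sketch only: KL-based teacher-to-student distillation.
import torch
import torch.nn.functional as F

def vrkd_distill_loss(student_logits, teacher_logits, T=4.0):
    """student_logits, teacher_logits: (B, num_identities) classification logits."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits.detach() / T, dim=1)
    # KL(teacher || student), scaled by T^2 as in standard knowledge distillation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```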


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1669
Author(s):  
Ehab Essa ◽  
Xianghua Xie

A deep collaborative learning approach is introduced in which a chain of randomly wired neural networks is trained simultaneously to improve overall generalization and form a strong ensemble model. The proposed method takes advantage of function-preserving transfer learning and knowledge distillation to produce the ensemble. Knowledge distillation is an effective learning scheme for improving the performance of small neural networks by using the knowledge learned by teacher networks. Most previous methods learn from one or more teachers, but not in a collaborative way. In this paper, we create a chain of randomly wired neural networks based on a random-graph algorithm and collaboratively train the models using function-preserving transfer learning, so that the smaller networks in the chain can learn from the largest one simultaneously. The training method applies knowledge distillation between the randomly wired models, where each model is treated as a teacher for the next model in the chain. The decisions of multiple chains of models can be combined to produce a robust ensemble model. The proposed method is evaluated on CIFAR-10, CIFAR-100, and TinyImageNet. The experimental results show that collaborative training significantly improves the generalization of each model, yielding a small model that can mimic the performance of a large model and a more robust ensemble.
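The sketch below illustrates the chain-distillation step described above under several assumptions that are not taken from the paper: the chain is ordered from largest to smallest model, each model distills from its predecessor with a temperature-scaled KL term, and the weighting alpha is a placeholder hyperparameter.

```python
# Illustrative sketch only: one training step for a chain of models,
# where each model distills from the preceding (larger) one.
import torch
import torch.nn.functional as F

def chain_step(models, optimizers, images, labels, T=3.0, alpha=0.5):
    """models: networks ordered from largest to smallest; one optimizer per model."""
    logits = [m(images) for m in models]
    for i, (opt, out) in enumerate(zip(optimizers, logits)):
        loss = F.cross_entropy(out, labels)
        if i > 0:
            # Treat the preceding, larger model as this model's teacher.
            teacher = logits[i - 1].detach()
            kd = F.kl_div(F.log_softmax(out / T, dim=1),
                          F.softmax(teacher / T, dim=1),
                          reduction="batchmean") * (T * T)
            loss = (1.0 - alpha) * loss + alpha * kd
        opt.zero_grad()
        loss.backward()
        opt.step()
    # At test time the chain's predictions can be averaged into an ensemble.
    return torch.stack([out.detach() for out in logits]).mean(dim=0)
```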


Author(s):  
Aubrey Hibajene Mweemba ◽  
John McClain, Jr ◽  
Beverley Harris ◽  
Enid F. Newell-McLymont

The teaching and learning enterprise requires several inputs and a framework through which the teacher's practice and repertoires are put into action, and one such input is cognitive coaching. It is worth noting that schools known to be successful are distinguished by their ability to enhance teaching practices, with teachers collaborating among themselves in designing subject materials and in other professional undertakings. Additionally, the ability of teachers to inform and critique one another honestly has a long-lasting effect, ensuring growth and improvement in the individual teacher as well as the capacity to sustain an effective organization. This paper provides a platform on which the construct of cognitive coaching can be examined. It embodies a critical analysis of chapters two, five, and seven of Newell-McLymont (2015). In chapter two, collaboration in the classroom context is seen as a critical component of the teaching and learning environment, bringing benefits to both teachers and their students. Collaboration has proven effective in eliminating teacher isolation and encourages problem-solving approaches. An analytic perspective on generating the cognitive coaching approach, while bearing in mind the power of teacher networks, is the thrust of chapter five. Chapter seven, in examining the cognitive approach through application, presents several studies that considered environment and culture as essential considerations for collaborative learning. Given the benefits of cognitive coaching, the reviewers call for it to be fully embraced, especially during the COVID-19 period of crisis.



