Thesis
Improved knowledge distillation for deep neural networks
Washington State University
Master of Science (MS), Washington State University
12/2020
DOI: https://doi.org/10.7273/000004176
Handle: https://hdl.handle.net/2376/125099
Abstract
Although deep neural networks are powerful models that achieve appealing results on many tasks, they are too large to be deployed on edge devices such as smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, in which a large pre-trained network (the teacher) is used to train a smaller network (the student). However, in this thesis we show that student performance degrades when the gap between student and teacher is large: given a fixed student, one cannot employ an arbitrarily large teacher. Put differently, a teacher can effectively transfer its knowledge only to students of at least a certain size; below that, distillation becomes ineffective. To alleviate this shortcoming, we introduce an intermediate-sized network (a teacher assistant) to bridge the gap between the student and the teacher. We further study the effect of teacher assistant size and extend the framework to multi-step distillation with a chain of assistants. Theoretical analysis and extensive experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets with CNN and ResNet architectures substantiate the effectiveness of the proposed approach.
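To make the distillation procedure the abstract describes concrete, below is a minimal PyTorch sketch of the standard soft-target distillation loss (Hinton et al., 2015) and of how a teacher assistant chain reuses it. The helper name `distillation_loss` and the temperature and weighting defaults are illustrative assumptions, not the exact formulation or hyperparameters used in the thesis.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of a soft-target KL term and a hard-label CE term.
    T and alpha are illustrative defaults, not values from the thesis."""
    # Soft-target term: KL divergence between temperature-scaled teacher
    # and student distributions, scaled by T^2 so gradient magnitudes
    # stay comparable across temperatures.
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    kd_term = F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)
    # Hard-label term: ordinary cross-entropy against the ground truth.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

if __name__ == "__main__":
    batch, classes = 8, 10
    labels = torch.randint(0, classes, (batch,))
    teacher_logits = torch.randn(batch, classes)  # frozen teacher output
    assistant_logits = torch.randn(batch, classes, requires_grad=True)
    # Step 1 of the chain: distill teacher -> assistant.
    loss = distillation_loss(assistant_logits, teacher_logits, labels)
    loss.backward()
    print(loss.item())
```

In the multi-step setting, the same loss is applied more than once: the assistant is first trained against the frozen teacher, and the student is then trained against the frozen assistant, so that no single distillation step has to span a large capacity gap.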
Details
- Title: Improved knowledge distillation for deep neural networks
- Creators: Seyed Iman Mirzadeh
- Contributors: Hassan Ghasemzadeh (Advisor)
- Awarding Institution: Washington State University
- Academic Unit: School of Electrical Engineering and Computer Science
- Theses and Dissertations: Master of Science (MS), Washington State University
- Publisher: Washington State University
- Identifiers: 99900896441301842
- Language: English
- Resource Type: Thesis