Speaker: Chenglong BAO (Tsinghua University)
Time: Jun 2, 2020, 10:00-11:00
Location: Zoom (ID 669 417 9735)
Abstract
Deep neural networks have been widely used in many applications, and classification accuracy generally increases as the network grows larger. However, the heavy computation and storage costs have prevented their deployment on resource-limited devices. In this talk, we will first show that there exists redundancy in current CNNs under the PAC framework. Second, we will propose a self-distillation technique that compresses deep neural networks while enabling dynamic inference. Finally, I will introduce some recent work on improving the robustness of DNNs.
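For readers unfamiliar with the terms in the abstract, the following is a minimal, hypothetical sketch of one common formulation of self-distillation with early-exit (dynamic) inference: auxiliary classifiers attached to intermediate layers are trained both on the labels and to match the softened predictions of the network's own final classifier, and at test time a confident early exit skips the rest of the forward pass. The architecture, loss weights, and threshold below are illustrative assumptions, not the speaker's method.

```python
# Hypothetical sketch of self-distillation with early-exit inference.
# All module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfDistilledNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two simple convolutional stages standing in for a deeper backbone.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(8))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)  # shallow (early) exit
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)  # deep (final) exit

    def forward(self, x):
        h1 = self.stage1(x)
        h2 = self.stage2(h1)
        return self.exit1(h1.flatten(1)), self.exit2(h2.flatten(1))


def self_distillation_loss(logits_shallow, logits_deep, target, T=3.0, alpha=0.5):
    """Cross-entropy on both exits plus a KL term distilling the deep exit's
    softened predictions into the shallow exit (the network as its own teacher)."""
    ce = F.cross_entropy(logits_shallow, target) + F.cross_entropy(logits_deep, target)
    kd = F.kl_div(F.log_softmax(logits_shallow / T, dim=1),
                  F.softmax(logits_deep.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)
    return ce + alpha * kd


@torch.no_grad()
def dynamic_inference(model, x, threshold=0.9):
    """Early-exit inference: return the shallow prediction when it is already
    confident enough, otherwise pay for the full forward pass."""
    h1 = model.stage1(x)
    p1 = F.softmax(model.exit1(h1.flatten(1)), dim=1)
    if p1.max().item() >= threshold:
        return p1.argmax(dim=1)
    h2 = model.stage2(h1)
    return model.exit2(h2.flatten(1)).argmax(dim=1)


if __name__ == "__main__":
    model = SelfDistilledNet()
    x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
    out1, out2 = model(x)
    loss = self_distillation_loss(out1, out2, y)
    loss.backward()
    print(loss.item(), dynamic_inference(model, x[:1]))
```

The compression benefit in this kind of scheme comes from the fact that easy inputs exit at the shallow classifier, so the average inference cost drops without changing the stored model; the talk's actual formulation may differ.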