PhD Student, Northeastern University
Title: Knowledge Transfer for Face Recognition
It is essential to mimic the human cognitive process of adapting previously well-learned knowledge to facilitate new, challenging face recognition tasks. In this talk, I focus on two topics: missing modality and one-shot learning. First, we may have no target-modality face data available at the training stage, a problem that arises when the face data are multi-modal. To overcome this, we borrow an auxiliary database with complete modalities and propose a two-directional knowledge transfer to solve the missing-modality issue. Second, we may have only a single training sample for some persons. This is challenging for existing machine learning approaches, since such limited data cannot well represent the data variance. To this end, we propose to build a large-scale face recognizer capable of fighting off the data-imbalance difficulty. Specifically, we develop a novel generative model that synthesizes meaningful data for one-shot classes by adapting the data variance from other normal classes.
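The variance-adaptation idea in the last sentence can be illustrated with a toy sketch. This is a minimal illustration of the general principle, not the speaker's actual generative model; all class names, feature dimensions, and values below are hypothetical:

```python
import random

random.seed(0)

# Hypothetical 2-D feature vectors (dimensions and values are illustrative).
# A "normal" class with many samples:
rich_class = [[random.gauss(2.0, 0.5), random.gauss(3.0, 0.5)] for _ in range(50)]
# A one-shot class with a single exemplar:
one_shot_exemplar = [-1.0, 0.5]

# Intra-class variation of the rich class: each sample's offset from the class mean.
dim = len(one_shot_exemplar)
mean = [sum(s[d] for s in rich_class) / len(rich_class) for d in range(dim)]
offsets = [[s[d] - mean[d] for d in range(dim)] for s in rich_class]

# Synthesize samples for the one-shot class by adding borrowed offsets to its
# single exemplar, i.e., transferring the rich class's variance to the rare class.
synthetic = [[one_shot_exemplar[d] + off[d] for d in range(dim)]
             for off in random.sample(offsets, 10)]

print(len(synthetic))  # 10 augmented samples for the one-shot class
```

The synthesized samples scatter around the one-shot exemplar with the spread observed in the data-rich class, giving a classifier more balanced training data for the rare identity.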
Zhengming Ding received the B.Eng. degree in information security and the M.Eng. degree in computer software and theory from the University of Electronic Science and Technology of China (UESTC), China, in 2010 and 2013, respectively. He is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, Northeastern University, USA. His research interests include machine learning and computer vision. Specifically, he devotes himself to developing scalable algorithms for challenging problems in transfer learning and deep learning scenarios. He was the recipient of Student Travel Grants at CVPR 17, FG 17, IJCAI 16, AAAI 16, ACM MM 14, and ICDM 14. He received a National Institute of Justice Fellowship. He was the recipient of a best paper award (SPIE) and a best paper candidate award (ACM MM 2017).