Tian, Yonghong

Title: Professor

Institute: Institute for Visual Technology

Research Interests: Machine learning, computer vision, multimedia big data

Phone: 86-10-6275 5965

E-mail: yhtian@pku.edu.cn

Yonghong Tian is currently a Professor with the School of Computer Science, Peking University, China. He received the Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences, China, in 2005, and was a visiting scientist at the Department of Computer Science and Engineering, University of Minnesota, from Nov. 2009 to Jul. 2010. His research interests include machine learning, computer vision, and multimedia big data.

Prof. Tian has authored or coauthored more than 140 technical articles in refereed journals and conferences and holds more than 38 US and Chinese patents. He is currently an Associate Editor of IEEE Transactions on Multimedia, IEEE Access, and the Int’l J. of Multimedia Data Engineering and Management. He initiated the IEEE Int’l Conf. on Multimedia Big Data (BigMM) and served as TPC Co-chair of BigMM 2015; he also served as Technical Program Co-chair of IEEE ICME 2015 and IEEE ISM 2015, and on the organizing committees of ACM Multimedia 2009, IEEE MMSP 2011, IEEE ISCAS 2013, and IEEE ISM 2015/2016, among others. He has been a PC member of more than ten conferences, including CVPR, KDD, AAAI, ACM MM, and ECCV. He has received several national and ministerial prizes in China, as well as the 2015 EURASIP Best Paper Award of the EURASIP Journal on Image and Video Processing. He is a Senior Member of IEEE and a member of ACM.

Prof. Tian has undertaken more than twenty research projects, including NSFC grants, National Key R&D Programs, and 863 Program projects, with total funding of more than 50M RMB. His research achievements are summarized as follows:

1) Learning-based visual saliency computation: Visual saliency computation, which measures the importance of various visual subsets in an image or video, is key to large-scale visual information processing in the big-data era. Instead of directly simulating the “known” mechanisms of the human brain, he proposed incorporating modern machine learning algorithms to automatically mine probable saliency mechanisms from user data. In this process, the prior knowledge believed to be stored in the higher regions of the human brain can be effectively and efficiently modeled to guide the computation of visual saliency. A multi-task learning technique was developed to infer what a human subject may attend to in an incoming scene by analyzing users’ activities when watching similar scenes in the past. Moreover, he presented a statistical learning approach to infer such priors from millions of images in an unsupervised manner. Subsequently, he examined the properties of salient targets and described how to extract a salient object from an image as a whole.
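As a rough illustration of this data-driven view of saliency, the sketch below trains a generic regressor (scikit-learn's RandomForestRegressor, chosen here only for brevity) to map simple region features to saliency scores derived from past viewing data. The features, training data, and labels are placeholders; this is not the multi-task or statistical learning models described above.

```python
# Minimal sketch of learning-based saliency: learn a mapping from region
# features to saliency scores instead of hand-coding the saliency mechanism.
# Feature choices and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def region_features(regions):
    """Toy features per region: mean intensity, variance, and local contrast."""
    return np.array([[r.mean(), r.std(), np.abs(r - r.mean()).mean()] for r in regions])

# Assumed: regions from past scenes, with saliency labels derived from users'
# viewing activity (e.g., fixation density). Random data stands in for both.
rng = np.random.default_rng(0)
train_regions = [rng.random((32, 32)) for _ in range(200)]
train_saliency = rng.random(200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(region_features(train_regions), train_saliency)

# Predict saliency for the regions of a new scene and pick the most salient one.
test_regions = [rng.random((32, 32)) for _ in range(50)]
scores = model.predict(region_features(test_regions))
print("most salient region index:", int(scores.argmax()))
```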

2) Object and action analysis in surveillance videos: Video surveillance systems have become one of the most important infrastructures for social security and emergency management applications. In this study, a multi-view Bayesian network model was proposed for precise object detection in complex scenes by modeling and utilizing the homography constraints in multi-camera scenarios. Moreover, several multiple kernel learning methods, deep re-identification models, and action recognition models were proposed by exploiting the latent correlation inside the feature space. With these methods, his team won the algorithmic competition at the 2012 IEEE Performance Evaluation of Tracking and Surveillance (PETS) workshop, and his algorithm was ranked the best among five years of competitions. When applied to video content-based copy detection (CCD) and surveillance event detection (SED) tasks, the algorithms from his team have ranked among the best performers in several consecutive TRECVID CCD and SED evaluations since 2009.
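The homography idea behind multi-camera fusion can be illustrated with a minimal sketch: project the foot points of detections from two views onto a common ground plane and merge detections that land close together. This simplified stand-in uses OpenCV's findHomography and perspectiveTransform with made-up calibration points and detections; it is not the multi-view Bayesian network model itself.

```python
# Hypothetical sketch of ground-plane homography fusion across two cameras.
import numpy as np
import cv2

# Assumed: per-camera homographies to the ground plane, calibrated from four
# corresponding ground points (all coordinates below are made up).
pts_cam1 = np.float32([[100, 200], [400, 200], [400, 500], [100, 500]])
pts_cam2 = np.float32([[120, 180], [420, 190], [410, 520], [110, 510]])
pts_ground = np.float32([[0, 0], [5, 0], [5, 5], [0, 5]])  # meters
H1, _ = cv2.findHomography(pts_cam1, pts_ground)
H2, _ = cv2.findHomography(pts_cam2, pts_ground)

def to_ground(points_xy, H):
    """Map image points (N, 2) to ground-plane coordinates via homography H."""
    pts = np.float32(points_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Foot points of detected persons in each view (placeholder detections).
foot_cam1 = to_ground([[250, 480]], H1)[0]
foot_cam2 = to_ground([[260, 470]], H2)[0]

# Fuse: detections from different views within 0.5 m are treated as one object.
dist = np.linalg.norm(foot_cam1 - foot_cam2)
print("same object" if dist < 0.5 else "different objects", f"(distance {dist:.2f} m)")
```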

3) Ultra-efficient surveillance video coding: With the exponentially increasing deployment of surveillance cameras, one major challenge for a real-time video surveillance system is how to effectively reduce bandwidth and storage costs. To address this problem, he developed a high-efficiency, low-complexity video coding technology for surveillance videos by introducing a background modeling module and background-based adaptive predictive coding modes into the traditional video coding framework. Results show that this technology saves roughly 40% of the bits on average and reduces encoding complexity by 45% on surveillance videos, compared with the recent HEVC/H.265. Moreover, this technology has been standardized in the Chinese national video coding standard AVS2 and is thus expected to be deployed in practical systems in the coming years.
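The intuition behind background-based predictive coding can be sketched in a few lines: model the background seen by a static camera, then represent each frame by its sparse foreground difference from that background, so most of the scene need not be re-coded. The toy encoder below, with placeholder frames and an arbitrary threshold, only illustrates why the residual is small for surveillance scenes; it is not the AVS2 background-prediction coding tools.

```python
# Conceptual sketch: background modeling plus sparse foreground residuals.
import numpy as np

def background_model(frames):
    """Temporal median over past frames as a simple background estimate."""
    return np.median(np.stack(frames), axis=0)

def encode_frame(frame, background, threshold=12):
    """Keep only pixels that differ noticeably from the background."""
    residual = frame.astype(np.int16) - background.astype(np.int16)
    mask = np.abs(residual) > threshold
    return mask, residual[mask]          # sparse representation of the frame

def decode_frame(background, mask, values):
    frame = background.astype(np.int16)
    frame[mask] += values
    return np.clip(frame, 0, 255).astype(np.uint8)

# Placeholder frames: a static background plus one small moving object.
rng = np.random.default_rng(0)
bg_truth = rng.integers(0, 255, (120, 160), dtype=np.uint8)
frames = [bg_truth.copy() for _ in range(5)]
frames[-1][40:60, 70:90] = 255           # "moving object" in the latest frame

bg = background_model(frames[:-1])
mask, values = encode_frame(frames[-1], bg)
reconstructed = decode_frame(bg, mask, values)
print("foreground pixels coded:", int(mask.sum()), "of", mask.size)
```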

In addition, his technological achievements have been applied in industry. At least five companies have licensed his technologies and developed products for smart cities, multimedia search engines, and interactive video services.