Speaker: Mr. Dingquan Li (Peking University)
Time: 2018-10-19, 12:00-13:30
Venue: Room 1560, Sciences Building No. 1
Dear faculty members and graduate students of the School of Mathematical Sciences,
The Graduate Academic Lunch Seminar is a series of academic exchange activities organized by the Graduate Student Union with the strong support of the school leadership. Each session invites one graduate student as the speaker to introduce the basic problems, concepts, and methods of his or her research area to graduate students of all backgrounds across the school, and to report recent research results and progress. It is an academic platform for graduate students to present their work and promote exchange.
Thirty-three sessions have been held so far, and the 34th Academic Lunch Seminar will take place on Friday, October 19, 2018. Interested faculty members and students are warmly welcome to sign up.
Bio: Dingquan Li is a PhD student (entering class of 2015) at the School of Mathematical Sciences, Peking University, advised by Prof. Ming Jiang (School of Mathematical Sciences, Peking University) and Prof. Tingting Jiang (Institute of Digital Media / National Engineering Laboratory for Video Technology, Peking University). His main research interest is image and video quality assessment in image and video processing. His honors include the Dean's Scholarship (2015-2016), the CreditEase Internet Finance Scholarship (school-level, 2016-2017), Outstanding Individual of the National Engineering Laboratory for Video Technology (2017), and the President's Scholarship (2018-2019).
Abstract: Image content variation is a typical and challenging problem in no-reference image quality assessment (NR-IQA). This work pays special attention to the impact of image content variation on NR-IQA methods. To better analyze this impact, we focus on blur-dominated distortions to exclude the effects of distortion-type variations. We empirically show that current NR-IQA methods are inconsistent with human visual perception when predicting the relative quality of image pairs with different image contents. Given that the deep semantic features of pre-trained image classification neural networks always contain discriminative image content information, we put forward a new NR-IQA method based on semantic feature aggregation (SFA) to alleviate the impact of image content variation. Specifically, instead of resizing the image, we first crop multiple overlapping patches over the entire distorted image to avoid introducing geometric deformations. Then, according to an adaptive layer selection procedure, we extract deep semantic features by leveraging the power of a pre-trained image classification model for its inherent content-aware property. After that, the local patch features are aggregated using several statistical structures. Finally, a linear regression model is trained to map the aggregated global features to image quality scores. The proposed method, SFA, is compared with 9 representative blur-specific NR-IQA methods, 2 general-purpose NR-IQA methods, and 2 extra full-reference IQA methods on Gaussian blur images (with and without Gaussian noise/JPEG compression) and realistic blur images from multiple databases, including LIVE, TID2008, TID2013, MLIVE1, MLIVE2, BID, and CLIVE. Experimental results show that SFA is superior to the state-of-the-art NR-IQA methods on all seven databases. It is also verified that deep semantic features play a crucial role in addressing image content variation, and this provides a new perspective for NR-IQA.
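For readers less familiar with this kind of pipeline, the sketch below illustrates the overall SFA-style procedure described in the abstract: crop overlapping patches over the whole distorted image (no resizing), extract deep semantic features with a fixed pre-trained classification network, aggregate the patch features with simple statistics, and fit a linear regression to subjective quality scores. It is only an illustrative sketch under stated assumptions, not the speaker's implementation: the ResNet-50 backbone, the 224x224 patch size and 128-pixel stride, the mean/std aggregation, and the scikit-learn regressor are choices made here for concreteness.

    # Minimal sketch of an SFA-style NR-IQA pipeline (assumptions noted above).
    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.linear_model import LinearRegression

    # Pre-trained classification backbone used as a fixed feature extractor
    # (assumed: ResNet-50 with ImageNet weights; the talk may use a different net/layer).
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()   # keep the 2048-d pooled semantic features
    backbone.eval()

    to_tensor = T.Compose([T.ToTensor(),
                           T.Normalize(mean=[0.485, 0.456, 0.406],
                                       std=[0.229, 0.224, 0.225])])

    def extract_patches(img, patch=224, stride=128):
        """Crop overlapping patches over the entire image (no resizing)."""
        w, h = img.size
        patches = []
        for top in range(0, max(h - patch, 0) + 1, stride):
            for left in range(0, max(w - patch, 0) + 1, stride):
                patches.append(img.crop((left, top, left + patch, top + patch)))
        return patches

    @torch.no_grad()
    def sfa_features(img):
        """Per-patch deep features aggregated with simple statistics (assumed: mean and std)."""
        x = torch.stack([to_tensor(p) for p in extract_patches(img)])
        feats = backbone(x).numpy()                            # (num_patches, 2048)
        return np.concatenate([feats.mean(0), feats.std(0)])   # one global descriptor

    def train_sfa(train_images, train_mos):
        """Fit a linear regression from aggregated features to subjective quality scores.
        train_images: list of PIL images; train_mos: their mean opinion scores."""
        X = np.stack([sfa_features(im) for im in train_images])
        return LinearRegression().fit(X, train_mos)

Patch cropping rather than resizing is what keeps the geometric structure of the image intact, and freezing the classification backbone is what lets the "content-aware" semantic features carry over without IQA-specific fine-tuning.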
Registration: Faculty members who wish to attend should send an email to smsxueshu@126.com by 12:00 noon on Thursday, October 18, 2018; we will reply by email to confirm your registration. Email registration is for faculty only. Students who wish to attend should register via the link https://www.wjx.top/jq/28980427.aspx. Thank you!