
IEEE Transactions on Image Processing | Vol. 28, Issue 5 | Pages 2200-2211


Two-Stream Convolutional Networks for Blind Image Quality Assessment

Qingsen Yan, Dong Gong, Yanning Zhang
Abstract

Traditional image quality assessment (IQA) methods do not perform robustly because they rely on shallow hand-designed features. It has been demonstrated that deep neural networks can learn more effective features. In this paper, we describe a new deep neural network that predicts image quality accurately without relying on a reference image. To learn more effective feature representations for no-reference IQA, we propose a two-stream convolutional network with two subcomponents, one for the image and one for its gradient image. The motivation for this design is to capture different levels of information from the inputs with a two-stream scheme and to ease the difficulty of extracting features from a single stream. The gradient stream focuses on extracting detailed structural features, while the image stream attends more to intensity information. In addition, to account for the locally non-uniform distribution of distortion in images, we add a region-based fully convolutional layer that exploits the information around the center of the input image patch. The final score for the overall image is obtained by averaging the patch scores. The proposed network operates in an end-to-end manner in both the training and testing phases. Experimental results on a series of benchmark datasets, e.g., LIVE, CSIQ, IVC, TID2013, and the Waterloo Exploration Database, show that the proposed algorithm outperforms state-of-the-art methods, which verifies the effectiveness of our network architecture.
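To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of the idea, not the authors' implementation: the Sobel-based gradient map, the layer widths, and the 1x1-convolution head standing in for the region-based fully convolutional layer are all illustrative assumptions. It shows the two streams (intensity patch and gradient map), feature fusion, and an image-level score obtained by averaging patch scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def gradient_map(img: torch.Tensor) -> torch.Tensor:
    """Approximate gradient magnitude of grayscale patches (N, 1, H, W) with
    Sobel filters; serves as the input to the gradient stream (an assumption,
    the paper's exact gradient operator may differ)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)


def conv_stream(in_channels: int) -> nn.Sequential:
    """A small convolutional stack standing in for one stream's feature extractor."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TwoStreamIQA(nn.Module):
    """Two-stream patch-quality regressor: one stream sees the intensity patch,
    the other sees its gradient map; fused features are mapped to a scalar score."""

    def __init__(self):
        super().__init__()
        self.image_stream = conv_stream(1)
        self.gradient_stream = conv_stream(1)
        # 1x1 convolutions keep the head fully convolutional over the fused
        # feature map (a stand-in for the region-based layer in the paper).
        self.head = nn.Sequential(
            nn.Conv2d(128, 64, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        f_img = self.image_stream(patches)
        f_grad = self.gradient_stream(gradient_map(patches))
        fused = torch.cat([f_img, f_grad], dim=1)
        score_map = self.head(fused)          # per-location quality estimates
        return score_map.mean(dim=(1, 2, 3))  # one score per patch


# Usage: score an image by averaging the predicted scores of its patches.
model = TwoStreamIQA()
patches = torch.rand(16, 1, 32, 32)           # 16 grayscale 32x32 patches
image_score = model(patches).mean().item()
```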

Cite this article
APA

Yan, Q., Gong, D., & Zhang, Y. Two-Stream Convolutional Networks for Blind Image Quality Assessment. IEEE Transactions on Image Processing, 28(5), 2200-2211.
