Technical Program

Paper Detail

Paper ID D-3-3.7
Paper Title SUPER-RESOLUTION OF MULTI-VIEW ERP 360-DEGREE IMAGES WITH TWO-STAGE DISPARITY REFINEMENT
Authors Hee-Jae Kim, Jewon Kang, Byung-Uk Lee, Ewha W. University, Korea (South)
Session D-3-3: Image and video processing based on deep learning
Time Thursday, 10 December, 17:30 - 19:30
Presentation Time: Thursday, 10 December, 19:00 - 19:15
All times are in New Zealand Time (UTC +13)
Topic Image, Video, and Multimedia (IVM): Special Session: Image and video processing based on deep learning
Abstract In this paper, we propose a novel super-resolution (SR) technique for multi-view 360-degree images in equirectangular projection (ERP) format. To the best of our knowledge, the proposed algorithm is the first study of SR for multi-view 360-degree images in ERP. In multi-view SR (MV-SR), it is important to fuse features across different viewpoints, but this is hard to achieve with a conventional CNN because standard convolution is shift-invariant, whereas ERP distortion varies with position. To solve this problem, we take a coarse-to-fine approach that exploits the correlation among multiple views in the ERP domain. First, we apply depth-based warping to the reference ERP image to synthesize an image at the same viewpoint as the target low-resolution (LR) ERP image; the proposed warping markedly reduces the non-linear distortion between the two ERP images. Second, we employ a flow estimator to refine the residual flow between the warped reference image and the LR image. Our CNN generates the SR output at the end of the network by combining the features of the LR ERP and the warped reference ERP. Experimental results demonstrate that the proposed algorithm yields significantly better quality on multi-view 360-degree images than state-of-the-art MV-SR methods.
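The two-stage alignment described in the abstract can be sketched in simplified form. This is a minimal NumPy illustration, not the authors' implementation: it assumes a toy pinhole-style disparity model (disparity = baseline × focal / depth), applies it as an integer horizontal shift along the periodic longitude axis of the ERP image (coarse, depth-based warping), and then applies a small residual horizontal flow as a stand-in for the paper's learned flow estimator (fine refinement). Function names, parameters, and the disparity formula are illustrative assumptions.

```python
import numpy as np

def depth_based_warp(ref, depth, baseline, focal):
    """Coarse stage: warp the reference ERP toward the target viewpoint.

    Uses a per-pixel horizontal disparity derived from depth (hypothetical
    simplification: disparity = baseline * focal / depth), applied along the
    longitude axis with wrap-around, since ERP is periodic in longitude.
    """
    h, w = ref.shape
    disparity = baseline * focal / depth                # shift, in pixels
    cols = np.arange(w)[None, :]                        # destination columns
    src = (cols - np.rint(disparity)).astype(int) % w   # wrap in longitude
    rows = np.arange(h)[:, None]
    return ref[rows, src]

def refine_with_flow(warped, flow_x):
    """Fine stage: apply a residual horizontal flow to the warped reference.

    In the paper this flow comes from a learned flow estimator; here it is
    simply given, and only integer horizontal motion is handled.
    """
    h, w = warped.shape
    cols = (np.arange(w)[None, :] - np.rint(flow_x).astype(int)) % w
    rows = np.arange(h)[:, None]
    return warped[rows, cols]
```

With constant depth the coarse stage reduces to a circular shift of the reference, and the flow stage corrects whatever small offset remains; in the actual method both the flow and the final SR image are produced by the CNN from the fused LR and warped-reference features.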