Omnidirectional videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution of 360° videos does not allow each degree of view to be represented with adequate pixels, limiting the visual quality of the immersive experience. Deep learning Video Super-Resolution (VSR) techniques developed for conventional videos could provide a promising software-based solution; however, these techniques do not tackle the distortion present in equirectangular projections of 360° video signals. A further obstacle is the limited availability of 360° video datasets for study. To address these issues, this paper introduces a novel 360° Video Dataset (360VDS) together with a study of the extensibility of conventional VSR models to 360° videos. This paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, unbound from conventional VSR techniques such as alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360°-specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural sub-components, targeted training and optimisation.
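The abstract mentions a loss function that addresses spherical distortion without giving its form. As an illustration only, the sketch below shows a latitude-weighted pixel loss of the kind commonly applied to equirectangular projections, where each row is weighted by the cosine of its latitude (the weighting used by WS-PSNR). The function names and the choice of an L1 base loss are assumptions for illustration, not the actual S3PO loss.

```python
import math
import torch

def equirectangular_weights(height: int, width: int) -> torch.Tensor:
    """Per-pixel weights for an equirectangular frame.

    Rows near the equator get weight ~1 and rows near the poles ~0,
    following the cosine-of-latitude weighting used by WS-PSNR.
    """
    rows = torch.arange(height, dtype=torch.float32)
    # Latitude of each row centre, in radians: 0 at the equator, +/- pi/2 at the poles.
    latitude = (rows + 0.5 - height / 2) * math.pi / height
    w = torch.cos(latitude)                      # shape: (H,)
    return w.unsqueeze(1).expand(height, width)  # shape: (H, W)

def spherical_weighted_l1(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """Hypothetical distortion-aware loss: L1 error weighted by latitude.

    sr, hr: tensors of shape (N, C, H, W). An illustrative stand-in,
    not the S3PO loss proposed in the paper.
    """
    n, c, h, w = sr.shape
    weights = equirectangular_weights(h, w).to(sr.device)
    return (weights * (sr - hr).abs()).sum() / (weights.sum() * n * c)
```

Weighting the loss this way discounts polar regions, where equirectangular projection stretches a small spherical area over many pixels, so the optimiser is not dominated by heavily distorted, over-represented rows.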
Recurrent Neural Networks (RNNs) are widely used for Video Super-Resolution (VSR) because of their proven ability to learn spatiotemporal inter-dependencies across the temporal dimension. Despite RNNs' ability to propagate memory across longer sequences of frames, vanishing gradients and error accumulation remain major obstacles for unidirectional RNNs in VSR. Several bi-directional recurrent models have been suggested in the literature to alleviate this issue; however, these models are only applicable to offline use cases due to their heavy demands on computational resources and the number of frames required per input. This paper proposes a novel unidirectional recurrent model for VSR, namely "Replenished Recurrency with Dual-Duct" (R2D2), that can be used in an online application setting. R2D2 combines a recurrent architecture with sliding-window-based local alignment, resulting in a recurrent hybrid architecture. It also uses a dual-duct residual network for concurrent and mutual refinement of local features along with global memory, fully utilising the information available at each timestamp. With novel modelling and sophisticated optimisation, R2D2 demonstrates competitive performance and efficiency despite the limited information available at each timestamp compared to its offline (bi-directional) counterparts. Ablation analysis confirms the additive benefits of the proposed sub-components of R2D2 over baseline RNN models. The PyTorch-based code for the R2D2 model will be released at R2D2 GitRepo.
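To make the dual-duct idea concrete, the following is a minimal, hypothetical sketch of one recurrent step in which a local duct (features from a sliding window of neighbouring frames) and a global duct (the hidden state propagated through time) are refined concurrently, each conditioned on the other. All module names, shapes, and layer choices are illustrative assumptions, not the released R2D2 code.

```python
import torch
import torch.nn as nn

class DualDuctStep(nn.Module):
    """Illustrative single time-step of a dual-duct recurrent refinement.

    One duct carries local features from a sliding window of aligned
    neighbouring frames; the other carries the global recurrent memory.
    Each duct is refined residually while seeing both ducts, mimicking
    the "concurrent and mutual refinement" described in the abstract.
    This is a sketch under stated assumptions, not the R2D2 architecture.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        self.local_refine = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.global_refine = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, local_feat: torch.Tensor, hidden: torch.Tensor):
        # Both ducts see a fused view of local and global information,
        # then each is updated with its own residual branch.
        fused = torch.cat([local_feat, hidden], dim=1)
        new_local = local_feat + self.local_refine(fused)
        new_hidden = hidden + self.global_refine(fused)
        return new_local, new_hidden
```

In this sketch the hidden duct would be carried forward to the next timestamp while the refined local duct feeds the upsampling head, which is one plausible way a unidirectional model can compensate for the frames a bi-directional model would see from the future.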