Abstract
This paper addresses fast semantic segmentation on video.
Video segmentation often calls for real-time, or even faster than real-time, processing.
A common recipe for reducing the computation spent on feature extraction is to propagate the features of a few selected keyframes to the remaining frames.
However, recent advances in fast image segmentation make such feature-propagation schemes less attractive.
To leverage fast image segmentation for furthering video segmentation, we introduce a guided spatially-varying convolution that fuses segmentations derived from the previous and current frames, mitigating propagation error and enabling lightweight feature extraction on non-keyframes.
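As a rough illustration of the fusion step, the sketch below applies per-pixel (spatially varying) 3x3 kernels to combine two class-score maps, one warped from the previous frame and one from the current frame. The kernel-prediction network, the warping step, and all array names are assumptions for illustration; here the per-pixel kernels are simply supplied as an input, normalized so each output pixel is a convex combination of its neighbors in both maps.

```python
import numpy as np

def spatially_varying_fuse(seg_prev, seg_cur, kernels, K=3):
    """Fuse two segmentation score maps with per-pixel kernels.

    seg_prev, seg_cur: (H, W, C) per-class scores from the (warped)
        previous frame and the current frame.
    kernels: (H, W, 2, K, K) per-pixel weights, assumed to be predicted
        by a small guidance network (not shown) and normalized to sum
        to 1 over the 2*K*K entries at each pixel.
    Returns the fused (H, W, C) score map.
    """
    H, W, C = seg_cur.shape
    pad = K // 2
    maps = np.stack([seg_prev, seg_cur])                      # (2, H, W, C)
    padded = np.pad(maps, ((0, 0), (pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(seg_cur)
    for m in range(2):            # previous-frame map, current-frame map
        for dy in range(K):       # kernel rows
            for dx in range(K):   # kernel columns
                w = kernels[:, :, m, dy, dx][..., None]       # (H, W, 1)
                out += w * padded[m, dy:dy + H, dx:dx + W, :]
    return out
```

Because the kernels vary per pixel, the fusion can lean on the warped previous segmentation where propagation is reliable and fall back to the current frame's lightweight prediction where it is not.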