SemanticNVS: Improving Semantic Scene Understanding in Generative Novel View Synthesis

Max Planck Institute for Informatics, Saarland Informatics Campus, Germany

TL;DR: SemanticNVS improves long-range novel view synthesis by integrating pre-trained semantic features: it conditions generation on warped semantics and applies an alternating scheme of understanding and generation at each denoising step, achieving 4.69%–15.26% FID gains over the prior state of the art.

Abstract

We present SemanticNVS, a camera-conditioned multi-view diffusion model for novel view synthesis (NVS), which improves generation quality and consistency by integrating pre-trained semantic feature extractors. Existing NVS methods perform well for views near the input view; however, under long-range camera motion they tend to generate semantically implausible and distorted images. We speculate that this degradation arises because current models fail to fully understand their conditioning or the intermediate generated scene content. We therefore propose to integrate pre-trained semantic feature extractors, incorporating stronger scene semantics as conditioning to achieve high-quality generation even at distant viewpoints. We investigate two strategies: (1) warped semantic features and (2) an alternating scheme of understanding and generation at each denoising step. Experimental results on multiple datasets demonstrate clear qualitative and quantitative (4.69%–15.26% in FID) improvements over state-of-the-art alternatives.

How it works

SemanticNVS integrates semantic DINO features into the multi-view diffusion setup in two ways. First, it warps features from the given input view and uses them as additional conditioning. Second, at each denoising step it extracts DINO features from the intermediate prediction x̂₀ of the previous iteration and uses them to complete the warped DINO features.
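The alternating scheme can be sketched as a modified sampling loop. The following is a minimal toy illustration, not the paper's implementation: `predict_x0`, `extract_features`, and `merge` are hypothetical stand-ins for the camera-conditioned diffusion model, a frozen DINO encoder, and the feature-completion step, respectively, and the warp mask marks target pixels covered by the input view.

```python
import numpy as np

def predict_x0(x_t, t, num_steps):
    # Stand-in denoiser: in the paper this is the camera-conditioned
    # multi-view diffusion model; here a toy shrink toward zero.
    return x_t * (1.0 - t / num_steps)

def extract_features(image):
    # Stand-in for a frozen DINO encoder (here: a pooled channel mean).
    return image.mean(axis=-1, keepdims=True)

def merge(warped_feats, generated_feats, mask):
    # Complete the warped features: keep warped semantics where the
    # input view covers the target (mask == True), and fill the
    # remaining regions from features of the intermediate x_hat_0.
    return np.where(mask, warped_feats, generated_feats)

def alternating_sampling(x_T, warped_feats, mask, num_steps=10):
    """Alternate understanding (feature extraction from x_hat_0) and
    generation (one denoising step) at every iteration."""
    x_t = x_T
    for t in range(num_steps, 0, -1):
        x0_hat = predict_x0(x_t, t, num_steps)   # understanding step
        feats = extract_features(x0_hat)         # semantics of x_hat_0
        cond = merge(warped_feats, feats, mask)  # completed conditioning
        # Toy reverse-diffusion update conditioned on the semantics.
        x_t = x0_hat + 0.1 * cond * (t - 1) / num_steps
    return x_t
```

The key design point mirrored here is that semantic features are re-extracted from the current intermediate prediction at every denoising step, so the conditioning tracks the evolving scene content rather than being fixed once at the start.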



Comparison with Baselines on Full Trajectories

In this setting, we evaluate on the full trajectories (more than 250 frames).

Comparison with Baselines on Subsampled Trajectories

In this setting, we uniformly subsample the full trajectories to obtain 20-frame trajectories for a fair comparison.

BibTeX

@article{Chen2026SemanticNVS,
  author  = {Chen, Xinya and Wewer, Christopher and Xie, Jiahao and Hu, Xinting and Lenssen, Jan Eric},
  title   = {SemanticNVS: Improving Semantic Scene Understanding in Generative Novel View Synthesis},
  journal = {arXiv preprint arXiv:2602.20079},
  year    = {2026}
}