Novel View Pose Synthesis with Geometry-Aware Regularization for Enhanced 3D Gaussian Splatting

Project Goal

  • Enhance the quality of 3D reconstruction
  • Improve multi-view consistency
  • Incorporate geometry-aware loss terms for accurate surface reconstruction

Project Page

Detailed information about the project can be found on the project page above!


Project Overview

I developed a method to enhance indoor 3D reconstruction with 3D Gaussian Splatting (3DGS): it generates novel view camera poses, refines the images rendered from those poses with DIFIX, and applies geometry-aware loss terms during training. This approach improved geometric accuracy and multi-view consistency and reduced rendering artifacts.
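
The pose generation step can be illustrated with a minimal sketch: interpolating between neighboring training cameras to obtain in-between viewpoints. It assumes camera-to-world poses as 4×4 NumPy matrices; the function name `interpolate_poses` and the SLERP-plus-linear sampling scheme are illustrative assumptions, not the project's exact implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(pose_a, pose_b, num_views=4):
    """Generate intermediate camera-to-world poses between two training views.

    pose_a, pose_b: 4x4 camera-to-world matrices (NumPy arrays).
    Returns num_views 4x4 matrices strictly between the two inputs.
    """
    # Rotations are interpolated with SLERP so they remain valid rotations;
    # camera centers are interpolated linearly.
    key_rots = Rotation.from_matrix(np.stack([pose_a[:3, :3], pose_b[:3, :3]]))
    slerp = Slerp([0.0, 1.0], key_rots)
    ts = np.linspace(0.0, 1.0, num_views + 2)[1:-1]   # interior samples only
    rot_mats = slerp(ts).as_matrix()                   # (num_views, 3, 3)

    poses = []
    for t, rot in zip(ts, rot_mats):
        pose = np.eye(4)
        pose[:3, :3] = rot
        pose[:3, 3] = (1.0 - t) * pose_a[:3, 3] + t * pose_b[:3, 3]
        poses.append(pose)
    return poses
```

The renders produced from such generated poses are then passed through DIFIX to remove artifacts, as described in the contributions below.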


Contributions

  1. Novel view camera pose generation
    • Expanded spatial coverage and ensured consistency between viewpoints.
    • Removed artifacts in scenes rendered from novel view camera poses using DIFIX.
  2. Introduction of additional loss terms
    • Added a perceptual LPIPS loss, applied only to novel views, to preserve structural detail in addition to per-pixel accuracy.
    • Applied a normal consistency loss and a depth smoothness loss to all views to improve geometry reconstruction quality (see the combined-loss sketch after this list).
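
A minimal sketch of how these loss terms could be combined in a training step, assuming PyTorch and the `lpips` package; the loss weights, tensor layouts, and helper names are illustrative assumptions rather than the values used in the actual experiments.

```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

# Perceptual metric used on novel views only; move to the training device in practice.
lpips_fn = lpips.LPIPS(net="vgg")

def depth_smoothness_loss(depth, image):
    """Edge-aware depth smoothness: penalize depth gradients where the image is flat.

    depth: (1, 1, H, W), image: (1, 3, H, W), both in [0, 1].
    """
    d_dx = torch.abs(depth[..., :, 1:] - depth[..., :, :-1])
    d_dy = torch.abs(depth[..., 1:, :] - depth[..., :-1, :])
    i_dx = torch.mean(torch.abs(image[..., :, 1:] - image[..., :, :-1]), dim=1, keepdim=True)
    i_dy = torch.mean(torch.abs(image[..., 1:, :] - image[..., :-1, :]), dim=1, keepdim=True)
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()

def normal_consistency_loss(rendered_normals, depth_normals):
    """1 - cosine similarity between rendered normals and normals derived from depth.

    Both tensors: (1, 3, H, W), unit-length along the channel dimension.
    """
    return (1.0 - F.cosine_similarity(rendered_normals, depth_normals, dim=1)).mean()

def total_loss(render, gt, rendered_normals, depth_normals, depth, is_novel_view,
               w_lpips=0.2, w_normal=0.05, w_smooth=0.1):
    # Base photometric term on all views (the weights here are placeholders).
    loss = F.l1_loss(render, gt)
    # Perceptual LPIPS term on novel views only; inputs scaled to [-1, 1].
    if is_novel_view:
        loss = loss + w_lpips * lpips_fn(render * 2 - 1, gt * 2 - 1).mean()
    # Geometry-aware terms on all views.
    loss = loss + w_normal * normal_consistency_loss(rendered_normals, depth_normals)
    loss = loss + w_smooth * depth_smoothness_loss(depth, gt)
    return loss
```

Gating the LPIPS term on `is_novel_view` mirrors the design above, while the two geometry terms are applied to every view; the edge-aware smoothness form is one common choice and the project's exact formulation may differ.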

Results

Method           Initial points   PSNR↑    SSIM↑   Training time   Frames
3DGS             100,000          20.423   0.856   2h 13m          168
2DGS             100,000          19.219   0.828   2h 1m           168
2DGS_novel       100,000          20.375   0.842   1h 59m          208
Ours_novel       100,000          21.605   0.861   2h 6m           208
Ours_novel_loss  100,000          21.675   0.862   3h 55m          208

  • Compared to 3DGS, our method achieved a PSNR improvement from 20.423 to 21.675 and an SSIM increase from 0.856 to 0.862.
  • Applying our method to 2DGS also improved its scores, raising PSNR from 19.219 to 20.375 and SSIM from 0.828 to 0.842, demonstrating its generalizability (see the PSNR sketch below).
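
For reference, the PSNR values reported above follow the standard per-frame definition; a minimal sketch, assuming rendered and ground-truth frames as float tensors in [0, 1] (this is the generic metric, not necessarily the exact evaluation script used here):

```python
import torch

def psnr(render, gt):
    """PSNR between a rendered frame and its ground truth, both float tensors in [0, 1]."""
    mse = torch.mean((render - gt) ** 2)
    return 10.0 * torch.log10(1.0 / mse)
```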


🧑‍💻 My Role: Conceived the research idea, designed the methodology, and carried out the entire implementation — including dataset preparation, novel view generation, loss function integration, and experimental evaluation — with advisory input from a doctoral researcher.



GitHub 3dgs-quality-enhancement