Neural Radiance Fields for Fisheye Driving Scenes Using Edge-Aware Integrated Depth Supervision

Research output: Contribution to journal › Journal article › peer-review

Abstract

Neural radiance fields (NeRF) have become an effective method for encoding scenes into neural representations, allowing the synthesis of photorealistic images from unseen viewpoints given a set of input images. However, the applicability of traditional NeRF is significantly limited by its assumption that images are captured of object-centric scenes with a pinhole camera. Expanding these boundaries, we focus on driving scenarios using a fisheye camera, which offers the advantage of capturing visual information from a wide field of view. To address the challenges posed by the unbounded and distorted characteristics of fisheye images, we propose an edge-aware integration loss function. This approach leverages sparse LiDAR projections and dense depth maps estimated by a learning-based depth model. The proposed algorithm assigns larger weights to neighboring points whose depth values are similar to the sensor data. Experiments were conducted on KITTI-360 and JBNU-Depth360, which are public, real-world datasets of driving scenarios captured with fisheye cameras. Experimental results demonstrate that the proposed method is effective in synthesizing novel-view images, outperforming existing approaches.
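The abstract's weighting idea (trust dense predicted depth more where it agrees with nearby sparse LiDAR measurements) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the 3x3 neighborhood, and the Gaussian similarity kernel with bandwidth `sigma` are all assumptions made for clarity.

```python
import numpy as np

def edge_aware_integrated_depth(sparse_lidar, dense_pred, sigma=0.5):
    """Fuse a sparse LiDAR depth map with a dense learned depth map.

    For each pixel holding a LiDAR measurement, neighboring pixels of the
    dense prediction whose depths are close to that measurement receive
    larger supervision weights (Gaussian similarity kernel, an assumption).
    Returns per-pixel depth targets and confidence weights.
    """
    h, w = dense_pred.shape
    target = dense_pred.copy()
    weight = np.zeros((h, w))
    ys, xs = np.nonzero(sparse_lidar > 0)          # pixels with LiDAR hits
    for y, x in zip(ys, xs):
        d_lidar = sparse_lidar[y, x]
        y0, y1 = max(0, y - 1), min(h, y + 2)      # 3x3 neighborhood
        x0, x1 = max(0, x - 1), min(w, x + 2)
        patch = dense_pred[y0:y1, x0:x1]
        # larger weight where the dense prediction agrees with the sensor
        sim = np.exp(-((patch - d_lidar) ** 2) / (2 * sigma ** 2))
        weight[y0:y1, x0:x1] = np.maximum(weight[y0:y1, x0:x1], sim)
        target[y, x] = d_lidar                     # trust the sensor exactly
    return target, weight
```

A depth-supervision loss would then compare NeRF-rendered depths against `target`, scaled by `weight`, so that supervision is strongest near edges where the dense and sparse sources agree.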

Original language: English
Article number: 6790
Journal: Sensors
Volume: 24
Issue number: 21
DOIs
State: Published - Nov 2024

Keywords

  • depth supervision
  • fisheye camera
  • neural radiance field
  • view synthesis

Quacquarelli Symonds(QS) Subject Topics

  • Computer Science & Information Systems
  • Engineering - Electrical & Electronic
  • Engineering - Petroleum
  • Chemistry
  • Physics & Astronomy
  • Biological Sciences
