Leveraging Spatial Attention and Edge Context for Optimized Feature Selection in Visual Localization

  • Nanda Febri Istighfarin
  • Hyung Gi Jo*
  • *Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Visual localization determines an agent's precise position and orientation within an environment using visual data. It has become a critical task in robotics, particularly for applications such as autonomous navigation, because an agent's pose can be estimated with cost-effective sensors such as RGB cameras. Recent visual localization methods employ scene coordinate regression to determine the agent's pose. However, these methods attempt to regress 2D-3D correspondences across the entire image, even though not all regions provide useful information. To address this issue, we introduce an attention network that selectively targets informative regions of the image. Using this network, we identify the highest-scoring features to improve the feature selection process and combine the result with edge detection. This integration ensures that the features chosen for the training buffer lie within robust regions, thereby improving 2D-3D correspondence and overall localization performance. Our approach was evaluated on an outdoor benchmark dataset, demonstrating superior results compared to previous methods.
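
The selection step the abstract describes, keeping only high-attention features that also fall in edge-rich regions before they enter the training buffer, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a plain Sobel gradient mask stands in for the paper's edge detector, a random array stands in for the learned attention map, and the threshold and top-k values are arbitrary.

    import numpy as np

    def sobel_edge_mask(gray, thresh=0.2):
        # Gradient-magnitude edge mask; a simple stand-in for the
        # paper's edge detector.
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
        ky = kx.T
        pad = np.pad(gray.astype(np.float32), 1, mode="edge")
        h, w = gray.shape
        gx = np.empty((h, w), dtype=np.float32)
        gy = np.empty((h, w), dtype=np.float32)
        for i in range(h):
            for j in range(w):
                patch = pad[i:i + 3, j:j + 3]
                gx[i, j] = (patch * kx).sum()
                gy[i, j] = (patch * ky).sum()
        mag = np.hypot(gx, gy)
        return mag > thresh * (mag.max() + 1e-8)

    def select_features(attention, edge_mask, k=500):
        # Suppress attention scores outside edge regions, then keep the
        # k highest-scoring pixel locations as training-buffer candidates.
        scores = np.where(edge_mask, attention, 0.0)
        top = np.argsort(scores.ravel())[::-1][:k]
        ys, xs = np.unravel_index(top, scores.shape)
        return np.stack([ys, xs], axis=1)  # (k, 2) array of (row, col)

    # Toy usage: random data stands in for a real image and a learned
    # attention map produced by the paper's network.
    rng = np.random.default_rng(0)
    gray = rng.random((120, 160))
    attention = rng.random((120, 160))
    features = select_features(attention, sobel_edge_mask(gray), k=200)
    print(features.shape)  # -> (200, 2)

The intersection of the two cues is what matters here: attention alone can score textureless or ambiguous regions highly, while the edge mask restricts selection to geometrically stable structure, which is the rationale the abstract gives for improved 2D-3D correspondence.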

Original language: English
Pages (from-to): 418-428
Number of pages: 11
Journal: International Journal of Control, Automation and Systems
Volume: 23
Issue number: 2
DOIs
State: Published - 2025.02

Keywords

  • Attention network
  • computer vision
  • edge detector
  • scene coordinate regression
  • visual localization

Quacquarelli Symonds (QS) Subject Topics

  • Computer Science & Information Systems
  • Data Science
