PerLA: Perceptive 3D Language Assistant

Guofeng Mei\(^{1}\), Wei Lin\(^2\), Luigi Riz\(^1\), Yujiao Wu\(^3\), Fabio Poiesi\(^1\), Yiming Wang\(^1\)
\(^{1}\) Fondazione Bruno Kessler, Italy; \(^{2}\) JKU Linz, Austria; \(^{3}\) CSIRO, Australia

Abstract

Enabling Large Language Models (LLMs) to understand the 3D physical world is an emerging yet challenging research direction. Current strategies for processing point clouds typically downsample the scene or divide it into smaller parts for separate analysis. However, both approaches risk losing key local details or global contextual information.

This paper introduces PerLA, a 3D language assistant designed to be perceptive to both details and context, making visual representations more informative for the LLM.

PerLA captures high-resolution (local) details in parallel from different areas of the point cloud and integrates them with (global) context obtained from a lower-resolution version of the whole point cloud. We present a novel algorithm that preserves point-cloud locality by ordering points along a Hilbert curve, and that aggregates local-to-global information via cross-attention and a graph neural network.

PerLA outperforms state-of-the-art 3D language assistants, with gains of up to +1.34 CIDEr on ScanQA for question answering, and +4.22 on ScanRefer and +3.88 on Nr3D for dense captioning.
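The local-to-global aggregation described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it only shows (i) how ranking points by their Hilbert-curve index yields spatially local partitions and (ii) how cross-attention can let high-resolution local tokens attend to low-resolution global context. It assumes PyTorch and the hilbertcurve package (>= 2.0); all names, tensor shapes, and the omission of the paper's graph-neural-network step are illustrative choices.

import torch
import torch.nn as nn
from hilbertcurve.hilbertcurve import HilbertCurve  # pip install hilbertcurve>=2.0


def hilbert_partition(xyz: torch.Tensor, num_parts: int, order: int = 7):
    """Split an (N, 3) point cloud into num_parts spatially local chunks.

    Points are quantized to a 2^order grid, ranked by their Hilbert-curve
    index, and split into contiguous groups; contiguity along the curve
    preserves spatial locality.
    """
    mins, maxs = xyz.min(0).values, xyz.max(0).values
    grid = ((xyz - mins) / (maxs - mins + 1e-9) * (2 ** order - 1)).long()
    curve = HilbertCurve(order, 3)  # `order` iterations, 3 dimensions
    dist = torch.tensor(curve.distances_from_points(grid.tolist()))
    rank = dist.argsort()                # point indices sorted along the curve
    return torch.chunk(rank, num_parts)  # contiguous chunks => local areas


class LocalGlobalFusion(nn.Module):
    """Local (high-res) tokens query global (low-res) scene context."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_tokens, global_tokens):
        # local_tokens: (B, L, D) features of one high-res partition
        # global_tokens: (B, G, D) features of the downsampled whole scene
        fused, _ = self.attn(local_tokens, global_tokens, global_tokens)
        return self.norm(local_tokens + fused)  # residual connection + norm


# Toy usage with random data standing in for real encoder features.
xyz = torch.rand(8192, 3)
parts = hilbert_partition(xyz, num_parts=4)
fusion = LocalGlobalFusion(dim=256)
global_feats = torch.rand(1, 128, 256)  # stand-in low-res scene tokens
for idx in parts:
    local_feats = torch.rand(1, 64, 256)  # stand-in tokens for this partition
    out = fusion(local_feats, global_feats)  # (1, 64, 256)

In the paper, the fused per-partition features are further aggregated with a graph neural network before being passed to the LLM; that step is omitted here for brevity.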


3D Question Answering on ScanQA [1]

PerLA successfully identifies and reasons about objects and their relationships within the scene, outperforming LL3DA [2].

3D Dense Captioning on ScanRefer [3]

PerLA demonstrates robust descriptive capabilities on ScanRefer, surpassing LL3DA [2] by effectively capturing object attributes such as “the rectangular brown desk” and “the round table in the center of the room.”

3D Dense Captioning on Nr3D [5]

PerLA showcases fine-grained spatial reasoning on Nr3D by identifying intricate object relationships within complex scenes, outperforming LL3DA [2].

BibTeX

            
@article{mei2024perla,
  title   = {PerLA: Perceptive 3D Language Assistant},
  author  = {Guofeng Mei and Wei Lin and Luigi Riz and Yujiao Wu and Fabio Poiesi and Yiming Wang},
  journal = {arXiv preprint arXiv:2411.19774},
  year    = {2024}
}

References

  1. ScanQA: 3D Question Answering for Spatial Scene Understanding
  2. LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning
  3. ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language
  4. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
  5. ReferIt3D: Neural Listeners for Fine-Grained 3D Object Identification in Real-World Scenes