PerLA: Perceptive 3D Language Assistant

Guofeng Mei\(^{1}\), Wei Lin\(^2\), Luigi Riz\(^1\), Yujiao Wu\(^3\), Fabio Poiesi\(^1\), Yiming Wang\(^1\)
\(^{1}\) Fondazione Bruno Kessler, Italy; \(^{2}\) JKU Linz, Austria; \(^{3}\) CSIRO, Australia

Abstract

Enabling Large Language Models (LLMs) to understand the 3D physical world is an emerging yet challenging research direction. Current strategies for processing point clouds typically downsample the scene or divide it into smaller parts for separate analysis. However, both approaches risk losing key local details or global contextual information.

This paper introduces PerLA, a 3D language assistant designed to be perceptive to both details and context, making visual representations more informative for the LLM.

PerLA captures high-resolution (local) details in parallel from different point cloud areas and integrates them with (global) context obtained from a lower-resolution whole point cloud. We present a novel algorithm that preserves point cloud locality through the Hilbert curve and effectively aggregates local-to-global information via cross-attention and a graph neural network.

PerLA outperforms state-of-the-art 3D language assistants, with gains of up to +1.34 CIDEr on ScanQA for question answering, +4.22 on ScanRefer and +3.88 on Nr3D for dense captioning.
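
To make the locality-preserving partitioning concrete, below is a minimal sketch of ordering a point cloud along a space-filling curve and cutting the ordering into spatially coherent chunks. For brevity it uses Morton (Z-order) keys as a stand-in for the Hilbert indices used by PerLA; the function names, bit depth, and number of partitions are illustrative assumptions, not the released implementation.

import numpy as np

def _part1by2(v: np.ndarray) -> np.ndarray:
    # Spread the lowest 10 bits of each integer so they occupy every third bit.
    v = v.astype(np.uint64) & 0x3FF
    v = (v | (v << 16)) & 0xFF0000FF
    v = (v | (v << 8)) & 0x0300F00F
    v = (v | (v << 4)) & 0x030C30C3
    v = (v | (v << 2)) & 0x09249249
    return v

def space_filling_key(xyz: np.ndarray, bits: int = 10) -> np.ndarray:
    # Quantize points to a 2^bits grid and interleave the axis bits into one key.
    # Morton (Z-order) keys stand in here for the Hilbert indices used in the
    # paper; both map nearby 3D points to nearby 1D keys.
    mins, maxs = xyz.min(0), xyz.max(0)
    grid = ((xyz - mins) / (maxs - mins + 1e-9) * (2**bits - 1)).astype(np.uint64)
    return _part1by2(grid[:, 0]) | (_part1by2(grid[:, 1]) << 1) | (_part1by2(grid[:, 2]) << 2)

def locality_preserving_partitions(xyz: np.ndarray, num_parts: int = 4) -> list:
    # Sort points along the curve, then cut the ordering into contiguous,
    # spatially coherent chunks (one chunk per parallel local encoder).
    order = np.argsort(space_filling_key(xyz))
    return np.array_split(order, num_parts)

# Toy usage: partition a random 50k-point scene into 4 local areas.
points = np.random.rand(50_000, 3).astype(np.float32)
parts = locality_preserving_partitions(points, num_parts=4)
print([p.shape[0] for p in parts])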


Method

Architecture of PerLA

Our method takes as inputs: (i) a text prompt in natural language, (ii) the 3D scene represented as a point cloud, and (iii) a visual prompt provided as either a user click or a bounding box.

The text prompt is processed by a text prompt encoder, which generates text representations. These representations are fed to both the Large Language Model (LLM) and the multimodal adapter (MMA). The text encoder is a transformer architecture based on BLIP-2 [1].
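
As a rough illustration of this stage, the snippet below encodes a prompt with a frozen BERT-style transformer from Hugging Face; the specific checkpoint and pooling are placeholder assumptions, since PerLA's text encoder follows BLIP-2 [1] and may differ in both.

import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder checkpoint: a frozen BERT encoder stands in for the
# BLIP-2-style text encoder used by PerLA.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text_encoder = AutoModel.from_pretrained("bert-base-uncased").eval()

prompt = "Where is the round table in the room?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    text_repr = text_encoder(**inputs).last_hidden_state  # (1, seq_len, 768)

# These token-level representations are routed both to the LLM and to the
# multimodal adapter (MMA).
print(text_repr.shape)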

The 3D scene, represented as a point cloud, is processed by our perceptive scene encoder. This encoder generates scene representations that are utilized by the MMA and subsequent processing components. Details of the perceptive scene encoder will be provided in the following sections.
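
The sketch below illustrates the local-to-global idea in isolation: each high-resolution local partition is encoded in parallel, and its tokens cross-attend to a low-resolution encoding of the whole scene. The toy backbone, dimensions, strides, and module names are assumptions for illustration, and the graph-neural-network refinement used in the paper is omitted.

import torch
import torch.nn as nn

class PointMLP(nn.Module):
    # Stand-in for a point-cloud backbone: a per-point MLP, with simple strided
    # subsampling in place of a real downsampling scheme.
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pts, stride=1):
        # pts: (N, 6) xyz + rgb -> (N // stride, dim) tokens
        return self.mlp(pts[::stride])

class LocalGlobalAggregator(nn.Module):
    # Local partition tokens (queries) attend to low-resolution global tokens
    # (keys/values); a residual connection keeps the high-resolution detail.
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_tokens, global_tokens):
        fused, _ = self.attn(local_tokens, global_tokens, global_tokens)
        return self.norm(local_tokens + fused)

# Toy forward pass: one coarse global encoding, four finer local encodings.
backbone, agg = PointMLP(), LocalGlobalAggregator()
scene = torch.rand(8_000, 6)                                # xyz + rgb
global_tokens = backbone(scene, stride=16).unsqueeze(0)     # whole scene, coarse
partitions = torch.chunk(torch.arange(scene.shape[0]), 4)   # placeholder partitions
scene_tokens = torch.cat(
    [agg(backbone(scene[idx], stride=4).unsqueeze(0), global_tokens) for idx in partitions],
    dim=1,
)
print(scene_tokens.shape)  # (1, total_local_tokens, 256)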

The visual prompt, whether a user click or a bounding box, is handled by the visual prompt encoder. By integrating representations from the perceptive scene encoder, the visual prompt encoder outputs refined scene representations, which are subsequently processed by the MMA. For more details on visual prompts, please refer to the supplementary material.
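
As a hedged sketch of how a click or box might be fused with the scene representations, the module below embeds the prompt as a single token and lets the scene tokens attend to it; the embedding choices, the fusion via cross-attention, and all names are illustrative assumptions rather than PerLA's actual visual prompt encoder.

import torch
import torch.nn as nn

class VisualPromptEncoder(nn.Module):
    # Illustrative sketch: embed a click (xyz) or a box (center + size) as one
    # prompt token and refine the scene tokens with cross-attention to it.
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.click_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.box_mlp = nn.Sequential(nn.Linear(6, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, scene_tokens, click=None, box=None):
        # scene_tokens: (1, N, dim); click: (1, 3); box: (1, 6) as center + size
        prompt = self.click_mlp(click) if click is not None else self.box_mlp(box)
        prompt = prompt.unsqueeze(1)                     # (1, 1, dim)
        refined, _ = self.attn(scene_tokens, prompt, prompt)
        return self.norm(scene_tokens + refined)

# Toy usage: refine 512 scene tokens around a clicked 3D point.
vpe = VisualPromptEncoder()
refined_tokens = vpe(torch.rand(1, 512, 256), click=torch.rand(1, 3))
print(refined_tokens.shape)  # (1, 512, 256)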

The MMA takes the multimodal representations as input and outputs tokens for the LLM. The MMA is implemented as a Q-Former. Its outputs are projected into the LLM's representation space through a linear projector. Finally, these projected representations are processed by the LLM to generate the output response.
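
A minimal sketch of a Q-Former-style adapter follows: a fixed set of learned query tokens cross-attends to the multimodal representations, and a linear projector maps the result to the LLM's embedding width. The query count, depth, and dimensions are placeholder assumptions, not PerLA's configuration.

import torch
import torch.nn as nn

class QFormerAdapter(nn.Module):
    # Learned query tokens attend to the multimodal representations; the
    # projector maps the resulting tokens into the LLM's embedding space.
    def __init__(self, dim=256, num_queries=32, llm_dim=4096, heads=8, layers=2):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.blocks = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(layers)]
        )
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(layers)])
        self.projector = nn.Linear(dim, llm_dim)  # into the LLM token space

    def forward(self, multimodal_tokens):
        # multimodal_tokens: (1, M, dim), e.g. refined scene tokens + text tokens
        q = self.queries
        for attn, norm in zip(self.blocks, self.norms):
            out, _ = attn(q, multimodal_tokens, multimodal_tokens)
            q = norm(q + out)
        return self.projector(q)                  # (1, num_queries, llm_dim)

# Toy usage: 32 adapter tokens ready to be prepended to the LLM input.
adapter = QFormerAdapter()
llm_tokens = adapter(torch.rand(1, 512 + 16, 256))
print(llm_tokens.shape)  # (1, 32, 4096)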


3D Question Answering on ScanQA [2]

Figure: Qualitative 3D question answering results on ScanQA.
PerLA successfully identifies and reasons about objects and their relationships within the scene, outperforming LL3DA [3].

3D Dense Captioning on ScanRefer [4]

Figure: Qualitative 3D dense captioning results on ScanRefer.
PerLA demonstrates robust descriptive capabilities on ScanRefer, surpassing LL3DA [3] by effectively capturing object attributes such as “the rectangular brown desk” and “the round table in the center of the room.”

3D Dense Captioning on Nr3D [5]

Figure: Qualitative 3D dense captioning results on Nr3D.
PerLA showcases fine-grained spatial reasoning on Nr3D by identifying intricate object relationships within complex scenes, outperforming LL3DA [3].

BibTeX

            
@article{mei2024perla,
  title={PerLA: Perceptive 3D language assistant},
  author={Mei, Guofeng and Lin, Wei and Riz, Luigi and Wu, Yujiao and Poiesi, Fabio and Wang, Yiming},
  journal={arXiv preprint arXiv:2411.19774},
  year={2024}
}

References

  1. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
  2. ScanQA: 3D Question Answering for Spatial Scene Understanding
  3. LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning
  4. ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language
  5. ReferIt3D: Neural Listeners for Fine-Grained 3D Object Identification in Real-World Scenes