Enabling Large Language Models (LLMs) to understand the 3D physical world is an emerging yet challenging research direction. Current strategies for processing point clouds typically downsample the scene or divide it into smaller parts for separate analysis. However, both approaches risk losing key local details or global contextual information.
This paper introduces PerLA, a 3D language assistant designed to be perceptive to both details and context, making visual representations more informative for the LLM.
PerLA captures high-resolution (local) details in parallel from different point cloud areas and integrates them with (global) context obtained from a lower-resolution whole point cloud. We present a novel algorithm that preserves point cloud locality through the Hilbert curve and effectively aggregates local-to-global information via cross-attention and a graph neural network.
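Below is a minimal sketch (not the authors' implementation) of locality-preserving partitioning with a Hilbert curve: points are quantized to an integer grid, ordered by their 1D Hilbert index, and split into contiguous chunks so that each chunk covers a spatially compact region. It assumes NumPy and the `hilbertcurve` PyPI package; names such as `partition_by_hilbert` are illustrative and not taken from the PerLA codebase.

```python
import numpy as np
from hilbertcurve.hilbertcurve import HilbertCurve


def partition_by_hilbert(points: np.ndarray, num_parts: int, order: int = 8):
    """Split an (N, 3) point cloud into `num_parts` spatially compact chunks.

    `order` is the Hilbert curve order: each axis is quantized into 2**order
    cells before computing the 1D Hilbert index of each point.
    """
    # Normalize coordinates to the unit cube, then quantize to integer cells.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = (2 ** order - 1) / np.maximum(maxs - mins, 1e-9)
    grid = ((points - mins) * scale).astype(int)

    # Map each 3D cell to its 1D position along the Hilbert curve.
    curve = HilbertCurve(p=order, n=3)
    dists = np.array(curve.distances_from_points(grid.tolist()))

    # Sorting by Hilbert distance keeps nearby points adjacent in the ordering,
    # so contiguous chunks of the sorted array form spatially local partitions.
    ordering = np.argsort(dists)
    return np.array_split(ordering, num_parts)  # point indices of each local part


if __name__ == "__main__":
    pts = np.random.rand(40_000, 3).astype(np.float32)  # stand-in scene
    parts = partition_by_hilbert(pts, num_parts=4)
    print([len(p) for p in parts])
```

Each chunk can then be processed at high resolution in parallel, while a downsampled copy of the whole point cloud provides the global context described above.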
PerLA outperforms state-of-the-art 3D language assistants, with gains of up to +1.34 CIDEr on ScanQA for question answering, and +4.22 on ScanRefer and +3.88 on Nr3D for dense captioning.
Our method takes three inputs: (i) a text prompt in natural language, (ii) the 3D scene represented as a point cloud, and (iii) a visual prompt provided as either a user click or a bounding box.
The text prompt is processed by a text prompt encoder, which generates text representations. These representations are fed to both the Large Language Model (LLM) and the multimodal adapter (MMA). The text prompt encoder is a transformer based on BLIP-2.
The 3D scene, represented as a point cloud, is processed by our perceptive scene encoder. This encoder generates scene representations that are utilized by the MMA and subsequent processing components. Details of the perceptive scene encoder will be provided in the following sections.
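As a rough illustration of the local-to-global aggregation performed inside the perceptive scene encoder, the following PyTorch sketch lets global (low-resolution) scene tokens cross-attend to high-resolution local tokens. It is not the authors' implementation; the class name `LocalGlobalFusion`, the dimensions, and the residual/FFN layout are assumptions for the sketch, and the graph-neural-network stage mentioned above is omitted.

```python
import torch
import torch.nn as nn


class LocalGlobalFusion(nn.Module):
    """Global (low-resolution) scene tokens attend to high-resolution
    local tokens gathered from the point cloud partitions."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, global_tokens: torch.Tensor, local_tokens: torch.Tensor) -> torch.Tensor:
        # global_tokens: (B, Ng, C) from the downsampled whole scene
        # local_tokens:  (B, Nl, C) concatenated features from all local parts
        attended, _ = self.cross_attn(query=global_tokens, key=local_tokens, value=local_tokens)
        fused = self.norm(global_tokens + attended)   # inject local detail residually
        return fused + self.ffn(fused)                # position-wise refinement


if __name__ == "__main__":
    fusion = LocalGlobalFusion(dim=256)
    g = torch.randn(1, 1024, 256)   # global scene tokens
    l = torch.randn(1, 4096, 256)   # high-resolution local tokens
    print(fusion(g, l).shape)       # torch.Size([1, 1024, 256])
```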
The visual prompt, whether a user click or a bounding box, is handled by the visual prompt encoder. By integrating representations from the perceptive scene encoder, the visual prompt encoder outputs refined scene representations, which are subsequently processed by the MMA. For more details on visual prompts, please refer to the supplementary material.
The MMA, implemented as a Q-Former, takes the multimodal representations as input and outputs tokens for the LLM. Its outputs are projected into the LLM's representation space through a linear projector. Finally, these projected representations are processed by the LLM to generate the output response.
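The sketch below illustrates this data flow with a Q-Former-style adapter: a small set of learnable queries cross-attends to the multimodal representations, and a linear projector maps the resulting tokens to the LLM hidden size. It assumes PyTorch; the class name `QFormerAdapter` and all hyperparameters (32 queries, 4096-dimensional LLM space) are placeholders, not values from the PerLA paper.

```python
import torch
import torch.nn as nn


class QFormerAdapter(nn.Module):
    """Learnable queries cross-attend to scene/text features and are then
    projected to the LLM hidden size to act as soft input tokens."""

    def __init__(self, feat_dim: int = 256, num_queries: int = 32,
                 num_layers: int = 2, llm_dim: int = 4096):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_queries, feat_dim) * 0.02)
        layer = nn.TransformerDecoderLayer(
            d_model=feat_dim, nhead=8, dim_feedforward=4 * feat_dim, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.proj = nn.Linear(feat_dim, llm_dim)  # linear projector into the LLM space

    def forward(self, multimodal_feats: torch.Tensor) -> torch.Tensor:
        # multimodal_feats: (B, N, feat_dim) fused scene + text representations
        q = self.queries.expand(multimodal_feats.size(0), -1, -1)
        tokens = self.decoder(tgt=q, memory=multimodal_feats)  # queries attend to features
        return self.proj(tokens)  # (B, num_queries, llm_dim) tokens for the LLM


if __name__ == "__main__":
    adapter = QFormerAdapter()
    feats = torch.randn(2, 1200, 256)   # fused scene + text features
    print(adapter(feats).shape)         # torch.Size([2, 32, 4096])
```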
@article{mei2024perla,
  title={PerLA: Perceptive 3D language assistant},
  author={Mei, Guofeng and Lin, Wei and Riz, Luigi and Wu, Yujiao and Poiesi, Fabio and Wang, Yiming},
  journal={arXiv preprint arXiv:2411.19774},
  year={2024}
}