BLIP-2 Q-Former Projects

Technology

BLIP-2 Q-Former

A lightweight query transformer that bridges the modality gap between frozen image encoders and large language models.

BLIP-2 uses the Q-Former to extract a fixed-length set of 32 query embeddings from a frozen image encoder (e.g. the EVA-CLIP ViT-g/14), which are projected and fed into a frozen LLM such as OPT or Flan-T5. Pre-training proceeds in two stages: a representation-learning stage that forces the Q-Former to pull out the visual features most relevant to text (via image-text contrastive, image-text matching, and image-grounded generation objectives), and a generative stage that trains the module to act as an information bottleneck passing useful visual content to the language model. Because both heavy backbones stay frozen, BLIP-2 achieves state-of-the-art zero-shot VQA performance with 54x fewer trainable parameters than Flamingo-80B, making high-tier multimodal reasoning computationally accessible.
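The key mechanical idea is the bottleneck itself: a small, fixed number of learned queries cross-attend to however many patch features the frozen image encoder emits, so the LLM always receives the same-sized input. Below is a minimal numpy sketch of one such cross-attention step, not the actual BLIP-2 implementation; the weight matrices and sizes are illustrative, though the 32 queries and 768-dim hidden size match the paper's Q-Former configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def qformer_bottleneck(image_feats, queries, Wq, Wk, Wv):
    """One cross-attention step: a fixed set of learned queries attends to
    a variable number of frozen image-encoder patch features and returns
    a fixed-length output, regardless of the input length."""
    Q = queries @ Wq                     # (num_queries, d)
    K = image_feats @ Wk                 # (num_patches, d)
    V = image_feats @ Wv                 # (num_patches, d)
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # (num_queries, num_patches)
    return attn @ V                      # (num_queries, d): fixed-length bottleneck

rng = np.random.default_rng(0)
d = 768                                  # Q-Former hidden size in the paper
num_queries = 32                         # BLIP-2 uses 32 learned query embeddings
queries = rng.standard_normal((num_queries, d)) * 0.02
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))

# Different patch counts (e.g. different input resolutions) still yield
# the same (32, 768) output that the LLM consumes.
for num_patches in (257, 677):
    feats = rng.standard_normal((num_patches, d))
    out = qformer_bottleneck(feats, queries, Wq, Wk, Wv)
    print(out.shape)
```

This fixed-size output is what makes the frozen-backbone design work: the LLM's input interface never changes with image resolution, and only the small query/attention parameters need gradients.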

https://arxiv.org/abs/2301.12597
1 project · 1 city


Recent Talks & Demos

