[2502.09992] Large Language Diffusion Models


By Shen Nie and 9 other authors

Abstract: Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). We challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. By optimizing a likelihood bound, it provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming our self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. Moreover, LLaDA addresses the reversal curse, surpassing GPT-4o in a reversal poem completion task. Our findings establish diffusion models as a viable and promising alternative to ARMs, challenging the assumption that key LLM capabilities discussed above are inherently tied to ARMs. Project page and code: this https URL.
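To make the abstract's training recipe concrete, here is a minimal PyTorch sketch of a forward masking step paired with reverse masked-token prediction, as the abstract describes. The `MASK_ID` constant, the uniform sampling of the masking ratio `t`, and the `1/t` reweighting of the cross-entropy are assumptions filled in for illustration (they follow common practice for masked diffusion likelihood bounds), not details taken from this page.

```python
import torch
import torch.nn.functional as F

MASK_ID = 0  # hypothetical id for the mask token; the real vocabulary layout is model-specific


def masked_diffusion_loss(model, x0):
    """One training step: corrupt x0 by random masking (forward process),
    then train the Transformer to recover the masked tokens (reverse process),
    weighting by the masking ratio to estimate a likelihood bound.

    model: any Transformer mapping (batch, seq_len) token ids to
           (batch, seq_len, vocab_size) logits; x0: clean token ids.
    """
    b, n = x0.shape
    # Forward process: sample a masking ratio t ~ U(0, 1) per sequence and
    # mask each token independently with probability t.
    t = torch.rand(b, 1, device=x0.device).clamp_min(1e-3)
    mask = torch.rand(b, n, device=x0.device) < t
    xt = torch.where(mask, torch.full_like(x0, MASK_ID), x0)

    # Reverse process: a vanilla Transformer predicts the original tokens.
    logits = model(xt)                                                  # (b, n, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), x0, reduction="none")  # (b, n)

    # Likelihood-bound surrogate: cross-entropy on masked positions only,
    # reweighted by 1/t so lightly masked samples are not under-counted.
    loss = (ce * mask / t).sum() / mask.sum().clamp_min(1)
    return loss
```

At inference time the reverse process runs on its own: generation starts from a fully masked sequence and iteratively replaces mask tokens with predictions over a decreasing masking schedule.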

Submission history

From: Shen Nie
[v1] Fri, 14 Feb 2025 08:23:51 UTC (1,069 KB)
[v2] Tue, 18 Feb 2025 16:08:59 UTC (1,070 KB)
