Better and Faster Large Language Models via Multi-Token Prediction

https://arxiv.org/abs/2404.19737

Computer Science > Computation and Language

arXiv:2404.19737 (cs)

Abstract: Large language models such as GPT and Llama are trained with a next-token prediction loss. In this work, we suggest that training language models to predict multiple future tokens at once results in higher sample efficiency. More specifically, at each position in the training corpus, we ask the model to predict the following n tokens using n independent output heads, operating on top of a shared model trunk. Considering multi-token prediction as an auxiliary training task, we measure improved downstream capabilities with no overhead in training time for both code and natural language models. The method is increasingly useful for larger model sizes, and keeps its appeal when training for multiple epochs. Gains are especially pronounced on generative benchmarks like coding, where our models consistently outperform strong baselines by several percentage points. Our 13B-parameter models solve 12% more problems on HumanEval and 17% more on MBPP than comparable next-token models. Experiments on small algorithmic tasks demonstrate that multi-token prediction is favorable for the development of induction heads and algorithmic reasoning capabilities. As an additional benefit, models trained with 4-token prediction are up to 3 times faster at inference, even with large batch sizes.
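
To make the setup concrete, below is a minimal PyTorch sketch of the idea as stated in the abstract: a shared transformer trunk feeding n independent output heads, where head k is trained to predict the token k positions ahead. This is an illustration, not the authors' implementation; the toy trunk, all layer sizes, and the equal weighting of the per-head cross-entropy losses are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenPredictor(nn.Module):
    """Illustrative sketch: shared trunk with n independent output heads;
    head k predicts the token k positions ahead of the current one."""

    def __init__(self, vocab_size, d_model=256, attn_heads=4, n_layers=4, n_future=4):
        super().__init__()
        self.n_future = n_future
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, attn_heads, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, n_layers)   # shared model trunk
        # One independent unembedding head per future offset 1..n_future.
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer token ids
        seq_len = tokens.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len).to(tokens.device)
        h = self.trunk(self.embed(tokens), mask=causal)        # (batch, seq_len, d_model)

        loss = h.new_zeros(())
        for k, head in enumerate(self.heads, start=1):
            # Positions within k steps of the sequence end have no target k tokens ahead.
            logits = head(h[:, : seq_len - k])                  # (batch, seq_len - k, vocab)
            targets = tokens[:, k:]                             # (batch, seq_len - k)
            loss = loss + F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                          targets.reshape(-1))
        return loss / self.n_future                             # assumed equal head weighting

A forward pass on a batch of token ids returns the loss averaged over the n heads, e.g. loss = MultiTokenPredictor(vocab_size=32000)(batch); loss.backward(). Note this sketch covers only the training objective; it says nothing about how the additional heads are used at decoding time to obtain the inference speedup reported in the abstract.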

Submission history

From: Fabian Gloeckle
[v1] Tue, 30 Apr 2024 17:33:57 UTC (1,300 KB)

{
"by": "jasondavies",
"descendants": 128,
"id": 40220851,
"kids": [
40221918,
40221237,
40222141,
40221564,
40221130,
40222035,
40221229,
40232608,
40222317,
40221027,
40221815,
40221750,
40223166,
40224991,
40221347,
40223550,
40225691
],
"score": 302,
"time": 1714552098,
"title": "Better and Faster Large Language Models via Multi-Token Prediction",
"type": "story",
"url": "https://arxiv.org/abs/2404.19737"
}