A Careful Examination of Large Language Model Performance on Grade School Arithmetic

https://arxiv.org/abs/2405.00332

Authors: Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao, Pranav Raja, Dylan Slack, Qin Lyu, Sean Hendryx, Russell Kaplan, Michele Lunati, Summer Yue


Abstract: Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning. However, there is growing concern that some of this performance actually reflects dataset contamination, where data closely resembling benchmark questions leaks into the training data, instead of true reasoning ability. To investigate this claim rigorously, we commission Grade School Math 1000 (GSM1k). GSM1k is designed to mirror the style and complexity of the established GSM8k benchmark, the gold standard for measuring elementary mathematical reasoning. We ensure that the two benchmarks are comparable across important metrics such as human solve rates, number of steps in solution, answer magnitude, and more. When evaluating leading open- and closed-source LLMs on GSM1k, we observe accuracy drops of up to 13%, with several families of models (e.g., Phi and Mistral) showing evidence of systematic overfitting across almost all model sizes. At the same time, many models, especially those on the frontier (e.g., Gemini/GPT/Claude), show minimal signs of overfitting. Further analysis suggests a positive relationship (Spearman's r^2=0.32) between a model's probability of generating an example from GSM8k and its performance gap between GSM8k and GSM1k, suggesting that many models may have partially memorized GSM8k.

Submission history

From: Hugh Zhang
[v1] Wed, 1 May 2024 05:52:05 UTC (3,499 KB)
[v2] Thu, 2 May 2024 17:18:51 UTC (3,499 KB)
[v3] Fri, 3 May 2024 17:53:26 UTC (3,684 KB)
