MATH-V

Measuring Multimodal Mathematical Reasoning with
the MATH-Vision Dataset


[2024-05-20] (a) Zero-shot accuracies of four prominent Large Multimodal Models (LMMs), random chance, and human performance on our proposed MATH-V across 16 subjects; teal indicates newly introduced subjects. (b) Examples of easy MATH-V problems that top-performing LMMs on MathVista nonetheless fail; all three questions come from tests designed for elementary school students.


[2024-02-21] Accuracies of four prominent Large Multimodal Models (LMMs), random chance, and human performance on our proposed MATH-Vision (MATH-V) across 16 subjects and 5 levels of difficulty, with Level 1 the easiest and Level 5 the most challenging. Human performance is assessed on the testmini subset.

Introduction

Recent advancements in Large Multimodal Models (LMMs) have shown promising results in mathematical reasoning within visual contexts, with models approaching human-level performance on existing benchmarks such as MathVista. However, we observe significant limitations in the diversity of questions and breadth of subjects covered by these benchmarks.

To address this issue, we present the MATH-Vision (MATH-V) dataset, a meticulously curated collection of 3,040 high-quality mathematical problems with visual contexts sourced from real math competitions. Spanning 16 distinct mathematical disciplines and graded across 5 levels of difficulty, our dataset provides a comprehensive and diverse set of challenges for evaluating the mathematical reasoning abilities of LMMs.
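For readers who want to work with the data directly, the sketch below shows one way to load and slice MATH-V by subject and difficulty level. It assumes the release is hosted on the Hugging Face Hub under the ID MathLLMs/MathVision and exposes subject and level fields; both the Hub ID and the field names are assumptions to verify against the official release.

```python
# A minimal sketch of loading MATH-V and slicing it by subject and level.
# ASSUMPTIONS: the Hub ID "MathLLMs/MathVision" and the field names
# "subject" and "level" should be checked against the official release.
from datasets import load_dataset

ds = load_dataset("MathLLMs/MathVision", split="test")
print(len(ds))  # expected: 3,040 problems

# Keep only the easiest (Level 1) algebra problems.
easy_algebra = ds.filter(
    lambda ex: ex["subject"] == "algebra" and int(ex["level"]) == 1
)
print(len(easy_algebra))
```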

Through extensive experimentation, we unveil a notable performance gap between current LMMs and human performance on MATH-V, underscoring the imperative for further advancements in LMMs. Moreover, our detailed categorization allows for a thorough error analysis of LMMs, offering valuable insights to guide future research and development.

Leaderboard on test

Accuracy scores on the test set of MATH-Vision (MATH-V).

🚨 To submit your results to the leaderboard, please send your results to this email in this format.

| # | Model | Source | Date | ALL | Alg | AnaG | Ari | CombG | Comb | Cnt | DescG | GrphT | Log | Angle | Area | Len | SolG | Stat | Topo | TransG |
|---|-------|--------|------|-----|-----|------|-----|-------|------|-----|-------|-------|-----|-------|------|-----|------|------|------|--------|
| 0 | Human* | Link | 2024-04-05 | 68.82 | 55.1 | 78.6 | 99.6 | 98.4 | 43.5 | 98.5 | 91.3 | 62.2 | 61.3 | 33.5 | 47.2 | 73.5 | 87.3 | 93.1 | 99.8 | 69.0 |
| 1 | GPT-4o 🥇 | Link | 2024-05-19 | 30.39 | 42.0 | 39.3 | 49.3 | 28.9 | 25.6 | 22.4 | 24.0 | 23.3 | 29.4 | 17.3 | 29.8 | 30.1 | 29.1 | 44.8 | 34.8 | 17.9 |
| 2 | GPT-4 Turbo 🥇 | Link | 2024-05-19 | 30.26 | 37.7 | 33.3 | 46.4 | 25.0 | 28.6 | 25.3 | 15.4 | 27.8 | 31.9 | 30.6 | 29.0 | 31.9 | 28.7 | 37.9 | 17.4 | 23.2 |
| 3 | CoT GPT4V 🥈 | Link | 2024-02-21 | 23.98 | 26.7 | 26.2 | 38.6 | 22.1 | 24.4 | 19.4 | 27.9 | 23.3 | 25.2 | 17.3 | 21.4 | 23.4 | 23.8 | 25.9 | 4.4 | 25.6 |
| 4 | GPT4V 🥈 | Link | 2024-02-21 | 22.76 | 27.3 | 32.1 | 35.7 | 21.1 | 16.7 | 13.4 | 22.1 | 14.4 | 16.8 | 22.0 | 22.2 | 20.9 | 23.8 | 24.1 | 21.7 | 25.6 |
| 5 | Gemini-1.5 Pro 🥉 | Link | 2024-05-17 | 19.24 | 20.3 | 35.7 | 34.3 | 19.8 | 15.5 | 20.9 | 26.0 | 26.7 | 22.7 | 14.5 | 14.4 | 16.5 | 18.9 | 10.3 | 26.1 | 17.3 |
| 6 | Gemini Pro | Link | 2024-02-21 | 17.66 | 15.1 | 10.7 | 20.7 | 20.1 | 11.9 | 7.5 | 20.2 | 21.1 | 16.8 | 19.1 | 19.0 | 20.0 | 14.3 | 13.8 | 17.4 | 20.8 |
| 7 | InternVL-Chat-V1-2-Plus | Link | 2024-02-22 | 16.97 | 11.3 | 25.0 | 15.7 | 16.9 | 10.1 | 11.9 | 16.4 | 15.6 | 19.3 | 22.5 | 16.4 | 22.5 | 14.3 | 17.2 | 4.4 | 20.8 |
| 8 | Math-LLaVA-13B | Link | 2024-06-26 | 15.69 | 9.0 | 20.2 | 15.7 | 18.2 | 10.1 | 10.5 | 16.4 | 14.4 | 16.0 | 20.2 | 18.4 | 17.6 | 9.4 | 24.1 | 21.7 | 17.9 |
| 9 | Qwen-VL-Max | Link | 2024-02-21 | 15.59 | 10.7 | 19.1 | 20.0 | 16.9 | 12.5 | 17.9 | 16.4 | 12.2 | 21.0 | 13.3 | 14.2 | 19.8 | 11.5 | 20.7 | 13.0 | 17.3 |
| 10 | InternLM-XComposer2-VL | Link | 2024-02-21 | 14.54 | 9.3 | 15.5 | 12.1 | 15.3 | 11.3 | 10.5 | 14.4 | 22.2 | 19.3 | 19.7 | 15.6 | 15.0 | 11.9 | 15.5 | 26.1 | 15.5 |
| 11 | GPT 4-CoT (caption) | Link | 2024-02-21 | 13.10 | 16.5 | 20.2 | 34.3 | 10.4 | 17.9 | 19.4 | 7.7 | 11.1 | 10.1 | 9.8 | 9.6 | 9.1 | 13.5 | 13.8 | 8.7 | 12.5 |
| 12 | ShareGPT4V-13B | Link | 2024-02-21 | 11.88 | 7.5 | 15.5 | 16.4 | 10.7 | 8.9 | 9.0 | 11.5 | 8.9 | 7.6 | 11.6 | 13.0 | 17.4 | 10.3 | 8.6 | 8.7 | 12.5 |
| 13 | LLaVA-v1.5-13B | Link | 2024-02-21 | 11.12 | 7.0 | 14.3 | 14.3 | 9.1 | 6.6 | 6.0 | 13.5 | 5.6 | 13.5 | 10.4 | 12.6 | 14.7 | 11.5 | 13.8 | 13.0 | 10.7 |
| 14 | Qwen-VL-Plus | Link | 2024-02-21 | 10.72 | 11.3 | 17.9 | 14.3 | 12.7 | 4.8 | 10.5 | 15.4 | 8.9 | 14.3 | 11.6 | 6.4 | 10.0 | 14.3 | 6.9 | 8.7 | 11.3 |
| 15 | ShareGPT4V-7B | Link | 2024-02-21 | 10.53 | 5.5 | 3.6 | 12.9 | 10.1 | 4.8 | 7.5 | 11.5 | 14.4 | 10.9 | 16.2 | 11.8 | 12.3 | 9.8 | 15.5 | 17.4 | 11.3 |
| 16 | SPHINX (V2) | Link | 2024-02-21 | 9.70 | 6.7 | 7.1 | 12.9 | 7.5 | 7.7 | 6.0 | 9.6 | 16.7 | 10.1 | 11.0 | 11.8 | 12.5 | 8.2 | 8.6 | 8.7 | 6.0 |
| 17 | LLaVA-v1.5-7B | Link | 2024-02-21 | 8.52 | 7.0 | 7.1 | 10.7 | 7.1 | 4.8 | 10.5 | 7.7 | 10.0 | 9.2 | 15.6 | 10.2 | 9.8 | 5.3 | 8.6 | 4.4 | 4.8 |
| * | Random Chance | Link | 2024-02-21 | 7.17 | 1.5 | 11.9 | 7.1 | 9.7 | 4.8 | 6.0 | 22.1 | 1.1 | 7.6 | 0.6 | 9.4 | 6.7 | 8.2 | 8.6 | 13.0 | 7.1 |
Human*: Average human performance from annotators who have high school diplomas or above.
Subjects: Alg: algebra, AnaG: analytic geometry, Ari: arithmetic, CombG: combinatorial geometry,
Comb: combinatorics, Cnt: counting, DescG: descriptive geometry, GrphT: graph theory, Log: logic,
Angle: metric geometry - angle, Area: metric geometry - area, Len: metric geometry - length,
SolG: solid geometry, Stat: statistics, Topo: topology, TransG: transformation geometry.
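For reference, the sketch below shows how the ALL column relates to the per-subject columns: ALL is accuracy over all problems pooled together, not the mean of the 16 subject scores. The (subject, is_correct) record format is an illustrative assumption, not the official scorer.

```python
# Sketch: overall and per-subject accuracy from (subject, is_correct) records.
# The record format is an illustrative assumption, not the official scorer.
from collections import defaultdict

def accuracy_by_subject(records):
    """records: iterable of (subject, is_correct) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for subject, is_correct in records:
        totals[subject] += 1
        hits[subject] += int(is_correct)
    per_subject = {s: 100.0 * hits[s] / totals[s] for s in totals}
    # "ALL" pools every problem; it is not the mean of the subject scores.
    overall = 100.0 * sum(hits.values()) / sum(totals.values())
    return overall, per_subject

overall, per_subject = accuracy_by_subject(
    [("algebra", True), ("algebra", False), ("topology", True)]
)
print(f"ALL: {overall:.2f}", per_subject)
```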

Math-V Dataset

Overview


Key statistics of MATH-V.


Comparison of the level distribution between our MATH-V and the MATH dataset.

Distribution

Distribution of levels, subjects, and sources in MATH-V.
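Under the same Hub ID and field-name assumptions as the loading sketch above, these distribution counts can be tallied directly:

```python
# Sketch: tally MATH-V's level and subject distributions.
# Same ASSUMPTIONS as above: the Hub ID "MathLLMs/MathVision" and the
# "level"/"subject" field names must match the official release.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("MathLLMs/MathVision", split="test")
level_dist = Counter(str(ex["level"]) for ex in ds)
subject_dist = Counter(ex["subject"] for ex in ds)

print(dict(sorted(level_dist.items())))  # problems per difficulty level
print(subject_dist.most_common())        # problems per subject, most frequent first
```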

Visualization

Sample images from the 16 subjects.

BibTeX

@misc{wang2024measuring,
  title={Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset},
  author={Ke Wang and Junting Pan and Weikang Shi and Zimu Lu and Mingjie Zhan and Hongsheng Li},
  year={2024},
  eprint={2402.14804},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Acknowledgement

We would like to thank MathVista, from which this website is adapted. The site is in turn based on Nerfies and is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.