MATH-V

Measuring Multimodal Mathematical Reasoning with
the MATH-Vision Dataset

The accuracies of four prominent Large Multimodal Models (LMMs), random chance, and human performance are evaluated on our proposed MATH-Vision (MATH-V) benchmark across 16 subjects and 5 levels of difficulty, with Level 1 the easiest and Level 5 the most challenging. Human performance is assessed on the testmini subset.

Introduction

Recent advancements in Large Multimodal Models (LMMs) have shown promising results in mathematical reasoning within visual contexts, with models approaching human-level performance on existing benchmarks such as MathVista. However, we observe significant limitations in the diversity of questions and breadth of subjects covered by these benchmarks.

To address this issue, we present the MATH-Vision (MATH-V) dataset, a meticulously curated collection of 3,040 high-quality mathematical problems with visual contexts, sourced from real math competitions. Spanning 16 distinct mathematical disciplines and graded across 5 levels of difficulty, our dataset provides a comprehensive and diverse set of challenges for evaluating the mathematical reasoning abilities of LMMs.
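
For concreteness, here is a minimal sketch of how the dataset could be loaded and inspected, assuming it is distributed as a JSONL file with hypothetical field names such as "question", "subject", and "level" (the released format may differ):

    import json
    from collections import Counter

    # Load MATH-V problems from a local JSONL file (hypothetical path and schema).
    with open("math_vision_test.jsonl", encoding="utf-8") as f:
        problems = [json.loads(line) for line in f]

    print(len(problems))  # expected: 3040 for the full test set

    # Tally problems per subject (16 expected) and per difficulty level (5 expected).
    by_subject = Counter(p["subject"] for p in problems)
    by_level = Counter(p["level"] for p in problems)
    print(by_subject.most_common())
    print(sorted(by_level.items()))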

Through extensive experimentation, we reveal a notable gap between current LMMs and human performance on MATH-V, underscoring the need for further advances in LMMs. Moreover, our detailed categorization enables a thorough error analysis of LMMs, offering valuable insights to guide future research and development.

Leaderboard on the test set

Accuracy scores on the test set of MATH-Vision (MATH-V).

# Model Date ALL Alg AnaG Ari CombG Comb Cnt DescG GrphT Log Angle Area Len SolG Stat Topo TransG
0 Human* (testmini) 2024-02-21 75.66 57.9 79.0 100.0 100.0 47.4 94.7 89.5 63.2 63.2 36.8 52.6 73.7 89.5 89.5 100.0 73.7
1 CoT GPT4V 🥇 2024-02-21 23.98 26.7 26.2 38.6 22.1 24.4 19.4 27.9 23.3 25.2 17.3 21.4 23.4 23.8 25.9 4.4 25.6
2 GPT4V 🥈 2024-02-21 22.76 27.3 32.1 35.7 21.1 16.7 13.4 22.1 14.4 16.8 22.0 22.2 20.9 23.8 24.1 21.7 25.6
3 Gemini Pro 🥉 2024-02-21 17.66 15.1 10.7 20.7 20.1 11.9 7.5 20.2 21.1 16.8 19.1 19.0 20.0 14.3 13.8 17.4 20.8
4 Qwen-VL-Max 2024-02-21 15.59 10.7 19.1 20.0 16.9 12.5 17.9 16.4 12.2 21.0 13.3 14.2 19.8 11.5 20.7 13.0 17.3
5 InternLM-XComposer2-VL 2024-02-21 14.54 9.3 15.5 12.1 15.3 11.3 10.5 14.4 22.2 19.3 19.7 15.6 15.0 11.9 15.5 26.1 15.5
6 GPT-4 CoT (caption) 2024-02-21 13.10 16.5 20.2 34.3 10.4 17.9 19.4 7.7 11.1 10.1 9.8 9.6 9.1 13.5 13.8 8.7 12.5
7 ShareGPT4V-13B 2024-02-21 11.88 7.5 15.5 16.4 10.7 8.9 9.0 11.5 8.9 7.6 11.6 13.0 17.4 10.3 8.6 8.7 12.5
8 LLaVA-v1.5-13B 2024-02-21 11.12 7.0 14.3 14.3 9.1 6.6 6.0 13.5 5.6 13.5 10.4 12.6 14.7 11.5 13.8 13.0 10.7
9 Qwen-VL-Plus 2024-02-21 10.72 11.3 17.9 14.3 12.7 4.8 10.5 15.4 8.9 14.3 11.6 6.4 10.0 14.3 6.9 8.7 11.3
10 ShareGPT4V-7B 2024-02-21 10.53 5.5 3.6 12.9 10.1 4.8 7.5 11.5 14.4 10.9 16.2 11.8 12.3 9.8 15.5 17.4 11.3
11 SPHINX (V2) 2024-02-21 9.70 6.7 7.1 12.9 7.5 7.7 6.0 9.6 16.7 10.1 11.0 11.8 12.5 8.2 8.6 8.7 6.0
12 LLaVA-v1.5-7B 2024-02-21 8.52 7.0 7.1 10.7 7.1 4.8 10.5 7.7 10.0 9.2 15.6 10.2 9.8 5.3 8.6 4.4 4.8
* Random Chance 2024-02-21 7.17 1.5 11.9 7.1 9.7 4.8 6.0 22.1 1.1 7.6 0.6 9.4 6.7 8.2 8.6 13.0 7.1
Human*: Average human performance from annotators who have high school diplomas or above.
Subjects: Alg: algebra, AnaG: analytic geometry, Ari: arithmetic, CombG: combinatorial geometry,
Comb: combinatorics, Cnt: counting, DescG: descriptive geometry, GrphT: graph theory, Log: logic,
Angle: metric geometry - angle, Area: metric geometry - area, Len: metric geometry - length,
SolG: solid geometry, Stat: statistics, Topo: topology, TransG: transformation geometry.
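
As an illustration of the metric, below is a minimal sketch of how the overall (ALL) and per-subject accuracy columns could be computed from model outputs. The record keys "subject", "prediction", and "answer" are hypothetical; this is a sketch of standard exact-match scoring, not the authors' official evaluation code:

    from collections import defaultdict

    def accuracy_report(records):
        # records: iterable of dicts with hypothetical keys
        # "subject", "prediction", and "answer".
        hits, totals = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["subject"]] += 1
            hits[r["subject"]] += int(r["prediction"] == r["answer"])
        overall = 100.0 * sum(hits.values()) / sum(totals.values())
        per_subject = {s: 100.0 * hits[s] / totals[s] for s in totals}
        return overall, per_subject

    # Example: two records, one correct, gives 50.0 overall.
    demo = [
        {"subject": "algebra", "prediction": "B", "answer": "B"},
        {"subject": "topology", "prediction": "A", "answer": "C"},
    ]
    print(accuracy_report(demo))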

MATH-V Dataset

Overview

Key statistics of MATH-V.

Comparison of the level distribution between our MATH-V and the MATH dataset.

Distribution

Distribution of levels, subjects, and sources in MATH-V.
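
A minimal plotting sketch for a level-distribution chart like the one shown here, reusing the hypothetical JSONL schema from the loading example above (matplotlib is an assumption; the figure on this page may be produced differently):

    import json
    from collections import Counter
    import matplotlib.pyplot as plt

    # Count problems per difficulty level (hypothetical file and schema).
    with open("math_vision_test.jsonl", encoding="utf-8") as f:
        levels = Counter(json.loads(line)["level"] for line in f)

    xs = sorted(levels)
    plt.bar([str(x) for x in xs], [levels[x] for x in xs])
    plt.xlabel("Difficulty level")
    plt.ylabel("Number of problems")
    plt.title("MATH-V level distribution")
    plt.show()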

Visualization

Sample images from the 16 subjects.

BibTeX

    @misc{wang2024measuring,
      title={Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset},
      author={Ke Wang and Junting Pan and Weikang Shi and Zimu Lu and Mingjie Zhan and Hongsheng Li},
      year={2024},
      eprint={2402.14804},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

Acknowledgement

We would like to thank the MathVista team for this website template, which is adapted from Nerfies and licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
    @inproceedings{lu2024mathvista,
      title={MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts},
      author={Lu, Pan and Bansal, Hritik and Xia, Tony and Liu, Jiacheng and Li, Chunyuan and Hajishirzi, Hannaneh and Cheng, Hao and Chang, Kai-Wei and Galley, Michel and Gao, Jianfeng},
      booktitle={International Conference on Learning Representations (ICLR)},
      year={2024}
    }