Uni-MMMU: A Massive Multi-discipline Multimodal Unified Benchmark

(* equal contributions, + corresponding authors)
1 Shanghai Artificial Intelligence Laboratory    2 S-Lab, Nanyang Technological University   
3 University of Science and Technology of China    4 The Chinese University of Hong Kong
Uni-MMMU Framework

Overview of Uni-MMMU. Eight tasks are grouped into two paradigms: generation aids understanding (Maze, Sliding, Geometry, Jigsaw) and understanding guides generation (Science: Physics/Chemistry/Biology; Code Rendering). Each task reports dual-channel scores (text + image).

Video

Overview

Unified multimodal models aim to jointly enable visual understanding and generation, yet current benchmarks rarely examine their true integration. Existing evaluations either treat the two abilities in isolation or overlook tasks that inherently couple them. To address this gap, we present Uni-MMMU, a comprehensive and discipline-aware benchmark that systematically unfolds the bidirectional synergy between generation and understanding across eight reasoning-centric domains, including science, coding, mathematics, and puzzles. Each task is bidirectionally coupled, requiring models to (i) leverage conceptual understanding to guide precise visual synthesis, or (ii) use generation as a cognitive scaffold for analytical reasoning. Uni-MMMU incorporates verifiable intermediate reasoning steps, unique ground truths, and a reproducible scoring protocol for both textual and visual outputs. Through extensive evaluation of state-of-the-art unified, generation-only, and understanding-only models, we reveal substantial performance disparities and cross-modal dependencies, offering new insights into when and how these abilities reinforce one another, and establishing a reliable foundation for advancing unified models.
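To make the dual-channel (text + image) scoring idea concrete, below is a minimal Python sketch of how per-task scores could be aggregated. It is illustrative only, not the benchmark's released scoring code; the SampleResult fields, the [0, 1] per-sample scores, and the simple per-channel mean are assumptions made for this example.

# Illustrative sketch only; not the official Uni-MMMU scoring protocol.
# Assumptions: each sample carries separate text and image correctness
# scores in [0, 1], and the task-level score is the per-channel mean
# rescaled to [0, 100], matching the scale used in the results table.

from dataclasses import dataclass
from statistics import mean

@dataclass
class SampleResult:
    text_score: float   # correctness of the textual answer / reasoning, in [0, 1]
    image_score: float  # correctness of the generated or edited image, in [0, 1]

def dual_channel_score(results: list[SampleResult]) -> dict[str, float]:
    """Aggregate per-sample dual-channel results into task-level scores on [0, 100]."""
    return {
        "text":  100 * mean(r.text_score for r in results),
        "image": 100 * mean(r.image_score for r in results),
    }

# Example: two hypothetical samples from one task.
print(dual_channel_score([SampleResult(1.0, 0.0), SampleResult(1.0, 1.0)]))
# {'text': 100.0, 'image': 50.0}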

Comparison with Prior Benchmarks

Comparison of MMMU, WISE, RISEBench, OpenING, MME-Unify, UniEval, and Uni-MMMU across MMU, Gen&Edit, Multi-Turn, and Dual Eval.

We compare across four key dimensions: multimodal understanding (MMU), generation and editing (Gen&Edit), multi-turn evaluation (Multi-Turn), and dual evaluation of the process and result (Dual Eval).


Data Distribution in Uni-MMMU

Data distribution in Uni-MMMU

Uni-MMMU Benchmark Evaluation Results

Jigsaw, Maze Navigation, Sliding Puzzle, and Math fall under "generation aids understanding"; Science and Code fall under "understanding aids generation".

Model | Jigsaw (Image) | Jigsaw (Text) | Maze Nav. (Image) | Maze Nav. (Text) | Sliding (Image) | Sliding (Text) | Math (Image) | Math (Text) | Science (Reasoning) | Science (Text) | Science (Image) | Code (Text) | Code (Shape&Color) | Code (Position) | Avg.
Bagel | 56.0 | 48.0 | 0.0/0.0 | 0.0/0.0 | 0.0/0.0 | 5.6/1.2 | 8.5 | 32.8 | 63.1 | 57.3 | 28.0 | 53.0 | 2.2 | 1.8 | 22.0
OmniGen2 | 70.3 | 48.0 | 0.0/0.0 | 0.0/0.0 | 0.0/0.0 | 0.0/0.0 | 4.2 | 5.7 | 42.0 | 33.1 | 8.9 | 17.0 | 13.2 | 13.3 | 16.0
Ovis-U1 | 57.0 | 53.0 | 0.0/0.0 | 12.5/0.0 | 0.0/0.0 | 0.0/0.0 | 7.1 | 3.5 | 42.7 | 36.3 | 24.8 | 18.0 | 8.0 | 10.5 | 16.5
Qwen-Image-Edit | 72.0 | 43.3 | 0.0/0.0 | 13.8/0.7 | 0.0/0.0 | 3.6/0.0 | 12.8 | 8.5 | 61.1 | 50.3 | 26.7 | 36.0 | 23.5 | 20.3 | 26.3
nano-banana | 48.9 | 57.0 | 1.8/0.0 | 23.3/4.7 | 1.0/0.0 | 6.2/0.0 | 21.4 | 47.8 | 91.7 | 79.6 | 43.9 | 75.0 | 36.5 | 33.7 | 37.3
GPT-4.1 + GPT-Image | 80.7 | 80.0 | 0.8/0.7 | 49.0/18.1 | 8.4/0.0 | 25.1/1.2 | 25.7 | 17.1 | 93.6 | 91.1 | 61.8 | 71.6 | 83.6 | 68.6 | 44.1

All scores are normalized to a [0, 100] scale for consistency. For multi-step tasks (Maze Navigation, Sliding Puzzle), scores in the format a/b represent step-level accuracy / sample-level accuracy.
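As a worked illustration of the a/b convention, the following sketch (an assumed example, not the released evaluation code) computes both numbers from per-step correctness flags: step-level accuracy averages over all steps across samples, while sample-level accuracy credits a sample only if every step is correct.

# Illustrative sketch only; the per-step correctness flags are assumed inputs.
def step_and_sample_accuracy(per_sample_steps: list[list[bool]]) -> tuple[float, float]:
    """Return (step-level accuracy, sample-level accuracy) on a [0, 100] scale."""
    total_steps = sum(len(steps) for steps in per_sample_steps)
    correct_steps = sum(sum(steps) for steps in per_sample_steps)
    step_acc = 100 * correct_steps / total_steps
    sample_acc = 100 * sum(all(steps) for steps in per_sample_steps) / len(per_sample_steps)
    return step_acc, sample_acc

# Example: three maze samples; only the first is solved end-to-end.
print(step_and_sample_accuracy([[True, True, True], [True, False, True], [False, False]]))
# step-level ~62.5, sample-level ~33.3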

Task Visualization

Qualitative examples for Maze Navigation, Code Rendering, Science, Math, and Sliding Puzzle.

BibTeX

If you find our work useful, please consider citing our paper:

@article{zou2025unimmmumassivemultidisciplinemultimodal,
  title   = {{Uni-MMMU}: A Massive Multi-discipline Multimodal Unified Benchmark},
  author  = {Kai Zou and Ziqi Huang and Yuhao Dong and Shulin Tian and Dian Zheng and Hongbo Liu and Jingwen He and Bin Liu and Yu Qiao and Ziwei Liu},
  journal = {arXiv preprint arXiv:2510.13759},
  year    = {2025}
}