VBench: Comprehensive Benchmark Suite for Video Generative Models

CVPR 2024
(* equal contributions, † corresponding authors)
1 S-Lab, Nanyang Technological University    2 Shanghai Artificial Intelligence Laboratory   
3 The Chinese University of Hong Kong    4 Nanjing University   

Overview of VBench. We propose VBench, a comprehensive benchmark suite for video generative models. We design a comprehensive and hierarchical Evaluation Dimension Suite that decomposes "video generation quality" into multiple well-defined dimensions to facilitate fine-grained and objective evaluation. For each dimension and each content category, we carefully design a Prompt Suite as test cases, and sample Generated Videos from a set of video generation models. For each evaluation dimension, we specifically design an Evaluation Method Suite, which uses a carefully crafted method or designated pipeline for automatic objective evaluation. We also conduct Human Preference Annotation on the generated videos for each dimension, and show that VBench evaluation results are well aligned with human perception. VBench provides valuable insights and will be open-sourced.
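As a concrete illustration of this pipeline, below is a minimal sketch of scoring a model's generated videos on a few dimensions with the open-sourced toolkit. It assumes a vbench Python package exposing a VBench class; the constructor arguments, file paths, and dimension identifiers shown here are assumptions for illustration and may differ from the released API.

# Minimal sketch of one VBench evaluation run (paths and argument
# names are placeholders; check the released code for the exact API).
import torch
from vbench import VBench

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Benchmark metadata (prompt suite + dimension info) and an output folder.
benchmark = VBench(
    device,
    "VBench_full_info.json",     # assumed metadata file shipped with the suite
    "./evaluation_results",      # where per-dimension scores are written
)

# Evaluate a folder of sampled videos on selected dimensions.
benchmark.evaluate(
    videos_path="./sampled_videos/my_t2v_model",
    name="my_t2v_model",
    dimension_list=["temporal_flickering", "motion_smoothness"],
)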


Abstract

Video generation has witnessed significant advancements, yet evaluating these models remains a challenge. A comprehensive evaluation benchmark for video generation is indispensable for two reasons: 1) Existing metrics do not fully align with human perceptions; 2) An ideal evaluation system should provide insights to inform future developments of video generation. To this end, we present VBench, a comprehensive benchmark suite that dissects "video generation quality" into specific, hierarchical, and disentangled dimensions, each with tailored prompts and evaluation methods. VBench has three appealing properties: 1) Comprehensive Dimensions: VBench comprises 16 dimensions in video generation (e.g., subject identity inconsistency, motion smoothness, temporal flickering, and spatial relationship). The fine-grained evaluation metrics reveal individual models' strengths and weaknesses. 2) Human Alignment: We also provide a dataset of human preference annotations to validate our benchmark's alignment with human perception, for each evaluation dimension respectively. 3) Valuable Insights: We examine current models' abilities across various evaluation dimensions and content types. We also investigate the gaps between video and image generation models. We will open-source VBench, including all prompts, evaluation methods, generated videos, and human preference annotations, and will continue to add more video generation models to VBench to drive forward the field of video generation.

VBench Evaluation Results of Video Generative Models

We visualize the evaluation results of various publicly available video generation models across 16 VBench dimensions. We normalize the results per dimension for clearer comparisons.

The values have been normalized for better readability of the chart. Each set of performance values is scaled to a common range of 0.3 to 0.8 via min-max normalization followed by an affine rescale: normalized_value = 0.3 + 0.5 × (value - min_value) / (max_value - min_value).
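A minimal sketch of that normalization in plain Python (the function name is ours, not part of VBench):

def normalize_for_plot(values, lo=0.3, hi=0.8):
    """Min-max normalize a list of per-dimension scores, then rescale
    them into [lo, hi] for plotting (here [0.3, 0.8] as in the charts)."""
    v_min, v_max = min(values), max(values)
    if v_max == v_min:                       # avoid division by zero
        return [(lo + hi) / 2] * len(values)
    return [lo + (hi - lo) * (v - v_min) / (v_max - v_min) for v in values]

# Example: raw scores of several models on one dimension.
raw = [0.91, 0.85, 0.78, 0.96]
print(normalize_for_plot(raw))   # values spread between 0.3 and 0.8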

VBench Evaluation Results of Gen-2 and Pika

We visualize the VBench evaluation results of Gen-2 and Pika. To enhance clarity, we include the results of VideoCrafter-1.0 and Show-1 as references.

The values have been normalized to the 0.3–0.8 range in the same way as described above.

Leaderboard

Video Quality Dimensions

Evaluation examples across different dimensions (a higher score denotes better performance).

Video-Condition Consistency Dimensions

Evaluation examples across different dimensions (a higher score denotes better performance).
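For reference, the 16 dimensions split across these two groups as follows. This sketch follows the grouping described in the paper; the exact string identifiers are assumptions and may not match the released code.

# The 16 VBench dimensions, grouped into the two categories above.
# Identifiers are illustrative; consult the released toolkit for exact names.
VBENCH_DIMENSIONS = {
    "video_quality": [
        "subject_consistency", "background_consistency", "temporal_flickering",
        "motion_smoothness", "dynamic_degree", "aesthetic_quality", "imaging_quality",
    ],
    "video_condition_consistency": [
        "object_class", "multiple_objects", "human_action", "color",
        "spatial_relationship", "scene", "appearance_style",
        "temporal_style", "overall_consistency",
    ],
}

assert sum(len(v) for v in VBENCH_DIMENSIONS.values()) == 16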

Prompt Suite Statistics

The two graphs provide an overview of our prompt suites. Left: a word cloud visualizing the word distribution of our prompt suites. Right: the number of prompts across different evaluation dimensions and content categories.
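The per-dimension and per-category counts shown on the right could be tallied as in the sketch below. The prompts.json layout here is hypothetical, purely for illustration; the released prompt-suite files may be organized differently.

import json
from collections import Counter

# Hypothetical layout: a list of records such as
#   {"prompt": "a cat running on grass", "dimension": "motion_smoothness"}
# or, for the per-category suite,
#   {"prompt": "...", "category": "animal"}.
with open("prompts.json") as f:
    records = json.load(f)

per_dimension = Counter(r["dimension"] for r in records if "dimension" in r)
per_category = Counter(r["category"] for r in records if "category" in r)

for name, count in per_dimension.most_common():
    print(f"{name:30s} {count:4d} prompts")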

T2V vs. T2I

We use VBench to evaluate additional models and baselines, such as text-to-image (T2I) generation models, for further comparative analysis with T2V models.

The values have been normalized to the 0.3–0.8 range in the same way as described above.

VBench Results across Eight Content Categories

For each chart, we plot the VBench evaluation results across eight different content categories, benchmarked by our Prompt Suite per Category.

The results are linearly normalized between 0 and 1 for better visibility across categories.

BibTeX

If you find our work useful, please consider citing our paper:

@InProceedings{huang2023vbench,
      title={{VBench}: Comprehensive Benchmark Suite for Video Generative Models},
      author={Huang, Ziqi and He, Yinan and Yu, Jiashuo and Zhang, Fan and Si, Chenyang and Jiang, Yuming and Zhang, Yuanhan and Wu, Tianxing and Jin, Qingyang and Chanpaisit, Nattapol and Wang, Yaohui and Chen, Xinyuan and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      year={2024}
}