ManipBench: Benchmarking Vision-Language Models for Low-Level Robot Manipulation

Anonymous Authors

Abstract

Vision-Language Models (VLMs) have revolutionized artificial intelligence and robotics due to their commonsense reasoning capabilities. In robotic manipulation, VLMs are used primarily as high-level planners, but recent work has also studied their low-level reasoning ability, i.e., making decisions about precise robot movements. However, the community currently lacks a clear and common benchmark for evaluating how well VLMs can aid low-level reasoning in robotics. Consequently, we propose a novel benchmark, ManipBench, to evaluate the low-level robot manipulation reasoning capabilities of VLMs across various dimensions, including how well they understand object-object interactions and deformable object manipulation. We extensively test 33 representative VLMs across 10 model families on our benchmark, including variants of different model sizes. Our evaluation shows that VLM performance varies significantly across tasks and correlates strongly with performance trends on our real-world manipulation tasks. It also shows that a significant gap remains between these models and human-level understanding.

Type of Questions in ManipBench

ManipBench includes a total of 12596 multiple-choice questions that evaluate the reasoning capabilities of VLMs as robotic manipulation agents. These questions span the categories and dimensions shown in the table below.

| Category | Question Types / Tasks | Number of Questions |
| --- | --- | --- |
| From Public Robotic Manipulation Datasets (Question Type 1) | DROID pick and place (Q1) | 2020 |
| | DROID articulated (Q1) | 1640 |
| | Bridge (Q1) | 2500 |
| From Public Robotic Manipulation Datasets (Question Type 2) | DROID pick and place (Q2) | 1010 |
| | DROID articulated (Q2) | 820 |
| | Bridge (Q2) | 1250 |
| For Evaluating Fabric Manipulation (Manually Curated) | Task Planning Understanding | 240 |
| | Fabric State Understanding | 234 |
| | Spatial Reasoning Abilities | 325 |
| | Keypoint Mapping Abilities | 312 |
| | Temporal Understanding of Action Sequence | 240 |
| | Action Length Understanding | 240 |
| | Inverse Dynamics Understanding | 240 |
| | Fabric-Solid Body Interaction Understanding | 282 |
| | Fabric-Fabric Interaction Understanding | 280 |
| | Counterfactual Understanding | 269 |
| From Existing Simulation Environments | Place Carrot (pick-and-place task) | 277 |
| | Close Drawer (articulated manipulation task) | 83 |
| | Straighten Rope (deformable manipulation task) | 140 |
| | Sweep Object (tool manipulation task) | 194 |
| | Ball Shoot (dynamic manipulation task) | 81 |
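To make the benchmark's organization concrete, the sketch below shows one way such multiple-choice entries could be represented and scored per task. The schema, field names, and helpers (`MCQEntry`, `per_task_accuracy`) are illustrative assumptions, not ManipBench's actual data format or evaluation code.

```python
# Hypothetical sketch: representing benchmark entries and scoring them per task.
# The schema and helper names below are assumptions for illustration only.
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class MCQEntry:
    category: str       # e.g., "From Existing Simulation Environments"
    task: str           # e.g., "Close Drawer (articulated manipulation task)"
    image_path: str     # observation image shown to the VLM
    question: str       # natural-language question about the manipulation step
    choices: List[str]  # candidate answers labeled A, B, C, ...
    answer: str         # ground-truth choice letter, e.g., "B"


def per_task_accuracy(entries: List[MCQEntry],
                      predictions: Dict[int, str]) -> Dict[str, float]:
    """Compute accuracy per task, given predicted choice letters keyed by entry index."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for idx, entry in enumerate(entries):
        total[entry.task] += 1
        if predictions.get(idx, "").strip().upper() == entry.answer:
            correct[entry.task] += 1
    return {task: correct[task] / total[task] for task in total}


if __name__ == "__main__":
    # Toy example with a single made-up entry; real ManipBench questions attach
    # an image and task-specific prompt details that are omitted here.
    entries = [
        MCQEntry(
            category="From Existing Simulation Environments",
            task="Close Drawer (articulated manipulation task)",
            image_path="frame_000.png",
            question="Which labeled arrow corresponds to a motion that closes the drawer?",
            choices=["Arrow 1", "Arrow 2", "Arrow 3", "Arrow 4"],
            answer="A",
        )
    ]
    print(per_task_accuracy(entries, predictions={0: "A"}))
```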

Sample Questions in ManipBench

Representative multiple-choice questions from each question type are shown below. Some details specific to the prompts given to the VLMs are omitted for simplicity.
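As a rough illustration of how such a multiple-choice question might be posed to a VLM and its answer recovered, the sketch below formats a question with lettered choices and extracts the chosen letter from a free-form model response. The prompt wording and the helpers `build_prompt` and `parse_choice` are hypothetical, not taken from ManipBench.

```python
# Hypothetical sketch of prompt construction and answer parsing for a
# multiple-choice question; not ManipBench's actual prompting interface.
import re
from typing import List, Optional


def build_prompt(question: str, choices: List[str]) -> str:
    """Format the question and lettered choices into a single text prompt."""
    lettered = "\n".join(f"{chr(ord('A') + i)}. {c}" for i, c in enumerate(choices))
    return (
        f"{question}\n{lettered}\n"
        "Answer with only the letter of the correct choice."
    )


def parse_choice(response: str, num_choices: int) -> Optional[str]:
    """Extract the first standalone choice letter from the model's response, if any."""
    valid = "".join(chr(ord("A") + i) for i in range(num_choices))
    match = re.search(rf"\b([{valid}])\b", response.upper())
    return match.group(1) if match else None


if __name__ == "__main__":
    prompt = build_prompt(
        "Which labeled keypoint should the gripper grasp to fold the fabric in half?",
        ["Keypoint 1", "Keypoint 2", "Keypoint 3", "Keypoint 4"],
    )
    print(prompt)
    print(parse_choice("I believe the answer is B.", num_choices=4))  # -> "B"
```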