Mars-Bench:

A Benchmark for Evaluating Foundation Models for
Mars Science Tasks

*Equal Contribution
1School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
2School of Earth and Space Exploration, Arizona State University, Tempe, AZ, USA
3Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA

NeurIPS 2025

Mars-Bench Tasks: Classification, Segmentation, and Object Detection

Representative samples from the Mars-Bench benchmark across three task categories. Each dataset has been validated by domain experts to ensure data quality and correctness.

Abstract

Foundation models have enabled rapid progress across many specialized domains by leveraging large-scale pre-training on unlabeled data, demonstrating strong generalization to a variety of downstream tasks. While such models have gained significant attention in fields like Earth Observation, their application to Mars science remains limited. A key enabler of progress in other domains has been the availability of standardized benchmarks that support systematic evaluation. In contrast, Mars science lacks such benchmarks and standardized evaluation frameworks, a gap that has limited progress toward developing foundation models for Martian tasks. To address this gap, we introduce Mars-Bench, the first benchmark designed to systematically evaluate models across a broad range of Mars-related tasks using both orbital and surface imagery. Mars-Bench comprises 20 datasets spanning classification, segmentation, and object detection, focused on key geologic features such as craters, cones, boulders, and frost. We provide standardized, ready-to-use datasets and baseline evaluations using models pre-trained on natural images, Earth satellite data, and state-of-the-art vision-language models. Results from all analyses suggest that Mars-specific foundation models may offer advantages over general-domain counterparts, motivating further exploration of domain-adapted pre-training. Mars-Bench aims to establish a standardized foundation for developing and comparing machine learning models for Mars science.
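As a concrete illustration of the feature-extraction baselines mentioned above, the sketch below freezes a backbone pre-trained on natural images and uses it purely as a feature extractor. The ResNet-50 backbone, ImageNet weights, and preprocessing here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a frozen-backbone ("feature extraction") baseline:
# a ResNet-50 pre-trained on natural images (ImageNet), used only to
# produce features, with no fine-tuning. Backbone choice is an assumption.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()   # drop the ImageNet classifier head
backbone.eval()                     # frozen: evaluation mode, no gradient updates

preprocess = weights.transforms()   # resize/normalize as the weights expect

@torch.no_grad()
def extract_features(images):
    """Map a batch of PIL images to 2048-d feature vectors."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch)          # shape: (B, 2048)
```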

Datasets

Mars-Bench Datasets Overview

Overview of the datasets included in Mars-Bench, comprising 20 tasks across classification, segmentation, and object detection. The benchmark integrates data from both orbital and rover sources (indicated under Observation Source as O for Orbiter and R for Rover), spanning multiple sensors and a diverse range of geologic features, such as craters, cones, and boulders, that are of significant interest to planetary scientists and geologists. Each dataset includes standardized splits and metadata to ensure consistent and reproducible evaluation.
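To show how the standardized splits might be consumed in practice, the sketch below loads one classification dataset with `torchvision`. The directory layout (`mars_bench/<dataset>/<split>/<class>/`) and the dataset name are hypothetical conventions for illustration, not the benchmark's actual on-disk format.

```python
# Sketch of loading one Mars-Bench classification dataset via its
# standardized train/val/test splits. The path convention below is a
# hypothetical assumption, not the benchmark's documented layout.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def load_split(dataset_name: str, split: str) -> DataLoader:
    root = f"mars_bench/{dataset_name}/{split}"        # hypothetical path
    ds = datasets.ImageFolder(root, transform=tfm)     # one folder per class
    return DataLoader(ds, batch_size=64, shuffle=(split == "train"))

train_loader = load_split("crater_classification", "train")  # hypothetical name
```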

Performance

Classification Tasks

Classification Performance

Classification Benchmark under Feature Extraction setting: Normalized F1-score of all baselines across six selected classification datasets (higher is better). The aggregated plot shows the average over all classification datasets.
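A minimal sketch of how such a classification score could be produced in the feature-extraction setting: fit a lightweight probe on frozen features and report macro F1. The logistic-regression probe is an assumption, and the per-dataset normalization applied in the plot is not reproduced here.

```python
# Sketch of classification evaluation on frozen features: a linear probe
# plus macro F1. Probe choice and score normalization are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def linear_probe_f1(train_feats, train_labels, test_feats, test_labels):
    clf = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
    return f1_score(test_labels, clf.predict(test_feats), average="macro")

# Tiny synthetic demo (random features stand in for real backbone outputs).
rng = np.random.default_rng(0)
Xtr, Xte = rng.normal(size=(100, 2048)), rng.normal(size=(20, 2048))
ytr, yte = rng.integers(0, 4, 100), rng.integers(0, 4, 20)
print(linear_probe_f1(Xtr, ytr, Xte, yte))
```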



Segmentation Tasks

Segmentation Performance

Segmentation Benchmark under Feature Extraction setting: Normalized IoU of all baselines across six selected segmentation datasets (higher is better). The aggregated plot shows the average over all segmentation datasets.
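For reference, the IoU metric underlying this plot can be computed per class from predicted and ground-truth label maps; the sketch below shows one straightforward mean-IoU implementation. How the resulting scores are normalized across baselines is not specified here and is left out.

```python
# Sketch of mean IoU (Jaccard index) for semantic segmentation:
# per-class intersection-over-union, averaged over classes that appear.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """pred/target: integer class maps of identical shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```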



Object Detection Tasks

Object Detection Performance

Object Detection Benchmark under Feature Extraction setting: Normalized mAP of all baselines across object detection datasets (higher is better). The aggregated plot shows the average over all object detection datasets.
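The mAP metric behind this plot can be computed with an off-the-shelf library; the sketch below uses `torchmetrics` with its default COCO-style thresholds, which may differ from the paper's exact evaluation settings.

```python
# Sketch of scoring detections with mean Average Precision via
# torchmetrics (COCO-style defaults, xyxy box format).
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision()
preds = [{
    "boxes": torch.tensor([[10., 10., 50., 50.]]),   # xyxy pixel coords
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([0]),
}]
targets = [{
    "boxes": torch.tensor([[12., 12., 48., 48.]]),
    "labels": torch.tensor([0]),
}]
metric.update(preds, targets)
print(metric.compute()["map"])   # overall mAP
```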



Vision-Language Model Results

VLM Performance Table

Performance of Gemini and GPT models on six Mars-Bench datasets spanning classification and segmentation tasks. The datasets were selected to evaluate model generalization across a diverse range of geologic features. Segmentation tasks were reformulated as multi-label classification, with system instructions defining each class for both task types. Experiments were conducted using the Gemini 2.0 Flash and GPT-4o Mini models (May 2025 checkpoints).
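A minimal sketch of this VLM protocol, where a system instruction defines the candidate classes and a segmentation task is posed as multi-label classification over an image. The prompt wording, class list, and helper function are illustrative assumptions, not the paper's exact prompts.

```python
# Sketch of querying GPT-4o Mini for multi-label classification of a
# Mars image. Prompt text and class names are hypothetical assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def classify_image(path: str, classes: list[str]) -> str:
    b64 = base64.b64encode(open(path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a Mars geology assistant. Report every "
                        f"class present from: {', '.join(classes)}."},
            {"role": "user", "content": [
                {"type": "text", "text": "Which classes appear in this image?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ]},
        ],
    )
    return resp.choices[0].message.content

# e.g. classify_image("frost_tile.png", ["frost", "defrosted", "background"])
```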

Acknowledgment

Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

BibTeX

@inproceedings{purohit2025marsbench,
    title={Mars-Bench: A Benchmark for Evaluating Foundation Models for Mars Science Tasks},
    author={Mirali Purohit and Bimal Gajera and Vatsal Malaviya and Irish Mehta and Kunal Sunil Kasodekar and Jacob Adler and Steven Lu and Umaa Rebbapragada and Hannah Kerner},
    booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2025},
    url={https://arxiv.org/pdf/2510.24010}
}