
Submit Results

There are two ways to contribute to OLTP Benchmark:

  1. Add a new database - Submit a PR with a new benchmark workflow
  2. Submit results - Share benchmark results from your own testing

Adding a New Database

Each database has its own workflow file. When you submit a PR, only your database's benchmark runs - not all benchmarks.

Steps

  1. Copy the Template - Use PostgreSQL as the template:

    cp .github/workflows/benchmark-postgresql.yml .github/workflows/benchmark-{database}.yml
    mkdir -p databases/{database}
  2. Create Configuration - Add databases/{database}/config.yaml with your database settings.

  3. Update the Workflow - Modify the workflow file with your database's Docker image, connection settings, and BenchBase configuration.

  4. Submit a PR - Only your database's benchmark will run, letting you test before merge.
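A sketch of what databases/{database}/config.yaml might contain; the keys below are illustrative assumptions, so use whatever fields your workflow file actually reads:

```yaml
# databases/{database}/config.yaml -- illustrative sketch only;
# the real keys depend on what your workflow file consumes.
database:
  name: "mydb"               # hypothetical engine name
  version: "1.0"
  docker_image: "mydb:1.0"   # image the workflow will pull

connection:
  port: 5432
  username: "benchmark"
  password: "benchmark"

benchbase:
  type: "mydb"               # BenchBase driver type for this engine
  workload: "tpcc"
```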

Path-Based Triggers

Each workflow only runs when its files change:

on:
  push:
    paths:
      - 'databases/mongodb/**'           # Config changes
      - '.github/workflows/benchmark-mongodb.yml'  # Workflow changes
  pull_request:
    paths:
      - 'databases/mongodb/**'
      - '.github/workflows/benchmark-mongodb.yml'

This means:

  • PRs only run relevant benchmarks
  • Updating the PostgreSQL config doesn't trigger the MySQL benchmark
  • You can test your changes without affecting other databases
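As a local sanity check, the trigger logic above can be mirrored in a small shell sketch (the patterns below cover two example databases; extend them for yours):

```shell
# Given a changed file path, print which benchmark workflow it would trigger.
# Mirrors the path filters shown above.
workflow_for() {
  case "$1" in
    databases/mongodb/*|.github/workflows/benchmark-mongodb.yml)
      echo "benchmark-mongodb" ;;
    databases/postgresql/*|.github/workflows/benchmark-postgresql.yml)
      echo "benchmark-postgresql" ;;
    *)
      echo "none" ;;
  esac
}

workflow_for "databases/mongodb/config.yaml"     # → benchmark-mongodb
workflow_for "databases/postgresql/config.yaml"  # → benchmark-postgresql
workflow_for "README.md"                         # → none
```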

File Structure

.github/workflows/
├── benchmark-postgresql.yml  # Template - copy this
├── benchmark-mysql.yml
└── benchmark-{your-db}.yml   # Your new workflow

databases/
├── postgresql/config.yaml
├── mysql/config.yaml
└── {your-db}/config.yaml     # Your new config

See the full contributing guide for more details.


Submitting Benchmark Results

If you've run benchmarks on your own infrastructure, we welcome submissions.

Requirements

  1. Follow Standard Methodology - Your benchmark must follow our documented methodology. Any deviations must be clearly noted.
  2. Provide Configuration Files - Include complete configuration files for the database being tested.
  3. Submit Raw Data - We require raw latency samples and throughput measurements, not just summary statistics.
  4. Document Environment - Provide detailed hardware specifications, OS version, and any relevant environment details.
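Requirement 3 matters because percentiles can't be reconstructed from summary statistics alone; with raw samples, reviewers can recompute them independently. A rough sketch, assuming one latency value in milliseconds per line in a hypothetical raw_latencies.csv:

```shell
# Toy data standing in for real raw samples (file name is hypothetical).
printf '%s\n' 12.1 10.4 11.8 55.0 10.9 > raw_latencies.csv

# Recompute p50/p99 from the sorted samples (crude nearest-rank percentile).
sort -n raw_latencies.csv | awk '
  { v[NR] = $1 }
  END {
    i50 = int(0.50 * NR + 0.5); i99 = int(0.99 * NR + 0.5)
    if (i99 > NR) i99 = NR
    printf "p50=%s p99=%s\n", v[i50], v[i99]
  }'
# → p50=11.8 p99=55.0
```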

Submission Format

# submission.yaml
database:
  name: "PostgreSQL"
  version: "16.1"
 
environment:
  cloud_provider: "AWS"
  instance_type: "r6i.4xlarge"
  region: "us-east-1"
  os: "Ubuntu 22.04"
  storage:
    type: "gp3"
    size_gb: 1000
    iops: 16000
 
benchmark:
  workload: "tpcc"
  warehouses: 100
  duration_seconds: 1800
 
submitter:
  name: "Your Name"
  organization: "Your Organization (optional)"
  date: "2025-01-15"

How to Submit

Option 1: GitHub Pull Request (Preferred)

  1. Fork the repository
  2. Add your results to community-results/{database}/
  3. Include: submission.yaml, raw results, configuration files
  4. Open a pull request
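The layout step can be sketched as follows (all file names besides submission.yaml are illustrative):

```shell
# Lay out a community submission before opening the PR.
DB="postgresql"
DEST="community-results/$DB"
mkdir -p "$DEST"

# Copy submission metadata, raw data, and the database config used,
# skipping anything that isn't present yet.
for f in submission.yaml raw_latencies.csv throughput.csv config.yaml; do
  if [ -f "$f" ]; then cp "$f" "$DEST/"; fi
done

ls "$DEST"
```

After committing the directory to your fork, open the pull request as usual.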

Option 2: GitHub Issue

  1. Open an issue with the "benchmark-submission" label
  2. Attach your results as a zip file
  3. We'll review it and add it to the repository

Result Categories

  Category     Description
  Official     Run via our GitHub Actions workflows
  Verified     Community submission validated by our team
  Community    Community submission, not independently verified

Re-running Benchmarks

To trigger a benchmark re-run:

  1. Manual dispatch: Go to Actions → Select workflow → "Run workflow"
  2. Config change: Make a small change to the database's config and submit a PR
  3. Workflow change: Update the workflow file
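Note that the manual "Run workflow" button only appears in the Actions tab if the workflow declares a workflow_dispatch trigger:

```yaml
# In .github/workflows/benchmark-{database}.yml
on:
  workflow_dispatch:   # enables manual runs from the Actions tab
  push:
    paths:
      - 'databases/{database}/**'
```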

Questions?