Submit Results
There are two ways to contribute to OLTP Benchmark:
- Add a new database - Submit a PR with a new benchmark workflow
- Submit results - Share benchmark results from your own testing
Adding a New Database
Each database has its own workflow file. When you submit a PR, only your database's benchmark runs - not all benchmarks.
Steps
1. Copy the Template - use PostgreSQL as the template:

```shell
cp .github/workflows/benchmark-postgresql.yml .github/workflows/benchmark-{database}.yml
mkdir -p databases/{database}
```

2. Create Configuration - add `databases/{database}/config.yaml` with your database settings.
3. Update the Workflow - modify the workflow file with your database's Docker image, connection settings, and BenchBase configuration.
4. Submit PR - open a PR; only your database's benchmark will run, letting you test before merge.
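The scaffolding in steps 1-2 can be sketched end to end. This runs in a scratch directory with a hypothetical database name `mydb` (the stand-in template file and config contents are illustrative, not the real repo files):

```shell
set -e
# Scratch stand-in for a repo checkout; in the real repo you would run
# the cp/mkdir from the repository root instead.
repo=$(mktemp -d)
mkdir -p "$repo/.github/workflows" "$repo/databases"
printf 'name: Benchmark PostgreSQL\n' > "$repo/.github/workflows/benchmark-postgresql.yml"

db=mydb  # hypothetical database name
cp "$repo/.github/workflows/benchmark-postgresql.yml" "$repo/.github/workflows/benchmark-$db.yml"
mkdir -p "$repo/databases/$db"
printf 'docker_image: mydb:latest\n' > "$repo/databases/$db/config.yaml"

# Show the resulting layout relative to the repo root.
find "$repo" -mindepth 1 | sed "s|$repo/||" | sort
```

After this, the workflow file and config directory exist side by side with the other databases, ready for step 3.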
Path-Based Triggers
Each workflow only runs when its files change:
```yaml
on:
  push:
    paths:
      - 'databases/mongodb/**'                     # Config changes
      - '.github/workflows/benchmark-mongodb.yml'  # Workflow changes
  pull_request:
    paths:
      - 'databases/mongodb/**'
      - '.github/workflows/benchmark-mongodb.yml'
```

This means:
- PRs only run relevant benchmarks
- Updating PostgreSQL config doesn't trigger MySQL benchmark
- You can test your changes without affecting other databases
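The effect of these path filters can be illustrated with shell `case` patterns. This is only an approximation (the shell's `*` here stands in for GitHub's `**` glob), and the changed file paths are hypothetical:

```shell
# Approximate the MongoDB workflow's path filters: return success (0)
# for a path that would trigger the workflow, failure (1) otherwise.
matches() {
  case "$1" in
    databases/mongodb/*|.github/workflows/benchmark-mongodb.yml) return 0 ;;
    *) return 1 ;;
  esac
}

# Hypothetical changed files in a push or PR:
for f in databases/mongodb/config.yaml \
         databases/postgresql/config.yaml \
         .github/workflows/benchmark-mongodb.yml \
         README.md; do
  if matches "$f"; then echo "trigger: $f"; else echo "ignore:  $f"; fi
done
```

Only the MongoDB config and the MongoDB workflow file trigger the run; changes to PostgreSQL files or the README are ignored by this workflow.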
File Structure
```
.github/workflows/
├── benchmark-postgresql.yml     # Template - copy this
├── benchmark-mysql.yml
└── benchmark-{your-db}.yml      # Your new workflow

databases/
├── postgresql/config.yaml
├── mysql/config.yaml
└── {your-db}/config.yaml        # Your new config
```

See the full contributing guide for more details.
Submitting Benchmark Results
If you've run benchmarks on your own infrastructure, we welcome submissions.
Requirements
- Follow Standard Methodology - Your benchmark must follow our documented methodology. Any deviations must be clearly noted.
- Provide Configuration Files - Include complete configuration files for the database being tested.
- Submit Raw Data - We require raw latency samples and throughput measurements, not just summary statistics.
- Document Environment - Provide detailed hardware specifications, OS version, and any relevant environment details.
Submission Format
```yaml
# submission.yaml
database:
  name: "PostgreSQL"
  version: "16.1"
environment:
  cloud_provider: "AWS"
  instance_type: "r6i.4xlarge"
  region: "us-east-1"
  os: "Ubuntu 22.04"
  storage:
    type: "gp3"
    size_gb: 1000
    iops: 16000
benchmark:
  workload: "tpcc"
  warehouses: 100
  duration_seconds: 1800
submitter:
  name: "Your Name"
  organization: "Your Organization (optional)"
  date: "2025-01-15"
```

How to Submit
Option 1: GitHub Pull Request (Preferred)
- Fork the repository
- Add your results to `community-results/{database}/`
- Include: `submission.yaml`, raw results, configuration files
- Open a pull request
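The PR layout can be assembled like this. It is a sketch in a scratch directory: the artifact file names (`raw-results.csv`, `postgresql-config.yaml`) are hypothetical placeholders for your real raw data and config files:

```shell
set -e
# Scratch stand-in for your fork; "postgresql" is the database being submitted.
submission=$(mktemp -d)/community-results/postgresql
mkdir -p "$submission"

# Minimal submission.yaml (see the Submission Format section above).
cat > "$submission/submission.yaml" <<'EOF'
database:
  name: "PostgreSQL"
  version: "16.1"
EOF

# Hypothetical raw results and database configuration.
printf 'latency_us,tps\n850,12000\n' > "$submission/raw-results.csv"
printf 'shared_buffers: 8GB\n' > "$submission/postgresql-config.yaml"

# Sanity check: every file the guidelines ask for is present.
for f in submission.yaml raw-results.csv postgresql-config.yaml; do
  [ -f "$submission/$f" ] && echo "ok: $f"
done
```

Commit that directory in your fork and open the pull request from there.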
Option 2: GitHub Issue
- Open an issue with the "benchmark-submission" label
- Attach your results as a zip file
- We'll review it and add it to the repository
Result Categories
| Category | Description |
|---|---|
| Official | Run via our GitHub Actions workflows |
| Verified | Community submission validated by our team |
| Community | Community submission, not independently verified |
Re-running Benchmarks
To trigger a benchmark re-run:
- Manual dispatch: Go to Actions → Select workflow → "Run workflow"
- Config change: Make a small change to the database's config and submit a PR
- Workflow change: Update the workflow file