Buildkite agent cost in 2026
./agents --per-agent --aws --gcp --mac
Buildkite charges per seat. The agents that run your builds are yours. That sounds simple until you have to size, host and pay for them. This page lists every realistic agent hosting option in 2026, the per-agent monthly cost, the operational trade-offs, and where each one breaks down. Numbers come from current public cloud pricing pages, verified May 2026, with hardware costs amortised over three years.
For an introduction to how the per-seat-plus-BYO pricing model works, see Buildkite pricing per user, unlimited builds. For the full platform deep dive, see the Buildkite overview.
Per-agent cost by hosting option
All numbers assume one always-on agent running 24 hours a day, seven days a week. Real-world fleets typically run a baseline of always-on capacity plus an autoscaled Spot pool, which lowers blended cost by 30-50 percent versus the always-on column below.
| Host | Instance | $ / month | Best for |
|---|---|---|---|
| AWS on-demand | t3.medium | ~$30 | Baseline Linux capacity |
| AWS Spot | t3.medium | ~$12 | Burst pool for PR checks |
| AWS Spot | m5.large | ~$20 | Memory-heavy test suites |
| GCP on-demand | e2-medium | ~$27 | GCP-native shops |
| GCP preemptible | e2-medium | ~$8 | Cheapest cloud option |
| Hetzner Cloud | CX22 | ~$5 | EU teams not on hyperscalers |
| Mac mini (M2) | 8GB / 256GB | ~$25 | iOS / macOS builds |
| Cloud macOS | MacStadium M2 | ~$240 | Compliance-driven iOS |
| Colocated bare metal | 8-core Xeon | ~$15 | High-volume always-on |
# Cloud rates from each provider's pricing page: AWS, GCP, Hetzner. Mac mini amortised over 36 months including electricity.
Sizing the fleet: how many agents do you need?
Agent count is concurrency capacity. To size your fleet, calculate the peak hour of demand and add headroom. A team that runs 200 builds per day at 8 minutes average pulls roughly 27 agent hours of work per day. Spread that across an 8-hour engineering window and you need about 3.3 concurrent agents to clear the queue. Spread it across a 4-hour push peak and you need closer to 7. Most fleets size for the peak hour, not the average, because queueing during a 10:30am rush is the visible failure mode that ruins developer experience.
A useful heuristic: take your daily build hours, divide by 4, and add 50 percent headroom. That sizes you for a 4-hour peak with a margin. For the 200-builds example, 27 hours divided by 4 equals 6.75, plus 50 percent equals about 10 agents. Run six of those always-on and four in an autoscaled Spot pool, and blended cost lands around $200 per month for the entire fleet.
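The heuristic is easy to encode. A minimal Python sketch of the sizing rule above (the function name and defaults are illustrative, not part of any Buildkite API):

```python
import math

def fleet_size(builds_per_day: int, avg_build_minutes: float,
               peak_window_hours: float = 4.0, headroom: float = 0.5) -> int:
    """Daily build hours spread over the peak window, plus headroom, rounded up."""
    daily_build_hours = builds_per_day * avg_build_minutes / 60
    agents = daily_build_hours / peak_window_hours * (1 + headroom)
    # round() before ceil() guards against float noise pushing 10.0 up to 11
    return math.ceil(round(agents, 6))

# The 200-builds example: ~27 daily build hours over a 4-hour peak.
print(fleet_size(200, 8))  # 10
```

Dropping the headroom and spreading over the full 8-hour window reproduces the "about 3.3 concurrent agents" figure, rounded up to 4.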
Mixed-architecture fleets need separate sizing. A 25-developer team shipping a backend on Linux and an iOS app on macOS typically runs 6-8 Linux agents and 2-3 Mac mini agents, because iOS builds are slower and there are fewer of them per day. The Linux pool autoscales, the Mac pool is fixed-size because Mac minis are physical hardware.
Always-on versus autoscaled: the operator trade-off
Always-on agents are simple. You provision them, install the agent, point them at Buildkite, and forget about them. The bill is predictable, queue handling is immediate, and the only operational task is patching the OS once a quarter. The downside is paying full price for capacity that sits idle 60-80 percent of the time.
Autoscaling is cheaper but operationally heavier. The Buildkite Elastic CI Stack on AWS uses a CloudFormation template that runs an autoscaling group, scales up when queue depth exceeds a threshold, scales down after a configurable cool-down. Job latency takes a hit: a brand-new agent takes 60-90 seconds to spin up, install dependencies, and start polling. For pipelines that run dozens of builds a day this is invisible. For pipelines where the first PR check of the morning blocks a deploy, the cold-start matters.
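The scaling rule itself is simple enough to sketch. This toy Python version mimics the behaviour described above; the real Elastic CI Stack implements it with CloudWatch metrics and an EC2 Auto Scaling group, so everything here (names, thresholds, the 300-second cool-down) is illustrative:

```python
import time

def desired_agent_count(queue_depth: int, current: int,
                        min_agents: int, max_agents: int,
                        last_scale_down: float, cooldown_s: float = 300.0) -> int:
    """Scale up immediately when queued jobs exceed capacity; scale down
    only after the cool-down has elapsed since the last scale-down."""
    if queue_depth > current:
        return min(queue_depth, max_agents)      # burst: add agents now
    if queue_depth < current and time.time() - last_scale_down >= cooldown_s:
        return max(queue_depth, min_agents)      # quiet and cooled down: shrink
    return current                               # otherwise hold steady
```

The asymmetry is the point: scale-up is instant because queueing is the visible failure mode, while scale-down waits out the cool-down so a brief lull does not trigger churn.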
The standard production pattern is a hybrid: keep two to four always-on agents to absorb the first build of every burst, autoscale a Spot pool for everything beyond that. Blended cost lands roughly halfway between pure always-on and pure autoscaled, with no queueing on cold starts. Most teams using Buildkite at scale run some variant of this hybrid.
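As a rough model of that blend, using the t3.medium rates from the table above (the 50 percent Spot duty cycle is an assumption, not a measurement):

```python
def blended_monthly_usd(always_on: int, spot_pool: int,
                        on_demand_rate: float = 30.0,   # t3.medium on-demand, from the table
                        spot_rate: float = 12.0,        # t3.medium Spot, from the table
                        spot_duty_cycle: float = 0.5) -> float:  # assumed uptime fraction
    """Always-on agents billed for the full month; the Spot pool billed
    only for the fraction of the month the autoscaler keeps it running."""
    return always_on * on_demand_rate + spot_pool * spot_rate * spot_duty_cycle

print(blended_monthly_usd(6, 4))  # 204.0 -- close to the ~$200/month figure above
```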
The Mac mini fleet, in detail
For iOS-shipping teams, the Mac mini agent fleet is the single biggest cost decision. A typical iOS team running on hosted GitHub Actions macOS pays $0.08 per minute for 3-core macOS runners, plus the new $0.002-per-minute platform fee. A team running 50 iOS builds per day at 20 minutes each burns 1,000 build minutes per day, 21,000 per month, and $1,722 monthly on hosted macOS alone.
The same team running three M2 Mac minis as Buildkite agents pays approximately $75 per month in amortised hardware and electricity. The minis sit in the office or at a colo, handle 21,000 build minutes between them comfortably (a single M2 can sustain roughly 10,000 minutes a month at full load with cooling), and consume around 12 watts at idle, 30 watts under load. Electricity at typical US business rates is around $4 per mini per month. Over a three-year amortisation period, including replacement parts and one full refresh, monthly cost per mini lands in the $20-30 range.
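Putting the two bills side by side, at the rates quoted above (the 21 working days per month is implied by the 1,000-minutes-a-day figure):

```python
def hosted_macos_monthly_usd(builds_per_day: int, minutes_per_build: int,
                             per_minute: float = 0.08,
                             platform_fee_per_minute: float = 0.002,
                             working_days: int = 21) -> float:
    """Hosted macOS runner spend at the per-minute rates quoted above."""
    monthly_minutes = builds_per_day * minutes_per_build * working_days
    return round(monthly_minutes * (per_minute + platform_fee_per_minute), 2)

def self_hosted_minis_monthly_usd(minis: int, per_mini: float = 25.0) -> float:
    """Amortised hardware plus electricity, from the table above."""
    return minis * per_mini

print(hosted_macos_monthly_usd(50, 20))   # 1722.0
print(self_hosted_minis_monthly_usd(3))   # 75.0
```

A 23x gap at this volume, before counting the queueing cost of shared hosted runners.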
The catch is operating physical hardware. A Mac mini in an office needs reliable outbound network access to Buildkite's API (the agent polls outbound, so no inbound firewall rules are required, though a VPN tunnel helps for remote administration), occasional macOS upgrades (handled gracefully, but they require physical interaction with the device), and a plan for when one fails. Most teams keep a hot spare or accept that one machine being offline for a week is a tolerable degradation. Colo providers like MacStadium rent fully-managed minis for $80-200 per month if you do not want the physical responsibility, which sits between self-hosted minis and hosted cloud macOS on the cost-versus-effort curve.
Kubernetes-native agents
Teams that already run Kubernetes can use the official Buildkite agent-stack-k8s, which runs a controller in the cluster that turns each queued build into a Kubernetes Job. Each pipeline step gets its own pod, scaled by the cluster autoscaler. The model fits well if you already operate Kubernetes and want second-by-second elasticity without a separate VM autoscaling group.
The cost picture depends entirely on the underlying nodes. On EKS with Spot node groups, blended cost lands close to the pure Spot column above. On a fixed-size cluster shared with application workloads, the marginal cost of CI is the difference between the cluster running at 40 percent utilisation versus 80 percent: not zero, but lower than dedicated capacity. The operational complexity is higher than plain VMs and only pays back if you have existing Kubernetes expertise.
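To make the utilisation point concrete, here is a back-of-envelope model. The node size and 80 percent utilisation target are illustrative assumptions, not agent-stack-k8s parameters:

```python
import math

def extra_ci_nodes(cluster_nodes: int, cores_per_node: int,
                   app_cores_used: float, ci_peak_cores: float,
                   max_utilisation: float = 0.8) -> int:
    """Nodes the cluster autoscaler must add for CI: zero while CI fits in
    the cluster's spare headroom, full-price nodes once it overflows."""
    usable_cores = cluster_nodes * cores_per_node * max_utilisation
    headroom = usable_cores - app_cores_used
    overflow = max(0.0, ci_peak_cores - headroom)
    return math.ceil(overflow / (cores_per_node * max_utilisation))

# 10 x 8-core nodes at 40% app utilisation absorb a 24-core CI burst for free:
print(extra_ci_nodes(10, 8, app_cores_used=32, ci_peak_cores=24))  # 0
# Push CI demand past the headroom and extra nodes appear on the bill:
print(extra_ci_nodes(10, 8, app_cores_used=32, ci_peak_cores=40))  # 2
```

That first case is the "not zero, but lower than dedicated capacity" scenario: CI rides the utilisation gap until it exhausts it.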
Related Buildkite reading
Frequently Asked Questions