Real-time aggregate data from 26 developers who track their AI coding usage on clawdboard. All cost figures are estimates based on published API token pricing — not actual bills. Data covers all usage since January 2024.
As of March 15, 2026 at 11:42 PM UTC, 26 developers have tracked $21.7k in estimated AI coding spend and 33.8B tokens on clawdboard. The average developer has spent an estimated $1.1k (median: $439.93). The most-used model by cost share is Opus 4.6, at 78% of total spend. The longest active streak is 36 consecutive days. Data is updated hourly from opt-in developer usage logs.
Last updated: March 15, 2026 at 11:42 PM UTC · Refreshed hourly · API access available
Aggregate AI coding usage across all 26 registered developers since January 2024.
Total Community Spend
$21.7k
across 26 developers
Total Tokens Consumed
33.8B
input + output + cache
Average Cost per Developer
$1.1k
median: $439.93
Busiest Community Day
$1.2k
February 24, 2026
Total Active Days
633
developer-days logged
Longest Active Streak
36d
consecutive days coding with AI
Most Used Model
Opus 4.6
78% of total spend
How much are developers spending on AI coding each day? This chart shows the 7-day moving average of estimated daily cost and active user count over the last 67 days. Spikes often correspond to new model releases or major tool updates.
Last 67 days · 7-day moving average
Which AI models do developers actually use for coding? This breakdown shows estimated cost and token consumption per model across all users — including Claude (Opus, Sonnet, Haiku), OpenAI (GPT-4o, o-series), and others. Cost share reflects how much of total community spend goes to each model — higher-tier models cost more per token, so they can dominate spend even with fewer users. Click any model for detailed usage statistics.
Total estimated cost by model across all users
Where does the data come from? This breakdown shows estimated cost share across the three supported AI coding tools: Claude Code, OpenCode, and Codex CLI. The clawdboard CLI reads each tool's local usage logs and contributes them to the aggregate statistics.
How fast is the AI coding developer community growing? This chart tracks cumulative registrations on clawdboard by week. Growth accelerates around major model releases, new tool launches, and developer announcements.
Cumulative registered developers by week
Based on data from 26 developers, the average AI coding user has an estimated all-time usage of $1.1k in API-equivalent cost. The median is $439.93, reflecting the wide gap between casual users and power users who run AI coding tools daily for extended sessions.
These are not actual bills. They represent what the same token usage would cost at Anthropic's published API rates. Most AI coding tool users pay flat monthly subscriptions rather than per-token billing. The estimated cost is useful for comparing relative usage intensity across developers and understanding which models consume the most resources.
The community has logged 633 active days of AI coding usage, with the longest consecutive streak reaching 36 days. The busiest single day across the community saw $1.2k in estimated usage on February 24, 2026.
Want to see where you stand? View the leaderboard to compare your usage, or read the FAQ to learn how tracking works.
All data on this page comes from developers who voluntarily track their AI coding usage through clawdboard. The clawdboard CLI reads local log files from supported tools — Claude Code, OpenCode, and Codex CLI — on each developer's machine. It extracts aggregate token counts (input, output, cache creation, cache read) and the model used for each session.
Cost estimation:
Token counts are multiplied by published API rates (Anthropic, OpenAI, etc.) for each model at the time of the session. This gives the API-equivalent cost — useful for comparison, but not what subscription users actually pay.
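The arithmetic behind this estimate is a per-category multiply-and-sum. The sketch below illustrates it with placeholder rates; the numbers are hypothetical, not clawdboard's actual rate table or any provider's current pricing:

```python
# Illustrative API-equivalent cost estimation: token counts x per-million-token rates.
# All rates below are hypothetical placeholders, not real published pricing.
RATES_PER_MTOK = {
    "input": 15.00,           # $ per 1M input tokens (hypothetical)
    "output": 75.00,          # $ per 1M output tokens (hypothetical)
    "cache_creation": 18.75,  # $ per 1M cache-write tokens (hypothetical)
    "cache_read": 1.50,       # $ per 1M cache-read tokens (hypothetical)
}

def estimate_cost(tokens: dict) -> float:
    """Return the API-equivalent cost in dollars for one session's token counts."""
    return sum(tokens.get(kind, 0) / 1_000_000 * rate
               for kind, rate in RATES_PER_MTOK.items())

session = {"input": 120_000, "output": 30_000,
           "cache_creation": 50_000, "cache_read": 900_000}
print(f"${estimate_cost(session):.2f}")  # -> $6.34
```

Note how cache reads dominate the token count here but contribute little to cost, which is why token totals and spend shares can diverge sharply per model.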
Privacy:
No code, prompts, file paths, project names, or conversation content is ever collected. Only aggregate token counts and model identifiers leave the developer's machine. See our privacy policy for details.
Limitations:
This is a self-selected sample of 26 developers, likely skewing toward heavier users and early adopters. It should not be interpreted as representative of all AI coding tool users.
Need this data programmatically? clawdboard provides free, public API endpoints for aggregate usage statistics:
# All-time aggregate stats
GET https://clawdboard.ai/api/stats
# Filter by period: today, 7d, 30d, this-month, ytd
GET https://clawdboard.ai/api/stats?period=30d
# Custom date range
GET https://clawdboard.ai/api/stats?period=custom&from=2025-01-01&to=2025-03-01
# Leaderboard data
GET https://clawdboard.ai/api/leaderboard?period=7d&sort=cost&limit=10

Both APIs return JSON with no authentication required. Rate-limited to 15 requests per minute. The stats endpoint supports the same period filters as the leaderboard: today, 7d, 30d, this-month, ytd, and custom (with from and to dates). Omit the period parameter for all-time aggregates. If you use this data, please cite clawdboard as the source.
Common questions about AI coding costs, model usage, and how this data is collected.
Based on data from 26 developers on clawdboard, the average estimated AI coding usage cost is $1.1k total (not per month). The median is $439.93, meaning half of developers spend less than that. These are estimated API-equivalent costs — most developers pay flat monthly fees through provider subscriptions, not per-token billing.
Model popularity varies over time as providers release new versions. Check the Model Popularity chart above for the latest breakdown by cost share and user count. Among Claude models, Sonnet tends to see the highest volume due to its speed-to-quality ratio, while Opus accounts for a significant share of total spend due to higher per-token pricing. OpenAI models like GPT-4o and o-series are also tracked for Codex CLI and OpenCode users.
Every data point comes from developers who voluntarily track their AI coding usage through clawdboard. The clawdboard CLI reads local log files from supported tools (Claude Code, OpenCode, Codex CLI) on each developer's machine, extracts aggregate token counts and cost estimates, and syncs them. No code, prompts, project names, or conversation content is ever collected — only token counts and estimated costs.
Cost estimates are calculated by multiplying token counts (input, output, cache creation, cache read) by published API rates for each model (Anthropic, OpenAI, etc.). They represent the equivalent API cost — not an actual bill. Since most developers use AI coding tools through subscriptions with flat monthly pricing, the actual amount paid is typically lower than the estimated API-equivalent cost shown here.
Individual developers sync their usage every 2 hours by default. The aggregate statistics on this page are recalculated hourly. The data covers all usage since January 2024.
A streak counts consecutive calendar days where a developer used AI coding tools at least once. Missing a single day resets the streak. The longest active streak in the community is currently 36 days. You can see individual streaks on the leaderboard and profile pages.
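The streak rule above (consecutive calendar days, a single miss resets) can be sketched as a small function over a developer's set of active dates. The function name is illustrative, not clawdboard's actual implementation:

```python
from datetime import date, timedelta

def longest_streak(active_days: set[date]) -> int:
    """Longest run of consecutive calendar days with at least one AI coding session."""
    best = 0
    for day in active_days:
        # Only count from days that start a run (previous day inactive).
        if day - timedelta(days=1) in active_days:
            continue
        length = 1
        while day + timedelta(days=length) in active_days:
            length += 1
        best = max(best, length)
    return best

days = {date(2026, 3, d) for d in (1, 2, 3, 5, 6)}  # March 4 missed: streak resets
print(longest_streak(days))  # -> 3
```

Because only run-starting days begin a scan, each date is visited a constant number of times, so this stays linear in the number of active days.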
Yes. clawdboard provides a free public API at https://clawdboard.ai/api/stats that returns community-wide aggregate statistics including total spend, token counts, model breakdowns, and methodology notes. The API is rate-limited to 15 requests per minute and returns JSON. The leaderboard API at https://clawdboard.ai/api/leaderboard is also public.
No. This is a self-selected sample of developers who choose to track and share their usage on clawdboard. It likely skews toward heavier users and early adopters. It should not be interpreted as representative of all AI coding tool users, but it does provide the largest public dataset of real AI coding usage patterns available.
Join 26 developers on the leaderboard. Free, open-source, takes 30 seconds to set up.
npx clawdboard auth